Running custom Docker containers in the Google Cloud
Docker is a great way to containerize applications and their dependent libraries in a portable package. Using Linux's OS-level virtualization mechanisms, Docker containers run in complete isolation from each other while sharing the same Linux kernel.
As such, they are a perfect match for cloud environments that offer computing services as a “utility”, abstracting away actual machines and physical locations: just drop in your Docker container and it will be run on the agreed terms, without your having to worry about system configuration and the underlying infrastructure.
Two of the biggest cloud offerings, Google Cloud Platform (GCP) and Amazon Web Services (AWS), allow deployment of custom Docker containers. This post looks at how this is done using GCP and a simple Docker container implementing a web application in Java.
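As a point of reference, the image for such an app could be built from a Dockerfile along these lines. This is a minimal sketch: the base image, the jar name webapp.jar, and port 8080 are assumptions for illustration, not details from a specific project:

# Minimal sketch of a Dockerfile for a Java web app (jar name and port are assumptions)
FROM openjdk:8-jre
COPY target/webapp.jar /app/webapp.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/webapp.jar"]

Building it locally with “docker build -t webapp .” yields the image that the steps below will tag and push.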
- After logging on to GCP, go to your projects overview page (by choosing “IAM & Admin” from the side menu and selecting the “All Projects” view) and create a new project (“DockerProject” in the following). The new project is assigned an ID of the form “dockerproject-xxxxxx”.
- From the side menu, select the “Container Engine” view and create a container cluster. There is a plethora of settings that can be adjusted. Besides a name for the cluster (“dockercluster1”), it is mandatory to select a region, the number of computing cores and the memory size of a VM, and the number of VMs in the cluster (the more of these, the higher your bill will be). The other settings can be left unchanged for this example.
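If you prefer the command line over the web console, the same cluster can also be created with the gcloud utility installed further below; the zone, node count, and machine type here are just example values:

# Example only: create a three-node cluster of small VMs in a European zone
gcloud container clusters create dockercluster1 --zone europe-west1-b --num-nodes 3 --machine-type n1-standard-1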
- You now have a cluster of interconnected VMs, each of which is also accessible from the internet via its own IP address! To check them out, go to the side menu and choose the “Compute Engine” view. Under “VM instances”, you see the details for each of the VMs, including its internal and external IP addresses.
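Once the Google Cloud SDK is installed (see the preparation step further below), the same overview, including internal and external IP addresses, is also available from the command line:

gcloud compute instances list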
- [Sidenote, can be skipped] The list has a smart feature that opens an SSH session to a VM straight from the browser. To check out connectivity, let’s install an HTTP server on a VM and see if it can be reached from the internet! Open a browser-based SSH session and issue the following commands, which install and start an Apache httpd daemon:
sudo apt-get update
sudo apt-get install apache2
To be sure that it is in fact your web server that you’re looking at, you should modify its default page with some unique message. You can do this using the console-based Nano editor:
sudo nano /var/www/html/index.html
The last step is to modify the firewall rules in order to let HTTP traffic pass to the VM. To do this, go back to the list of VM instances and click on the name of the VM you have just SSHed into. This will open the settings page for this specific VM. Click “Edit” and check the box “Allow HTTP traffic” in the Firewall section. Click “Save”.
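As an aside: checking that box assigns the http-server network tag to the VM and ensures a firewall rule that allows HTTP to tagged instances. A rough command-line equivalent, assuming the VM carries the http-server tag, would be:

# Sketch: allow inbound HTTP (TCP port 80) to VMs tagged http-server
gcloud compute firewall-rules create allow-http --allow tcp:80 --target-tags http-server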
Done! You should now see your custom web page when entering the VM’s external IP address in a browser window.
- Now you have to prepare your local machine to push the Docker image to your private container repository in the Google cloud. The first step is to install the Google Cloud SDK, which includes the gcloud utility. After installation, run
gcloud auth login
to authenticate gcloud to your account in the Google Cloud. This will open a browser session where you have to log in with your Google account and grant the required permission (an OAuth token is generated).
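It is also convenient to make the new project the default for all subsequent gcloud commands, so you do not have to pass it explicitly every time:

gcloud config set project dockerproject-xxxxxx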
For the deployment of containers you need to install kubectl, the Kubernetes cluster management utility. This is done by:
gcloud components install kubectl
To configure kubectl for use with your cluster, type the command that is displayed when you press the “Connect” button next to your cluster in the list of container clusters, something like:
gcloud container clusters get-credentials dockercluster1 --zone europe-west1-b --project dockerproject-xxxxxx
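As a quick sanity check that kubectl is now talking to the right cluster, list its nodes; the output should contain one line per VM in the cluster:

kubectl get nodes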
- To push your Docker image to the cloud repository, you have to add a tag to the image that includes the name of your private container repository in the Google cloud. The repository name is made up of “eu.gcr.io/” (if your container is to be hosted in the European Union; other gcr.io hostnames are listed in the Container Registry documentation) and your GCP project ID (“dockerproject-xxxxxx”). If the ID of your local Docker image is “yyyyyy”, the command will look like this:
docker tag yyyyyy eu.gcr.io/dockerproject-xxxxxx/yyyyyy
(You can also pick a different name instead of the ID “yyyyyy” in the new tag.)
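If you do not know the ID or name of your local image, docker images prints a table of all local images together with their repository names, tags, and IDs:

docker images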
You can now push the image to the cloud using the newly created tag:
gcloud docker push eu.gcr.io/dockerproject-xxxxxx/yyyyyy
If everything worked out, you will now see your Docker image in the Container Registry.
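You can also verify the push from the command line; newer versions of the Cloud SDK include a subcommand that lists the images in a repository:

gcloud container images list --repository=eu.gcr.io/dockerproject-xxxxxx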
- To deploy the container, bring up the Kubernetes web dashboard from your local machine:
kubectl proxy
This starts a local proxy which can be accessed by directing your browser to the address:
http://localhost:8001/ui
In the browser frontend, choose the “Deploy App” button and enter the required information. This is essentially a name for the app (you can just pick one), the container image (“eu.gcr.io/dockerproject-xxxxxx/yyyyyy” in this case), and the number of pods, that is, replicated instances of your container, which Kubernetes distributes across the cluster’s VMs. Another important setting is the “Service” drop-down menu. Since your container is likely to interact with the outside world, you can define a port mapping there, either internal to the cluster or on an external, internet-facing interface. Services running in your container will then be accessible on the corresponding port.
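The same deployment can also be performed without the dashboard, using kubectl directly. A hedged sketch, assuming the app name myapp, three replicas, and a container listening on port 8080:

# Run three replicas of the pushed image (name and ports are example values)
kubectl run myapp --image=eu.gcr.io/dockerproject-xxxxxx/yyyyyy --replicas=3 --port=8080
# Expose it to the internet via an external load balancer, mapping port 80 to the container's 8080
kubectl expose deployment myapp --type=LoadBalancer --port=80 --target-port=8080

The expose command corresponds to the “Service” drop-down: --type=LoadBalancer requests an external, internet-facing IP, whereas the default type keeps the service internal to the cluster.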
- This is it! If everything has worked as it should, your Docker container is now running in the Google cloud. If you defined a port mapping, you can access the container from the internet. To do this, you do not usually use the IP address of an individual VM, but the external IP assigned to your service, which acts as a load balancer and forwards each request to one of the configured VMs. You can find all the external IP addresses (of the VMs and of the load balancer) by choosing the “Networking” view from the side menu and picking the “External IP addresses” page.
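If you deployed a service of type LoadBalancer as sketched above, kubectl can show you the assigned external IP as well (look for the EXTERNAL-IP column):

kubectl get services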