Containerizing the Services - Introduction to Kubernetes (Part 2)
Building container images for each service
Kubernetes is a container orchestrator. Understandably, we need containers before we can orchestrate them. But what are containers? This is best answered by the Docker documentation:
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment.
This means a container runs identically on any computer, including the production server.
For illustration purposes let’s compare how our React Application would be served using a Virtual Machine vs. a Container.
Serving React static files from a VM
The cons of using a Virtual Machine:
- Resource inefficient, each VM has the overhead of a fully-fledged OS.
- It is platform dependent. What worked on your computer might not work on the production server.
- Heavyweight and slow scaling when compared to Containers.
Serving React static files from a Container
The pros of using a Container:
- Resource efficient: containers share the host OS kernel, with the help of Docker.
- Platform independent. The container that you run on your computer will work anywhere.
- Lightweight, thanks to reusable image layers.
Those are the most prominent features and benefits of using containers. For more information continue reading on the Docker documentation.
Building the container image for the React App (Docker intro)
The basic building block for a Docker container is the Dockerfile. A Dockerfile starts from a base container image and follows up with a sequence of instructions on how to build a new container image that meets the needs of your application.
Before we get started defining the Dockerfile, let's recall the steps we took to serve the React static files using nginx:
- Build the static files (npm run build)
- Startup the nginx server
- Copy the contents of the build folder from your sa-frontend project to nginx/html.
In the next section, you will notice how creating a container parallels what we did during the local React setup.
Defining the Dockerfile for SA-Frontend
The Dockerfile for SA-Frontend needs only two instructions. That is because the Nginx team provides a base image for Nginx, which we will build on top of. The two steps are:
- Start from the base Nginx Image
- Copy the sa-frontend/build directory to the container's nginx/html directory.
Converted into a Dockerfile, it looks like this:
FROM nginx
COPY build /usr/share/nginx/html
Isn't it amazing? It's even human-readable. Let's recapitulate: start from the nginx image (whatever the folks over there did), then copy the build directory to the nginx/html directory in the image. That's it!
You may be wondering how I knew where to copy the build files, i.e. /usr/share/nginx/html. Quite simple: it is documented for the nginx image on Docker Hub.
Building and Pushing the container
Before we can push our image, we need a container registry to host it. Docker Hub is a free cloud-based container registry that we will use for this demonstration. You have three tasks before continuing:
- Install Docker CE
- Register on Docker Hub.
- Login by executing the below command in your Terminal:
docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
After completing the above tasks, navigate to the directory sa-frontend and execute the command below (replace $DOCKER_USER_ID with your Docker Hub username, producing a tag like rinormaloku/sentiment-analysis-frontend):
docker build -f Dockerfile -t $DOCKER_USER_ID/sentiment-analysis-frontend .
We can drop -f Dockerfile because we are already in the directory containing a file named Dockerfile, which is Docker's default.
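As a side note, the image tag follows Docker Hub's username/repository convention. A quick shell sketch (using rinormaloku as a stand-in username) shows how the full tag is assembled:

```shell
# Docker Hub image names follow the <username>/<repository> convention.
# "rinormaloku" is a stand-in; substitute your own Docker Hub username.
DOCKER_USER_ID="rinormaloku"
IMAGE="$DOCKER_USER_ID/sentiment-analysis-frontend"
echo "$IMAGE"
```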
To push the image, use the docker push command:
docker push $DOCKER_USER_ID/sentiment-analysis-frontend
Verify in your docker hub repository that the image was pushed successfully.
Running the container
Now the image $DOCKER_USER_ID/sentiment-analysis-frontend can be pulled and run by anyone:
docker pull $DOCKER_USER_ID/sentiment-analysis-frontend
docker run -d -p 80:80 $DOCKER_USER_ID/sentiment-analysis-frontend
Our Docker container is running!
Before we continue, let's elaborate on the 80:80 mapping, which I find confusing:
- The first 80 is the port of the host (i.e. my computer)
- The second 80 stands for the container port to which the calls should be forwarded.
It maps from <hostPort> to <containerPort>, meaning that calls to host port 80 are forwarded to port 80 of the container, as shown in figure 9.
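To make the direction of the mapping concrete, here is a small shell sketch that splits a -p value the way Docker reads it, using an asymmetric mapping so the two sides are distinguishable:

```shell
# A -p value has the form <hostPort>:<containerPort>.
# 5050:5000 means: traffic to host port 5050 is forwarded to container port 5000.
mapping="5050:5000"
host_port="${mapping%%:*}"        # everything before the colon: the host side
container_port="${mapping##*:}"   # everything after the colon: the container side
echo "host=$host_port container=$container_port"
```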
Because the container's port 80 was mapped to port 80 of the host (your computer), the application should be accessible at localhost:80. If you do not have native Docker support, you can open the application at <docker-machine ip>:80. To find your docker-machine ip, execute docker-machine ip.
Give it a try! You should be able to access the React application at that endpoint.
We saw earlier that building the image for SA-Frontend was slow, pardon me, extremely slow. That was because of the build context that had to be sent to the Docker daemon. The build context directory is specified by the last argument of the docker build command (the trailing dot). In our case, the build context was the whole sa-frontend directory.
But the only data we need is in the build folder; uploading anything else is a waste of time. We can improve our build time by excluding the other directories. That's where .dockerignore comes into play. It will feel familiar because it's like .gitignore: add all the directories and files that you want to ignore to the .dockerignore file. The .dockerignore file must be in the same folder as the Dockerfile. Now building the image takes only seconds.
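As an illustration, a plausible .dockerignore for a standard create-react-app project might look like this (the directory names are assumptions based on that layout, not the project's actual file):

```
# keep only the build/ output in the build context
node_modules
src
public
```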
Let’s continue with the Java Application.
Building the container image for the Java Application
Guess what! You learned almost everything about creating container images! That’s why this part is extremely short.
Open the Dockerfile in sa-webapp, and you will find only two new keywords:
ENV SA_LOGIC_API_URL http://localhost:5000
The keyword ENV declares an environment variable inside the Docker container. This enables us to provide the URL of the Sentiment Analysis API when starting the container.
Additionally, the keyword EXPOSE documents a port that we want to access later on. But hey!!! We didn't do that in the Dockerfile for SA-Frontend. Good catch! EXPOSE is for documentation purposes only; in other words, it serves as information for the person reading the Dockerfile.
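For orientation, a sketch of what such a Dockerfile might look like follows; the base image, jar name, and port are assumptions on my part, so check the actual file in sa-webapp rather than relying on this:

```dockerfile
# Sketch only: base image and jar name are assumptions, not the project's actual file.
FROM openjdk:8-jdk-alpine
ENV SA_LOGIC_API_URL http://localhost:5000
ADD target/sentiment-analysis-web-0.0.1-SNAPSHOT.jar /
EXPOSE 8080
CMD ["java", "-jar", "sentiment-analysis-web-0.0.1-SNAPSHOT.jar"]
```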
You should be familiar with building and pushing the container image. If any difficulties arise, read the README.md file in the sa-webapp directory.
Building the container image for the Python Application
In the Dockerfile in sa-logic there are no new keywords. Now you can call yourself a Docker-Master 😉.
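As a rough sketch of what it might contain (the base image, file names, and dependency step are all assumptions; consult the actual file in sa-logic):

```dockerfile
# Sketch only: base image and file names are assumptions.
FROM python:3.6-alpine
COPY sa /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "sentiment_analysis.py"]
```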
For building and pushing the container image, read the README.md in the sa-logic directory.
Testing the Containerized Application
Can you trust something that you didn’t test? Neither can I. Let’s give these containers a test.
1. Run the sa-logic container and configure it to listen on port 5050:
docker run -d -p 5050:5000 $DOCKER_USER_ID/sentiment-analysis-logic
2. Run the sa-webapp container and configure it to listen on port 8080. We also need to tell it where to reach the Python app by overriding the environment variable SA_LOGIC_API_URL:
docker run -d -p 8080:8080 -e SA_LOGIC_API_URL='http://<container_ip or docker machine ip>:5000' $DOCKER_USER_ID/sentiment-analysis-web-app
Check out the README on how to get the container IP or docker-machine IP. (Note: port 5000 is correct when using the container's internal IP; if you use the docker-machine or host IP instead, use the host port 5050 we mapped above.)
3. Run the sa-frontend container:
docker run -d -p 80:80 $DOCKER_USER_ID/sentiment-analysis-frontend
We are done. Open your browser at localhost:80.
Attention: If you changed the port for sa-webapp, or if you are using a docker-machine IP, you need to update the analyzeSentence method in the App.js file in sa-frontend to fetch from the new IP or port. Afterwards, rebuild the image and use the updated one.
Brain Teaser — Why Kubernetes?
In this section, we learned about the Dockerfile, how to use it to build an image, and the commands to push it to the Docker registry. Additionally, we investigated how to shrink the build context by ignoring useless files. And in the end, we got the application running from containers. So why Kubernetes? We will dig deeper into that in the next article, but I want to leave you with a brain teaser.
- Our Sentiment Analysis web app becomes a world hit, and suddenly we have a million requests per minute to analyze. The load on sa-webapp and sa-logic is huge. How can we scale the containers?