Docker Self Help Part-1, The Intro

As expensive mainframes sat idle most of the time, tinkering hackers went on a journey to optimize hardware utilization in such a way that compute is not only shared across multiple tasks but also comes with guaranteed process isolation and security.

Solutions like the VMware vSphere Hypervisor were built, allowing a single machine to host multiple virtual or guest operating systems.

So far all of this was for sysadmins to play with and configure, because in most organisations only System Administrators have access to configure operating systems and resources as requested by a team for running their application.

As this does not take the software developer into consideration, in most organisations these things only come into play when the software is ready to be deployed. In some cases virtualization is used to provide a virtual development environment, but even there the developer is more of a “user”.

Although all of this can be automated using Terraform, Chef or Puppet to reduce the setup time and to add or remove a guest on demand, resource optimization via full virtualization is still tricky.

Example:

A development environment is built with, let’s say, 2 GB of RAM, but it sits idle most of the time and even under “load” consumes around 40% of that RAM, as only a few developers use it, and that too only when running sanity tests to verify their integration code changes.

In most if not all organisations the software layer of the development, testing, staging and production environments is identical; the difference is mostly in the CPU/RAM/disk configuration. These environments have a lot of moving parts: each virtual or guest setup requires a full operating system, core utilities and application runtime software to be installed and configured, which makes them resource intensive.

If the software application only requires 512 MB of RAM, the system administrator will still have to add a buffer on top of that so that the operating system, security-audit and antivirus software can run smoothly.

Even after all this, the most common problem software developers face is an application that refuses to behave properly on a test environment, and all the poor developer can say is “But… it works on my machine”.

Most of the time this happens because a developer has full freedom to tune and fine-tune the local/development environment.

Freedom such as importing local certificate chains, or upgrading or downgrading runtime versions.

Now, in order to do the same on testing, staging or live environments, the developer has to suffer through endless support calls, or, thanks to endless meetings, simply forgets that “such and such” was done to get this working on the local/development environment.

This ends up causing runtime failures when the same application runs on the test environment.

Now this brings us to containerization.

Dipping toes into containerization

Imagine if the developer created a file, sort of like a “blueprint”, that specifies not only what operating system to use and what to install in that OS, but also all the configuration steps required for the application to run.

Example in plain English (a rough Dockerfile sketch of these steps follows the list):

  • setup runtime operating system as ubuntu
  • update ubuntu
  • install openssl and java
  • copy local organisation certificates from the current folder
  • now add those certificates to the ubuntu certificates bundle
  • next copy my application binary
  • make port 8080 visible as my application listens on port 8080
  • finally to run my application execute command java -jar my_application
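
In actual Dockerfile syntax, a minimal sketch of the above might look like this (assuming, purely for illustration, that the application is a Java jar named my_application.jar and the organisation certificates sit in a local certs/ folder):

```dockerfile
# Set up Ubuntu as the runtime operating system
FROM ubuntu:22.04

# Update Ubuntu and install OpenSSL, Java and the certificate tooling
RUN apt-get update && \
    apt-get install -y --no-install-recommends openssl default-jre ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Copy local organisation certificates from the current folder
# and add them to the Ubuntu certificate bundle
COPY certs/*.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates

# Copy the application binary
COPY my_application.jar /opt/app/my_application.jar

# Make port 8080 visible, as the application listens on port 8080
EXPOSE 8080

# Finally, run the application
CMD ["java", "-jar", "/opt/app/my_application.jar"]
```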

This file is then fed to a piece of software that goes through these instructions and “bundles” them into another file known as an “image”.

This image is like a “package” which holds the instructions in the exact order they were specified in the above text file, including all the required files and binaries.

When creating this “image”/“package”, the image-creator software basically marks each of the above instructions as either “something to do at runtime” or something to do right now.

For example, “setup runtime operating system as ubuntu” will happen when this “image” is actually executed.

But “copy local organisation certificates from the current folder” must happen right now.

So, as instructed, the files will be copied from the current folder into this “image”/“package”.

This “image” is then given a name and uploaded to a local or remote image repository.
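
With Docker, this “bundling” and upload is done with the docker build and docker push commands. A rough sketch, using a made-up image name myorg/my_application:1.0:

```shell
# Build the image from the Dockerfile in the current folder
# and give it a name (in Docker terms, a tag)
docker build -t myorg/my_application:1.0 .

# Upload the named image to a remote image repository (a registry)
docker push myorg/my_application:1.0
```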

Any time anyone wants to run the application on a local, development, testing, staging or live environment, that person runs a piece of software that understands these images and passes it the image name that was given when building the image.

Using the image name, this software first looks up the local repository; if the image is not available locally, a remote repository lookup happens.

After the image has been downloaded from the remote repository, it is first added to the local repository to speed up future executions.
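
In Docker terms this lookup-and-download step can also be triggered explicitly; a sketch, again with the made-up image name:

```shell
# Download the image from the remote repository (registry) into the local one;
# docker run performs this step automatically if the image is not found locally
docker pull myorg/my_application:1.0

# List the images now cached in the local repository
docker image ls
```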

The software then (see the command sketch after this list):

  • unpacks this image
  • starts the operating system
  • executes all the runtime instructions like “update ubuntu”
  • exposes port 8080, which the application will bind to
  • finally, as per the sequence of instructions, runs the packaged binary
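
In Docker, all of the above is a single docker run invocation. A minimal sketch with the made-up image name:

```shell
# Run the image, publishing container port 8080 on port 8080 of the host
docker run -p 8080:8080 myorg/my_application:1.0
```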

Looking at the above, you can see that in this approach the developer is a first-class citizen, involved not only in selecting and configuring the runtime environment but also in packaging the application.

This also means that if the application “image” works on the “developer's machine”, it will work on other environments as well, since at runtime those environments pull the exact same image.
