Author: Scott

  • Docker For Idiots

    This took me way too long to wrap my head around. Hence the name of this post.

    What is Docker?

    Docker is an open source platform that allows for the packaging of software inside containers.

    Cool, what’s a Container?

    A container is similar to a virtual machine. Each container contains a stripped-down version of Linux plus the additional libraries and software needed to run the application the specific container was built for. This allows developers to package their software applications, as well as all the applications’ dependencies, into a single portable unit. One of the major problems this solves is the ‘it works on my machine’ problem that can occur when different packages are installed on different systems.

    NOTE: A container is not a complete virtual machine, since many non-essential components have been removed from whichever Linux distribution it’s based on. For example, there is no kernel in a container; it shares the kernel with the host operating system instead.

    For example, if I have an application that relies on some features of PHP 8.2 but your host operating system or server is running PHP 7.4, the application will not work.

    With Docker, we can package up the correct PHP version (and any other dependencies) inside a container with the rest of the application to ensure it works properly on any machine that has Docker installed.

    Neat, Can You Show Me?

    First, install Docker Desktop on your system. Once that is done, we can run our first dockerized application by running the following in the terminal.

    $ docker run ubuntu

    If this is your first time running an Ubuntu container, you’ll see this message:

    Unable to find image 'ubuntu:latest' locally
    latest: Pulling from library/ubuntu

    And then … nothing. You’ll be back in your terminal.

    What happened here?

    1. The Docker client contacted the Docker daemon.
    2. The Docker daemon pulled the “ubuntu” image from the Docker Hub.
    3. The Docker daemon created a new container from the ubuntu image.
    4. There was nothing for the container to do, so it exited.

    Explain Those Terms / Concepts, Please.

    Docker Client: A CLI tool used to interact with the Docker daemon.

    Docker Daemon: A background process that manages the Docker platform (images, containers, etc.).

    Docker Image / Container: Similar to how in OOP an Object is an instance of a Class, a Container is an ‘instance’ of an Image. An Image contains the files and instructions needed to create a specific Container (an unlimited number of times), just as an .iso of an operating system can be used to create unlimited virtual machines.

    Docker Hub: A place to upload / store / download Docker Images. We can download the images to create unlimited containers on our machines / servers.

    What we did was quite underwhelming because nothing visible happened.

    Let’s try again:

    $ docker run -it ubuntu

    NOTE: The -it flag tells Docker we would like an interactive terminal into the container we are creating.

    So, this time you should find your terminal is now inside a running ubuntu container, similar to what it looks like when you SSH into another server.

    You can type ls to view the contents of your current directory. Or type cat /etc/os-release to view information about your Linux environment.

    Can I Have an Example of How this is Useful?

    Sure thing. Let’s start dead simple.

    Let’s create a PHP app that will check if a person is on the guest list. Create a directory with an index.php and add the following code:

    <?php
    
    $user_name = readline("Enter your Name: ");
    if ( str_contains('bill, bob', $user_name) ) {
        echo "You are on the guest list \n";
    } else {
        echo "Go away. \n";
    }
    
    ?>

    Expert level programming here as you can see.

    But UH-OH, we have a problem: the str_contains() function is not available in PHP versions older than 8.0, so if we share our software with someone running an earlier version, their program will crash. If our own machine is running a PHP version earlier than 8, we won’t be able to run our software either.

    That’s no good.

    Thankfully Docker has a solution for us.

    We can simply run a docker container with the proper PHP version installed to see our software in action. Let’s start again with our Ubuntu container.

    $ docker run -it --rm ubuntu

    NOTE: The --rm flag tells Docker to delete the container once it exits.

    Once inside the container, we will update Ubuntu and install PHP:

    $ apt update
    $ apt install php libapache2-mod-php php-cli

    That’s the hard way.

    The easy way is to run this instead:

    $ docker run -it --rm php bash

    And Docker will pull a PHP image from Docker Hub instead.

    NOTE: We need to suffix this command with ‘bash’ because this PHP container by default drops us into a PHP interactive shell, and we would prefer a regular bash terminal instead.

    There are thousands of different images on Docker Hub. Docker also publishes a number of official images (the php image is one of them) for us to use.

    This php image is just some version of Linux with PHP pre-installed so we don’t have to deal with that part ourselves. There are ready-made images for most major programming languages as well as for popular software.

    For example:

    • nginx
    • ruby
    • postgres
    • mysql
    • node
    • rust

    In each of these cases, the image is just Linux + the specific software installed.
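
    Conceptually, you can imagine one of these images being built from a short recipe along these lines. This is a simplified sketch for intuition, not the real official Dockerfile (we’ll write a real one later in this post):

    ```dockerfile
    # Hypothetical sketch: 'some linux' + one piece of software.
    # The real official images are more involved than this.
    FROM debian:bookworm-slim

    # Install the software the image exists to provide
    RUN apt-get update && apt-get install -y nginx

    # Tell Docker what to run when a container starts from this image
    CMD ["nginx", "-g", "daemon off;"]
    ```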

    I digress.

    To test this works, we can move into the home directory ( or anywhere really ) and run:

    $ echo '<?php echo "hello world\n"?>' > test.php

    … And then run:

    $ php test.php

    You should see a nice ‘hello world’ spit back at you.

    Now let’s go ahead and edit test.php and copy and paste our guest list program above. If we type:

    $ nano test.php

    UH-OH … nano: command not found. Remember, containers use very stripped-down versions of Linux, so the nano text editor is not even included in the base image. We can install it by running:

    $ apt update
    $ apt install nano

    Now we should be able to use nano test.php and copy and paste our program into it. Once that’s done, give it a run with php test.php to see our program in action!

    That was annoying

    Agreed! We don’t want to install nano every time we run a container, and we definitely don’t want to be copy-pasting into it or using it as our main code editor.

    Thankfully Docker has a better solution in the form of volumes. These allow us to mount data stored outside the container into the container at runtime.

    We will use it to link our program’s files on our local machine to a directory inside the container which will allow us to use VS Code ( or whatever ) on our local machine for development, but still run the program inside the container.

    Making sure we are inside our project directory on our local machine, run the following:

    $ docker run -it --rm -v "$PWD":/home/myapp php bash

    NOTE: “$PWD” evaluates to the current directory.
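
    If “$PWD” is new to you: it’s a variable your shell keeps up to date with the absolute path of your current working directory, so the flag above mounts whatever directory you run the command from. You can inspect it in any terminal:

    ```shell
    # "$PWD" always holds the shell's current working directory
    echo "$PWD"

    # It matches the output of the pwd command
    pwd
    ```
    
    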

    You should now be able to navigate to /home/myapp inside the container and see the project files from your current directory! In our case, we can run these files with php index.php (or whatever you named the file) inside the container.

    We can also make changes on our local machine with VS Code ( or whatever ) and see those changes in real time inside our container. If we add new content to our program, or even new files, they will appear instantly inside the container, ready to be executed.

    Getting Online

    We’ve made some great progress, but although we can use Docker for random scripts, one of its main use cases is the web. Let’s dive into how to spin up a server with Docker.

    Let’s start with an nginx image. This time we will use the Alpine Linux version. We could get the same result by choosing an Alpine Linux image and manually installing nginx, but this is more convenient.

    Make sure you’re in your project directory and create an index.html with some content in it. Then you can go ahead and run:

    $ docker run -v "$PWD":/usr/share/nginx/html -p 8080:80 nginx:alpine

    This will pull the Alpine-based nginx image and link the current directory to the folder the container serves from by default. We also needed to do one more thing: the -p 8080:80 flag tells Docker to map port 8080 on the local machine to port 80 of the container. Now we can head to localhost:8080 to see whatever you put in the index.html in your project!

    Creating our Own Images

    It’s all well and good to use an image off Docker Hub while we are developing, but we can also create our own images for when it’s time to publish our software.

    Let’s continue on with our previous work and make a simple web page that allows a user to check if their name is on the guest list.

    Change your index.html to an index.php and make this the contents:

    <!DOCTYPE html>
    <html>
    <head>
        <title>Guest List</title>
    </head>
    <body>
        <h1>Enter Your Name:</h1>
        <form action="" method="POST">
            <input type="text" name="name">
            <button type="submit">Check</button>
            <?php
            if ($_SERVER['REQUEST_METHOD'] === 'POST' && ! empty($_POST['name'])) {
                $name = $_POST['name'];
                if ( str_contains('bill, bob', $name) ) {
                    echo "<p>You are on the guest list!</p>";
                } else {
                    echo "<p>Go away.</p>";
                }
            }
            ?>
        </form>
    </body>
    </html>

    Since we already have Docker installed on our system, the next step is to create a file called Dockerfile in the same directory as our index.php. The Dockerfile is what Docker uses to build an image.

    # Start with the Official PHP + Apache Image from Docker Hub
    # All Dockerfiles start with the FROM statement
    FROM php:apache
    
    # Set the working directory inside the container
    WORKDIR /var/www/html
    
    # Copy the index.php file to the container ( note the . )
    COPY index.php .
    
    # Expose the port
    EXPOSE 80

    Alternatively:

    FROM php:apache
    COPY . /var/www/html
    WORKDIR /var/www/html
    EXPOSE 80

    Note: Again, instead of choosing the php:apache image we could have started with a basic Linux distro (like Ubuntu or Alpine) and installed PHP and Apache before running the rest of the commands. However, if there is an official image that supports your use case, it’s best practice to use it.

    Note 2: The syntax is COPY [Local Location] [Container Location], and ‘.’ means the current directory. In the first Dockerfile, ‘.’ is the container location (the working directory we set with WORKDIR); in the second, it’s the local location (our project directory). Either way, we are copying the files into the container.
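
    One refinement worth considering: since our app depends on str_contains() from PHP 8, we may want to pin a specific version in the FROM line rather than taking whatever php:apache currently points to. A sketch (check the php image’s Docker Hub page for the tags actually available):

    ```dockerfile
    # Pin a specific PHP version so the build is reproducible
    FROM php:8.2-apache

    WORKDIR /var/www/html
    COPY index.php .
    EXPOSE 80
    ```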

    That is all that is needed for this application! To create the image we can run:

    $ docker build -t guestlist-web .

    Again, the ‘.’ at the end of the statement is the current directory. The ‘-t’ flag allows us to tag the image with a name; in this example I chose ‘guestlist-web’. This will be used later when we want to create a container from the image.

    When the process has completed successfully we should be able to run:

    $ docker images

    And see our newly created image available on our system.

    We are now ready to run our application! We can run our app with:

    $ docker run -p 8080:80 guestlist-web

    Head to localhost:8080 and you should see the app we created earlier!

    NOTE: Every time we run this command a new container is created. We can run the following to see all the containers on our system:

    $ docker ps -a

    This will give a list of all the containers on our system including their names and IDs.

    To remove a container you no longer need, you can run:

    $ docker rm magical_hermann

    Obviously substituting ‘magical_hermann’ with the name of the container you’d like to delete. You can also use the container ID instead.

    My Toes Are Wet, Let’s Save Some Data.

    We are able to set up a web server now, but there is one problem… we can’t save the data anywhere. For that we need a database.

    We could of course take our previous container and install MySQL into it, but that’s not best practice in Docker.

    Since containers are so lightweight, it’s become best practice to separate services into different containers. This allows us to, for example, scale the services separately depending on demand, and it also has security benefits.

    Let’s get this going with another example, a more real-world one than the simple apps we have been showcasing so far.

    The first thing we are going to do is create a network so our containers can talk to each other.

    $ docker network create wp-network

    Next, we will start up a MySQL container.

    $ docker run --name mysql-service --network wp-network -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wp-admin -e MYSQL_PASSWORD=wp-pass -d mysql

    This sets up a MySQL container and preloads it with a root password, a database named ‘wordpress’, and a database username and password.

    Now we will create another container for WordPress.

    $ docker run -p 8080:80 --network wp-network -d wordpress

    If we head to localhost:8080 we should see the WordPress setup screen. If we enter the details given to the MySQL container we will have a working WordPress install, just like that.

    So now my data is saved forever!

    Well, no. Only as long as the container exists. If we want to persist the data after the container is removed we need to add some volumes.

    We used a Bind Mount volume previously when we wanted to be able to use our code editor outside the container to edit our PHP app inside the container. The syntax inside the run command for that was:

    -v /my/local/path:/usr/share/nginx/html (remembering that we used “$PWD” to automatically print our local path location)

    However, if we don’t want to manage the persistent files ourselves we can use another type of volume called a Named Volume instead.

    The syntax for a named volume is almost identical, but instead of a local path, we supply a unique name. Let’s remove our containers and create some new ones and supply some named volumes this time.

    $ docker run --name mysql-service --network wp-network -v wpdb:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wp-admin -e MYSQL_PASSWORD=wp-pass -d mysql

    which will create a named volume called ‘wpdb’ to store the database.

    $ docker run -p 8080:80 --network wp-network -v wpfiles:/var/www/html  -d wordpress

    which will create a named volume called ‘wpfiles’ to store the WordPress files (themes, plugins, etc.).

    We can view the volumes on our system with:

    $ docker volume ls

    Powerful! But… kind of annoying?

    Kind of, we need to:

    1. Create a network
    2. Create the MySQL container with environment variables and attach it to the network, and don’t forget to make a volume for the data so we don’t lose it
    3. Create the WordPress container, attach it to the same network, and give it a volume of its own

    If only there was a way to not have to manually run all of these commands…

    As you can probably guess from my subtle lead up here – there is.

    Docker Compose!

    With Docker Compose we can create a single file that allows us to bring an entire web app up with one command.

    Here is what it looks like for our WordPress example. We simply create a compose.yaml file and place it in the root of our project.

    services:
    
      app:
        image: wordpress
        ports:
          - 8080:80
        volumes:
          - wpfiles:/var/www/html
    
      database:
        image: mysql:8.0
        environment:
          MYSQL_DATABASE: wpdatabase
          MYSQL_USER: wp-admin
          MYSQL_PASSWORD: wp-pass
          MYSQL_ROOT_PASSWORD: root
        volumes:
          - wpdb:/var/lib/mysql
    
    volumes:
      wpfiles:
      wpdb:

    Each individual service inside the compose.yaml is equivalent to a docker run command. Any named volumes the services use must also be declared under the top-level volumes key.

    One thing we don’t have to do is create a network. Docker Compose assumes every service in the file should be connected to the same network and creates one for us. Neat!
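
    Because each service name in the file doubles as a hostname on that shared network, we can also pre-fill the database connection details so the WordPress setup screen never asks for them. Here’s a sketch of the app service extended with the official wordpress image’s environment variables (values matching our compose.yaml above):

    ```yaml
    services:

      app:
        image: wordpress
        ports:
          - 8080:80
        volumes:
          - wpfiles:/var/www/html
        environment:
          # 'database' resolves to the MySQL service over the Compose network
          WORDPRESS_DB_HOST: database
          WORDPRESS_DB_NAME: wpdatabase
          WORDPRESS_DB_USER: wp-admin
          WORDPRESS_DB_PASSWORD: wp-pass
    ```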

    Now from the command line we can start up our entire stack with one command:

    $ docker compose up -d

    You should now be up and running!