Docker-based Multiple MySQL

Preparing a MySQL multiple-server test environment using Docker images.

In a previous article, I showed how I build a Docker-based MySQL Test Environment when testing out various MySQL features. In this article, I will expand on this and prepare three separate MySQL instances, each with its own persistent data directory that will survive a full Docker image re-build, and connect them together in a standard MySQL replication topology of one primary and two read-only replicas.

Folder structure

Assuming you already have Docker installed on your machine (install Docker first if you have not yet done this), create the folder structure shown below.

$ tree -d test
test
└── data
    ├── mysql1
    ├── mysql2
    └── mysql3

All the files we create for this test environment will go inside the test folder, and all the MySQL data files for the test instances will be stored within the data/mysqlx folders.

Docker parameter file

Create a file in the test folder called docker-compose.yml containing the following content. This file defines three separate Docker containers based on the same starting MySQL image (mysql:5.7). When the containers are created, each will mount its own folder under data into the default MySQL data file location inside the container. They all use the same default password for the root user, and have sequential ports (33061 to 33063) mapped to allow access from the host machine.

version: '2'
services:

  # --------------------------------------------------------------
  mysql1:
    image: mysql:5.7
    container_name: mysql-test-1
    volumes:
      - ./data/mysql1:/var/lib/mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=secret"
    ports:
        - "33061:3306"

  # --------------------------------------------------------------
  mysql2:
    image: mysql:5.7
    container_name: mysql-test-2
    volumes:
      - ./data/mysql2:/var/lib/mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=secret"
    ports:
        - "33062:3306"

  # --------------------------------------------------------------
  mysql3:
    image: mysql:5.7
    container_name: mysql-test-3
    volumes:
      - ./data/mysql3:/var/lib/mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=secret"
    ports:
        - "33063:3306"

Build and run

From within the test folder, you can now fire up this environment by downloading the MySQL 5.7 Docker image (if required) and starting up the three containers in background ‘daemon’ mode.

$ docker-compose up -d
Creating network "test_default" with the default driver
Pulling mysql1 (mysql:5.7)…
5.7: Pulling from library/mysql
d599a449871e: Pull complete
f287049d3170: Pull complete
08947732a1b0: Pull complete
96f3056887f2: Pull complete
871f7f65f017: Pull complete
1dd50c4b99cb: Pull complete
5bcbdf508448: Pull complete
02a97db830bd: Pull complete
c09912a99bce: Pull complete
08a981fc6a89: Pull complete
818a84239152: Pull complete
Digest: sha256:5779c71a4730da36f013a23a437b5831198e68e634575f487d37a0639470e3a8
Status: Downloaded newer image for mysql:5.7
Creating mysql-test-1 … done
Creating mysql-test-3 … done
Creating mysql-test-2 … done

This will automatically start three instances of MySQL, which will, in turn, create all the necessary folders and files required in the data folders to allow these instances to run.

$ tree -d data
data
├── mysql1
│   ├── mysql
│   ├── performance_schema
│   └── sys
├── mysql2
│   ├── mysql
│   ├── performance_schema
│   └── sys
└── mysql3
    ├── mysql
    ├── performance_schema
    └── sys

You can also confirm the running MySQL containers with the following command, showing the base image used and what ports are mapped to each of the containers.

$ docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                NAMES
9966a7c53eca        mysql:5.7           "docker-entrypoint.s…"   11 hours ago        Up 11 hours         33060/tcp, 0.0.0.0:33061->3306/tcp   mysql-test-1
ed2625f793ed        mysql:5.7           "docker-entrypoint.s…"   11 hours ago        Up 11 hours         33060/tcp, 0.0.0.0:33063->3306/tcp   mysql-test-3
b6d37464375e        mysql:5.7           "docker-entrypoint.s…"   11 hours ago        Up 11 hours         33060/tcp, 0.0.0.0:33062->3306/tcp   mysql-test-2

Testing

You can see that each of the containers has the port 33060/tcp open, and an additional port has been opened on each container that is mapped to the regular MySQL port 3306. This allows us to connect to each of the MySQL instances using an address such as localhost:33061.

$ mysqlsh --sql -h localhost -P 33061 -u root -psecret
mysqlsh: [Warning] Using a password on the command line interface can be insecure.
Creating a session to 'root@localhost:33061'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 7
Server version: 5.7.28 MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
MySQL Shell 8.0.12

Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type '\help' or '\?' for help; '\quit' to exit.


 MySQL  localhost:33061 ssl  SQL > show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.0174 sec)

I wouldn’t normally supply the password on the command line using -p{password} as it would be stored in the shell history and visible on the system process list, but as this is just a test environment, I’m happy to do so.

Stop and clean-up

When you are happy that everything is working as expected, stop the Docker containers by running the following command. This helps to free up resources on your host machine – more than once I’ve realised I still had a number of Docker containers running that I’d forgotten about 🙂

$ docker-compose down
Stopping mysql-test-1 ... done
Stopping mysql-test-3 ... done
Stopping mysql-test-2 ... done
Removing mysql-test-1 ... done
Removing mysql-test-3 ... done
Removing mysql-test-2 ... done
Removing network test_default

The next post in this series will add regular replication configuration between the hosts, and updates to the Docker compose file to supply tailored MySQL configuration files to each of the containers so that they will work as intended.

Extending an Existing Hard Drive

This article is targeted at virtual servers, where the capacity of a drive can be extended without having to replace the physical hardware. If you have a physical hard drive where not all the space has been allocated, these instructions could still be of use – but that scenario is not likely.

Identify additional drive capacity

After using your selected hypervisor tool (VMware, AWS Console, etc) to extend the selected drive, you will need to re-scan the drives to allow the operating system to recognise the additional storage capacity.

# ls -1 /sys/class/scsi_device/*/device/rescan
/sys/class/scsi_device/1:0:0:0/device/rescan
/sys/class/scsi_device/2:0:0:0/device/rescan
/sys/class/scsi_device/3:0:0:0/device/rescan
# for i in `ls /sys/class/scsi_device/*/device/rescan`; do echo 1 > ${i}; done

The additional storage capacity should now be available through the lsblk command. In this example, I extended the volume from 10GB to 20GB.

# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 89.1M  1 loop /snap/core/8268
loop1                       7:1    0 88.5M  1 loop /snap/core/7270
sda                         8:0    0   10G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0    9G  0 part
  └─ubuntu--vg-ubuntu--lv 253:1    0    4G  0 lvm  /
sdb                         8:16   0   20G  0 disk
└─data--vg-data           253:0    0   10G  0 lvm  /data
sr0                        11:0    1 1024M  0 rom

Utilising the new capacity

Now that the operating system has recognised the additional capacity, the filesystem needs to be resized to take up the new storage.

# pvresize /dev/sdb
  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

After the physical volume has been increased, the logical volume can be extended. Make sure to enter the correct volume name to be extended at this time.

# lvextend -l +100%FREE /dev/data-vg/data
  Size of logical volume data-vg/data changed from <10.00 GiB (2559 extents) to <20.00 GiB (5119 extents).
  Logical volume data-vg/data successfully resized.
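As a quick sanity check, those extent counts line up with LVM’s default physical extent size of 4MiB (assuming the volume group was created with the default extent size):

```shell
#!/bin/bash
# LVM's default physical extent size is 4MiB, so the extent counts
# reported by lvextend translate to just under 10GiB and 20GiB
old_mib=$((2559 * 4))   # extents before the resize
new_mib=$((5119 * 4))   # extents after the resize
echo "${old_mib} MiB -> ${new_mib} MiB"
```

Since 10GiB is 10240MiB, 10236MiB is reported as "<10.00 GiB" in the output above.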

The logical volume has now been resized, but the operating system still reports the filesystem at its original size. To extend the filesystem, run the following command.

# resize2fs /dev/data-vg/data
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/data-vg/data is mounted on /data; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 3
The filesystem on /dev/data-vg/data is now 5241856 (4k) blocks long.

And that’s it! The new storage is now available for the operating system to use.

In summary, the commands required are as follows, assuming you are extending drive sdb and logical volume /dev/data-vg/data

for i in `ls /sys/class/scsi_device/*/device/rescan`; do echo 1 > ${i}; done
pvresize /dev/sdb
lvextend -l +100%FREE /dev/data-vg/data
resize2fs /dev/data-vg/data

Adding a New Hard Drive

Adding a new hard drive to a Linux instance involves the following steps

  • Add physical / virtual drive to the instance
  • Scan for and identify new drive
  • Prepare new drive for use

Adding the new drive itself is outside the scope of this article, as it can be done in many different ways depending on the instance: either a new physical disk needs to be installed, or a virtual drive needs to be created using your virtual machine hypervisor GUI or cloud console and attached to the virtual machine.

Scanning for the new drive

After the new drive has been added to the instance, it needs to be presented to the operating system for configuration. There are two ways of doing this: either reboot the instance, or scan all of the drive buses for new devices.

# ls -1 /sys/class/scsi_host/*/scan
/sys/class/scsi_host/host0/scan
/sys/class/scsi_host/host1/scan
/sys/class/scsi_host/host2/scan
/sys/class/scsi_host/host3/scan
# for i in `ls /sys/class/scsi_host/*/scan`; do echo "- - -" > ${i}; done

Once the new drive has been recognised by the operating system, it should be visible in the lsblk output – in this example it is sdb, which we can see has no partitions defined. It’s worth capturing this output before adding the drive, so you have something to compare against.

# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 88.5M  1 loop /snap/core/7270
loop1                       7:1    0 89.1M  1 loop /snap/core/8268
sda                         8:0    0   10G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0    9G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0    4G  0 lvm  /
sdb                         8:16   0   10G  0 disk
sr0                        11:0    1 1024M  0 rom

Preparing the new drive for use

Now that the new drive has been presented to the operating system and is accessible, we need to make it available for use. I find that the best and most flexible way to use a new drive is to allocate all of it to the Logical Volume Manager (LVM). Do this by first creating the new physical volume.

# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

Once the physical volume has been recognised by the operating system, we can either create a new volume group as shown below, or add this new physical volume to an existing volume group.

# vgcreate data-vg /dev/sdb
  Volume group "data-vg" successfully created

The next step is to create a new logical volume, or extend an existing one. Note that you can only extend a logical volume if you have first extended the volume group it belongs to.

# lvcreate -l 100%FREE -n data data-vg
  Logical volume "data" created.

This will create a new logical volume called data inside the data-vg volume group. The details of this new logical volume can be displayed with the lvdisplay command.

# lvdisplay /dev/data-vg/data
  --- Logical volume ---
  LV Path                /dev/data-vg/data
  LV Name                data
  VG Name                data-vg
  LV UUID                Fv1q0Y-F1Bt-HLBY-sKWk-svvJ-W9WE-eMasO5
  LV Write Access        read/write
  LV Creation host, time abd-template, 2019-12-27 16:39:00 +0000
  LV Status              available
  # open                 0
  LV Size                <10.00 GiB
  Current LE             2559
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

The new logical volume can now be formatted with a filesystem; for most general applications, the ext4 filesystem is well suited.

# mkfs.ext4 /dev/data-vg/data
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 2620416 4k blocks and 655360 inodes
Filesystem UUID: 018b2220-6e10-4f46-9f06-233b68acf4f3
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Then mount the new volume into the filesystem. By adding an entry to the /etc/fstab file, we can ensure this volume is automatically mounted after a reboot.

# mkdir /data
# echo "/dev/data-vg/data   /data   ext4  defaults  0  0" >> /etc/fstab
# mount /data
# df /data
Filesystem                1K-blocks  Used Available Use% Mounted on
/dev/mapper/data--vg-data  10251540 36888   9674188   1% /data

The new drive is now available to the operating system under the /data path.

In summary, the commands required are as follows, assuming you are adding the drive sdb and creating the logical volume /dev/data-vg/data

for i in `ls /sys/class/scsi_host/*/scan`; do echo "- - -" > ${i}; done
pvcreate /dev/sdb
vgcreate data-vg /dev/sdb
lvcreate -l 100%FREE -n data data-vg
mkfs.ext4 /dev/data-vg/data
mkdir /data
echo "/dev/data-vg/data   /data   ext4  defaults  0  0" >> /etc/fstab
mount /data

Convert an IP Address to a Number

There are many occasions when it can be useful to take the IP address of a host and convert it into a decimal number. Create a new file called ip2dec on your system containing the following code.

#!/usr/bin/awk -f
BEGIN {
    ip = ARGV[1]
    split(ip, octets, ".")
    for (i = 1; i <= 4; i++) {
        dec += octets[i] * 256 ** (4 - i)
    }
    printf("%i\n", dec)
}

Then make the file executable by running

$ chmod +x ip2dec

This script will take a single IP address as an input parameter and will output the decimal equivalent as shown below

$ ./ip2dec 192.168.1.1
3232235777

The script will convert any IP address from 0.0.0.0 to 255.255.255.255 to its numeric equivalent, which will be within the range 0 to 4294967295.
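For comparison, the same conversion can be written in pure Bash using arithmetic bit shifts. The ip2dec function below is a hypothetical alternative to the awk script above, not part of it:

```shell
#!/bin/bash
# Shift each octet into its place: a.b.c.d -> (a<<24) + (b<<16) + (c<<8) + d
ip2dec() {
    local IFS=. a b c d
    read -r a b c d <<< "$1"
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

ip2dec 192.168.1.1   # prints 3232235777
```

A shift by 24 bits is the same as multiplying by 256 to the power 3, so this is exactly the calculation the awk script performs.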

It is also fairly straightforward to convert back from a number to an IP address. I tend to put these scripts into a folder on my system that is included in my PATH variable, so I can use them whenever needed.

Convert a Number to an IP Address

On occasion, it is necessary to convert from a numeric value to an IP address, particularly if that numeric value was created by converting from an IP address. In a previous article, I showed how to Convert an IP Address to a Number, here I will show you how to reverse that process.

Create a new file called dec2ip on your system containing the following code

#!/usr/bin/awk -f
BEGIN {
    dec = ARGV[1]
    for (e = 3; e >= 0; e--) {
        octet = int(dec / (256 ^ e))
        dec -= octet * 256 ^ e
        ip = ip delim octet
        delim = "."
    }
    printf("%s\n", ip)
}

Then make the file executable by running

$ chmod +x dec2ip

This script will take a single numeric value as an input parameter and will output the equivalent IP address as shown below

$ ./dec2ip 3232235777
192.168.1.1

The script will convert any numeric value from 0 to 4294967295 to its IP address equivalent from 0.0.0.0 to 255.255.255.255. Anything outside of that range will produce unexpected results.
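As with ip2dec, the reverse conversion can also be written in pure Bash. The dec2ip function below is a hypothetical alternative to the awk script above, extracting each octet with a shift and a bitmask:

```shell
#!/bin/bash
# Extract each octet by shifting it into the low byte and masking with 255
dec2ip() {
    local n=$1
    echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
}

dec2ip 3232235777   # prints 192.168.1.1
```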

I tend to put these scripts into a folder on my system that is included in my PATH variable, so I can use them whenever needed.

Docker-based MySQL Test Environment

Preparing a MySQL test environment using Docker images.

When preparing or changing a MySQL environment, it is essential to test many aspects of the build prior to rolling it out to a production system. For this, I use a Docker-based test environment which I spin up to perform the relevant testing.

In this example, I will prepare a MySQL instance, with its own persistent data directory which will survive a full Docker image re-build. This then becomes the foundation of future articles and examples.

Assuming you already have Docker installed on your machine (install Docker if you have not yet done this), create the folder structure shown below.

$ tree -d test
test
└── data
    └── mysql

All the files we create for this test environment will go inside the test folder, and all the MySQL data files for the test instance will be stored within the data/mysql folder.

Create a file in the test folder called docker-compose.yml containing the following content. This file defines a Docker container based on the MySQL image (mysql:5.7). When the container is created, it will mount the MySQL data folder located under data into the default MySQL data file location inside the container. The default password for the root user is provided within the compose file, which is then used by the MySQL setup scripts. Host port 33060 is mapped to the container's port 3306 to allow access from the host machine.

version: '2'
services:

  # --------------------------------------------------------------
  mysql:
    image: mysql:5.7
    container_name: mysql_test
    volumes:
      - ./data/mysql:/var/lib/mysql
    environment:
      - "MYSQL_ROOT_PASSWORD=secret"
    ports:
        - "33060:3306"

From within the test folder, you can now fire up this environment by downloading the MySQL 5.7 Docker image (if required) and starting up the container in background ‘daemon’ mode.

$ docker-compose up -d
Creating network "single_test_default" with the default driver
Pulling mysql (mysql:5.7)...
5.7: Pulling from library/mysql
d599a449871e: Pull complete
f287049d3170: Pull complete
08947732a1b0: Pull complete
96f3056887f2: Pull complete
871f7f65f017: Pull complete
1dd50c4b99cb: Pull complete
5bcbdf508448: Pull complete
02a97db830bd: Pull complete
c09912a99bce: Pull complete
08a981fc6a89: Pull complete
818a84239152: Pull complete
Digest: sha256:5779c71a4730da36f013a23a437b5831198e68e634575f487d37a0639470e3a8
Status: Downloaded newer image for mysql:5.7
Creating mysql_test ... done

This will automatically start the MySQL instance, which will, in turn, create all the necessary folders and files required in the data folder to allow the instance to run.

$ tree -d data
data
└── mysql
    ├── mysql
    ├── performance_schema
    └── sys

You can also confirm the running MySQL container with the following command, showing the base image used and what port is mapped to the container.

$ docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                NAMES
18fc7741b5ee        mysql:5.7           "docker-entrypoint.s…"   58 seconds ago      Up 57 seconds       33060/tcp, 0.0.0.0:33060->3306/tcp   mysql_test

You can see that the container has the port 33060/tcp open, and an additional port has been opened on the container that is mapped to the regular MySQL port 3306. This allows us to use MySQL Shell (mysqlsh) to connect to the MySQL instance using the address localhost:33060.

$ mysqlsh --sql -h localhost -P 33060 -u root -psecret
mysqlsh: [Warning] Using a password on the command line interface can be insecure.
Creating a session to 'root@localhost:33060'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 7
Server version: 5.7.28 MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
MySQL Shell 8.0.12

Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type '\help' or '\?' for help; '\quit' to exit.


 MySQL  localhost:33060 ssl  SQL > show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.0174 sec)

I wouldn’t normally supply the password on the command line using -p{password} as it would be stored in the shell history and visible on the system process list, but as this is just a test environment, I’m happy to do so.

When you are happy that everything is working as expected, stop the Docker container by running the following command. This helps to free up resources on your host machine – more than once I’ve realised I still had a number of Docker containers running that I’d forgotten about 🙂

$ docker-compose down
Stopping mysql_test ... done
Removing mysql_test ... done
Removing network single_test_default

To extend this to multiple instances to test replication, take a look at my article Docker-based Multiple MySQL. In this and future articles, I will be expanding this environment with additional replica hosts, management and monitoring tools.

Command Pipeline Errors

When chaining commands in Bash using the pipe | separator, the commands execute in sequence, with the output of the first command being used as the input to the second command.

When running a single Bash command, it is possible to detect if an error has occurred by examining the $? built-in Bash variable directly after the command has completed. A zero (0) return value means success, while a non-zero return value means that an error has occurred.

For example, trying to write output to a new file in the root of the file system.

$ echo "Create a file with one line in it" > /new-file.txt
-bash: /new-file.txt: Permission denied
$ echo $?
1

Whereas writing a file to your home path will succeed as expected.

$ echo "Create a file with one line in it" > ~/new-file.txt
$ echo $?
0
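In scripts, rather than echoing $? after the fact, the exit status can also be tested directly in an if statement. A minimal sketch (the file path is arbitrary):

```shell
#!/bin/bash
# The if statement runs the command and branches on its exit status,
# so there is no need to inspect $? separately
if echo "Create a file with one line in it" > /tmp/new-file.txt; then
    result="ok"
else
    result="failed"
fi
echo "$result"
```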

But what happens if an error occurs when executing a command within a piped list of commands? In this case, the default is for the last command to set the return value for the entire command chain.

$ cat /new-file.txt | wc
cat: /new-file.txt: No such file or directory
       0       0       0
$ echo $?
0

This doesn’t help very much if you are trying to detect whether anything has gone wrong within the combined command. The behaviour can be overridden by setting the pipefail Bash option.

$ set -o pipefail
$ cat /new-file.txt | wc
cat: /new-file.txt: No such file or directory
       0       0       0 
$ echo $?
1

This can be extremely useful if you regularly chain commands together, as the return value can be used to display a message to the user indicating an issue has occurred within the command chain.
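As an alternative to pipefail, Bash also records the exit status of every command in the most recent pipeline in the PIPESTATUS array. A short sketch – note that the array must be captured immediately, as the next command overwrites it:

```shell
#!/bin/bash
# First command fails (missing file), last command succeeds
cat /no-such-file.txt 2>/dev/null | wc -l
status=("${PIPESTATUS[@]}")   # capture immediately
echo "cat exited ${status[0]}, wc exited ${status[1]}"
```

This is useful when you need to know which command in the chain failed, rather than just whether any of them did.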

Command Pipelines

When using pipelines, the output from the first command is treated as the input to the second command, the output of the second command is treated as the input to the third command, and so on.

When writing bash scripts, I often connect a series of commands together into a pipeline, where the commands are separated by the pipe | character.

This can be useful in a number of situations, such as when you need to process the output of a command further before displaying or assigning to a variable.

For example, given a file containing a sequence of numbers

$ cat numbers.txt
2250
2262
1
1
1
15379
15379
1
16112
16121

We can find the most frequently occurring numbers in the file as follows

$ sort -n numbers.txt | \
        uniq -c | \
        sort -rn | \
        head
  4 1
  2 15379
  1 2262
  1 2250
  1 16121
  1 16112

We first sort the contents of the file, using -n to sort numerically. That output is piped into the uniq command with the -c option, which counts the occurrences of each unique value. We then sort again, this time with -rn for reverse numeric order, and finally take the first 10 entries in the output (10 is the default number of lines that head will return).
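Pipelines also combine naturally with command substitution when the result needs to end up in a variable, as mentioned above. A minimal sketch, using a hypothetical sample file so it is self-contained:

```shell
#!/bin/bash
# Build a small sample file, then count its distinct values via a pipeline;
# the $(( ... )) wrapper strips any padding that wc adds to its output
printf '%s\n' 5 3 5 1 3 3 > /tmp/numbers-sample.txt
distinct=$(( $(sort -n /tmp/numbers-sample.txt | uniq | wc -l) ))
echo "distinct values: ${distinct}"
```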

I’m finally getting started!

After spending over 25 years in the industry as a full-stack developer, Linux admin, MySQL DBA, Data Architect and Infrastructure Architect, I thought it was about time I started to write down some of my findings and experience over the years.

My current specialist areas are MySQL and Amazon Web Services (AWS). In the MySQL space, I work with all areas from installation and upgrade to performance tuning and High Availability configurations – in particular, backup and restore, automatic fail-over and disaster recovery.

I cover most areas of AWS, with a particular focus on automated infrastructure builds using Terraform and Ansible of the full stack from CloudFront through Elastic Load Balancing to EC2 instances, Redshift, DynamoDB, S3 and RDS databases – focusing here on MySQL and Aurora.

I intend to write a number of articles on the areas I have worked on, highlighting some of the challenges and how to overcome them. Hopefully, this will be of some use to anyone who stumbles across this site in the future!