Extending an Existing Hard Drive

This article is aimed at virtual servers, where the capacity of a drive can be extended without replacing the physical hardware. If you have a physical hard drive where not all of the space has been allocated, these instructions could still be of use – but that scenario is unlikely.

Identify additional drive capacity

After using your chosen hypervisor tool (VMware, AWS Console, etc.) to extend the selected drive, you will need to re-scan the SCSI devices so that the operating system recognises the additional storage capacity.

# ls -1 /sys/class/scsi_device/*/device/rescan
/sys/class/scsi_device/1:0:0:0/device/rescan
/sys/class/scsi_device/2:0:0:0/device/rescan
/sys/class/scsi_device/3:0:0:0/device/rescan
# for i in `ls /sys/class/scsi_device/*/device/rescan`; do echo 1 > ${i}; done

The additional storage capacity should now be visible in the output of the lsblk command – in this example I extended the volume from 10 GB to 20 GB.

# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 89.1M  1 loop /snap/core/8268
loop1                       7:1    0 88.5M  1 loop /snap/core/7270
sda                         8:0    0   10G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0    9G  0 part
  └─ubuntu--vg-ubuntu--lv 253:1    0    4G  0 lvm  /
sdb                         8:16   0   20G  0 disk
└─data--vg-data           253:0    0   10G  0 lvm  /data
sr0                        11:0    1 1024M  0 rom

Utilising the new capacity

Now that the operating system has recognised the additional capacity, the filesystem needs to be resized to take up the new storage.

# pvresize /dev/sdb
  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

After the physical volume has been increased, the logical volume can be extended – make sure you specify the correct logical volume name here.

# lvextend -l +100%free /dev/data-vg/data
  Size of logical volume data-vg/data changed from <10.00 GiB (2559 extents) to <20.00 GiB (5119 extents).
  Logical volume data-vg/data successfully resized.

So the logical volume has been resized, but the filesystem is still reported at its original size. To extend the filesystem, run the following command (resize2fs is for ext2/ext3/ext4 filesystems; XFS, for example, uses xfs_growfs instead)

# resize2fs /dev/data-vg/data
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/data-vg/data is mounted on /data; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 3
The filesystem on /dev/data-vg/data is now 5241856 (4k) blocks long.

And that’s it! The new storage is now available for the operating system to use.

In summary, the commands required are as follows, assuming you are extending drive sdb and logical volume /dev/data-vg/data

for i in `ls /sys/class/scsi_device/*/device/rescan`; do echo 1 > ${i}; done
pvresize /dev/sdb
lvextend -l +100%free /dev/data-vg/data
resize2fs /dev/data-vg/data

Adding a New Hard Drive

Adding a new hard drive to a Linux instance involves the following steps

  • Add physical / virtual drive to the instance
  • Scan for and identify new drive
  • Prepare new drive for use

Adding the drive itself is outside the scope of this article, as it can be done in many different ways depending on the instance – either a new physical disk is installed, or a virtual drive is created and attached to the virtual machine using your hypervisor GUI or cloud console.

Scanning for the new drive

After the new drive has been added to the instance, it needs to be presented to the operating system for configuration. There are two ways of doing this: either reboot the instance, or scan all of the SCSI buses for new devices.

# ls -1 /sys/class/scsi_host/*/scan
/sys/class/scsi_host/host0/scan
/sys/class/scsi_host/host1/scan
/sys/class/scsi_host/host2/scan
/sys/class/scsi_host/host3/scan
# for i in `ls /sys/class/scsi_host/*/scan`; do echo "- - -" > ${i}; done

Once the new drive has been recognised by the operating system, it should be visible in the lsblk output – in this example it is sdb, which we can see has no partitions defined. It is worth capturing this output before adding the drive, so you have a baseline for comparison.

# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 88.5M  1 loop /snap/core/7270
loop1                       7:1    0 89.1M  1 loop /snap/core/8268
sda                         8:0    0   10G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0    9G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0    4G  0 lvm  /
sdb                         8:16   0   10G  0 disk
sr0                        11:0    1 1024M  0 rom

Preparing the new drive for use

Now that the new drive has been presented to the operating system and is accessible, we need to make it available for use. I find that the best and most flexible way to use a new drive is to allocate all of it to the Logical Volume Manager (LVM). Start by creating a new physical volume

# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

Once the physical volume has been recognised by the operating system, we can either create a new volume group as shown below, or add this new physical volume to an existing volume group.

# vgcreate data-vg /dev/sdb
  Volume group "data-vg" successfully created

The next step is to create a new logical volume or extend an existing logical volume. You can only extend a logical volume if you have extended the volume group it belongs to.

# lvcreate -l 100%FREE -n data data-vg
  Logical volume "data" created.

This creates a new logical volume called data inside the data-vg volume group. The details of the new logical volume can be displayed with the lvdisplay command

# lvdisplay /dev/data-vg/data
  --- Logical volume ---
  LV Path                /dev/data-vg/data
  LV Name                data
  VG Name                data-vg
  LV UUID                Fv1q0Y-F1Bt-HLBY-sKWk-svvJ-W9WE-eMasO5
  LV Write Access        read/write
  LV Creation host, time abd-template, 2019-12-27 16:39:00 +0000
  LV Status              available
  # open                 0
  LV Size                <10.00 GiB
  Current LE             2559
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

The new logical volume can now be formatted with a filesystem; for most general applications, ext4 is well suited

# mkfs.ext4 /dev/data-vg/data
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 2620416 4k blocks and 655360 inodes
Filesystem UUID: 018b2220-6e10-4f46-9f06-233b68acf4f3
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Then mount the new volume to the filesystem – by adding an entry to the /etc/fstab file, we can ensure this volume is automatically mounted after a reboot

# mkdir /data
# echo "/dev/data-vg/data   /data   ext4  defaults  0  0" >> /etc/fstab
# mount /data
# df /data
Filesystem                1K-blocks  Used Available Use% Mounted on
/dev/mapper/data--vg-data  10251540 36888   9674188   1% /data

The new drive is now available to the operating system under the /data path.

In summary, the commands required are as follows, assuming you are adding the drive sdb and creating the logical volume /dev/data-vg/data

for i in `ls /sys/class/scsi_host/*/scan`; do echo "- - -" > ${i}; done
pvcreate /dev/sdb
vgcreate data-vg /dev/sdb
lvcreate -l 100%FREE -n data data-vg
mkfs.ext4 /dev/data-vg/data
mkdir /data
echo "/dev/data-vg/data   /data   ext4  defaults  0  0" >> /etc/fstab
mount /data

Convert an IP Address to a Number

There are many occasions when it can be useful to take the IP address of a host and convert that into a decimal number. Create a new file called ip2dec on your system containing the following code

#!/usr/bin/awk -f
BEGIN {
    ip = ARGV[1]
    split(ip, octets, ".")
    for (i = 1; i <= 4; i++) {
        dec += octets[i] * 256 ^ (4 - i)
    }
    printf("%i\n", dec)
}

Then make the file executable by running

$ chmod +x ip2dec

This script will take a single IP address as an input parameter and will output the decimal equivalent as shown below

$ ./ip2dec 192.168.1.1
3232235777

The script will convert any IP address from 0.0.0.0 to 255.255.255.255 to its numeric equivalent, which will be within the range 0 to 4294967295.
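For comparison, the same conversion can be sketched as a pure-Bash function using bit shifts – a hypothetical alternative if you would rather avoid the awk dependency:

```shell
# Hypothetical pure-Bash equivalent of ip2dec, using arithmetic bit shifts
ip2dec() {
    local IFS=.
    set -- $1                  # split the address into its four octets
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

ip2dec 192.168.1.1             # prints 3232235777
```

Shifting each octet into place is equivalent to multiplying by the powers of 256 used in the awk script.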

It is also fairly straightforward to convert back from a Number to an IP Address. I tend to put these scripts into a folder on my system that is included in my PATH variable, so I can use them whenever needed.

Convert a Number to an IP Address

On occasion, it is necessary to convert from a numeric value to an IP address, particularly if that numeric value was created by converting from an IP address in the first place. In a previous article, I showed how to Convert an IP Address to a Number; here I will show you how to reverse that process.

Create a new file called dec2ip on your system containing the following code

#!/usr/bin/awk -f
BEGIN {
    dec = ARGV[1]
    for (e = 3; e >= 0; e--) {
        octet = int(dec / (256 ^ e))
        dec -= octet * 256 ^ e
        ip = ip delim octet
        delim = "."
    }
    printf("%s\n", ip)
}

Then make the file executable by running

$ chmod +x dec2ip

This script will take a single numeric value as an input parameter and will output the equivalent IP address as shown below

$ ./dec2ip 3232235777
192.168.1.1

The script will convert any numeric value from 0 to 4294967295 to its IP address equivalent from 0.0.0.0 to 255.255.255.255. Anything outside of that range will produce unexpected results.
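As with ip2dec, the reverse conversion can be sketched as a pure-Bash function, here using bit shifts and masks – again a hypothetical alternative to the awk script:

```shell
# Hypothetical pure-Bash equivalent of dec2ip, using bit shifts and masks
dec2ip() {
    local n=$1
    # extract each octet by shifting it down and masking off the low 8 bits
    echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
}

dec2ip 3232235777              # prints 192.168.1.1
```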

I tend to put these scripts into a folder on my system that is included in my PATH variable, so I can use them whenever needed.

Command Pipeline Errors

When chaining commands in Bash using the pipe | separator, the commands run concurrently, with the output of the first command used as the input to the second command.

When running a single Bash command, it is possible to detect whether an error has occurred by examining the $? built-in Bash variable directly after the command has completed. A return value of zero (0) means success; a non-zero return value means that an error has occurred.

For example, trying to write output to a new file in the root of the file system.

$ echo "Create a file with one line in it" > /new-file.txt
-bash: /new-file.txt: Permission denied
$ echo $?
1

Whereas writing a file to your home path will succeed as expected.

$ echo "Create a file with one line in it" > ~/new-file.txt
$ echo $?
0
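Rather than inspecting $? after the fact, the exit status can also be tested directly in an if statement – a minimal sketch of the same write, branching on success (the /tmp path here is just an example):

```shell
# Branch on a command's exit status directly, instead of checking $? afterwards
if echo "Create a file with one line in it" > /tmp/new-file.txt; then
    echo "write succeeded"
else
    echo "write failed"
fi
```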

But what happens if an error occurs when executing a command within a piped list of commands? By default, the last command in the pipeline sets the return value for the entire command chain.

$ cat /new-file.txt | wc
cat: /new-file.txt: No such file or directory
       0       0       0
$ echo $?
0

This doesn’t help very much if you are trying to detect whether anything has gone wrong within the combined command. The behaviour can be changed by setting the pipefail Bash option.

$ set -o pipefail
$ cat /new-file.txt | wc
cat: /new-file.txt: No such file or directory
       0       0       0 
$ echo $?
1

This can be extremely useful if you regularly chain commands together, as the return value can be used to display a message to the user indicating an issue has occurred within the command chain.
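For use inside a script, a minimal sketch might look like the following (the path is deliberately a file that should not exist). Bash also records the individual exit status of every command in the most recent pipeline in the PIPESTATUS array:

```shell
#!/bin/bash
# Sketch: catching a failure anywhere in a pipeline with pipefail
set -o pipefail

if cat /definitely-missing-file.txt 2>/dev/null | wc -l > /dev/null; then
    echo "pipeline succeeded"
else
    # PIPESTATUS holds the exit status of each command in the pipeline above
    echo "pipeline failed, statuses: ${PIPESTATUS[@]}"
fi
```

Here cat fails with status 1 while wc succeeds with status 0, so without pipefail the if condition would have succeeded.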

Command Pipelines

When writing bash scripts, I often connect a series of commands together into a pipeline, where the commands are separated by the pipe | character.

When using pipelines, the output from the first command is treated as the input to the second command, the output of the second command is treated as the input to the third command, and so on.

This can be useful in a number of situations, such as when you need to process the output of a command further before displaying or assigning to a variable.

For example, given a file containing a sequence of numbers

$ cat numbers.txt
2250
2262
1
1
1
15379
15379
1
16112
16121

We can find the most frequently occurring numbers in the file as follows

$ sort -n numbers.txt | \
        uniq -c | \
        sort -rn | \
        head
141 2
 69 1685
 59 1
 53 2950
 11 1902
  4 2870
  4 2132
  3 9151
  3 4345
  3 1796

Here we first sort the contents of the file, using -n to sort numerically. That output is piped into uniq -c, which counts runs of identical values (which is why the input must be sorted first). We then sort again, this time with -rn for reverse numeric order, and finally take the first 10 entries (10 is the default number of lines that head returns).
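The same pipeline can also feed a variable assignment, as mentioned earlier. A minimal sketch, recreating the sample file first (the file name and values match the listing above):

```shell
# Recreate the sample numbers.txt from above, then capture the most
# frequently occurring value in a variable via a pipeline
printf '2250\n2262\n1\n1\n1\n15379\n15379\n1\n16112\n16121\n' > numbers.txt

most_frequent=$(sort -n numbers.txt | uniq -c | sort -rn | head -1 | awk '{print $2}')
echo "Most frequent number: ${most_frequent}"    # prints: Most frequent number: 1
```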

I’m finally getting started!

After spending over 25 years in the industry as a full-stack developer, Linux admin, MySQL DBA, Data Architect and Infrastructure Architect, I thought it was about time I started to write down some of my findings and experience over the years.

My current specialist areas are MySQL and Amazon Web Services (AWS). In the MySQL space, I work with all areas from installation and upgrade to performance tuning and High Availability configurations – in particular, backup and restore, automatic fail-over and disaster recovery.

I cover most areas of AWS, with a particular focus on automated, full-stack infrastructure builds using Terraform and Ansible – from CloudFront through Elastic Load Balancing to EC2 instances, Redshift, DynamoDB, S3 and RDS databases, focusing on MySQL and Aurora.

I intend to write a number of articles on the areas I have worked on, highlighting some of the challenges and how to overcome them. Hopefully, this will be of some use to anyone who stumbles across this site in the future!