Today was a good day. We finally dropped off the last of the children at their chosen University. The best part of 3 hours in the car was followed by numerous trips from the car to the accommodation.
I can’t believe how much University accommodation has changed in the 25 or so years since I was there – his room has an en-suite wet room, a small PC connected to the University network, and to top it all off, a mini fridge stashed away in his cupboard! Not to mention the two fridges, two freezers, lockers and a big TV in the shared kitchen/communal area.
The campus facilities look good too: numerous places to eat and a decent sporting area. Most of the buildings look modern, and those that aren’t are well maintained.
Another three hours home in the car, followed by a curry for tea, set me up for an evening working on the side-project to fix a couple of bugs that had surfaced in the auto-scaling web nodes that run in my AWS environment.
When writing bash scripts, I often connect a series of commands together into a pipeline, where the commands are separated by the pipe character (|).
When using pipelines, the output from the first command is treated as the input to the second command, the output of the second command is treated as the input to the third command, and so on.
This can be useful in a number of situations, such as when you need to process the output of a command further before displaying it or assigning it to a variable.
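As a minimal sketch of the "assigning to a variable" case (the data and variable name here are made up purely for illustration), you can capture a pipeline's result with command substitution:

```bash
# Capture the output of a pipeline in a variable using command substitution.
# tr -d ' ' strips the leading padding that some wc implementations emit.
word_count=$(printf 'one two three' | wc -w | tr -d ' ')
echo "Words: $word_count"   # prints Words: 3
```

Everything between `$(` and `)` runs as a normal pipeline, and its standard output becomes the value of the variable.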
For example, given a file containing a sequence of numbers:

```bash
~$ cat numbers.txt
```
We can find which numbers occur most frequently in the file as follows:
```bash
~$ sort -n numbers.txt | \
     uniq -c | \
     sort -rn | \
     head
```
Here we first sort the contents of the file, using `-n` to sort them numerically, then pipe that output into the `uniq` command with the `-c` option to count the unique values, then sort again, this time with `-rn` for reverse numeric order, and finally take the first 10 entries in the output (10 is the default number of lines that `head` will return).
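As a quick sanity check, the same pipeline can be run against a small made-up data file (the values below are purely illustrative):

```bash
# Create a small sample file of numbers (illustrative data only)
printf '3\n1\n3\n2\n3\n1\n' > /tmp/numbers.txt

# Count occurrences of each value, most frequent first
sort -n /tmp/numbers.txt | uniq -c | sort -rn | head
# the value 3, seen three times, appears on the first line
```

Note that `uniq -c` only counts adjacent duplicate lines, which is why the initial `sort` is essential.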
I’ve only just started this blog, after spending over 20 years in the industry as a full-stack developer, Linux admin, MySQL DBA, Data Architect and Infrastructure Architect.
My current specialist areas are MySQL and Amazon Web Services (AWS).
In the MySQL space, I work with all areas from installation and upgrade, to performance tuning and High Availability configurations – in particular, backup and restore, automatic failover and disaster recovery.
I cover most areas of AWS, with a particular focus on automated infrastructure builds using Terraform and Ansible of the full stack from CloudFront through Elastic Load Balancing to EC2 instances, Redshift, DynamoDB, S3 and RDS databases – focusing here on MySQL and Aurora.
I intend to write a number of articles on the areas I have worked on, highlighting some of the challenges and how to overcome them. Hopefully, this will be of some use to anyone who stumbles across this site in the future!