Determining Block Sizes in CentOS 5 & 6

Been a looooong time, but I ran into this at work today and figured it was worth continuing to build up my tips and tools library/blog with some block-size info.

To the best of my knowledge, lsblk is only available on CentOS 6 and later, so on CentOS 5 we'd want to run the following:

[root@deadbeef ~]# tune2fs -l /dev/disk/block/fun | grep 'Block' | tr -s ' ' | cut -f3 -d' '


So the first number is the block count and the second is the block size. Multiply them together to get your answer in bytes. The stray word "group:" is just an artifact of the filtering (the "Blocks per group" line also matches the grep) and can be ignored.
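If you want the multiplication done for you, here's a sketch. The sample values below are made up, not from a real device; in practice you'd feed it actual `tune2fs -l` output for your own block device.

```shell
# Parse "Block count" and "Block size" out of tune2fs -l style output and
# multiply them for the filesystem size in bytes. The sample text is
# hard-coded here; on a real system, replace it with: tune2fs -l /dev/yourdev
sample='Block count:              262144
Block size:               4096'
blocks=$(printf '%s\n' "$sample" | awk -F: '/^Block count/ {gsub(/ /, "", $2); print $2}')
bsize=$(printf '%s\n' "$sample" | awk -F: '/^Block size/ {gsub(/ /, "", $2); print $2}')
echo $((blocks * bsize))
```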

In CentOS 6, we can leverage lsblk:

lsblk -o NAME,PHY-SEC

NAME   PHY-SEC
sda        512
├─sda1     512
├─sda2     512
└─sda5     512


mkinitrd syntax error regarding conditional expression(s)

So, working with an older CentOS 5.1 system today, I ran into a problem that I believed to be mkinitrd-related when attempting to install kernel-PAE on a VM guest I had upgraded to 4 GB of vRAM.

I was seeing this upon yum installing kernel-PAE:

/sbin/mkinitrd: line 489: syntax error in conditional expression: unexpected token `('
/sbin/mkinitrd: line 489: syntax error near `^(d'
/sbin/mkinitrd: line 489: `        if [[ "$device" =~ ^(dm-|mapper/|mpath/) ]]; then'

So it turns out that bash was the problem here, not mkinitrd. I never thought mkinitrd and bash would be so intertwined, but it makes perfect sense if you think about it. I just never had to think about it 'til today.
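For anyone hitting the same error on a box they can't update, the usual portable workaround (my own suggestion here, not part of the original fix) is to store the regex in a variable, since it's the bare `(` in the `[[ ... =~ ... ]]` pattern that old bash chokes on at parse time:

```shell
# The failing test from mkinitrd, rewritten portably: keeping the pattern
# in a variable sidesteps the parse error old bash throws on a literal '('.
device="dm-3"
pat='^(dm-|mapper/|mpath/)'
if [[ "$device" =~ $pat ]]; then
    echo "matches"
fi
```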

I guess that serves us right for not keeping up with CentOS releases. A big thanks to SmogMonkey for posting this find.

OSX & Linux Disk Benchmarking

As a SysAdmin, you'll need to benchmark disk performance from time to time, or rather just gloat that your system's drive is better than your buddy's. In my particular case, I need to quantify performance metrics for my work MacBook Pro's disk speed with Symantec PGP (don't get me started on this product), without it, and with Apple FileVault in place of it.

Perusing the web, I came across a good article that assisted me with my disk benchmark testing. This applies to OSX and most Linux systems as well.

To test a system’s write speed, I used the following command from a terminal window:

time dd if=/dev/zero bs=1024k of=speedtest count=1024

Output from my work iMac w/SSD:

1024+0 records in
1024+0 records out
1073741824 bytes transferred in 8.948791 secs (119987362 bytes/sec)

real 0m8.954s
user 0m0.001s
sys 0m0.413s

That bytes-per-second figure converts to roughly 114.43 MiB per second (114.42887, to be exact).

To test the read speed of my disk, I ran the following from a terminal window:

dd if=speedtest bs=1024k of=/dev/null count=1024


1024+0 records in
1024+0 records out
1073741824 bytes transferred in 0.145955 secs (7356659197 bytes/sec)

So this is around 7016 MiB (about 6.85 GiB) per second with some rounding. One caveat: since the speedtest file was just written, much of that read was almost certainly served from the OS page cache rather than the disk itself, so the real read speed is lower than this number suggests. Based on the above output, my work iMac w/ SSD can write at roughly 114 MiB/s, and read (from cache, at least) at about 6.85 GiB/s.

When I get around to testing on my work laptop, these commands should give me good data metrics.

As far as converting bytes to MB/GB and so on goes, you can use Google's conversion tools, or the "units" command on a Linux host if available. There are several alternatives you can use as well.
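When units isn't installed, plain awk covers the common case (1 MiB = 1024 * 1024 bytes):

```shell
# Convert the dd throughput figure from bytes/sec to MiB/sec.
awk 'BEGIN { printf "%.2f MiB/s\n", 119987362 / (1024 * 1024) }'
```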

Units Linux command sample for reference:

units --terse "119987362 bytes" "MiB"



Running logrotate Manually

So the other day at work, I needed to modify logrotate.conf for one of our groups.  To test my changes, I forced a logrotate run in verbose mode to ensure that my changes were applied properly.

The command I used:

logrotate -vf /etc/logrotate.conf

Since I'm predominantly administering CentOS, /etc/logrotate.conf is where my logrotate config file lives.  Debian and Ubuntu use the same main file, with per-service configuration typically dropped under /etc/logrotate.d/.
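For reference, a minimal per-service stanza (hypothetical log path and retention values, not from any real config of mine) looks something like this:

```
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```

Running `logrotate -d /etc/logrotate.conf` first is also handy: debug mode parses the config and reports what would rotate without touching any files.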

Clearing Memory Cache with Linux

Linux is usually pretty good at efficient memory management, notably at freeing up cached memory.  But at times when an application is abusing your system, Linux may hold on to cached memory it doesn't actually need, which can and eventually will rob your server of free memory.  A way to combat this is to run this simple command:

sync; echo 3 > /proc/sys/vm/drop_caches

(Writing 3 drops both the page cache and the dentry/inode caches; 1 and 2 drop them individually.)  If you need to do this on a scheduled basis, you can turn the above line into a script and create a cron job for it.  It's a bad sign if apps or system functions are hogging free memory they don't need, so it'd be better to investigate and troubleshoot that aspect of your system rather than blindly clearing the memory cache.
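If you do go the scheduled route, a cron entry (hypothetical schedule, purely as an example) might look like:

```
# /etc/cron.d/drop-caches -- hypothetical example: clear caches nightly at 02:00
0 2 * * * root /bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches
```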

Parallel remote "shelling" via pdsh

Ever have a multitude of hosts you need to run a command (or series of commands) on?  We all know that for-loop outputs are super fun to parse through when you need to do this, but why not do it better with a tool like pdsh?
A respected ex-colleague of mine made a great suggestion to start using pdsh instead of for-loops and other creative makeshift parallel shell processing.  The majority of my notes in this blog post are from him.  If he'll allow me to, I'll give him a shout-out and cite his Google+ profile for anyone interested.
Pdsh is a parallel remote shell client available from SourceForge.  If you are using the RPMforge CentOS repos you can pick it up there as well, but it may not be the most bleeding-edge package available.
Some quick tips on how to get started using pdsh:
  1. Set up your environment:
     export PDSH_SSH_ARGS_APPEND="-o ConnectTimeout=5 -o CheckHostIP=no -o StrictHostKeyChecking=no" (Add this to your .bashrc to save time.)
  2. Create your target list in a text file, one hostname per line (in the examples below, this file is called "host-list").
  3. It would probably be a good idea to use "tee" to capture output ("man tee" if you need more information on tee).
  4. Run a test first to make sure your pdsh command works the way you think it will before potentially doing anything destructive:
     • sudo pdsh -R ssh -w ^host-list "hostname" | tee output-test-1
  5. Change your test run to do what you really want it to after a successful test, e.g.:
     • sudo pdsh -R ssh -w ^host-list "/usr/bin/mycmd args" | tee output-mycmd-run-1
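For reference, the host-list file itself is nothing special: just one hostname per line (made-up names here):

```
web01.example.com
web02.example.com
db01.example.com
```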
Obviously, if you have Puppet Enterprise fully integrated within your environment, you can take advantage of powerful tools such as MCollective.  If you do not, pdsh is a great alternative.

Get Your grep-fu On

Some more sysadmin goodness:

Search for red OR green:

grep -E 'red|green' files

Search for searchtext at the beginning of a line in files:

grep '^searchtext' files

Search for searchtext at the end of a line in files:

grep 'searchtext$' files

Search files for blank lines:

grep '^$' files

Search files for US formatted phone numbers (###-###-####):

grep '[0-9][0-9][0-9]-[0-9][0-9][0-9]-[0-9][0-9][0-9][0-9]' files


Or, more concisely, using extended regular expressions:

grep -E '[0-9]{3}-[0-9]{3}-[0-9]{4}' files
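To sanity-check the pattern without creating files, pipe some sample text through it (the phone number here is made up):

```shell
# Only the line containing a ###-###-#### number should come back.
printf 'call 555-867-5309\nno phone here\n' | grep -E '[0-9]{3}-[0-9]{3}-[0-9]{4}'
```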

Search for e-commerce or ecommerce in files:

grep 'e-*commerce' files

Search for searchtext case-insensitively in files:

grep -i searchtext files

Chain two grep commands together for more advanced searches. Search for lines in files that contain both partial_name and function:

grep partial_name files | grep function

That one is great for searching source directories for a function definition when you can't remember the complete function name.