How to Improve Performance in Bash Scripts

For a lot of people, shell scripts are a quick-and-dirty way to record a few commands and run them as a convenience. But shells like Bash, Zsh, and the Korn shell are actually fairly high-level programming languages, and they can be used to do significant processing.

Back in the day, there was the Bourne shell (sh) and the C shell (csh). These were the standards across platforms, and many scripts, even to this day, are written to run in one of those shells. But Linux, with Bash as its default shell, has changed this, and the de facto standard is now Bash, which is a much more powerful shell language than its predecessor, sh. This article talks about Bash, but the concepts are the same for other shells like Zsh and the Korn shell.

One of my past jobs was working with a product that was written partially in C, with a significant portion of the production code written in Korn shell! At first, I was quite surprised, and a little amused. Really? Using a shell language? Did they not have any 'real' developers to write their software? But the truth was, they were real developers, and they were doing things with shell scripts that I didn't even know were possible! The shell, along with all of the standard Unix utilities like 'sed', 'awk', 'cat', 'cut', and 'grep', allowed them to create some very powerful functionality.

But there was one major drawback to their approach, one that wasn't recognized because the software was not designed for performance (which is probably why they used so much shell programming to begin with). The problem was that the performance of their scripts was being dragged down by the heavy use of all of these external Unix utilities.

When writing shell scripts that have the potential to do a lot of work, not many people think about the performance impact of calling external Unix utilities, piping output to them, or building a series of piped commands. I think a lot of people will be surprised at how expensive it is.

The following is done in the traditional way of piping output to Unix utilities in order to do processing. This is a simple example that counts the audio devices on the system.

pipes.sh

c=0
for f in /dev/* ; do
    # runs ls, a pipe, and cut for every single file
    group=$(ls -l "$f" | cut -d ' ' -f 4)
    if [[ $group == audio ]] ; then
        ((c+=1))
    fi
done
echo "There are $c audio devices"

This produced the following output:

$ time ./pipes.sh
There are 7 audio devices
 
real 0m7.307s
user 0m2.736s
sys 0m4.364s


So, about 7 seconds. This doesn’t really seem all that bad, but then again, I have no frame of reference, nothing to compare it to. But when I implement the exact same functionality using only a single invocation of an external command and one pipe to go with it, doing everything else with the Bash scripting language, I get a very different result:

nopipes.sh

c=0
# one ls and one pipe for all of the files
ls -l /dev | {
    while read -r -a line ; do
        # the fourth field of ls -l output is the group name
        group=${line[3]}
        if [[ $group == audio ]] ; then
            ((c+=1))
        fi
    done
    echo "There are $c audio devices"
}
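
A quick note on the braces: each side of a pipe runs in its own subshell, so the echo has to stay inside the braces to see the $c that the loop incremented. If you would rather skip the pipeline subshell entirely, Bash's process substitution is another option. Here is a minimal sketch of the same counter written that way:

c=0
# process substitution: the while loop runs in the current shell,
# so $c is still visible after the loop finishes
while read -r -a line ; do
    if [[ ${line[3]} == audio ]] ; then
        ((c+=1))
    fi
done < <(ls -l /dev)
echo "There are $c audio devices"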

The nopipes.sh version was 55 times faster! And it makes sense, if you think about it: forking processes, loading executables, opening files, and creating pipes takes far longer than simply referring to locations in the current process's memory space. In this case, piping standard output through external utilities for every file made the script take roughly 55 times as long.
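
If you want to see the process-creation cost in isolation, you can time a loop that calls an external program against one that does the same job with a parameter expansion. A rough sketch (the path and iteration count here are made up for illustration; your numbers will vary):

path=/usr/local/bin/example

# 1,000 fork/exec calls to the external basename program
time for ((i=0; i<1000; i++)); do
    name=$(basename "$path")
done

# 1,000 parameter expansions, all in the current process
time for ((i=0; i<1000; i++)); do
    name=${path##*/}
done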

So, the moral of the story is: do as much work in the current process as possible. It can drastically improve the performance of your scripts.
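
To make that concrete, here are a few common substitutions (the sample values are made up for illustration). Bash parameter expansion and [[ ]] tests can replace many of the small cut, sed, and grep invocations that tend to creep into loops:

line="audio:x:63:pulse"

# instead of: echo "$line" | cut -d ':' -f 1
field=${line%%:*}      # -> audio

# instead of: echo "$f" | sed 's/\.txt$//'
f=report.txt
base=${f%.txt}         # -> report

# instead of: echo "$line" | grep -q '^audio' && ...
if [[ $line == audio* ]] ; then
    echo "starts with audio"
fi

Each of these stays inside the running shell: no fork, no exec, and no pipe setup.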
