Some of my scripts have been hanging around since 1996, and are just plain bad. It's interesting to look back on code you wrote 15 or 20 years ago, and wonder exactly what the hell you were thinking when you wrote that piece of... but I digress.
A great site to start with is David Pashley's 'Writing Robust Shell Scripts' article, http://www.davidpashley.com/articles/writing-robust-shell-scripts.html.
Here's my quick list of things to do to clean up scripts.
First, a meta-script thing. I assume nobody reading this is crazy enough to have '.' in their PATH. If you do, get rid of it now.
I use the full pathname with every single command in a script. This eliminates the possibility of someone messing around with your PATH. If I use a command a lot, I just define it as a variable once at the top. Here's the cron job I use to make sure disk space is OK on the servers:
#!/bin/bash
# check_disk_space
/bin/rm /tmp/disk_space 2>/dev/null
/bin/df -P >/tmp/disk_space
HOST=`/bin/hostname -s`
ECHO="/bin/echo -e"
CUT="/bin/cut"
MAIL="/bin/mail"
{
    # skip the header...
    read fs
    while read fs
    do
        blocks=`$ECHO $fs|$CUT -f2 -s -d" "`
        if [ $blocks != "-" ]; then
            avail=`$ECHO $fs|$CUT -f4 -s -d" "`
            let valu='avail * 100 / blocks'
            if [ $valu -gt 0 ]; then
                if [ $valu -lt 8 ]; then
                    $ECHO "CRITICAL $HOST disk space!"|$MAIL root pager
                fi
                if [ $valu -lt 13 ]; then
                    $ECHO "Check $HOST disk space!"|$MAIL root
                fi
            fi
        fi
    done
} </tmp/disk_space
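One subtlety worth calling out: df's columns are separated by runs of spaces, which would normally defeat cut's single-space delimiter. Echoing the line unquoted word-splits it first, collapsing the runs to single spaces. A quick sketch with a made-up df line:

```shell
# Made-up df -P output line, just for illustration
line="/dev/sda1     10000     4000      6000  40% /"
# Unquoted $line is word-split, so echo emits single spaces between the
# fields; cut can then pull the Available column with -d" "
echo $line | cut -f4 -s -d" "
# prints: 6000
```

The same trick is what makes the `$CUT -f2` and `-f4` extractions in the script line up with df's columns.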
I error check everything. My one disagreement with Mr. Pashley is over the 'set -e' and 'set -u' constructs - I avoid them and do explicit error checking instead. The advantage of the 'set' constructs is that they make it easy to bail out of a script without writing a whole lot of code. For instance, to use an example from his site:
chroot=$1
..
rm -rf $chroot/usr/share/doc
If you don't pass an argument to the script, you'll wipe out your documentation. If you use 'set -u' at the top of the script, it'll fail with
./scriptname: line 15: $1: unbound variable
Great! But... what if that's a script we really, really need to run correctly? Like, say, a cron job? It'll fail, all right, and if you're faithful about reading logs you might catch it. And if not, it could be not working for a long, long time before it's caught. Hopefully, not doing anything too important - like backups.
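The failure mode is easy to reproduce in a throwaway shell; a minimal sketch (the variable name is made up):

```shell
# Run a one-liner under set -u that references a variable nobody ever set
bash -uc 'doomed="$never_set/usr/share/doc"; echo "would remove $doomed"'
# bash aborts with an "unbound variable" error before the echo runs,
# so the exit status here is non-zero
echo "exit status: $?"
```

Which is exactly the problem: the script dies quietly with a non-zero exit, and unless something is watching that status, nobody hears about it.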
Nope; I want to get hit over the head with a 2x4 if one of my scripts is failing:
MAIL="/bin/mailx"
ECHO="/bin/echo -e"
if [ "$#" -lt 1 ]; then {
$ECHO "$0: Error: too few arguments. Exiting..."|$MAIL root pager; exit 1; }
fi
Note the "$0". Be nice to yourself; let yourself know which script you're getting the error message from.
But if nothing else, use this construct - it'll at least keep you out of serious trouble:
cd /nosuchdirectory || exit 1
rm -rf *
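It's worth convincing yourself that the guard really does short-circuit; here's a sketch you can run safely, using a subshell so the exit doesn't kill your login shell and an echo standing in for the rm:

```shell
# The || exit 1 fires because the cd fails, so the line standing in
# for 'rm -rf *' is never reached
( cd /nosuchdirectory 2>/dev/null || exit 1
  echo "this would have been rm -rf *" )
echo "subshell exited with status $?"
```

Without the `|| exit 1`, the failed cd would leave you sitting in whatever directory you started from - and the rm would run there.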
I always use mkdir with the -p switch. That way, the worst case is that it creates a directory you didn't want. Without it... well:
mkdir /doesntexist/dir      # fails - the parent doesn't exist
cd /doesntexist/dir         # fails too...
{process stuff}
rm -f *                     # ...and this runs wherever you happened to be
or something similar. There's not much downside with the -p.
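Here's the same sequence sketched with -p, using a throwaway directory from mktemp so it's safe to run (in a real script I'd use the full pathnames, per the above):

```shell
# mkdir -p creates the missing parents, so the cd has somewhere to go
# and the guards never fire
scratch=$(mktemp -d)
mkdir -p "$scratch/doesntexist/dir" || exit 1
cd "$scratch/doesntexist/dir" || exit 1
pwd
# clean up the throwaway directory
cd / && rm -rf "$scratch"
```

Belt and suspenders: -p so the directory is there, plus the `|| exit 1` guards so that if something still goes wrong, the script stops before any rm runs.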
More tomorrow. Well, next week - the plant's closed the rest of the week. Happy New Year!