Unix Shell

Unix Shell Scripting

Here I will mainly be discussing Bash (the Bourne Again Shell) scripting, along with the original Bourne shell syntax that it inherits.

This post will help administrators and novice UNIX programmers understand and write robust, useful shell programs. I will cover shell syntax and a few simple day-to-day UNIX utilities, focusing mainly on the Bourne shell because of its wide popularity and usage in the UNIX world.

Many standard utilities (rdist, make, cron, etc.) allow you to specify a command to run at a certain time. Usually, this command is simply passed to the Bourne shell, which means that you can execute whole scripts, should you choose to do so. Steve Bourne wrote the Bourne shell, which appeared in the Seventh Edition Bell Labs Research version of Unix.

Lastly, UNIX runs Bourne shell scripts when it boots. If you want to modify the boot-time behavior of a system, you need to learn to write and modify Bourne shell scripts. It is said that about 95% of shell code is written for the Bourne shell. Code written for the Bourne shell is compatible with later shells such as ksh, bash and zsh, which means that with minimal or no change you can port a Bourne shell script to ksh, bash or zsh.

* Bash is particularly popular among Linux users.

There are a couple of other shells which I will not be covering. The C shell, csh (which gives C programmers a familiar, C-like syntax), and its variant tcsh are fine interactive shells (I use tcsh), but they are lousy shells for writing scripts, and they are incompatible with the Bourne shell.

Let's first understand what an executable file is. Some executables, also known as programs or binary executables, are machine-readable files that human eyes cannot make sense of. Others contain only text and are known as scripts; these are interpreted by programs such as sh, awk, sed, perl and many more.

Learning Shell Scripting


The first line of any script must begin with #!, followed by the name of the interpreter. Some versions of UNIX allow whitespace between #! and the name of the interpreter. Others do not. Hence, if you want your script to be portable, leave out the blank.

A script, like any file that can be run as a command, needs to be executable: save this script as rotatelog and run

chmod +x rotatelog
to make it executable.
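The rotatelog script itself isn't shown here; as a sketch (with hypothetical filenames, and a made-up default of keeping three old copies), a minimal log rotator might look like this:

```shell
#!/bin/sh
# Hypothetical sketch of a log rotator: keeps the last three copies
# of a log file. The real rotatelog script may differ.
log=${1:-app.log}        # file to rotate; the default name is arbitrary
if [ -f "$log.2" ]; then mv "$log.2" "$log.3"; fi
if [ -f "$log.1" ]; then mv "$log.1" "$log.2"; fi
if [ -f "$log" ];   then mv "$log"   "$log.1"; fi
: > "$log"               # recreate an empty log file
```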

You can now run it by typing ./rotatelog (or just rotatelog, if the current directory is in your path).

Unlike some other operating systems, UNIX allows any program to be used as a script interpreter. This is why people talk about “a Bourne shell script” or “an awk script.” One might even write a more script, or an ls script (though the latter wouldn’t be terribly useful). Hence, it is important to let Unix know which program will be interpreting the script.

When Unix tries to execute the script, it sees the first two characters (#!) and knows that it is a script. It then reads the rest of the line to find out which program is to execute the script. For a Bourne shell script, this will be /bin/sh. Hence, the first line of our script must be

#!/bin/sh
After the command interpreter, you can have one option, and sometimes more. Some flavors of Unix only allow one, though, so don't assume that you can have more.


sh allows you to have variables, just like any other programming language. Variables do not need to be declared.

To set a sh variable, use VAR=value (with no spaces around the equals sign).

And to use the value of the variable later, use $VAR or ${VAR}

The latter syntax is useful when the variable name is immediately followed by other text:
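A small sketch of the difference (the variable name DAY is arbitrary):

```shell
DAY=Mon
echo "backup.$DAY"     # fine: the dot cannot be part of a variable name
echo "${DAY}log"       # braces needed: $DAYlog would be a different, unset variable
```

The first line prints backup.Mon; the second prints Monlog.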

#shell variables
echo "\$BASH_VERSION version bash $BASH_VERSION"
echo "\$EUID user ID of the current user $EUID"

#command line arguments
echo "\$0 Learn shell using $0"
echo "\$# Total no. of arguments in command = $#"
echo "\$* All arguments $*"
echo "\$@ All arguments $@"
echo "\$? Exit status of last command $?"
echo "\$- Option used to execute the shell $-"
echo "\$$ PID $$"
echo "\$! PID of the last background command $!"

#unix popular commands
echo "Popular unix commands used in shell"
echo "who display who logged in"
echo "ps processes running in this shell"
echo "pwd current directory path"
echo "ls list of files"
echo "du -h disk space consumed by the current folder (human-readable sizes)"
du -h
echo "kill -l lists all available signals"
kill -l
echo "ulimit system limits on resource usage."
#control flow
echo "Control flow syntax"

if grep "jabber" /etc/passwd
then
 echo "Jabber account found in system"
else
 echo "Jabber does not exist here"
fi

echo "\$IFS (Input Field Separator) determines how the shell splits strings into words"
echo "Example for \$IFS"
echo "Value of \$PATH variable is $PATH";
echo "Let us split it by : colon"
IFS=:
for n in $PATH
do
 echo $n
done
#file handling
# create the sample input file "poem" (an Urdu couplet: "words that come
# from the heart carry effect; wingless, yet it has the power of flight")
cat > poem <<EOT
Baat jo dil se nikalti hai asar rakhti hai
Par nahin taaqat-e-parvaaz magar rakhti hai
EOT

while read line
do
 echo $line
done < poem > poem2

exit 0



COLOR=yellow
echo This looks $COLORish
echo This seems ${COLOR}ish

will print

This looks
This seems yellowish

(The first echo prints only "This looks" because sh looks for a variable named COLORish, which is unset.)

There is only one type of variable in sh: strings.

This is somewhat limited, but is sufficient for most purposes.

Local vs. environment variables

A sh variable can be either a local variable or an environment variable. They both work the same way; the only difference lies in what happens when the script runs another program (which, as we saw earlier, it does all the time).

Environment variables are passed to subprocesses. Local variables are not.
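A quick sketch of the difference (the variable names are arbitrary):

```shell
LOCAL_VAR=hello
ENV_VAR=world
export ENV_VAR
# The child shell sees only the exported variable:
sh -c 'echo "local=[$LOCAL_VAR] env=[$ENV_VAR]"'   # → local=[] env=[world]
```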

By default, variables are local. To turn a local variable into an environment variable,


export VAR

Here’s a simple wrapper for a program:




NETSCAPE_HOME=/usr/local/netscape     # hypothetical install path
CLASSPATH=$NETSCAPE_HOME/classes
export CLASSPATH
exec $NETSCAPE_HOME/bin/netscape.bin

Here, NETSCAPE_HOME is a local variable; CLASSPATH is an environment variable. CLASSPATH will be passed to netscape.bin (netscape.bin uses the value of this variable to find Java class files); NETSCAPE_HOME is a convenience variable that is only used by the wrapper script; netscape.bin doesn’t need to know about it, so it is kept local.

The only way to unexport a variable is to unset it:

unset VAR

This removes the variable from the shell's symbol table, effectively making it as if it had never existed; as a side effect, the variable is also unexported.

Also, if you have a function with the same name as the variable, unset will delete that function as well. Since you may want to use that function later, it is better not to give a function the same name as a variable in the first place.

Also, note that if a variable was passed in as part of the environment, it is already an environment variable when your script starts running. If there is a variable that you really don’t want to pass to any subprocesses, you should unset it near the top of your script. This is rare, but it might conceivably happen.

If you refer to a variable that hasn’t been defined, sh substitutes the empty string.


echo aaa $FOO bbb
echo xxx${FOO}yyy

will print

aaa bbb
xxxyyy

Special variables

sh treats certain variables specially: some are set for you when your script runs, and some affect the way commands are interpreted.

Command-line arguments

The most useful of these variables are the ones referring to the command-line arguments. $1 refers to the first command-line argument (after the name of the script), $2 refers to the second one, and so forth, up to $9.

If you have more than nine command-line arguments, you can use the shift command: this discards the first command-line argument, and bumps the remaining ones up by one position: $2 becomes $1, $8 becomes $7, and so forth.
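For example, this loop prints every argument on its own line, however many there are:

```shell
#!/bin/sh
# Print each command-line argument, consuming them one at a time.
while [ $# -gt 0 ]; do
    echo "$1"
    shift
done
```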

The variable $0 (zero) contains the name of the script (argv[0] in C programs).

Often, it is useful to just list all of the command-line arguments.

For this, sh provides the variables $* (star) and $@ (at). Each of these expands to a string containing all of the command-line arguments, as if you had used $1 $2 $3…

The difference between $* and $@ lies in the way they behave when they occur inside double quotes: $* behaves in the normal way, whereas $@ creates a separate double-quoted string for each command-line argument.

That is, “$*” behaves as if you had written “$1 $2 $3”, whereas “$@” behaves as if you had written “$1” “$2” “$3”.
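The difference is easy to see with set, which resets the positional parameters:

```shell
set -- "one two" three      # pretend the script received these two arguments
for w in "$*"; do echo "star: [$w]"; done
for w in "$@"; do echo "at:   [$w]"; done
```

This prints star: [one two three] on one line, then at: [one two] and at: [three] on separate lines: "$*" joined everything into a single word, while "$@" preserved the original argument boundaries.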

Finally, $# contains the number of command-line arguments that were given.

Other special variables

$? gives the exit status of the last command that was executed. This should be zero if the command exited normally.

$- lists all of the options with which sh was invoked. See sh(1) for details.

$$ holds the PID of the current process.

$! holds the PID of the last command that was executed in the background.

$IFS (Input Field Separator) determines how sh splits strings into words.

Quasi-variable constructs

The ${VAR} construct is actually a special case of a more general class of constructs:


${VAR:-expression}

Use default value: if VAR is set and non-null, expands to $VAR. Otherwise, expands to expression.


${VAR:=expression}

Set default value: if VAR is set and non-null, expands to $VAR. Otherwise, sets VAR to expression and expands to expression.


${VAR:?expression}

If VAR is set and non-null, expands to $VAR. Otherwise, prints expression to standard error and exits with a non-zero exit status.


${VAR:+expression}

If VAR is set and non-null, expands to expression. Otherwise, expands to the empty string.


${#VAR}

Expands to the length of $VAR.

The above patterns test whether VAR is set and non-null. Without the colon, they only test whether VAR is set.
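A short sketch of these constructs in action:

```shell
unset FOO
echo "${FOO:-default}"     # FOO is unset, so this prints "default"
echo "${FOO:=fallback}"    # prints "fallback" and sets FOO to it
echo "${FOO:+alternate}"   # FOO is now non-null, so this prints "alternate"
echo "${#FOO}"             # length of "fallback": prints 8
```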


Pattern matching

sh supports a limited form of pattern-matching. The operators are

* Matches zero or more characters.

? Matches exactly one character.

[range] Matches any character in range.

Range can be either a list of characters that match, or two endpoints separated by a dash: [ak3] matches either a, k, or 3; [a-z] matches any character in the range a through z; [a-mz] matches either a character in the range a through m, or z. If you wish to include a dash as part of the range, it must be the first character, e.g., [-p] will match either a dash or p.

When an expression containing these characters occurs in the middle of a command, sh substitutes the list of all files whose name matches the pattern. This is known as “globbing.” Otherwise, these are used mainly in the case construct.

As a special case, when a glob begins with * or ?, it does not match files that begin with a dot. To match these, you need to specify the dot explicitly (e.g., .*, /tmp/.*).

Note to MS-DOS users: under MS-DOS, the pattern *.* matches every file. In sh, it matches every file that contains a dot.


Quoting

If you say something like

echo * MAKE $$$ FAST *

it won’t do what you want: first of all, sh will expand the *s and replace them with a list of all the files in the current directory. Then, since any number of tabs or blanks can separate words, it will compress the three spaces into one. Finally, it will replace the first instance of $$ with the PID of the shell. This is where quoting comes in.

sh supports several types of quotes. Which one you use depends on what you want to do.


Backslash

Just as in C strings, a backslash ("\") removes any special meaning from the character that follows. If the character after the backslash isn't special to begin with, the backslash has no effect. The backslash is itself special, so to escape it, just double it: \\.

Single quotes

Single quotes, such as 'foo', work pretty much the way you'd expect: anything inside them (except a single quote) is quoted.

You can say

echo '* MAKE $$$ FAST *'

and it’ll come out the way you want it to.

Note that a backslash inside single quotes also loses its special meaning, so you don’t need to double it. There is no way to have a single quote inside single quotes.

Double quotes

Double quotes, such as "foo bar", preserve spaces and most special characters. However, variables and backquoted expressions are expanded and replaced with their values.

Backquotes

If you have an expression within backquotes (also known as backticks), e.g., `date`, the expression is evaluated as a command, and replaced with whatever the expression prints to its standard output. Thus,

echo You are `whoami`


will print

You are irfan

Built-in commands

sh understands several built-in commands, i.e., commands that do not correspond to any program. These commands include:

{ commands ; }, ( commands )

Execute commands as a group, as if they were a single command. This is useful when I/O redirection is involved, since you can pipe data to or from a mini-script inside a pipeline.

The ( commands ) form runs the commands in a true subshell. The { commands; } variant is somewhat more efficient, since it doesn't spawn a subshell; this also means that if you set variables inside of it, the changes will be visible in the rest of the script.

: (colon)

Does nothing, although its arguments are still expanded. This is generally seen as

: ${VAR:=default}

which sets VAR to default if it is unset, without trying to run the result as a command.

. filename

The dot command reads in the specified filename, as if it had occurred at that place in the script.

bg [job], fg [job]

bg runs the specified job (or the current job, if none is specified) in the background. fg resumes the specified job (or the current job, if none is specified) in the foreground.

Jobs are specified as %number. The jobs command lists jobs.

cd [dir]

Sets the current directory to dir. If dir is not specified, sets the current directory to the home directory.


pwd

Prints the current directory.

echo args

Prints args to standard output.

eval args

Evaluates args as a sh expression. This allows you to construct a string on the fly (e.g., using a variable that holds the name of a variable that you want to set) and execute it.

exec command

Runs the specified command, and replaces the current shell with it. That is, nothing after the exec statement will be executed, unless the exec itself fails.

exit [n]

Exit the current shell with exit code n. This defaults to zero.

kill [-sig] %job

Send signal sig to the specified job. sig can be either numeric or symbolic. kill -l lists all available signals. By default, sig is SIGTERM (15).

read name…

Reads one line from standard input and assigns it to the variable name. If several variables name1, name2, name3 etc. are specified, then the first word of the line read is assigned to name1, the second to name2, and so forth. Any remaining words are assigned to the last variable.
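For example, splitting a line into a first word, a second word, and the rest:

```shell
# read assigns the first two words to first and second;
# everything left over goes into rest.
echo "alice bob carol dave" | while read first second rest; do
    echo "first=$first second=$second rest=$rest"
done
```

This prints first=alice second=bob rest=carol dave.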

set [+/-flag] [arg]

With no arguments, prints the values of all variables. set -x turns on the x option to sh; set +x turns it off. set args… sets the command-line arguments to args.

test expression

Evaluates a boolean expression and exits with an exit code of zero if it is true, or non-zero if it is false. See test for more details.

trap [command sig]…

If signal sig is sent to the shell, execute command. This is useful for exiting cleanly (e.g., removing temporary files etc.) when the script is interrupted.


ulimit

Print or set system limits on resource usage.

umask [nnn]

Sets the umask to nnn (an octal number). With no argument, prints the current umask. This is most useful when you want to create files, but want to restrict who can read or write them.

wait [n]

Wait for the background process whose PID is n to terminate. With no arguments, waits for all of the background processes to terminate.

Bear in mind that the list of builtins varies from one implementation to another, so don’t take this list as authoritative.

Flow control

sh supports several flow-control constructs, which add power and flexibility to your scripts.


The if statement is a simple conditional. You’ve seen it in every programming language. Its syntax is

if condition ; then
    commands
[elif condition ; then
    commands] …
[else
    commands]
fi

That is, an if-block, optionally followed by one or more elif-blocks (elif is short for “else if”), optionally followed by an else-block, and terminated by fi.

The if statement pretty much does what you’d expect: if condition is true, it executes the if-block. Otherwise, it executes the else-block, if there is one. The elif construct is just syntactic sugar, to let you avoid nesting multiple if statements.



myname=`whoami`
if [ "$myname" = root ]; then
    echo "Welcome to FooSoft 3.0"
else
    echo "You must be root to run this script"
    exit 1
fi

The more observant among you (or those who are math majors) are thinking, “Hey! You forgot to include the square brackets in the syntax definition!”

Actually, I didn’t: [ is actually a command, /bin/[, and is another name for the test command. See below for details.

This is why you shouldn't call a test program of your own test: if "." comes after /bin in your path (as it should, if it is there at all), executing test will run /bin/test instead of your program.

The condition can actually be any command. If it returns a zero exit status, the condition is true; otherwise, it is false. Thus, you can write things like



if grep $user /etc/passwd; then
    echo "$user has an account"
else
    echo "$user doesn't have an account"
fi



The while statement should also be familiar to you from any number of other programming languages. Its syntax in sh is

while condition; do
    commands
done

As you might expect, the while loop executes commands as long as condition is true. Again, condition can be any command, and is true if the command exits with a zero exit status.

A while loop may contain two special commands: break and continue.

break exits the while loop immediately, jumping to the next statement after the done.

continue skips the rest of the body of the loop, and jumps back to the top, to where condition is evaluated.
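A small sketch using both (expr is used for the arithmetic, since the Bourne shell has no built-in arithmetic):

```shell
i=0
while [ $i -lt 10 ]; do
    i=`expr $i + 1`
    if [ $i -eq 2 ]; then continue; fi     # skip 2
    if [ $i -eq 4 ]; then break; fi        # stop before printing 4
    echo $i
done
```

This prints 1 and 3, each on its own line.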


The for loop iterates over all of the elements in a list. Its syntax is

for var in list; do
    commands
done

list is zero or more words. The for construct will assign the variable var to each word in turn, then execute commands. For example:


for i in foo bar baz "do be do"; do
    echo "$i"
done


will print

foo
bar
baz
do be do

A for loop may also contain break and continue statements. They work the same way as in the while loop.


The case construct works like C’s switch statement, except that it matches patterns instead of numerical values. Its syntax is

case expression in
    pattern1) commands ;;
    pattern2) commands ;;
    …
esac


expression is a string; this is generally either a variable or a backquoted command.

pattern is a glob pattern (see globbing).

The patterns are evaluated in the order in which they are seen, and only the commands for the first pattern that matches will be executed. Often, you'll want to include a "none of the above" clause; to do this, use * as your last pattern.
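For example, a sketch of a yes/no answer handler (in a real script, answer might come from read):

```shell
answer=yes
case "$answer" in
    [Yy]*) echo "affirmative" ;;
    [Nn]*) echo "negative" ;;
    *)     echo "please answer yes or no" ;;
esac
```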


I/O redirection

A command's input and/or output may be redirected to another command or to a file. By default, every process has three file descriptors: standard input (0), standard output (1) and standard error (2). By default, each of these is connected to the user's terminal.

However, one can do many interesting things by redirecting one or more file descriptors:

< filename

Connect standard input to the file filename. This allows you to have a command read from the file, rather than having to type its input in by hand.

> filename

Connect standard output to the file filename. This allows you to save the output of a command to a file. If the file does not exist, it is created. If it does exist, it is emptied before anything happens.

(Exercise: why doesn’t cat * > zzzzzzz work the way you’d expect?)

>> filename

Connects standard output to the file filename. Unlike >, however, the output of the command is appended to filename.


<< word

This construct isn't used nearly as often as it could be. It causes the command's standard input to come from… standard input, but only until word appears on a line by itself. Note that there is no space between << and word.

This can be used as a mini-file within a script, e.g.,

cat > foo.c <<EOT
#include <stdio.h>

main() {
    printf("Hello, world!\n");
}
EOT

It is also useful for printing multiline messages, e.g.:

line=13
cat <<EOT
An error occurred on line $line.
See page 98 of the manual for details.
EOT

As this example shows, by default, << acts like double quotes (i.e., variables are expanded). If, however, word is quoted, then << acts like single quotes.
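A sketch of both behaviors side by side:

```shell
name=world
cat <<EOT
expanded: $name
EOT
cat <<'EOT'
literal: $name
EOT
```

The first heredoc prints "expanded: world"; the second, because EOT is quoted, prints "literal: $name" verbatim.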


<&digit

Use file descriptor digit as standard input.


>&digit

Use file descriptor digit as standard output.


<&-

Close standard input.


>&-

Close standard output.

command1 | command2

Creates a pipeline: the standard output of command1 is connected to the standard input of command2. This is functionally identical to

command1 > /tmp/foo
command2 < /tmp/foo

except that no temporary file is created, and both commands can run at the same time.

There is a proverb that says, “A temporary file is just a pipe with an attitude and a will to live.” Any number of commands can be pipelined together.

command1 && command2

Execute command1. Then, if it exited with a zero (true) exit status, execute command2.

command1 || command2

Execute command1. Then, if it exited with a non-zero (false) exit status, execute command2.

If any of the redirection constructs is preceded by a digit, then it applies to the file descriptor with that number, rather than the default (0 or 1, as the case may be). For instance,

command > filename 2>&1

redirects standard output to filename, then associates file descriptor 2 (standard error) with the same file as file descriptor 1 (standard output), so both streams end up in filename. The order matters: command 2>&1 > filename would instead point standard error at the original standard output, and then redirect only standard output to filename.

This is also useful for printing error messages:

echo "Danger! Danger Will Robinson!" 1>&2

Note that I/O redirections are parsed in the order they are encountered, from left to right. This allows you to do fairly tricky things, including throwing out standard output, and piping standard output to a command.


Functions

When a group of commands occurs several times in a script, it is useful to define a function. Defining a function is a lot like creating a mini-script within a script.

A function is defined using

name () {
    commands
}

and is invoked like any other command:

name args…

You can redirect a function’s I/O, embed it in backquotes, etc., just like any other command.

One way in which functions differ from external scripts is that the shell does not spawn a subshell to execute them. This means that if you set a variable inside a function, the new value will be visible outside of the function.

A function can use return n to terminate with an exit status of n. Obviously, it can also exit n, but that would terminate the entire script.

Function arguments

A function can take command-line arguments, just like any script. Intuitively enough, these are available through $1, $2… $9 just like the main script.
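A minimal sketch (the function name greet is arbitrary):

```shell
greet () {
    echo "Hello, $1!"
    return 0
}

greet world                # prints: Hello, world!
greet "Will Robinson"      # prints: Hello, Will Robinson!
```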

Useful utilities

There are a number of commands that aren’t part of sh, but are often used inside sh scripts. These include:


basename

basename pathname prints the last component of pathname:

basename /foo/bar/baz

prints

baz




dirname

The complement of basename: dirname pathname prints all but the last component of pathname, that is, the directory part:

dirname /foo/bar/baz

prints

/foo/bar



test, [

/bin/[ is another name for /bin/test. It evaluates its arguments as a boolean expression, and exits with an exit code of 0 if it is true, or 1 if it is false.

If test is invoked as [, then it requires a closing bracket ] as its last argument. Otherwise, there must be no closing bracket.

test understands the following expressions, among others:

-e filename

True if filename exists.

-d filename

True if filename exists and is a directory.

-f filename

True if filename exists and is a plain file.

-h filename

True if filename exists and is a symbolic link.

-r filename

True if filename exists and is readable.

-w filename

True if filename exists and is writable.

-n string

True if the length of string is non-zero.

-z string

True if the length of string is zero.


string

True if string is not the empty string.

s1 = s2

True if the strings s1 and s2 are identical.

s1 != s2

True if the strings s1 and s2 are not identical.

n1 -eq n2

True if the numbers n1 and n2 are equal.

n1 -ne n2

True if the numbers n1 and n2 are not equal.

n1 -gt n2

True if the number n1 is greater than n2.

n1 -ge n2

True if the number n1 is greater than or equal to n2.

n1 -lt n2

True if the number n1 is less than n2.

n1 -le n2

True if the number n1 is less than or equal to n2.

! expression

Negates expression, that is, returns true iff expression is false.

expr1 -a expr2

True if both expressions, expr1 and expr2 are true.

expr1 -o expr2

True if either expression, expr1 or expr2 is true.

( expression )

True if expression is true. This allows one to nest expressions.

Note that lazy evaluation does not apply, since all of the arguments to test are evaluated by sh before being passed to test. If you stand to benefit from lazy evaluation, use nested ifs.
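For example, this sketch uses nested ifs so that the -f test only runs when the first test succeeds:

```shell
file=/etc/passwd
# [ -n "$file" -a -f "$file" ] would hand both operands to test up front;
# nesting the ifs gives genuine short-circuit behavior.
if [ -n "$file" ]; then
    if [ -f "$file" ]; then
        echo "$file exists and is a plain file"
    fi
fi
```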


echo

echo is a built-in in most implementations of sh, but it also exists as a standalone command.

echo simply prints its arguments to standard output. It can also be told not to append a newline at the end: under BSD-like flavors of Unix, use

echo -n "string"

Under SystemV-ish flavors of Unix, use

echo "string\c"


awk

Awk (and its derivatives, nawk and gawk) is a full-fledged scripting language. Inside sh scripts, it is generally used for its ability to split input lines into fields and print one or more fields. For instance, the following reads /etc/passwd and prints out the name and uid of each user:

awk -F : '{ print $1, $3 }' /etc/passwd

The -F : option says that the input records are separated by colons. By default, awk uses whitespace as the field separator.


sed

Sed (stream editor) is also a full-fledged scripting language, albeit a less powerful and more convoluted one than awk. In sh scripts, sed is mainly used to do string substitution: the following script reads standard input, replaces all instances of "foo" with "bar", and writes the result to standard output:

sed -e 's/foo/bar/g'

The trailing g says to replace all instances of “foo” with “bar” on a line. Without it, only the first instance would be replaced.


tee

tee [-a] filename reads standard input, copies it to standard output, and saves a copy in the file filename.

By default, tee empties filename before it begins. With the -a option, it appends to filename.


Debugging

Unfortunately, there are no symbolic debuggers such as gdb for sh scripts. When you're debugging a script, you'll have to rely on the tried and true method of inserting trace statements, and on some useful options to sh:

The -n option causes sh to read the script but not execute any commands. This is useful for checking syntax.

The -x option causes sh to print each command to standard error before executing it. Since this can generate a lot of output, you may want to turn tracing on just before the section that you want to trace, and turn it off immediately afterward:

set -x
# XXX – What's wrong with this code?
grep $user /etc/passwd 1>&2 > /dev/null
set +x


Perl GMAIL Feed

#!/usr/bin/env perl
use warnings;
use strict;


=head1 DESCRIPTION

Checks if there are new unread messages in your GMail Inbox.

=head1 USAGE

    $ perl gmail-feed.pl   (the script name is arbitrary)

=cut

############## Configuration ##############

# Change this to your correct username.
use constant GMAIL_USERNAME => "username";
# Change this to your correct password.
use constant GMAIL_PASSWORD => "password";

########## Don’t change anything below this. ##########

use LWP::UserAgent;
use HTTP::Request;
use XML::Atom::Feed;

my $fetcher = LWP::UserAgent->new();

my $request = HTTP::Request->new(
    # GMail's Atom feed for the inbox lives at this standard URL:
    'GET' => 'https://mail.google.com/mail/feed/atom',
);
$request->authorization_basic(GMAIL_USERNAME, GMAIL_PASSWORD);

my $response = $fetcher->request($request);

if (! $response->is_success()) {
    die("Unsuccessful in trying to talk to GMail");
}

my $content = $response->content;
my $feed = XML::Atom::Feed->new(\$content);
my @new_messages = $feed->entries();

my $i = 1;
foreach my $message (@new_messages) {
    print join("\t", $i, $message->author->name, $message->title), "\n";
    $i++;
}

# The End

MYSQL replication in the same box



Setup: MYSQL 4.1.1 standard as MASTER, running on port 3306; MYSQL 5.0.17 standard as SLAVE, running on port 3308.

MASTER (my.cnf):
server-id = 1
log-bin

SLAVE (my.cnf):
server-id = 2
master-host = localhost
master-user = root
master-password = mysql
master-port = 3306
replicate-do-db = dbrep

To check the MASTER status, execute this query on the mysql console:

mysql> SHOW MASTER STATUS;

mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT, FILE, SUPER, RELOAD, SELECT ON *.* TO root@'%hostname%' IDENTIFIED BY 'mysql';

To check the SLAVE status, execute this query on the mysql console:

mysql> SHOW SLAVE STATUS;

mysql> GRANT ALL ON *.* TO root@'%hostname%' IDENTIFIED BY 'mysql';

mysql> CHANGE MASTER TO
    -> MASTER_HOST='master_host_name',
    -> MASTER_USER='master_user_name',
    -> MASTER_PASSWORD='master_pass',
    -> MASTER_LOG_FILE='recorded_log_file_name',
    -> MASTER_LOG_POS=recorded_log_position;

The query above can be used to change the slave settings at runtime. Restart both the MYSQL SLAVE and MASTER, then on the slave execute:

mysql> START SLAVE;

Now create the database on the MASTER and start creating tables and inserting values into it. Check the slave: it will have started replicating all those tables. Isn't it simple?