UNIT - I
The origin of UNIX can be traced back to 1965, when a joint venture was
undertaken by Bell Telephone Laboratories, the General Electric Company and the Massachusetts
Institute of Technology. The aim was to develop an operating system that could serve a large
community of users and allow them to store data if need be. This never-to-be enterprise was
called Multics, for Multiplexed Information and Computing Service. Even after much time,
resources and effort had been devoted to the project, the convenient, interactive computing
service that had been envisioned failed to materialize. This led Dennis Ritchie and Ken Thompson, both of
AT&T, to start afresh on what their mind's eye had so illustriously envisioned. Thus in 1969,
the two, along with a few others, evolved what was to be the first version of the multi-user system
UNIX. Armed with a museum piece of a computer called the PDP-7, a rudimentary file system was
developed. Though this was not tapped to the fullest, it had all the trappings of a truly potent
multi-user operating system. This system was christened 'UNIX' by Brian Kernighan, as a
reminder of the ill-fated Multics. Later, in 1971, UNIX was ported to a PDP-11 computer with
a 512 KB disk. UNIX then was a 16 KB system with 8 KB for user programs and an upper limit
of 64 KB per file. With all its assembly code being machine dependent, this version was not portable,
a key requirement for a successful operating system.
To remedy this, Ken Thompson created a new language, 'B', and set about the
Herculean task of rewriting the whole of the UNIX code in this high-level language. Ritchie sifted through the
inadequacies of B and modified it into a new language, which he named 'C', the language that
finally enabled UNIX to stand tall on any machine.
Thus, by 1973, UNIX had come a long way from its PDP-7 days, and was soon
licensed to quite a number of universities, companies and other commercial institutions. With
its uncomplicated elegance it was charming a following perhaps more effortlessly than the pied
piper of the fables. The essentially accommodating nature of the system encouraged many a
developer to polish and enhance its capabilities, which kept it alive and with the times.
By the mid-eighties there were more than a hundred thousand UNIX installations,
running on anything from a micro to a mainframe computer and over numerous varying
architectures, a remarkable achievement for an operating system by any standard. Almost a
decade later, UNIX still holds the record for being the soul of more computer networks than
any other operating system.
1. Multi-user capability
Among its salient features, what comes first is its multi-user capability. In a multi-
user system, the same computer resources, such as the hard disk and memory, are accessible to many users.
Of course, the users don't flock together at the same computer, but are given different
terminals to operate from. A terminal, in turn, is a keyboard and a monitor, which are the
input and output devices for that user. All terminals are connected to the main computer, whose
resources are availed by all users. So, a user at any of the terminals can use not only the
computer, but also any peripherals that may be attached, say, for instance, a printer. One can
easily appreciate how much more economical such a setup is than having as many computers as there are
users, and how much more convenient it is when the same data is to be shared by all. Figure 1.1
shows the typical setup.
Figure 1.1: A typical multi-user setup (several terminals connected to the host machine)
At the heart of a UNIX installation is the host machine, often known as a server (or)
a console. The number of terminals that can be connected to the host machine depends on the
number of ports that are present in its controller card. For example, a 4-port controller card in
the host machine can support 4 terminals. There are several types of terminals that can be
attached to the host.
Dumb terminals consist of a keyboard and a display unit with no memory or disk of
their own. These can never act as independent machines; if they are to be used, they have to be
connected to the host machine.
Figure 1.2: A terminal connected to the host machine through a modem
2. Multitasking capability
Most of us must have used SideKick (or) some other memory-resident program.
Once we load this into memory, a simple keystroke can take us from SideKick to another
program we may be running, (or) vice versa.
If, for example, we invoke SideKick in the middle of some calculation being done, then
all work on the calculation is stopped as the computer responds to SideKick. Once we
are through with SideKick and hit a key to come out of it, the calculation is then
resumed. Wouldn't it be far better to give SideKick only a part of the computer's time, so that
even while we were in SideKick, the calculations would carry on being performed in the
background? And this is exactly what UNIX does. Using the timer interrupt, it schedules the
CPU time between programs. These time periods are known as time-slices. If there were 10
programs running at one time, the microprocessor would keep switching between these 10
programs. At a given point in time the CPU will handle only one program, but because the
switch happens very fast we get the feeling that the microprocessor is working on all the
programs simultaneously.
Thus, the multitasking of UNIX is different from that of DOS, which does not give time-slices
to running programs. If there are 5 programs running in DOS and even one goes haywire,
the entire machine hangs. In a genuine multitasking environment like UNIX this does not
happen.
Does UNIX give equal time-slices to all programs running in memory? No.
Some programs are relatively more important; for example, those that wait
for user responses are given a higher priority. Programs that have the same priority are
scheduled on a round-robin basis.
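This time-slicing can be observed from the shell itself: a command ending in & runs in the background while the shell stays free for other work. A minimal sketch (the sleep command merely stands in for a long-running job):

```shell
# Start a long-running job in the background; the shell returns at once
# while UNIX time-slices the CPU between the two processes.
sleep 30 &
bgpid=$!                     # PID of the background process

echo "shell is free while job $bgpid runs"
ps -p "$bgpid" > /dev/null && echo "background job alive"

kill "$bgpid"                # end the demonstration job
```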
3. Communication
UNIX has excellent provisions for communicating with fellow users. The
communication may be within the network of a single main computer, or between two (or)
more such computer networks. The users can easily exchange mail, data and programs
through such networks. Distance poses no barrier to passing information (or) messages to and
fro. Whether we are two feet away (or) two thousand miles apart, our mail will hardly take any time to
reach its destination.
4. Security
UNIX allows sharing of data, but not indiscriminately. Had it been so, it would be
the delight of mischief-mongers and useless for any worthwhile enterprise. UNIX has three
inherent provisions for protecting data. The first is provided by assigning passwords and login names to individual
users, ensuring that not just anybody can come and have access to our work.
At the file level, there are read, write and execute permissions on each file, which
decide who can access a particular file, who can modify it and who can execute it. We may
reserve read and write permissions for ourselves and leave others on the network free to execute
it, or any such combination.
Lastly, there is file encryption. This utility encodes our file into an unreadable
format, so that even if someone succeeds in opening it, our secrets are safe. Of course, should
we want to see the contents, we can always decrypt the file.
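The file-level permissions described above are set with the chmod command. A small sketch, using one of the combinations mentioned (the octal digits 7, 1, 1 mean read/write/execute for the owner and execute-only for group and others; the filename is only illustrative):

```shell
# Create a script and keep read/write to ourselves, while leaving
# everyone else on the system free only to execute it.
echo 'echo hello' > /tmp/myscript.sh
chmod 711 /tmp/myscript.sh

ls -l /tmp/myscript.sh     # permission field reads -rwx--x--x
```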
5. Portability
One of the main reasons for the universal popularity of UNIX is that it can be
ported to almost any computer system, with only the bare minimum of adaptations to suit the
given computer architecture. As of today, there are innumerable computer manufacturers
around the globe, and tens of hundreds of hardware configurations; more often than not,
UNIX is running strong on each one of them. And lest we forget, due credit for this feat must
be given to Dennis Ritchie's prodigy, C, which granted UNIX this hardware transparency.
UNIX, in fact, is almost entirely written in C.
The functioning of UNIX is managed at three levels. On the outer crust reside the
application programs and other utilities, which speak our language. At the heart of UNIX, on
the other hand, is the kernel, which interacts with the actual hardware in machine language.
The middle layer, called the shell, does the streamlining of these two modes of communication.
Figure 1.3 shows the three layers of the UNIX operating system.
Figure 1.3: Three Layers of the UNIX OS (users outermost, then the shell, then the kernel, with the hardware at the centre)
The kernel has various functions. It manages files, carries out all the data transfer
between the file system and the hardware, and also manages memory. The onus of scheduling
the various programs running in memory, (or) allocating CPU time to all running programs,
also lies with the kernel. It also handles any interrupts issued, as it is the entity that has direct
dealings with the hardware.
The kernel program is usually stored in a file called 'unix', whereas the shell
program is in a file called 'sh'. For each user working with UNIX at any time, a different shell
program is running. Thus, at a particular point in time there may be several shells running
in memory but only one kernel. This is because, at any instant, UNIX is capable of executing
only one program while the other programs wait for their turn, and since it is the kernel which
executes the program, one kernel is sufficient. However, different users at different terminals
are trying to seek the kernel's attention, and since each user interacts with the kernel through the
shell, different shells are necessary.
TYPES OF SHELL
Different people implemented the interpreter function of the shell in different ways.
This gave rise to various types of shells, the most prominent of which are outlined below:
Bourne Shell
Among all of them, Steve Bourne's creation, named after him as the Bourne shell, is the
most popular; probably that's why it is bundled with every UNIX system. Or perhaps it is the
other way round: because it was bundled with every system, it became popular. Whatever the
cause and the effect, the fact remains that this is the shell used by many UNIX users. This will
also be the shell we shall be talking about extensively through the course of this material.
C Shell
This shell is a hit with those who are seriously into UNIX programming. Bill Joy,
then pursuing his graduate studies at the University of California at Berkeley, created it. It has
two advantages over the Bourne shell.
Firstly, it allows aliasing of commands; that is, we can decide what name we want
to call a command by. This proves very useful for lengthy commands which are
used time and again: instead of typing the entire command, we can simply use the short alias at
the command line.
If we want to save even more on typing, the C shell has a command history
feature, which is the second benefit that comes with it. Previously typed commands can be
recalled, since the C shell keeps track of them, a facility similar to the one provided by the
program DOSKEY in the MS-DOS environment.
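In C shell syntax, aliases are defined as shown in the comments below. The runnable sketch uses bash, which offers the same convenience once alias expansion is switched on in a script (an assumption made here so the example can execute non-interactively):

```shell
# C shell style (placed in ~/.cshrc):
#   alias ll 'ls -l'
#   alias h  'history'

# Bash equivalent; scripts must enable alias expansion explicitly.
shopt -s expand_aliases
alias ll='ls -l'

ll /tmp > /dev/null && echo "alias expanded"
```

History recall at the C shell prompt works with !! (previous command) or !n (command number n).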
Korn Shell
If there was any doubt about the cause-effect relationship between the popularity of the
Bourne shell and its inclusion in every package, this adds fuel to it. The not-so-widely-used
Korn shell is very powerful, and is decidedly more efficient than the others. It was designed to
be so by David Korn of AT&T's Bell Laboratories.
If we haven't been given a login name and a password, we won't be able to gain
access to UNIX. After we enter our login name, we are prompted to enter the password, which
does not appear on the display as it is keyed in. Obviously, this is to ensure that no chance (or)
premeditated passer-by is able to sneak in on it.
When we try to access our system, UNIX will display a prompt that looks
something like this:
Login: aa1
Password: SITAMS
After receiving the login prompt, we can enter our login name (aa1 in the above
example), after which we receive the password prompt. At this stage we must type in our
password (SITAMS in this example); the password, of course, does not appear on the screen.
The password we use should be kept private. It is the method used by UNIX to prevent
unauthorized entry into the system. The password should be changed frequently. On many
systems, after a specified period of time, our password expires (ages) and the next time we
login the system requires us to change our password. In addition, on most systems we can
change our password whenever we like, using a command covered in a later
discussion.
Sometimes we may not type the login name (or) password properly. When we do
this, the system will respond with the following message:
Login: aa1
Password: SITAMS
Login incorrect.
Note:
The system does not tell us which one is incorrect, the login name or the password; again,
this is a security measure. Even if we type our login name improperly, we will still get the
password prompt. We usually get three to five attempts to get it right before our terminal is
disconnected. Often, a message is also displayed to the system administrator telling him (or)
her that several unsuccessful attempts were made on our login name.
Once the correct login name and password have been supplied, we find some welcome
messages from the suppliers of the UNIX version installed on the host machine, followed by a
command prompt. The command prompt is a $ (dollar) if we are operating in the Bourne shell, or
a % if in the C shell.
UNIX UTILITIES - 1
Introduction to UNIX File System:
Before learning UNIX commands, it is essential to understand the UNIX file system,
since UNIX treats everything it knows and understands as a file. All utilities, applications and
data in UNIX are stored as files; even a directory is treated as a file, which contains several
other files. The UNIX file system resembles an upside-down tree; thus, the file system begins
with a directory called root. The root directory is denoted by a slash (/). Branching from the root
there are several other directories called bin, lib, usr, etc, tmp and dev. The root directory also
contains a file called unix, which is the UNIX kernel itself. These directories are called
subdirectories, their parent being the root directory. Each of these subdirectories contains several
files and directories called sub-subdirectories. The following figure shows the basic structure of the UNIX file
system:
Figure: the root directory (/), with bin, lib, usr, etc, tmp and dev branching below it
The main reason behind the creation of directories is to keep related files
together and separate them from other groups of related files. For example, the idea is to keep all user-
related files in the usr directory, all device-related files in the dev directory, all temporary
files in the tmp directory, and so on. Let us now look at the purpose of each of these directories.
The bin directory contains the executable files for most of the UNIX commands.
UNIX commands can be either C programs or shell programs; shell programs are nothing but collections of
UNIX commands.
The lib directory contains all the library functions provided by UNIX for
programmers. Programs written under UNIX make use of these library functions in the lib
directory.
The dev directory contains files that control various input/output devices like
terminals, printers, disk drives etc. In UNIX each device is implemented as a file, so for each
device there is a separate file. For example, the file controlling what is displayed on your
terminal is present in the dev directory.
In the usr directory there are several directories, each associated with a
particular user. These directories are created by the system administrator when he creates user
accounts. Each user's directory (often called the home directory) can be organized by creating other
subdirectories in it, to keep functionally related files together.
Within the usr directory there is another bin directory, which contains
additional UNIX command files.
The tmp directory contains the temporary files created by UNIX (or) by
the users. Since the files present in it are created for temporary purposes, UNIX can afford
to dispense with them; these files get automatically deleted when the system is shut down and
restarted.
All the aforementioned directories are present on almost all UNIX
installations.
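These standard directories can be verified on any live system; a small sketch (names vary slightly between UNIX flavours, and on modern Linux /bin may be a link into /usr/bin):

```shell
# List the standard top-level directories described above.
ls -d /bin /dev /tmp /usr

# Each is just a directory branching from the root (/):
ls /
```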
Vi Editor
No matter what work you do with a UNIX system, you will eventually write some C programs
(or) shell (or Perl) scripts. You may also have to edit some of the system files at times. If you are
working on databases, you will also need to write SQL queries, scripts, procedures and triggers.
For all this, you must learn to use an editor, and UNIX provides a very versatile one: vi (the visual
editor).
vi is a full-screen editor now available with all UNIX systems, and is widely acknowledged
as one of the most powerful editors available in any environment. Another contribution of the
University of California, Berkeley, it owes its origin to William (Bill) Joy, a graduate student
who wrote this unique program. It became extremely popular, leading Joy to
later remark that he wouldn't have written it had he known that it would become so famous.
vi offers cryptic, and sometimes mnemonic, internal commands for editing work. It
makes complete use of the keyboard, where practically every key has a function. vi has
innumerable features, and it takes time to master most of them, but you don't need to do that
every day. As a beginner, you shouldn't waste your time learning the frills and nuances of the
editor: editing is a secondary task in any environment, and a working knowledge is all that is
required initially.
Linux features a number of "vi" editors, of which vim (vi improved) is the most
common. Apart from vim, there are xvi, hvi and elvis, which have certain exclusive functions
not found in the Berkeley version.
Figure: the three modes of vi (command mode, input mode and ex mode)
In fact, vi created a sensation when it appeared on the UNIX scene, since it was the first
full-screen editor: it allowed the user to view and edit the entire document at the same time.
Creating and editing files became a lot easier, and that's the reason it became an instant hit
with programmers.
There are several disadvantages in using vi. These are:
1. The user is always kept guessing. There are no self-explanatory error messages; if
anything goes wrong, no error message appears, only the speaker keeps beeping to inform you
that something went wrong.
2. The guy who wrote vi didn't believe in help, so there isn't any online help available
in vi. Incidentally, vi was written by Bill Joy when he was a student at the University of
California.
3. There are three modes in which the editor works. Under each mode a key pressed
creates a different effect; hence the meanings of several keys, and their effects in each mode,
have to be memorized.
4. vi is fanatically case sensitive. An 'h' moves the cursor one position to the left, whereas
'H' positions it at the top left corner; moreover, you are required to remember both.
In spite of all the above disadvantages, vi is popular even today. One of the major
reasons is that vi is available on almost every UNIX system, wherever it is installed.
vi can handle files that contain plain text: no fancy formatting, no fonts, no embedded
graphics or junk like that, just plain simple text. You can create files, edit files and print them.
It cannot do boldface, running headers (or) footers, italics, or all the other fancy stuff you need
in order to produce really modern, over-formatted, professional-quality memos.
Like all UNIX programs, vi is a power-packed editor. It's possibly the last word
in how non-user-friendly a program can be. While using vi, you time and again realize that it
possibly wants to make users aware that UNIX demands a certain level of maturity and
knowledge even when it comes to using its elementary editors. In many ways, vi sets the standard
in UNIX and presents the true no-nonsense picture that UNIX is built on. Even the most experienced
computer user can take a while to get accustomed to vi; in fact, one needs to develop a taste for
it. And once you do that, you will realize it is among the best editors in the world. Learning vi is a giant
step towards mastering the intricacies of UNIX.
Commands for positioning cursor in the window
1. Positioning by character
Command function
h moves the cursor one character to the left.
2. Positioning by line
Command function
j moves the cursor down one line from its present position in the same column
k moves the cursor up one line from its present position in the same column
3. Positioning by word
Command function
w moves the cursor to the right to the first character of the next word
b moves the cursor back to the first character of previous word
e moves the cursor to the end of current word
Command function
Ctrl-f scrolls the screen forward a full window, revealing the window of text below the current window.
Ctrl-b scrolls the screen back a full window, revealing the window of text above the current window.
Command function
ZZ writes the buffer to the file and quits vi.
:wq writes the buffer to the file and quits vi.
UNIX COMMANDS
cp chap01 unit1
If the destination file (unit1) doesn't exist, it will first be created before the copying takes
place; if it does exist, it is simply overwritten without any warning from the system. So be careful while
choosing a destination filename: just check with an ls command whether or not the file exists.
If there is one file to be copied, the destination can be either an ordinary file or a directory.
You have the option of choosing your own destination filename. The following examples
show the ways of copying the file to the progs directory:
cp chap01 progs/unit1
(chap01 copied to unit1 under progs)
cp chap01 progs
(chap01 retains its name under progs)
cp is often used with the shorthand notation . (dot) to signify the current directory as the
destination. For instance, to copy the file .profile from /user/sharma to your current directory,
you can use either of the two commands:
cp /user/sharma/.profile .profile
(destination is a file)
cp /user/sharma/.profile .
(destination is the current directory)
Obviously, the second is preferable because it requires fewer keystrokes. cp can also be
used to copy more than one file with a single invocation of the command. In that case the last
filename must be a directory. For instance, to copy the files chap01, chap02 and chap03 to the
progs directory, you will have to use cp like this:
cp chap01 chap02 chap03 progs
In the UNIX system, the * is used to frame a pattern for matching more than one file. If
the only files in the current directory containing the common string "chap" are these three, you can
compress the above sequence by using * as a suffix to chap:
cp chap* progs
(copies all files beginning with chap)
NOTE:
cp overwrites the destination file without warning if it exists.
The * is used as shorthand for multiple filenames.
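The wildcard copy above can be tried safely in a scratch directory; a minimal sketch (the /tmp/cpdemo path is only illustrative):

```shell
# Create three chapter files and a progs directory, then copy with *.
mkdir -p /tmp/cpdemo/progs && cd /tmp/cpdemo
touch chap01 chap02 chap03

cp chap* progs       # every file beginning with "chap" is copied
ls progs             # chap01  chap02  chap03
```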
Interactive copying (-i)
The -i (interactive) option, originally a Berkeley enhancement, warns the user before
overwriting the destination file. If unit1 exists, cp prompts for a response.
cp can also copy recursively; since the process is recursive, all files resident in the sub-directories
are copied, and cp has to create the sub-directories itself if it does not find them during the copying process.
NOTE:
Sometimes it may not be possible to copy a file; this happens if the source file is read-protected
or the destination file (or) directory is write-protected.
If the destination doesn't exist, it will be created. mv, by default, does not prompt before
overwriting the destination file if it exists, so be careful, again.
Like cp, a group of files can be moved, but only to a directory. For instance, the command
mv chap01 chap02 chap03 progs moves three files to the progs directory.
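A group move can be sketched the same way as the group copy, in a scratch directory (the /tmp/mvdemo path is only illustrative); note that after mv, the files vanish from the source directory:

```shell
# Moving a group of files: the last argument must be a directory.
mkdir -p /tmp/mvdemo/progs && cd /tmp/mvdemo
touch chap01 chap02 chap03

mv chap01 chap02 chap03 progs
ls progs                        # chap01  chap02  chap03
ls                              # only progs remains here
```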
ln: LINKS
When a file is copied, both the original and the copy occupy separate space on the disk.
UNIX allows a file to have more than one name and yet maintain a single copy on the disk;
changes made through one name are also reflected in the other. The file is then said to have more than one
link, i.e. it has more than one name. A file can have as many names as you want to give it; the
only thing common to all of them is that they all have the same inode number.
Windows (95 and NT) has a similar notion called shortcuts, but it is not easy to know
how many shortcuts a file has; moreover, shortcuts themselves occupy a certain amount of disk
space. In UNIX you can easily know the number of links of a file from the second column of the ls -l
listing. The number is normally 1, but exceeds that figure for linked files.
Files are linked with the ln (link) command, which takes two filenames as arguments. The
following command links emp.lst with employee:
ln emp.lst employee
The -i option of ls shows that they have the same inode number, meaning they are actually one
and the same file:
$ ls -li emp.lst employee
29518 -rwxr-xr-x 2 kumar metal 915 May 4 09:58 emp.lst
29518 -rwxr-xr-x 2 kumar metal 915 May 4 09:58 employee
The number of links, which is normally one for unlinked files, is shown to be two. You
can create a third link, emp.dat, in the same way:
$ ln employee emp.dat
$ ls -li emp*
When one of these names is removed with rm, the link count comes down to two; another rm command will further bring it down to one.
A file is considered to be completely removed from the system when its link count drops to zero.
Sometimes it is difficult to remove all the links of a file, especially when they exist in multiple directories
and you don't know where they are; UNIX has special tools to locate such files in the system.
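The link count behaviour can be sketched end to end (the sample file content is only illustrative; here the count is read with awk from the second column of ls -l):

```shell
# Link a file, then watch the link count rise and fall.
cd /tmp
rm -f emp.lst employee
echo "sample record" > emp.lst
ln emp.lst employee

ls -li emp.lst employee    # same inode on both lines, link count 2

rm emp.lst                 # one name gone; the data survives
ls -l employee             # link count back to 1
```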
rm won't normally remove a directory, but it can remove files from one. You can remove the two
chapters from the progs directory without having to cd to it:
rm progs/chap01 progs/chap02
You may sometimes need to delete all the files of a directory as part of a clean-up
operation. The *, when used by itself, represents all files, and you can use rm like this:
$ rm *
$_ # all files gone!
DOS users, beware! When you delete files in this fashion, the system won't prompt you with a
message like "Are you sure?" or "All the files in the directory will be deleted" before removing the
files! The $ prompt returns silently, suggesting that the work has been done. The * used here is
equivalent to the *.* used in DOS.
INTERACTIVE DELETION (-i): as in cp, the -i (interactive) option makes the command
ask the user for confirmation before removing each file.
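The silent rm * above is easily demonstrated in a scratch directory (the /tmp/rmdemo path is only illustrative):

```shell
# In a scratch directory, rm * removes every file with no prompt.
mkdir -p /tmp/rmdemo && cd /tmp/rmdemo
touch a b c

rm *                 # no "Are you sure?" message
ls | wc -l           # 0: everything is gone

# The interactive form asks per file instead:
#   rm -i *
```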
mkdir patch
So far simple enough, but the UNIX system goes further and lets you create directory chains
with just one invocation of the command. For instance, the following command creates a directory
chain:
mkdir pis pis/progs pis/data
This creates three subdirectories: pis, and two subdirectories (progs and data) under pis. The order of
specifying the arguments is important: you obviously can't create a subdirectory before
creating its parent directory.
The rmdir command removes directories. You just have to do this to remove the
directory pis:
rmdir pis
Like mkdir, rmdir can also delete more than one directory in one shot. For instance, the
directories and subdirectories that were just created with mkdir can be removed by using
rmdir with the reverse set of arguments:
rmdir pis/data pis/progs pis
Note that when you delete a directory and its subdirectories, this reverse logic has to be
applied; the directory sequence used by mkdir is invalid in rmdir. Have you observed one
thing from the error message rmdir produces in that case? rmdir silently deletes the lowest-level
subdirectories, progs and data, and complains only about pis. This error message leads to two
important rules that you should remember while deleting directories.
1. You can't delete a directory unless it is empty.
In this case the pis directory couldn't be removed because of the existence of the subdirectories
progs and data under it.
2. You can't remove a subdirectory unless you are placed in a directory which is
hierarchically above the one you have chosen to remove.
The first rule follows logically from the example above. To illustrate the second cardinal rule, try
removing the progs directory by executing the command from that very directory:
$ cd progs
$ pwd
/user/kumar/pis/progs
$ rmdir progs                (trying to remove the current directory)
rmdir: progs: directory does not exist
To remove this directory you must position yourself in the directory above progs, i.e. pis, and
remove it from there:
$ cd /user/kumar/pis
$ pwd
/user/kumar/pis
$ rmdir progs
The mkdir and rmdir commands work only with those directories owned by the user. Generally,
the user is the owner of the home directory, and she can create and remove subdirectories in this
directory or in any subdirectory created by her. However, she normally won't be able to create
or remove files and directories in other users' directories.
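The two rules above can be sketched in one runnable pass: build the chain parent-first, then remove it with the arguments reversed, children before parent (run here from /tmp so the current directory is always above the ones being removed):

```shell
# One invocation builds the chain; a reversed list removes it.
cd /tmp
mkdir pis pis/progs pis/data
ls pis                           # data  progs

rmdir pis/data pis/progs pis     # lowest level first, parent last
```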
du: disk usage
You will often need to find out the consumption of a specific directory tree rather than an
entire file system.
The du (disk usage) command reports usage by a recursive examination of the directory tree.
This is how du lists the usage of /home/sales/tml1:
# du /home/sales/tml1
1154    /home/sales/tml1/forms
12820   /home/sales/tml1/data
136     /home/sales/tml1/database/safe
638     /home/sales/tml1/database
156     /home/sales/tml1/reports
25170   /home/sales/tml1          (also reports the summary at the end)
By default, du lists the usage of each subdirectory of its arguments, and finally produces a
summary. The list can often be quite big, and more often than not, you may be interested
only in a single figure that takes into account all these subdirectories. For this, the -s
(summary) option is quite convenient.
# du -s /home/sales/tml1
25170   /home/sales/tml1
du can also report on each file in a directory (-a option), but the list would be too big to be of
any use. You may instead look for some of the notorious disk eaters, and exception reporting
is what you probably need. There is a better command to do that (find), and it is taken up shortly.
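The -s summary can be tried on any directory tree; a minimal sketch (the /tmp/dudemo path is only illustrative, and the block figure will differ per system):

```shell
# Build a small tree and ask du for the single summary figure.
mkdir -p /tmp/dudemo/reports
echo "quarterly data" > /tmp/dudemo/reports/q1

du -s /tmp/dudemo       # one line: total blocks, then the directory name
```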
There are four file systems, of which the first two are always created by the system during
installation. The root file system (/dev/root) has 112778 blocks of disk space free. It also has
53,375 i-nodes free, which means that up to that many additional files can be created on this
file system. The system will continue to function until the free blocks (or) i-nodes are eaten
away, whichever occurs earlier. The total free space in the system is the sum of the free blocks
of the four file systems.
Linux produces a different output; df there shows the percentage disk utilization also:
$ df
Filesystem   1024-blocks   Used     Available   Capacity   Mounted on
/dev/hdb3    485925        342168   118658      74%        /
NOTE:
When the space in one file system is totally consumed, the file system can't borrow space
from another file system.
The -t (total) option includes the above output, as well as the total amount of disk space in the
file system. This time, we'll find out the space usage of the oracle file system only:
# df -t /dev/oracle
/oracle (/dev/oracle):  46838 blocks   119773 i-nodes
Total:                1024000 blocks   128000 i-nodes
The interpretation is simple enough: the total space allocated to this file system is 1,024,000
blocks for files, and 128,000 i-nodes.
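On a Linux machine the same kind of report can be pulled for a single file system by passing a path to df; a minimal sketch (the figures printed will of course differ per system):

```shell
# Report free space on the root file system; -k gives 1024-byte blocks.
df -k /

# The last line carries the figures: blocks, used, available,
# capacity and mount point (column order can vary between systems).
df -k / | tail -1
```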
Linux uses this option for a different purpose; there, the default output itself is informative enough.
$ ps
Like who, ps also generates header information. Each line shows the PID, the terminal (tty)
with which the process is associated, the cumulative processor time (TIME) that has been
consumed since the process was started, and the process name (CMD). Linux shows an
additional column that is usually an S (sleeping) or R (running). SCO UnixWare shows two
more, one of which shows the priority with which the process is running.
You can see that your login shell (sh) has the PID 659, the same number echoed by the
special variable $$. ps itself is an instance of a process, identified by the PID 684. There is yet
another, viz. login, which now appears on SCO OpenServer systems, but not on SCO UnixWare and
Linux systems. These are the only commands associated with the terminal /dev/tty03.
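The tie between $$ and the ps listing can be checked directly; a minimal sketch (the PIDs shown will of course differ from the 659/684 of the sample above):

```shell
# The shell's own PID is held in the special variable $$;
# ps -p restricts the listing to that single process.
echo "current shell PID: $$"
ps -p $$
```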
Unix maintains an account of all the current users of the system. It’s a good idea to know the
people working on the various terminals so that you can send those messages directly. A list of
them is displayed by who command, which by default produces a three columnar output:
$who
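The listing itself is missing from these notes; a representative output (user names taken from the discussion that follows, terminals and times illustrative) would be:

```
root       console    Sep 13 08:00
kumar      tty01      Sep 13 08:20
tiwary     tty02      Sep 13 07:39
```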
NOTE: The terminal names that you see in the who output are actually special files
representing the devices. These files are available in /dev. For example, the file tty01 can be
found in the /dev directory.
While it is a general feature of most unix commands to avoid cluttering the display with
header information, this command has a header option (-H). This option prints column
headers, and when clubbed with the -u option, provides a more detailed list:
$who -Hu
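The detailed listing itself did not survive in these notes; reconstructed from the description below (terminals, times and PIDs are illustrative), it might look like this:

```
NAME       LINE       TIME           IDLE   PID
root       console    Sep 13 08:00   .      28
kumar      tty01      Sep 13 08:20   .      89
tiwary     tty02      Sep 13 07:39   0:40   105
```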
Two users have logged out, so it seems. The first three columns are the same as before, but
it's the fourth column (IDLE) that is interesting. The entry shown against kumar indicates that
activity has occurred within the last one minute before the command was invoked. tiwary
seems to have been idling for the last 40 minutes.
The who command, when used with the arguments "am" and "i", displays a single line of
output only, i.e., the login details pertaining to the user who invoked the command.
$who am i
who is regularly used by the system administrator to monitor whether terminals are being
properly utilized. It also offers a number of other options that are quite useful for this
purpose. The Linux output differs, but the underlying idea does not.
Unix features a universal word-counting program. The command name is in fact a
misnomer; it counts lines, words and characters, depending on the options used. It takes one
or more filenames as its arguments, and displays a four-columnar output.
Before you use wc to make a count of the contents of the file infile, just use cat to have a
look at its contents.
$cat infile
I am the wc command
I count characters, words and lines
With options I can also make selective counts
You can now use wc without options to make a “word count” of the data in the file:
$wc infile
3 20 104 infile
wc counts 3 lines, 20 words and 104 characters. The filename has also been shown in the
fourth column. The meanings of these terms should be clear to you as they are used frequently.
wc offers three options to make a specific count. The -l option counts only the number of
lines, while the -w and -c options count words and characters, respectively.
$wc -l infile
When used with multiple filenames, wc produces a line for each file, as well as a total
count.
wc, like cat, doesn't merely work with files; it also acts on a data stream.
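Both points can be checked with a pair of throwaway files (the filenames here are illustrative):

```shell
# two small files to count
printf 'one two\nthree\n' > f1    # 2 lines, 3 words
printf 'four\n' > f2              # 1 line, 1 word

wc -l f1 f2        # one count per file, plus a "total" line
wc -l < f1         # given a stream, wc prints the count alone
```

In the second form wc reads standard input, so no filename appears in the fourth column of the output.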
The second field shows the user's full name, obtained from the fifth field in /etc/passwd.
The third field shows the terminal of the user, preceded by an asterisk if the terminal doesn't
have write permission. The fourth field shows the office location, which is also taken from the
fifth field of /etc/passwd.
Unlike who, finger can also provide details of a single user:
$finger summit
Login: summit (messages off)          Name: summitabha das
Office: ballygurj
Directory: /usr/summit                Shell: /bin/ksh
On since Feb 16 09:24:27 on tty05     8 minutes 31 seconds idle time
This shows something more; summit is using the Korn shell, has disabled messages to his
terminal (messages off) and has "no plan". If the user is not logged in, the output will show
the last login time.
finger in Linux also features two additional lines showing the date and time the user last
received and read mail:
If you don't know a user's login name, but only the full name, you can try using either the
first name or the last name as the argument to finger. If Henry was set up with the following
entry in /etc/passwd:
$finger james
NOTE: If you want to know the users who are idling, and their idle time, use finger.
The .plan and .project files: It is often necessary to leave behind your schedule and other
important information for others to see, especially if you are going on vacation. Since it is
simply not possible to send mail to all users, you can use finger to display the contents of two
files: $HOME/.plan and $HOME/.project (only the first on SCO UnixWare). If summit has
these two files, this is what finger would show at the end of the normal output:
Because of this e-mail feature, the finger command should be placed in the .profile. For
finger to report the contents of these files, they must be readable by all users.
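As an illustration, summit's two files might contain something like this (the contents are hypothetical):

```
$cat $HOME/.plan
On vacation till August 18; mail tiwary for the sales reports.
$cat $HOME/.project
Reorganisation of the sales reporting system.
```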
$ftp jill
Connected to jill.
220-
220 jill FTP server (Version 2.1WU(1)) ready.
Name (jill:henry): charlie            # Henry logs in as Charlie
331 Password required for charlie.
Password: ***********                 # Enter the password
230 User charlie logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> _
After establishing a connection with the server jill, ftp prompts for the username and the
password. The local username is offered as the default (henry), and if you had pressed the
<Enter> key, the system would have logged you in as henry. Since ftp can connect to non-unix
machines as well, it displays the remote machine's operating system type and the default mode
used to transfer files.
ftp displays the ftp> prompt when used without any arguments. You can then establish a
connection with its open command:
$ftp
ftp> open jill
Connected to jill.
220-
220 jill FTP server (Version 2.1WU(1)) ready.
Name (jill:sales): <Enter>
Password: <Enter>                     # <Enter> pressed without the password
530 Login incorrect.
Login failed.
ftp works in two stages. First, it makes a connection with the remote machine. This can
be done either by invoking ftp with the hostname (jill), or later, with the open command.
After the connection has been established, ftp asks for the username and password. At both the
prompts, the <Enter> key was pressed without supplying either. This leads to an unusual
situation where you have established a connection with the remote machine without actually
logging in to it. You are still on the local machine in every sense of the term.
To log in at this stage, you have to use the user command and then go through the usual login
sequence.
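Many ftp clients can also pick up the login details automatically from the file $HOME/.netrc, so that the username and password need not be typed at all. The entry below is illustrative; the file holds a password, so it must be readable by its owner alone:

```
machine jill login sales password secret123
```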
Termination of ftp is also done in two stages. First, you disconnect from the remote
machine with close, and then quit ftp with either bye or quit.
$ftp 192.168.0.2
Connected to 192.168.0.2.
220-
220 jill FTP server (Version 2.1WU(1)) ready.
Name (192.168.0.2:sales): <Enter>
331 Password required for sales.
Password: ***********                 # Enter the password
230 User sales logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> verbose                          # Turns off some ftp messages
Verbose mode off.
ftp> pwd
257 "/usr/sales" is current directory.
ftp> ls
..........
-rw-r--r--   1 sales  group    1498 Jul 25 18:34 exrc
-rw-r--r--   1 sales  group      20 Jul 25 18:37 login.sql
-rw-r--r--   1 sales  group  289312 Jul 25 18:22 perl
ftp> mkdir reports
ftp> cd reports
ftp> pwd
257 "/usr/sales/reports" is current directory.
ftp> cdup                             # Equivalent to cd ..
ftp> pwd                              # This is on the remote machine
257 "/usr/sales" is current directory.
ftp> !pwd                             # This is on the local machine
/home/henry/project3
ftp> delete exrc
ftp> mdelete login.sql vb4*           # * is interpreted on the remote machine
mdelete login.sql? y
mdelete vb4cab.2? y
Observe that mdelete prompts for each filename. This default behaviour is also displayed
by the mget and mput commands, which are also meant to be used with multiple files.
In a similar manner, you can copy a file from the remote machine with the get command. If
it is a text file (a shell script, for instance), set the mode to ASCII before you proceed with the
transfer:
ftp> ascii
200 Type set to A.
ftp> get send2kgp.sh                  # Copied under the same name
local: send2kgp.sh remote: send2kgp.sh
Both put and get also optionally use a second filename as argument:
put sales-menu.imp main-menu.imp
get send2kgp.sh kgp.sh
To make sure that you are retrieving the latest version of a file, use newer instead of get.
newer acts exactly like get when the remote file has a later modification date, but refuses to
initiate the transfer if it does not:
The wild-card pattern t*.sql is interpreted on the remote machine, and not on the local
machine. The mput command behaves in a similar manner, except that wild-card patterns are
expanded locally.
$telnet 192.168.0.2
Trying 192.168.0.2...
Connected to 192.168.0.2.
Escape character is '^]'.
SCO OpenServer(TM) Release 5 (jill) (ttyp0)
login:
You now have to enter your login name at this prompt, and then the password, to gain access
to the remote machine. As long as you are logged in, anything you type is sent to the remote
machine, and your machine just acts like any other dumb terminal. Any files that you use or any
commands that you run will always be on the remote machine. After you have finished, you
can press <Ctrl-d>, or type exit, to log out and return to your local shell.
The TELNET prompt
When telnet is used without the address, the system displays the telnet> prompt, from
where you can use its internal commands. This doesn't connect you to any remote machine,
but you can invoke a login session from here with open:
telnet> open jill
Trying 192.168.0.2...
Connected to jill.
Escape character is '^]'.
…….
…….
The "escape character" lets you make a temporary escape to the telnet> prompt so that you
can execute a command on your local machine. To invoke it, press <Ctrl-]>. You can then use
the ! with a unix command, say ls, to list files on the local machine:
$<Ctrl-]>
telnet> !ls -l *.sam
You can close a telnet session in two ways. First, you can use the shell's exit command;
alternatively, you can escape to the telnet prompt with <Ctrl-]>, close the session (close), and
then use quit to return to the local shell.
rlogin never prompts for the username (unlike telnet), and the above command sequence
allows Henry to log in to the remote machine with the same username, and without supplying a
password. For remote login to be possible, both machines must have entries for Henry in their
/etc/passwd files. A password may or may not be required, depending on the way the remote
machine is set up.
The -l option allows a user to log in with a different name:
rlogin -l charlie jill                # Henry logs in to jill with username charlie
For this to happen, it is not necessary for Charlie to have an account on the local machine. If
the remote machine was not specifically set up for Henry to log in, the system will prompt for
a password, which has to be Charlie's password on the remote machine.