
ADVANCED UNIX PROGRAMMING

UNIT - I

A BRIEF SESSION – UNIX BIOGRAPHY:


UNIX, as the world knows it today, is the happy outcome of the proverbial rags-to-riches
story. What is now heralded as the most powerful and popular multi-user operating
system had a very humble beginning in the austere premises of AT&T's Bell Laboratories, the
fertile spawning ground of many a landmark in computer history.

The origin of UNIX can be traced back to 1965, when a joint venture was
undertaken by Bell Telephone Laboratories, the General Electric Company and the Massachusetts
Institute of Technology. The aim was to develop an operating system that could serve a large
community of users and allow them to store data if need be. This never-to-be enterprise was
called Multics, for Multiplexed Information and Computing Service. Even after much time,
resources and effort had been devoted to the project, the convenient, interactive computing
service envisioned by Ritchie failed to materialize. This led Dennis Ritchie and Ken Thompson, both of
AT&T, to start afresh on what their mind's eye had so illustriously envisioned. Thus in 1969,
the two, along with a few others, evolved what was to be the first version of the multi-user system
UNIX. Armed with a museum piece of a computer called the PDP-7, a rudimentary file system was
developed. Though this was not tapped to the fullest, it had all the trappings of a truly potent
multi-user operating system. This system was christened 'UNIX' by Brian Kernighan, as a
reminder of the ill-fated Multics. Later, in 1971, UNIX was ported to a PDP-11 computer with
a 512 KB disk. UNIX then was a 16 KB system with 8 KB for user programs and an upper limit
of 64 KB per file. All its assembly code being machine dependent, the version was not portable,
a key requirement for a successful operating system.

To remedy this, Ken Thompson created a new language, 'B', and set about the
Herculean task of rewriting the whole UNIX code in this high-level language. Ritchie sifted the
inadequacies of B and modified it into a new language, which he named 'C', the language that
finally enabled UNIX to stand tall on any machine.
Thus, by 1973, UNIX had come a long way from its PDP-7 days, and was soon
licensed to quite a number of universities, companies and other commercial institutions. With
its uncomplicated elegance it was charming a following perhaps more effortlessly than the Pied
Piper of the fables. The essentially accommodating nature of the system encouraged many a
developer to polish and enhance its capabilities, which kept it alive and with the times.
By the mid eighties there were more than a hundred thousand UNIX installations
running on anything from a micro to a mainframe computer and over numerous varying
architectures, a remarkable achievement for an operating system by any standard. Almost a
decade later, UNIX still holds the record for being the soul of more computer networks than
any other operating system.

SALIENT FEATURES OF UNIX:


The UNIX operating system offers several salient features, the most important of which
are discussed below.

1. Multi-user capability
Among its salient features, what comes first is its multi-user capability. In a multi-
user system, the same computer resources (hard disk, memory, etc.) are accessible to many users.
Of course, the users don't flock together at the same computer, but are given different
terminals to operate from. A terminal, in turn, is a keyboard and a monitor, which are the
input and output devices for that user. All terminals are connected to the main computer, whose
resources are availed by all users. So, a user at any of the terminals can use not only the
computer, but also any peripherals that may be attached, say for instance a printer. One can
easily appreciate how much more economical such a setup is than having as many computers as there are
users, and also how much more convenient when the same data is to be shared by all. Figure 1.1
shows a typical setup.
Figure 1.1: Several terminals connected to the host machine.

At the heart of a UNIX installation is the host machine, often known as a server (or)
a console. The number of terminals that can be connected to the host machine depends on the
number of ports that are present in its controller card. For example, a 4-port controller card in
the host machine can support 4 terminals. There are several types of terminals that can be
attached to the host. These are:

(a) Dumb Terminals

These terminals consist of a keyboard and a display unit with no memory or disk of
their own. They can never act as independent machines; if they are to be used, they have to be
connected to the host machine.

(b) Terminal Simulation


A PC has its own microprocessor, memory and disk drives. By attaching this PC to
the host through a cable and running suitable software on it, we can make it work as if it were
a dumb terminal. At such times, however, its memory and disk are not in use and the PC
cannot carry out any processing on its own. Like a dumb terminal, it transmits its processing
jobs to the host machine. The software that makes the PC work like a dumb terminal is called
TERMINAL EMULATION SOFTWARE. VTERM and XTALK are two such popularly used
packages.

(c) Dial-In Terminals


These terminals use telephone lines to connect with the host machine. To
communicate over telephone lines it is necessary to attach a unit called a modem to the terminal
as well as to the host. Figure 1.2 shows a typical layout of such a communication with the host
machine. The modem is required to transmit data over telephone lines.

Figure 1.2: A terminal connected to the host machine through a pair of modems over a telephone line.

2. Multitasking capability

Another highlight of UNIX is that it is multitasking, implying that it is capable of
carrying out more than one job at the same time. It allows us to type in a program in its editor
while it simultaneously executes some other command we might have given earlier; say, to sort
and copy a huge file. The latter job is performed in the "background", while in the
"foreground" we use the editor, take a directory listing, or whatever else. This is managed
by dividing the CPU time intelligently between all processes being carried out. Depending on
the priority of the task, the operating system appropriately allots small time slots (of the order
of milliseconds) to each foreground and background task.
The very concept of a multi-user operating system expects the same to be
multitasking too. We can say this because even when a user is executing only one command at
a time, the CPU is not dedicated to that solitary user. In all probability, there are ten more users
who also demand execution of their commands. UNIX, therefore, has to be on its toes all the
time, obliging all the users connected to it.

Although crude, MS-DOS also provides a multitasking capability. The type of
multitasking provided by MS-DOS is known as serial multitasking. In this type of multitasking,
one program is stopped temporarily while another is allowed to execute; at any given time only
one task is run. We can liken this to a situation in which a person working on a computer
stops his work to answer a ringing phone and then, having finished with the call, switches back
to the computer.

Most of us must have used SideKick or some other memory-resident program.
Once we load it into memory, a simple keystroke can take us from SideKick to another
program we may be running, or vice versa.

If, for example, we invoke SideKick in the middle of some calculation being done, then
all work on the calculations would be stopped as the computer responds to SideKick. Once we
are through with SideKick and we hit a key to get out of it, the calculations would then be
resumed. Wouldn't it be far better to give SideKick only a part of the computer time, so that
even while we were in SideKick, the calculations would carry on being performed in the
background? And this is exactly what UNIX does. Using the timer interrupt, it schedules the
CPU time between programs. These time periods are known as time-slices. If there were 10
programs running at one time, the microprocessor would keep switching between these 10
programs. At a given point in time the CPU will handle only one program, but because the
switch happens very fast we get the feeling that the microprocessor is working on all the
programs simultaneously.

Thus, the multitasking of UNIX is different from that of DOS, which does not give time-slices
to running programs. If there are 5 programs running in DOS and even one goes haywire,
the entire machine hangs. In any genuine multitasking environment like UNIX this does not
happen.

Does UNIX give equal time-slices to all programs running in memory? No.
There may be some programs that are relatively more important; for example, those that wait
for user responses are given a higher priority. Programs which have the same priority are
scheduled on a round-robin basis.
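The effect of background execution can be seen from the shell itself. The following is a minimal sketch (plain Bourne-style shell; the two sleep commands are stand-ins for long-running jobs): both tasks are put in the background with &, and the foreground merely waits for them.

```shell
#!/bin/sh
# Two stand-in "jobs": each would take 2 seconds on its own.
start=$(date +%s)
sleep 2 &        # background task 1
sleep 2 &        # background task 2
wait             # the foreground simply waits for both to finish
end=$(date +%s)
elapsed=$((end - start))
echo "elapsed: ${elapsed}s"   # about 2s, not 4s: the two tasks ran together
```

Had the two tasks run one after the other, as in DOS-style serial multitasking, the elapsed time would have been about 4 seconds.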

3. Communication

UNIX has excellent provision for communicating with fellow users. The
communication may be within the network of a single main computer, or between two (or)
more such computer networks. The users can easily exchange mail, data and programs
through such networks. Distance poses no barrier to passing information (or) messages to and
fro. Whether we are two feet away (or) two thousand miles, our mail will hardly take any time to
reach its destination.

4. Security

UNIX allows sharing of data, but not indiscriminately. Had it been so, it would be
the delight of mischief mongers and useless for any worthwhile enterprise. UNIX has three
inherent provisions for protecting data. The first is provided by assigning passwords and login
names to individual users, ensuring that not just anybody can come and have access to our work.

At the file level, there are read, write and execute permissions for each file which
decide who can access a particular file, who can modify it and who can execute it. We may
reserve read and write permissions for ourselves and leave others on the network free to execute
it, or any such combination.
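These permissions are set with the chmod command. A minimal sketch follows, using a throwaway file created with mktemp; the mode 711 is just an illustrative choice (full access for the owner, execute-only for everyone else).

```shell
#!/bin/sh
# Create a scratch file and give it mode 711:
# owner: read+write+execute; group and others: execute only.
f=$(mktemp)
chmod 711 "$f"
perms=$(ls -l "$f" | cut -c1-10)   # first column of ls -l, e.g. -rwx--x--x
echo "$perms"
rm -f "$f"
```

The first column of an ls -l listing displays exactly these permission bits; chmod also accepts symbolic arguments such as u+rwx,go+x.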
Lastly, there is file encryption. This utility encodes our file into an unreadable
format, so that even if someone succeeds in opening it, our secrets are safe. Of course, should
we want to see the contents, we can always decrypt the file.

5. Portability

One of the main reasons for the universal popularity of UNIX is that it can be
ported to almost any computer system with only the bare minimum of adaptations to suit the
given computer architecture. As of today, there are innumerable computer manufacturers
around the globe, and tens of hundreds of hardware configurations; more often than not,
UNIX is running strong on each one of them. And lest we forget, due credit for this feat must
be given to Dennis Ritchie's prodigy, C, which granted UNIX this hardware transparency.
UNIX, in fact, is almost entirely written in C.

UNIX SYSTEM ORGANISATION

The functioning of UNIX is managed in three levels. On the outer crust reside the
application programs and other utilities, which speak our language. At the heart of UNIX, on
the other hand, is the kernel, which interacts with the actual hardware in machine language.
The middle layer, called the shell, does the streamlining of these two modes of communication.
Figure 1.3 shows the three layers of the UNIX operating system.

Figure 1.3: The three layers of the UNIX OS: hardware at the core, surrounded by the kernel, then the shell, with tools, applications and users outermost.

The shell, or the command interpreter as it is called, is the mediator which
interprets the commands that we give and then conveys them to the kernel, which ultimately
executes them. We can imagine the kernel as a monarch who is in overall control of everything,
and the shell as its emissary.

The kernel has various functions. It manages files, carries out all the data transfer
between the file system and the hardware, and also manages memory. The onus of scheduling
the various programs running in memory (or) allocating CPU time to all running programs
also lies with the kernel. It also handles any interrupts issued, as it is the entity that has direct
dealings with the hardware.

The kernel program is usually stored in a file called 'unix', whereas the shell
program is in a file called 'sh'. For each user working with UNIX at any time, a different shell
program is running. Thus, at a particular point in time there may be several shells running
in memory but only one kernel. This is because, at any instant, UNIX is capable of executing
only one program while the other programs wait for their turn; and since it is the kernel which
executes the programs, one kernel is sufficient. However, different users at different terminals
are trying to seek the kernel's attention, and since each user interacts with the kernel through the
shell, different shells are necessary.

TYPES OF SHELL

Different people implemented the interpreter function of the shell in different ways.
This gave rise to various types of shells, the most prominent of which are outlined below:

Bourne Shell

Among all, Steve Bourne's creation, known after him as the Bourne shell, is the
most popular; probably that's why it is bundled with every UNIX system. Or perhaps it is the
other way round: because it was bundled with every system it became popular. Whatever the
cause and the effect, the fact remains that this is the shell used by most UNIX users. This will
also be the shell we shall be talking about extensively through the course of this material.

C Shell

This shell is a hit with those who are seriously into UNIX programming. Bill Joy,
then pursuing his graduate studies at the University of California at Berkeley, created it. It has
two advantages over the Bourne shell.

Firstly, it allows aliasing of commands. That is, we can decide what name we want
to call a command by. This proves very useful for renaming lengthy commands which are
used time and again. Instead of typing the entire command we can simply use the short alias at
the command line.

If we want to save even more on the typing work, the C shell has a command history
feature; this is the second benefit that comes with the C shell. Previously typed commands can be
recalled, since the C shell keeps track of them. This feature is similar to the one provided by the
program DOSKEY in the MS-DOS environment.

Korn Shell

If there was any doubt about the cause-effect relationship between the popularity of the
Bourne shell and its inclusion in every package, this adds fuel to it. The not-so-widely-used
Korn shell is very powerful, and is decidedly more efficient than the others. It was designed to
be so by David Korn of AT&T's Bell Laboratories.

THE FIRST FALTERING STEPS


We have done enough homework on UNIX; now to venture our first practical
contact with it. Given that our terminal is secured to the host computer and is powered on, our
display prompts for our login name. Each user is given a unique login name and a
password, which are like an entry pass to connect to the host machine as a user.

If we haven't been given a login name and password, we won't be able to gain
access to UNIX. After we enter our login name, we are prompted to enter the password, which,
when keyed in, does not appear on the display. Obviously this is to ensure that no chance (or
premeditated) passer-by is able to sneak in on it.

When we try to access our system, UNIX will display a prompt that looks
something like this:
Login: aa1
Password: SITAMS

After receiving the login prompt, we can enter our login name (aa1 in the above
example), after which we receive the password prompt. At this stage we must type in our
password (SITAMS in this example). The password, of course, would not appear on the screen.
The password we use should be kept private; it is the method used by UNIX to prevent
unauthorized entry into the system. The password should be changed frequently. On many
systems, after a specified period of time, our password expires (ages) and the next time we
login the system requires us to change our password. In addition, on most systems we can
change our password whenever we like, using a command that is taken up in a later
discussion.

Sometimes we may not type the login name (or) password properly. When we do
this, the system will respond with the following message:
Login: aa1
Password: SITAMS
Login incorrect.

Wait for login retry:


Login:

Note:
The system does not tell us which one is incorrect, the login name or the password. Again,
this is a security measure. Even if we type our login name improperly, we will still get the
password prompt. We usually get three to five attempts to get it right before our terminal is
disconnected. Many times a message is displayed to the system administrator telling him (or
her) that several unsuccessful attempts were made on our login name.

Once the correct login name and password have been supplied, we find some welcome
messages from the suppliers of the UNIX version installed on the host machine, followed by a
command prompt. The command prompt is a $ (dollar) if we are operating in the Bourne shell, or
a % if in the C shell.

UNIX UTILITIES - 1
Introduction to UNIX File System:

Before learning UNIX commands it is essential to understand the UNIX file system,
since UNIX treats everything it knows and understands as a file. All utilities, applications and
data in UNIX are stored as files. Even a directory is treated as a file, which contains several
other files. The UNIX file system resembles an upside-down tree. Thus, the file system begins
with a directory called root. The root directory is denoted by a slash (/). Branching from the root
there are several other directories called bin, lib, usr, etc, tmp and dev. The root directory also
contains a file called unix, which is the UNIX kernel itself. These directories are called sub-
directories, their parent being the root directory. Each of these sub-directories may in turn contain
several files and sub-directories of its own. The following figure shows the basic structure of the
UNIX file system:
/ (root)
|
unix   bin   lib   dev   usr   tmp   etc
                        |
         user1   user2   user3   user4
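These top-level directories can be inspected on almost any UNIX (or Linux) machine. A small sketch follows; exactly which directories exist varies from system to system, with /tmp and /usr being near-universal.

```shell
#!/bin/sh
# Report which of the classic top-level directories exist on this system.
for d in /bin /lib /dev /etc /tmp /usr; do
    if [ -d "$d" ]; then
        echo "$d: present"
    fi
done
```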

The main reason behind the creation of directories is to keep related files
together and separate them from other groups of related files. For example, all user-related
files are kept in the usr directory, all device-related files in the dev directory, all temporary
files in the tmp directory, and so on. Let us now look at the purpose of each of these directories.
The bin directory contains executable files for most of the UNIX commands.
UNIX commands can be either C programs or shell programs; shell programs are nothing but
collections of UNIX commands.
The lib directory contains all the library functions provided by UNIX for
programmers. Programs written under UNIX make use of these library functions in the lib
directory.
The dev directory contains files that control various input/output devices like
terminals, printers, disk drives, etc. For each device there is a separate file; in UNIX each device
is implemented as a file. For example, the terminal itself is represented by a file in the dev
directory, and everything that is displayed on your terminal is written to that file.
In the usr directory there are several directories, each associated with a
particular user. These directories are created by the system administrator when he creates
user accounts. Each user operates from his own directory (often called the home directory) and
can organize it by creating other subdirectories in it, to contain functionally related files.
Within the usr directory there is another bin directory, which contains
additional UNIX command files.
The tmp directory contains the temporary files created by UNIX (or) by
the users. Since the files present in it are created for a temporary purpose, UNIX can afford
to dispense with them; these files get automatically deleted when the system is shut down and
restarted.
All the aforementioned directories are present on almost all UNIX
installations. The following table captures the essence of these directories and their purpose:

Directory     Description

bin           Binary executable files
lib           Library functions
dev           Device-related files
etc           Binary executable files, usually required for system administration
tmp           Temporary files created by UNIX (or) users
usr           Home directories for all users
/usr/bin      Additional binary executable files

Following are the salient features of the UNIX file system:

 It has a hierarchical file system.
 Files can grow dynamically.
 Files have access permissions.
 All devices are implemented as files.
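The second feature, dynamic file growth, simply means that no size has to be declared when a file is created; writing more data makes the file bigger. A small sketch using a scratch file:

```shell
#!/bin/sh
# A UNIX file grows as data is appended; no size is fixed in advance.
f=$(mktemp)
printf 'first line\n' > "$f"
size1=$(wc -c < "$f")
printf 'a considerably longer second line\n' >> "$f"   # >> appends to the file
size2=$(wc -c < "$f")
echo "grew from $size1 to $size2 bytes"
rm -f "$f"
```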

THE vi EDITOR
No matter what work you do with a UNIX system, you will eventually write some C programs
(or) shell (or perl) scripts. You may also have to edit some of the system files at times. If you are
working on databases, you will also need to write SQL queries, scripts, procedures and triggers.
For all this, you must learn to use an editor, and UNIX provides a very versatile one: vi (the visual
editor).
vi is a full-screen editor now available with all UNIX systems, and is widely acknowledged
as one of the most powerful editors available in any environment. Another contribution of the
University of California, Berkeley, it owes its origin to William (Bill) Joy, a graduate student
who wrote this unique program. It became extremely popular, leading Joy to later remark that
he wouldn't have written it had he known that it would become so famous.
vi offers cryptic, and sometimes mnemonic, internal commands for editing work. It
makes complete use of the keyboard, where practically every key has a function. vi has
innumerable features, and it takes time to master most of them, but you don't need to do that
right away. As a beginner, you shouldn't waste your time learning the frills and nuances of the
editor. Editing is a secondary task in any environment, and a working knowledge is all that is
required initially.
Linux features a number of "vi" editors, of which vim (vi improved) is the most
common. Apart from vim, there are xvi, hvi, and elvis, which have certain exclusive functions
not found in the Berkeley version.

The three modes:


A vi session begins by invoking the command vi with (or without) a filename:
vi filename
You are presented a full empty screen, each line beginning with a ~ (tilde). This is vi's way
of indicating that these are non-existent lines. For text editing vi uses 24 of the 25 lines that are
normally available in a terminal. The last line is reserved for some commands that you can
enter to act on the text. This line is also used by the system to display messages. The filename
appears in this line with the message "filename" [new file].
When you open a file with vi, the cursor is positioned at the top left-hand corner of the
screen. You are said to be in the command mode. This is the mode where you can pass
commands to act on the text, using most of the keys of the keyboard. Pressing a key doesn't
show it on screen, but may perform a function like moving the cursor to the next line. You
can't use the command mode to enter (or) replace text.
There are two command mode functions that you should know right at this stage: the
spacebar and the backspace key. The spacebar takes you one character ahead, while the
backspace key (or Ctrl-h) takes you one character back. Backspacing in this mode doesn't delete
text at all.
To enter text, you have to leave the command mode and enter the input mode. There
are ten keys which, when pressed, take you to this mode, and whatever you enter then shows on the
screen. Backspacing in this mode, however, erases all characters that the cursor passes
through. To leave this mode, you have to press the <Esc> key.
You may have to save your file (or) switch to editing another file. Sometimes, you need to
make a global substitution in the file. Neither of the modes will quite do the work for you. You
then have to use the ex mode (or line mode), where you can enter instructions in the last line
of the screen. Some command mode functions also have ex mode equivalents. In this mode, you
can see only one line at a time, as you do when using EDLIN in DOS.
With this knowledge, we can summarize the three modes in which vi works:
 Input mode: where any key pressed is entered as text.
 Command mode: where keys are used as commands to act on text.
 ex mode: where ex mode commands can be entered in the last line of the screen to act
on text.
The relation between these three modes is depicted in the figure below: from the command mode one can move to either the input mode or the ex mode, and back again.
In fact, vi created a sensation when it appeared on the UNIX scene, since it was the first
full-screen editor. It allowed the user to view and edit the entire document at the same time.
Creating and editing files became a lot easier, and that's the reason it became an instant hit
with programmers.
There are several disadvantages in using vi. These are:
1. The user is always kept guessing. There are no self-explanatory error messages. If
anything goes wrong, no error message appears; only the speaker keeps informing you
that something went wrong.
2. The guy who wrote vi didn't believe in help, so there isn't any online help available
in vi. Incidentally, vi was written by Bill Joy when he was a student at the University of
California.
3. There are three modes in which the editor works. Under each mode the key pressed
creates a different effect. Hence the meanings of several keys and their effects in each mode
have to be memorized.
4. vi is fanatically case sensitive. An 'h' moves the cursor position to the left, whereas
'H' positions it at the top left corner. Moreover, you are required to remember both.

In spite of all the above disadvantages, vi is very popular even today. One of the major
reasons is that vi is available on almost every UNIX system, even one installed in Siberia.
vi can handle files that contain plain text. No fancy formatting, no fonts, no embedded
graphics or junk like that, just plain simple text. You can create files, edit files and print them.
It cannot do boldface, running headers (or) footers, italics, or all the other fancy stuff you need
in order to produce really modern, over-formatted, professional-quality memos.
Like all UNIX programs, vi is a power-packed editor. It's possibly the last word
in how a program can be non-user-friendly. While using vi, you time and again realize that it
possibly wants to make users aware that UNIX demands a certain level of maturity and
knowledge when it comes to using even its elementary editors. In many ways, vi sets the standard
in UNIX and presents the true no-nonsense picture that UNIX is built over. Even the most experienced
computer user can take a while to get accustomed to vi. In fact, one needs to develop a taste for
vi, and once you do that you will realize it is among the best editors in the world. Learning vi is a giant
step towards mastering the intricacies of UNIX.
Commands for positioning the cursor in the window

1. Positioning by character

Command      Function
h            moves the cursor one character to the left
Backspace    moves the cursor one character to the left
l            moves the cursor one character to the right
Spacebar     moves the cursor one character to the right
0 (zero)     moves the cursor to the beginning of the current line
$            moves the cursor to the end of the current line

2. Positioning by line

Command      Function
j            moves the cursor down one line from its present position, in the same column
k            moves the cursor up one line from its present position, in the same column
+            moves the cursor down to the beginning of the next line
-            moves the cursor up to the beginning of the previous line
Enter        moves the cursor down to the beginning of the next line

3. Positioning by word

Command      Function
w            moves the cursor right to the first character of the next word
b            moves the cursor back to the first character of the previous word
e            moves the cursor to the end of the current word

4. Positioning in the window

Command      Function
H            moves the cursor to the first line on the screen, or "home"
M            moves the cursor to the middle line on the screen
L            moves the cursor to the last line on the screen

5. Commands for positioning in a file

Command      Function
Ctrl-f       scrolls the screen forward a full window, revealing the window of text below the current window
Ctrl-b       scrolls the screen back a full window, revealing the window of text above the current window

6. Positioning on a numbered line

Command      Function
G            moves the cursor to the beginning of the last line in the file
nG           moves the cursor to the beginning of the nth line in the file

7. Commands for inserting text

Command      Function
a            enters text input mode and appends text after the cursor
i            enters text input mode and inserts text at the cursor
A            enters text input mode and appends text at the end of the current line
I            enters text input mode and inserts text at the beginning of the current line
o (small)    enters text input mode by opening a new line immediately below the current line
O (big)      enters text input mode by opening a new line immediately above the current line
R            enters text input mode and overwrites from the current cursor position onwards

8. Commands for deleting text

Command          Function
x                deletes the character at the current cursor position
X                deletes the character to the left of the cursor
dw               deletes a word, from the cursor to the next space or to the next punctuation
dd               deletes the current line
nx, ndw, ndd     deletes n characters, n words, or n lines respectively
d0               deletes the current line from the cursor to the beginning of the line
d$               deletes the current line from the cursor to the end of the line
Miscellaneous commands:

Command      Function
Ctrl-g       gives the line number of the current cursor position in the buffer and the modification status of the file
.            repeats the action performed by the last command
u            undoes the effect of the last command
U            restores all the changes to the current line since you moved the cursor to this line
J            joins the line immediately below the current line with the current line
~            changes the character at the current cursor position from uppercase to lowercase (or) from lowercase to uppercase
:sh          temporarily returns to the shell to perform some shell commands; type exit to return to vi
Ctrl-l       clears and redraws the current window

Commands for quitting vi:

Command          Function
ZZ               writes the buffer to the file and quits vi
:wq              writes the buffer to the file and quits vi
:w filename      writes the buffer to a new file filename; followed by :q, it quits vi
:q!              quits vi whether or not changes made to the buffer were written to a file; does not incorporate changes made to the buffer since the last write (:w) command
:q               quits vi, provided the changes made to the buffer were written to a file

UNIX COMMANDS

cp: (COPYING A FILE)

The cp command copies a file or a group of files. It creates an exact image of
the file on the disk with a different name. The syntax requires at least two filenames to be
specified in the command line. When both are ordinary files, the first is copied to the second:

cp chap01 unit1

If the destination file (unit1) doesn't exist, it will first be created before copying takes place. If
it does exist, it will simply be overwritten without any warning from the system. So be careful while
choosing a destination filename; just check with an ls command whether or not the file exists.

Even if there is only one file to be copied, the destination can be either an ordinary file
or a directory, so you have the option of choosing your own destination filename. The following examples
show two ways of copying the file to the progs directory:
cp chap01 progs/unit1
 chap01 copied to unit1 under progs

cp chap01 progs
 chap01 retains its name under progs
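These forms of cp can be tried safely in a scratch directory. In the sketch below, chap01, unit1 and progs are the same hypothetical names used above:

```shell
#!/bin/sh
# Work in a throwaway directory so nothing real gets overwritten.
cd "$(mktemp -d)"
echo 'chapter one' > chap01
mkdir progs
cp chap01 unit1            # both ordinary files: chap01 copied to unit1
cp chap01 progs/unit1      # copied as unit1 under progs
cp chap01 progs            # retains the name chap01 under progs
ls progs                   # progs now holds chap01 and unit1
```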

cp is often used with the shorthand notation . (dot) to signify the current directory as the
destination. For instance, to copy the file .profile from /user/sharma to your current directory,
you can use either of the two commands:

cp /user/sharma/.profile .profile
 Destination is a file
cp /user/sharma/.profile .
 Destination is the current directory

Obviously the second is preferable, because it requires fewer keystrokes. cp can also be used to copy more than one file with a single invocation of the command. In that case the last filename must be a directory. For instance, to copy the files chap01, chap02 and chap03 to the progs directory, you will have to use cp like this:

cp chap01 chap02 chap03 progs


The files retain their original names in the progs directory. If the files are already resident in progs, they will be overwritten. For the above command to work, the progs directory must exist, because cp won't create it.

In the UNIX system, the * is used to frame a pattern for matching more than one file. If there are only three files in the current directory having the common string "chap", you can compress the above sequence by using * as a suffix to chap:

cp chap* progs        # copies all files beginning with chap

NOTE:
cp overwrites the destination file without warning if it exists.
The * is used as shorthand for multiple filenames.

Interactive copying (-i)
The -i (interactive) option, originally a Berkeley enhancement, warns the user before overwriting the destination file. If unit1 exists, cp prompts for a response:

$ cp -i chap01 unit1
cp: overwrite unit1? y

A y at this prompt overwrites the file; any other response leaves it uncopied.

Copying directory structures (-r)

It is possible to copy an entire directory structure with the -r (recursive) option. The following command copies all the files and subdirectories in progs to newprogs:

cp -r progs newprogs

Since the process is recursive, all files resident in the subdirectories are copied too; cp here has to create the subdirectories if it does not find them during the copying process.

NOTE:
Sometimes it is not possible to copy a file; this happens if the source file is read-protected or the destination file (or directory) is write-protected.

mv: renaming files:

mv is similar to the MOVE command in DOS, and is used in like manner. It has two functions: renaming a file (or directory) and moving a group of files to a different directory. mv doesn't create a copy of the file; it merely renames it. No additional disk space is consumed during renaming. To rename the file chap01 to man01, you should use:

mv chap01 man01

If the destination doesn't exist, it will be created. mv, by default, does not prompt before overwriting the destination file if it exists, so be careful, again.

Like cp, a group of files can be moved, but only to a directory. The following command moves three files to the progs directory:

mv chap01 chap02 chap03 progs

mv can also be used to rename a directory. There is a -i option available with mv too, which behaves exactly as it does in cp; the messages are the same and require a similar response.
cp, rm and mv have the -v (or --verbose) option in Linux. This lets you print the name of each file as action is taken on it.

ln: LINKS:
When a file is copied, both the original and the copy occupy separate space on the disk. UNIX allows a file to have more than one name and yet maintain a single copy on the disk; changes made through one name are reflected in the other. The file is then said to have more than one link, i.e. it has more than one name. A file can have as many names as you want to give it, but the one thing common to all of them is that they all have the same inode number.

Windows (95 and NT) has a similar notion called shortcuts, but it is not easy to know how many shortcuts a file has. Moreover, shortcuts themselves occupy a certain amount of disk space. In UNIX you can easily learn the number of links of a file from the second column of the ls -l listing. The number is normally 1, but exceeds that figure for linked files.

Files are linked with the ln (link) command, which takes two filenames as arguments. The following command links emp.lst with employee:

ln emp.lst employee

The -i option of ls shows that they have the same inode number, meaning they are actually one and the same file:

$ ls -li emp.lst employee
29518 -rwxr-xr-x 2 Kumar metal 915 may 4 09:58 emp.lst
29518 -rwxr-xr-x 2 Kumar metal 915 may 4 09:58 employee

The number of links, which is normally one for unlinked files, is now shown to be two. You can think up a third name, emp.dat, for the same file:

$ ln employee emp.dat
$ ls -li emp*
29518 -rwxr-xr-x 3 Kumar metal 915 may 4 9:15 emp.dat
29518 -rwxr-xr-x 3 Kumar metal 915 may 4 9:58 emp.lst
29518 -rwxr-xr-x 3 Kumar metal 915 may 4 9:58 employee

All these linked files have equal status; it's not that one file contains the actual data and the others do not. This file simply has three aliases. Changes made through one name are automatically available through the others, for the simple reason that there is only a single copy of the file on the disk.
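The link behaviour described above can be tried out safely in a scratch directory. A minimal sketch follows; the filenames emp.lst and employee follow the text, while the file contents and the scratch-directory setup are illustrative additions.

```shell
# Work in a throwaway directory so no real files are touched (illustrative setup).
dir=$(mktemp -d)
cd "$dir"

echo "2233|charles harris" > emp.lst   # a small sample file
ln emp.lst employee                    # give the same file a second name

# With ls -li, both names show the same i-number in the first
# column and a link count of 2 in the third:
ls -li emp.lst employee

# A change made through one name is seen through the other,
# because there is only one copy of the data on the disk:
echo "1006|gordon" >> employee
wc -l emp.lst                          # now reports 2 lines
```

Note that `ln` here consumes no extra data space; only a new directory entry pointing at the existing inode is created.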
It may seem strange, but it is the rm command that removes a link:

$ rm emp.dat ; ls -l emp.lst employee
-rwxr-xr-x 2 Kumar metal 915 may 4 09:58 emp.lst
-rwxr-xr-x 2 Kumar metal 915 may 4 09:58 employee

The link count has come down to two. Another rm command will bring it further down to one. A file is considered to be completely removed from the system when its link count drops to zero. Sometimes it is difficult to remove all the links of a file, especially when they exist in multiple directories and you don't know where they are. UNIX has special tools to locate such files in the system.

rm: DELETING FILES:

Files can be deleted with rm (remove). Unlike its DOS counterpart DEL, it can delete more than one file with a single instruction. It normally operates silently, and should be used with caution. The following command deletes the first three chapters of the book:

rm chap01 chap02 chap03        # rm chap* should also do

rm won't normally remove a directory, but it can remove files from one. You can remove the two chapters from the progs directory without having to cd to it:

rm progs/chap01 progs/chap02

You may sometimes need to delete all the files of a directory as part of a clean-up operation. The *, when used by itself, represents all files, and you can use rm like this:

$ rm *
$ _        # all files gone!

DOS users, beware! When you delete files in this fashion, the system won't prompt you with the message "Are you sure?" or "All the files in the directory will be deleted" before removing the files. The $ prompt will return silently, suggesting that the work has been done. The * used here is equivalent to the *.* used in DOS.

INTERACTIVE DELETION (-i): As in cp, the -i (interactive) option makes the command ask the user for confirmation before removing each file:

$ rm -i chap01 chap02 chap03
chap01: ? y
chap02: ? n
chap03: ? y

A y removes the file; any other response leaves the file undeleted.

RECURSIVE DELETION (-r): With the -r option, rm performs a tree walk - a thorough recursive search for all subdirectories and the files within them. It then deletes all these files and subdirectories. rm won't normally remove directories, but when used with this option, it will. Therefore, when you issue the command

rm -r *

you will be deleting all files in the current directory, as well as all its subdirectories. If you don't have a backup, these files will be lost forever.
rm won't delete any file if it is write-protected, but will prompt for removal. The -f option overrides this protection too. It forces removal even if the files are write-protected:

rm -rf *        # forcibly removes everything

Note that you can't normally delete a file that you don't own, and a file once deleted can't be recovered.

mkdir: MAKING DIRECTORIES:

As in DOS, directories can be created with the mkdir (make directory) command. (The shorthand notation md is allowed only in Linux.) The command is followed by the names of the directories to be created. A directory patch is created under the current directory like this:

mkdir patch

You can create a number of subdirectories with one mkdir command:

mkdir patch dbs doc        # creates three directories

Simple enough so far, but the UNIX system goes further and lets you create directory chains with just one invocation of the command. For instance, the following command creates a directory tree:

mkdir pis pis/progs pis/data        # creates the directory tree

This creates three directories - pis, and two subdirectories under pis. The order of specifying the arguments is important: you obviously can't create a subdirectory before creating its parent directory. For instance, if you enter:

$ mkdir pis/data pis/progs pis
mkdir: cannot make directory: pis/data: no such file (or) directory (error 2)
mkdir: cannot make directory: pis/progs: no such file (or) directory (error 2)
Note that though the system failed to create the two subdirectories progs and data, it has still created the pis directory. Sometimes the system refuses to create a directory:

$ mkdir test
mkdir: can't make directory test

This can happen due to these reasons:

1. The directory test may already exist.
2. There may be an ordinary file by that name in the current directory.
3. The user doesn't have adequate authorization to create the directory.
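The ordering rule can be demonstrated in a scratch directory. This is a sketch, not part of the original text: the directory names follow the pis example above, while the temporary directory and error log are illustrative additions.

```shell
# Illustrative scratch area so nothing real is disturbed.
dir=$(mktemp -d)
cd "$dir"

# Wrong order: the children are named before their parent, so the
# two subdirectory creations fail (errors collected in err.log),
# but pis itself is still created.
mkdir pis/data pis/progs pis 2> err.log || true
ls pis                        # empty: progs and data are missing

# Correct order: parent first, then its subdirectories.
rm -r pis
mkdir pis pis/progs pis/data
ls pis                        # progs and data now exist
```

mkdir processes its arguments from left to right, which is why the parent must come first on the command line.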

rmdir: REMOVING DIRECTORIES:

The rmdir command removes directories. You just have to do this to remove the directory pis:

rmdir pis

Like mkdir, rmdir can also delete more than one directory in one shot. For instance, the directories and subdirectories that were just created with mkdir can be removed by using rmdir with a reverse set of arguments:

rmdir pis/data pis/progs pis

Note that when you delete a directory and its subdirectories, reverse logic has to be applied. The following directory sequence used with mkdir is invalid with rmdir:

$ rmdir pis pis/progs pis/data
rmdir: pis: directory not empty

Have you observed one thing from the error message? rmdir has silently deleted the lowest-level subdirectories progs and data. This error message leads to two important rules that you should remember while deleting directories:

1. You can't delete a directory unless it is empty. In this case, the pis directory couldn't be removed because of the existence of the subdirectories progs and data under it.
2. You can't remove a subdirectory unless you are placed in a directory which is hierarchically above the one you have chosen to remove.

The first rule follows logically from the example above. To illustrate the second cardinal rule, try removing the progs directory by executing the command from that very directory:

$ cd progs
$ pwd
/user/kumar/pis/progs
$ rmdir progs        # trying to remove the current directory
rmdir: progs: directory does not exist

To remove this directory you must position yourself in the directory above progs, i.e. pis, and remove it from there:

$ cd /user/kumar/pis
$ pwd
/user/kumar/pis
$ rmdir progs

The mkdir and rmdir commands work only with directories owned by the user. Generally, the user is the owner of her home directory, and she can create and remove subdirectories in this directory or in any subdirectory created by her. However, she normally won't be able to create or remove files and directories in other users' directories.
du: disk usage
You will often need to find out the disk consumption of a specific directory tree rather than an entire file system. The du (disk usage) command reports usage by a recursive examination of the directory tree. This is how du lists the usage of /home/sales/tml1:

# du /home/sales/tml1
1154   /home/sales/tml1/forms
12820  /home/sales/tml1/data
136    /home/sales/tml1/database/safe
638    /home/sales/tml1/database
156    /home/sales/tml1/reports
25170  /home/sales/tml1        # also reports the summary at the end

By default, du lists the usage of each subdirectory of its argument, and finally produces a summary. The list can often be quite big, and more often than not, you may be interested only in a single figure that takes all these subdirectories into account. For this, the -s (summary) option is quite convenient:

# du -s /home/sales/tml1
25170  /home/sales/tml1
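The difference between the default and -s outputs can be seen on a small directory tree built just for illustration (the tml1 name echoes the text; the file sizes are made up):

```shell
# Build a tiny sample tree in a scratch directory (illustrative).
dir=$(mktemp -d)
cd "$dir"
mkdir -p tml1/forms tml1/data
head -c 2048 /dev/zero > tml1/data/big   # a 2 KB sample file

du tml1       # one line per subdirectory, then the summary line last
du -s tml1    # only the summary line for the whole tree
```

The unit of the first column varies between systems (512-byte blocks on classic UNIX, 1 KB blocks on Linux), so the figures differ even for an identical tree.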

Assessing the space consumed by users:

Most of the dynamic space in a system is consumed by users' home directories and data files. You should use du -s to report on each user's home directory. The output is brief and yet quite informative:

# du -s /home/*
144208  /home/henry
98290   /home/image
13834   /home/local
28346   /home/sales

du can also report on each file in a directory (-a option), but the list would be too big to be of any use. You may instead look for some of the notorious disk eaters, and exception reporting is what you probably need. There's a better command to do that (find), and it is taken up shortly.

df: reporting free space

df (disk free) reports the amount of free space available on the disk. The output reports for each file system separately:

# df
/       (/dev/root):    112778 blocks    53375 i-nodes
/stand  (/dev/boot):     14486 blocks     3826 i-nodes
/oracle (/dev/oracle):   46836 blocks   119773 i-nodes
/home   (/dev/home):    107128 blocks    47939 i-nodes

There are four file systems, of which the first two are always created by the system during installation. The root file system (/dev/root) has 112778 blocks of disk space free. It also has 53,375 i-nodes free, which means that up to that many additional files can be created on this file system. The system will continue to function until the free blocks (or i-nodes) are eaten away, whichever occurs earlier. The total free space in the system is the sum of the free blocks of the four file systems.
Linux produces a different output; df there shows the percentage disk utilization as well:

$ df
File system   1024-blocks   Used     Available   Capacity   Mounted on
/dev/hdb3     485925        342168   118658      74%        /

NOTE:
When the space in one file system is totally consumed, the file system can't borrow space from another file system.

The -t (total) option includes the above output, as well as the total amount of disk space in the file system. This time, we'll find out the space usage of the oracle file system only:

# df -t /dev/oracle
/oracle (/dev/oracle): 46838 blocks 119773 i-nodes
                Total: 1024000 blocks 128000 i-nodes

The interpretation is simple enough: the total space allocated to this file system is 1,024,000 blocks for files, and 128,000 i-nodes. Linux uses this option for a different purpose; the default output itself is informative enough.

Note: The I-NODE

The entire hard disk is never available for storing user data. A certain area of every file system is always set aside to store the attributes of all the files in that file system. This set of attributes is stored in a fixed-format structure called the i-node. Every file has one i-node, and a list of such i-nodes is laid out contiguously in an area of the disk not directly accessible by any user. Each i-node is accessed by a number, called the i-number (or i-node number), that simply references the position of the i-node in the list.
The ls command uses the i-node to fetch the file attributes. Practically all the file attributes can be listed by using a suitable option with ls. One of them is the -i (inode) option that tells you the i-number of a file:

$ ls -il tule05
9059 -rw-r--r-- 1 henry metal 51813 jan 31 11:15 tule05

The file tule05 has the i-number 9059, and no other file can have this number unless tule05 is removed. In that case, the kernel may allocate it to a new file.

File system Mounting:

The interesting thing about a file system is that, once created, it is a logically separate entity with its own tree structure and root directory. It just sits there in a standalone mode; the main file system doesn't even know of its existence. These file systems unite to become a single file system, whose root directory is also the root directory of the unified file system. This happens through a process known as mounting, when all these secondary file systems attach themselves (are mounted) to the main file system at different points.
The UNIX mount and umount commands are used for mounting and unmounting file systems. The end result of mounting is that the user sees a single file system in front of her, quite oblivious of the possibility that a file moved from /oracle to /home may actually be moving between two hard disks!

(1) mount: Mounting File Systems

To mount (i.e. attach) a file system to the root file system, an empty directory (say /oracle) must first be made available in the main file system. The root directory of the new file system (say, the one on /dev/charlie) has to be mounted on this directory (/oracle). The point at which this linkage takes place is called the mount point.
Mounting is achieved with the mount command, which normally takes two arguments - the name of the file system and the directory under which it is to be mounted. For the charlie file system, this is how the command should be used:

mount /dev/charlie /oracle              # SCO UNIX
mount -t ext2 /dev/charlie /oracle      # Linux

After the device is mounted, the root directory of the file system created on /dev/charlie loses its separate identity. It now becomes the directory /oracle, and is made to appear as if it is part of the main file system. The main file system is thus logically extended with the incorporation of the secondary file system.
mount, when used in this way, requires both the device name and the mount point as its arguments. Note that Linux also requires the type of the file system (here, ext2) to be specified with the -t option. The mount point (/oracle) is normally an empty directory, but if it contains some files, these files won't be seen when the file system is mounted at this mount point. The files are seen again when the file system is unmounted.

(2) umount: Unmounting File Systems

Unmounting is achieved with the umount command (note the spelling!), which requires either the file system name or the mount point as argument. You can use either of the two commands to unmount the file system which was just mounted:

umount /oracle            # specify either the mount point
umount /dev/charlie       # or the device name of the file system

Just as you can't remove a directory unless you are placed in a directory above it, you can't unmount a file system unless you are placed above it. If you try to do that, this is what you will see:

# pwd
/oracle
# umount /dev/charlie
umount: /dev/charlie: device busy

Unmounting is also not possible if you have a file open in it. If a file is being edited with /usr/bin/vi, it's not possible to unmount the /usr file system as long as the file is being edited.
File systems are mounted during system startup, and unmounted when the system is shut down. If a file system is not clean, it will fail to mount, in which case it has to be checked for consistency with the fsck command. Moreover, there is also a system configuration file which specifies how and when these file systems are mounted and unmounted.

find: Locating files

find is an indispensable tool for the system administrator. It recursively examines a directory tree to look for files matching some criteria, and then takes some action on the selected files. It has an unusual syntax, but what makes it so powerful is that its search is recursive: it examines a directory tree completely (unless directed to ignore some directories).
find has a difficult command line, and if you ever wondered why UNIX is hated by many, then you should look up the find documentation. It's totally cryptic and doesn't help at all for an initial orientation. The best thing is to break up find's arguments into three components:

find path_list selection_criteria action

This is how find operates:
1) First, it recursively examines all files in the directories specified in path_list.
2) It then matches each file against one or more selection_criteria.
3) Finally, it takes some action on the selected files.

The path_list may consist of one or more subdirectories separated by white space. There can also be a host of selection_criteria that you can use to match a file, and multiple actions to dispose of the file. This makes the command difficult to use initially, but it is a program that every user must master, since it lets her make file selections under practically any condition.
find can easily locate all files named afiedt.buf (the file used by Oracle's SQL*Plus program) in the system:

# find / -name afiedt.buf -print
/usr/kumar/scripts/afiedt.buf
/usr/tiwary/scripts/reports/afiedt.buf
/usr/sharma/afiedt.buf

The path list (/) indicates that the search should start from the root directory. Each file in the list is then matched against the selection criteria (-name afiedt.buf), which always consists of an expression in the form -operator argument. If the expression matches the file (i.e. the file has the name afiedt.buf), the file is selected. The third section specifies the action (-print) to be taken on the file, in this case a simple display on the terminal. All find operators start with a -, and the path list can never contain one.
When find is used to match a group of filenames with a wild-card pattern, the pattern should be quoted to prevent the shell from looking at it:

find . -name "*.lst" -print        # all files with extension .lst

-name is not the only operator used in framing the selection criteria; there are many others. The actual list is much longer, and takes into account practically every file attribute.
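A minimal session shows why the pattern must be quoted: unquoted, the shell would expand *.lst against the current directory before find ever saw it. The directory and file names below are made up for illustration.

```shell
# Illustrative tree with .lst files scattered across subdirectories.
dir=$(mktemp -d)
cd "$dir"
mkdir -p scripts reports
touch scripts/a.lst reports/b.lst scripts/c.txt

# Quoted pattern: find itself matches *.lst in every subdirectory,
# regardless of what the current directory contains.
find . -name "*.lst" -print
```

Had the pattern been left unquoted in a directory containing, say, one x.lst file, the shell would have rewritten the command as find . -name x.lst -print, silently narrowing the search.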

umask: DEFAULT FILE PERMISSIONS:

Let us first create an empty file called sample using the touch command and then try to list it:

$ touch sample
$ ls -l sample
-rw-r--r-- 1 user1 group 24 jan 06 10:12 sample

How come the permissions for this file have been set to 644? Ideally speaking, whenever you create a file, UNIX should ask what permissions you would like to set for it. But then UNIX never spoon-feeds you. It assumes that you are an intelligent user who understands the importance of file system security, and sets up certain default permissions for the files that you create. What UNIX does is use the value stored in a variable called umask to decide the default permissions. umask stands for user file creation mask, the term mask implying which permissions to mask (or hide). The umask value tells UNIX which of the three permissions are to be denied rather than granted. The current value of umask can be determined by just typing umask:

$ umask
0022

Here, the first 0 indicates that what follows is an octal number. The three digits that follow the first zero refer to the permissions to be denied to the owner, group and others. This means that for the owner no permission is denied. Whenever a file is created, UNIX assumes that the permissions for it should be 666. But since our umask value is 022, UNIX subtracts this value from the default, which yields 666 - 022 = 644. This value is then used as the permissions for the file that you create. That is the reason why the permissions turned out to be 644, or rw-r--r--, for the file sample that we created.
Similarly, the system-wide default permissions for a directory are 777. This means that when we create a directory, its permissions would be 777 - 022, i.e. 755. You must be wondering what significance execute permission has for a directory, since we are never going to execute one. Execute permission for a directory has a special significance: if a directory doesn't have execute permission, we can never enter it.
Can we not change the current umask value? Very easily. All that we have to say is something like:

$ umask 242

This would see to it that from here onwards any new file that you create would have the permissions 424 (666 - 242) and any directory that you create would have the permissions 535 (777 - 242). Two extreme instances are shown below:

umask 666        # all permissions off
umask 000        # all read-write permissions on

The important thing to remember is that no one, not even the administrator, can turn on permissions not specified in the system-wide default settings. However, you can always use ----------------- as and when required.
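The arithmetic above can be checked directly. In this sketch the umask change is confined to a subshell so the rest of the session is unaffected; the names sample and workdir are illustrative.

```shell
# Illustrative scratch directory.
dir=$(mktemp -d)
cd "$dir"

(
  umask 022
  touch sample      # file:      666 - 022 -> 644 (rw-r--r--)
  mkdir workdir     # directory: 777 - 022 -> 755 (rwxr-xr-x)
)
ls -ld sample workdir
```

Strictly speaking, the kernel clears the bits set in umask rather than performing octal subtraction, but for masks like 022 whose digits never exceed the corresponding default digits, the two views give the same answer.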

ulimit: setting limits on file size

Faulty programs (or processes) can eat up disk space in no time. It is desirable to impose a restriction on the maximum size of a file that a user is permitted to create. This limit is set by the ulimit statement of the shell. The current setting is known by invoking the command without arguments:

# ulimit
2097151        # Linux shows "unlimited"

The default ulimit, expressed in units of 512 bytes, is set inside the kernel. Though an ordinary user can only reduce this default value, the superuser can increase it:

$ ulimit 20971510

You will often place this statement in /etc/profile so that every user has to work within the restriction placed by you.
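Because ulimit is a shell built-in, a lowered limit can be confined to a subshell, leaving the invoking shell untouched. A sketch follows; the value 1000 is arbitrary, and note that the unit of the argument varies (512-byte blocks in classic Bourne shells, 1024-byte blocks in bash).

```shell
# The current file-size limit of this shell
# (a number, or "unlimited" on most Linux systems):
ulimit

# Lower the limit inside a subshell; an ordinary user may only
# reduce it, and the parent shell's limit stays unchanged.
( ulimit 1000; ulimit )
ulimit
```

Placing an unguarded ulimit statement in /etc/profile, as the text suggests, makes the reduced limit the ceiling for every login session.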
ps: Process Status
The ps command is used to display the attributes of a process. It is one of the few commands of the UNIX system that has knowledge of the kernel built into it. It reads through the kernel data structures (or process tables) to fetch the process characteristics.
ps is also a highly variant command, the actual output depending on the version of UNIX as well as the hardware used. The unusual aspect of this variation is that the options themselves mean different things to different systems. For instance, ps -l in Linux approximates to ps -f in UNIX. Worse still, Red Hat and SuSE options are often quite incompatible. The output shown in the following examples was mainly obtained on SCO OpenServer; Linux and SCO UnixWare output would sometimes be different.
By default, ps lists out the processes associated with the user at this terminal:

$ ps

PID   TTY     TIME      CMD
476   tty03   00:00:01  login
654   tty03   00:00:01  sh
684   tty03   00:00:00  ps

Like who, ps also generates header information. Each line shows the PID, the terminal (TTY) with which the process is associated, the cumulative processor time (TIME) that has been consumed since the process was started, and the process name (CMD). Linux shows an additional column that is usually an S (sleeping) or R (running). SCO UnixWare shows two more, one of which shows the priority with which the process is running.
You can see that your login shell (sh) has the PID 654, the same number echoed by the special variable $$. ps itself is an instance of a process, identified by the PID 684. There is yet another, viz. login, that appears on the SCO OpenServer system, but not on SCO UnixWare and Linux systems. These are the only commands associated with the terminal /dev/tty03.

who: login details

UNIX maintains an account of all the current users of the system. It's a good idea to know the people working at the various terminals so that you can send them messages directly. A list of them is displayed by the who command, which by default produces a three-columnar output:

$ who

root     console  Jan 30 10:32
kumar    tty01    Jan 30 14:09
shukla   tty02    Jan 30 14:15
sharma   tty05    Jan 30 13:17

There are four users on the system, with their login names shown in the first column. The second column shows the device names of their respective terminals. Kumar has the name tty01 associated with his terminal, while Sharma's terminal has the name tty05. The third column shows the date and time of logging in.

NOTE: The terminal names that you see in the who output are actually special files representing the devices. These files are available in /dev. For example, the file tty01 can be found in the /dev directory.
While it is a general feature of most UNIX commands to avoid cluttering the display with header information, this command has a header option (-H). This option prints column headers, and when clubbed with the -u option, provides a more detailed list:

$ who -Hu

NAME    LINE   TIME          IDLE  PID  COMMENTS
kumar   tty01  Jan 30 14:19  .     30
tiwary  tty02  Jan 30 14:15  0:40  31

Two users have logged out, so it seems. The first three columns are the same as before, but it's the fourth column (IDLE) that is interesting. The entry shown against Kumar indicates that activity has occurred in the minute before the command was invoked. Tiwary seems to have been idling for the last 40 minutes.
The who command, when used with the arguments "am" and "i", displays a single line of output only, i.e. the login details pertaining to the user who invoked the command:

$ who am i

kumar   tty01   Jan 30 14:09

who is regularly used by the system administrator to monitor whether terminals are being properly utilized. It also offers a number of other options that are quite useful for him. The Linux output differs, but the underlying idea does not.

wc: Line, word and character counting

UNIX features a universal word-counting program. The command name is in fact a misnomer; it counts lines, words and characters, depending on the options used. It takes one (or more) filenames as its arguments, and displays a four-columnar output.
Before you use wc to make a count of the contents of the file infile, just use cat to have a look at its contents:

$ cat infile

I am the wc command
I count characters, words and lines
With options I can also make selective counts

You can now use wc without options to make a "word count" of the data in the file:

$ wc infile

3 20 104 infile

wc counts 3 lines, 20 words and 104 characters. The filename has also been shown in the fourth column. The meanings of these terms should be clear to you, as they are used frequently:

A line is any group of characters not containing a newline character.
A word is a group of characters not containing a space, tab or newline.
A character is the smallest unit of information, and includes all spaces, tabs and newlines.

wc offers three options to make a specific count. The -l option counts only the number of lines, while the -w and -c options count words and characters respectively:

$ wc -l infile
3 infile          # number of lines

$ wc -w infile
20 infile         # number of words

$ wc -c infile
104 infile        # number of characters

When used with multiple filenames, wc produces a line for each file, as well as a total count:

$ wc chap01 chap02 chap03

305    4058   23172  chap01
550    4732   28132  chap02
377    4500   25221  chap03
1232  13290   76525  total        # a total as bonus

wc, like cat, doesn't merely work with files; it also acts on a data stream.
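The same counts can be made on a stream fed through a pipe; the text below is made up for illustration:

```shell
# wc counting piped input instead of a file; no filename column
# appears because there is no file.
printf 'line one\nline two\nline three\n' | wc -l   # 3 lines
printf 'one two three four\n' | wc -w               # 4 words
echo "hello" | wc -c                                # 6 characters
```

Note that wc -c counts every byte on the stream, so the terminating newline supplied by echo is included in the character count of 6.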

finger: details of users

finger, a product from Berkeley, is a versatile command having communicative features. When used without arguments, it simply produces a list of logged-in users:

$ finger

LOGIN    NAME            TTY   IDLE   LOGIN TIME   WHERE
henry    Henry James     *01          Fri 08:56
tulee    Tata Infotech   03    16     Fri 09:26    camac st
root     Super User      *04   32     Fri 08:59
summit   Summitabha      *05   1:07   Fri 09:24    ballygurj

The second field shows the user's full name, obtained from the fifth field of /etc/passwd. The third field shows the terminal of the user, preceded by an asterisk if the terminal doesn't have write permission. The last field shows the office location, which is also taken from the fifth field of /etc/passwd.
Unlike who, finger can also provide details of a single user:

$ finger summit
Login: summit (messages off)        Name: Summitabha Das
Office: ballygurj
Directory: /usr/summit              Shell: /bin/ksh
On since Feb 16 09:24:27 on tty05   8 minutes 31 seconds idle time

This shows something more: summit is using the Korn shell, has disabled messages to his terminal (messages off) and has "no plan". If the user is not logged in, the output shows the last login time.
finger in Linux also features two lines showing the date and time the user last received and read mail:

New mail received Fri Apr 3 16:38 1998 (IST)
Unread since Fri Apr 3 16:14 1998 (IST)

If you don't know a user's login name, but only the full name, you can try using either the first name or the last name as the argument to finger. If Henry was set up with the following entry in /etc/passwd:

henry:x:200:50:Henry James:/usr/henry:/bin/ksh

you can finger him with his last name as well:

finger James

NOTE: If you want to know which users are idling, and their idle time, use finger.

The .plan and .project files: It is often necessary to leave behind your schedule and other
important information for others to see, especially if you are going on vacation. Since it is
simply not possible to send mails to all users, you can use finger to display contents of two
files: $home/ .plan and $home/ .project (only first in the SCO UNIX ware). If summit has
these two files, this is what finger could show at the end of the normal output:

Project: The tulee2 project should be completed by Feb 28, 1998.


Plan: The DTP people have to be contacted on Feb 25, 1998. A number of diagrams need to
be drawn with CorelDraw. The camera ready copies should be made ready by Feb 28, 1998.

Because of this e-mail feature, the finger command should be placed in the .profile. For
finger to report the contents of these files, they must be readable by all users.
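Setting this up is just a matter of creating the two files and making them world-readable. A sketch, using a scratch directory in place of the real home directory:

```shell
# Use a scratch directory in place of $HOME for this demonstration
home=$(mktemp -d)

echo 'The tulee2 project should be completed by Feb 28, 1998.' > "$home/.project"
echo 'The DTP people have to be contacted on Feb 25, 1998.'    > "$home/.plan"

# finger shows these files only if every user can read them
chmod 644 "$home/.plan" "$home/.project"

ls -l "$home/.plan" "$home/.project"   # both should show -rw-r--r--
```

In real use the files go in your actual home directory; finger then appends their contents to its normal output.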

FTP: File Transfer Protocol


TELNET doesn't allow you to transfer files between two machines. TCP/IP has a special
command, ftp, that can be used to transfer both binary and text files. It is immensely reliable because it
uses the TCP and not the UDP protocol. It also offers a number of UNIX-like directory-oriented
services, and many of its commands have similar names.
Like TELNET, FTP can also be invoked with or without the address. We'll use the alias jill this
time:

$ ftp jill
Connected to jill.
220-
220 jill FTP server (version 2.1W0CD) ready
Name (jill:henry): charlie          # Henry logs in as charlie
331 Password required for charlie
Password: ***********               # Enter the password
230 User charlie has logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> _

After establishing a connection with the server jill, ftp prompts for the username and the
password. The local username is provided as the default (henry), and if you had pressed the <Enter>
key, the system would have logged you in as henry. Since ftp can also connect to non-UNIX
machines, it displays the remote machine's operating system and the default mode used to transfer files.
ftp displays the ftp> prompt when used without any arguments. You can then establish a
connection with its open command:

$ ftp
ftp> open jill
Connected to jill.
220-
220 jill FTP server (version 2.1WUCD) ready.
Name (jill:sales): <Enter>
Password: <Enter>                   # Enter pressed without a password
530 Login incorrect
Login failed

ftp works in two stages. First, it makes a connection with the remote machine. This can
be done either by invoking ftp with the hostname (jill), or later, with the open command.
After the connection has been established, ftp asks for the username and password. At both
prompts, the <Enter> key was pressed without supplying either. This leads to an unusual
situation where you have established a connection with the remote machine without actually
logging in to it. You are still on the local machine in every sense of the term.
To log in at this stage, you have to use the user command and then go through the usual login
sequence.

ftp> user charlie
331 Password required for charlie
Password: ***********

Termination of ftp is done in two stages. First, you disconnect from the remote
machine with close, and then quit ftp with either bye or quit.

ftp> close               # this step can be skipped
221 Goodbye
ftp> bye                 # you can also use quit
$ _
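The whole open-user-transfer-close-bye sequence can also be driven non-interactively by feeding ftp its commands on standard input with auto-login disabled (the -n option). The sketch below only builds and displays such a command list; no server is contacted, and the host and user names are simply the ones from the sessions above:

```shell
# Command list that ftp would read from stdin; no connection is made here
cat > ftp.cmds <<'EOF'
open jill
user charlie
binary
put sales-menu.imp
close
bye
EOF

# A real batch run would be:  ftp -n < ftp.cmds
cat ftp.cmds
```

In a real run, the password would still be prompted for (or supplied by other means), since it should never sit in a plain command file.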

Basic file and directory handling:


ftp has all the basic facilities needed to handle files and directories on the remote machine,
like pwd, ls, cd, mkdir and chmod. You can delete a single file (delete), multiple files (mdelete)
or rename a file (rename). However, you must at all times remember that all these commands
apply only to the remote machine, and not the local machine.
If you have to use the operating system commands on the local machine, you can use the ! in
the usual manner. Since the ! doesn't work with the cd command, ftp offers the lcd (local cd)
command to do the job. The following session tells most of the story:

$ ftp 192.168.0.2
Connected to 192.168.0.2.
220-
220 jill FTP server (version 2.1WU(1)) ready
Name (192.168.0.2:sales):
331 Password required for sales
Password: ***********                # Enter the password
230 User sales logged in
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> verbose                         # turns off some ftp messages
Verbose mode off
ftp> pwd
257 "/usr/sales" is current directory
ftp> ls
..........
-rw-r--r--  1 sales  group    1498 Jul 25 18:34 exrc
-rw-r--r--  1 sales  group      20 Jul 25 18:37 login.sql
-rw-r--r--  1 sales  group  289312 Jul 25 18:22 perl
ftp> mkdir reports
ftp> cd reports
ftp> pwd
257 "/usr/sales/reports" is current directory
ftp> cdup                            # equivalent to cd ..
ftp> pwd                             # this is on the remote machine
257 "/usr/sales" is current directory
ftp> !pwd                            # this is on the local machine
/home/henry/project3
ftp> delete exrc
ftp> mdelete login.sql vb4*          # * is interpreted on remote machine
mdelete login.sql? y
mdelete vb4cab.2? y

Observe that mdelete prompts for each filename. This default behaviour is also displayed
by the mget and mput commands, which are also meant to be used with multiple files.

Put and get: transferring single files


Files can be transferred both ways; put copies a local file to the remote machine, while
get copies it from the remote machine. The only thing you may have to set before initiating a
transfer is the mode. If you have a binary file to transfer to the remote machine,
the mode has to be set to binary before using put:

ftp> binary                          # the default for OpenServer and Linux
200 Type set to I
ftp> put sales-menu.imp              # copied under same name
local: sales-menu.imp remote: sales-menu.imp
200 PORT command successful
150 Opening BINARY mode data connection for sales-menu.imp
226 Transfer complete
6152 bytes sent in 0.04 seconds (150.20 kbytes/s)
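The rate on the last line is just bytes divided by seconds, expressed in kbytes of 1024 bytes each; a quick check with awk:

```shell
# 6152 bytes sent in 0.04 seconds, reported in kbytes/s
awk 'BEGIN { printf "%.2f kbytes/s\n", 6152 / 0.04 / 1024 }'   # 150.20 kbytes/s
```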

In a similar manner, you can copy a file from the remote machine with the get command. If
it is a text file (a shell script, for instance), set the mode to ASCII before you proceed to
transfer it:

ftp> ascii
200 Type set to A
ftp> get send2kgp.sh                 # copied under same name
local: send2kgp.sh remote: send2kgp.sh

Both put and get also optionally use a second filename as argument:

put sales-menu.imp main-menu.imp
get send2kgp.sh sendtokgp.sh
To make sure that you are retrieving the latest version of the file, use newer instead of get.
newer acts exactly like get when the remote file has a later modification date, but refuses to
initiate transfer if it does not:

ftp> newer exceptions
Local file "exceptions" is newer than remote file "exceptions"

mput and mget: Transferring multiple files


For transferring multiple files, use the mput and mget commands, which otherwise behave
identically to put and get. However, there are two features which ftp offers here that can be
used to advantage: wild cards and prompting:
ftp> ascii
200 Type set to A
ftp> mget t*.sql                     # * is interpreted on remote machine
mget t-alloc2.sql? y
mget t-inv-alloc.sql? n
...........

The wild-card t*.sql is interpreted on the remote machine and not on the local machine.
The mput command behaves in a similar manner, except that wild-card patterns are
expanded locally.
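The local expansion that mput relies on is nothing special: it is the shell's ordinary filename globbing. A small self-contained demonstration in a scratch directory:

```shell
# mput t*.sql sends whatever the LOCAL shell expands t*.sql to
dir=$(mktemp -d)
cd "$dir"
touch t-alloc2.sql t-inv-alloc.sql notes.txt

# The pattern is expanded here, by the shell, before any transfer starts
set -- t*.sql
echo "matched: $*"   # matched: t-alloc2.sql t-inv-alloc.sql
```

With mget, by contrast, the pattern is passed through unexpanded and the remote server does the matching, which is why the two commands can behave differently with the same pattern.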

TELNET: remote login


Every UNIX vendor offers the telnet and rlogin utilities in its TCP/IP package to connect to a
remote UNIX system. TELNET belongs to the DARPA command set, while rlogin is a member
of the Berkeley set of r-utilities. You must have an account on the remote machine to use telnet,
with the machine's IP address as argument:

$ telnet 192.168.0.2
Trying 192.168.0.2...
Connected to 192.168.0.2.
Escape character is '^]'.
SCO OpenServer(TM) Release 5 (jill) (ttyp0)
login:

You now have to enter your login name at this prompt, and then the password, to gain access
to the remote machine. As long as you are logged in, anything you type is sent to the remote
machine, and your machine just acts as any other dumb terminal. Any files that you use, or any
commands that you run, will always be on the remote machine. After you have finished, you
can press <Ctrl-d>, or type exit, to log out and return to your local shell.
The TELNET prompt
When telnet is used without the address, the system displays the telnet> prompt, from
where you can use its internal commands. This doesn't connect you to any remote machine,
but you can invoke a login session from here with open:

telnet> open jill
Trying 192.168.0.2...
Connected to jill.
Escape character is '^]'.
.......
.......
The "escape character" lets you make a temporary escape to the telnet> prompt so that you
can execute a command on your local machine. To invoke it, press <Ctrl-]>. You can then use
the ! with a UNIX command, say ls, to list files on the local machine:

$ <Ctrl-]>
telnet> !ls -l *.sam

You can close a telnet session in two ways. First, you can use the shell's exit command; or you can
escape to the telnet> prompt with <Ctrl-]>, close the session (close), and then use quit to return to
the local shell.

rlogin: remote login without password


TCP/IP is also shipped with the r-utilities, a comprehensive set of tools from Berkeley,
featuring remote login (rlogin), remote copying (rcp) and remote command execution (rsh or
rcmd). These commands were developed for UNIX machines only. Though they have been
somewhat overshadowed by the DARPA commands telnet and ftp, the r-utilities do possess
some advantages.
Like telnet, the rlogin command can also be used for remote login. After you log in, you won't
know which utility you have used:
$rlogin 192.168.0.2
Last successful login for Henry: Wed Aug 27 12:17:24 2006 on ttyp0
Last unsuccessful login for Henry: Wed Aug 27 12:17:12 2006 on ttyp0
……
……

rlogin never prompts for the username (unlike telnet), and the above command sequence
allows Henry to log in to the remote machine with the same username, and without supplying a
password. For remote login to be possible, both machines must have entries for Henry in their
/etc/passwd files. A password may or may not be required, depending on the way the remote
machine is set up.
The -l option allows a user to log in with a different name:

rlogin -l charlie jill        # Henry logs in to jill with username charlie

For this to happen, it is not necessary for charlie to have an account on the local machine. If
the remote machine was not specifically set up for Henry to log in, the system will prompt for
the password, which has to be charlie's password on the remote machine.
