The /etc/init.d directory contains the scripts that are used to start, stop, or otherwise control the
operation of system services. When the system changes run level, init, under the control of
the /etc/init/rc.conf file, calls the /etc/rc script to start the services that are required for the
new run level and to stop any currently running services that are not required.
System information
The files in the /proc directory hierarchy contain information about your system hardware
and the processes that are running on the system.
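For example, a few quick reads from /proc (standard paths on any Linux system):

```shell
# Kernel version and boot command line
cat /proc/version
cat /proc/cmdline

# CPU model names
grep "model name" /proc/cpuinfo

# Total and free memory, in kB
grep -E "^(MemTotal|MemFree):" /proc/meminfo

# Name and state of the init process (PID 1)
grep -E "^(Name|State):" /proc/1/status
```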
Text processing:
o awk
Process Monitoring:
1. top
o Press O to sort the processes by any column
o Press R to reverse the sort order
o Press k to kill a chosen process
o Press r to renice a process
o Press u to show only one user's processes (e.g. ehsan)
o Press z to toggle color and b to toggle bold/highlight
o Press c to toggle between the command name and its real path, e.g. Firefox -> /lib/app/firefox
o Press n to set the number of processes to show
2. w: shows who is logged on and what they are doing.
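The interactive keys above also have non-interactive equivalents, which are handier in scripts; the user name below is a placeholder:

```shell
# One snapshot of top in batch mode (no interactive keys), first 15 lines
top -b -n 1 | head -15

# Only one user's processes (like pressing u inside top), via ps
ps -u ehsan -o pid,%cpu,%mem,comm

# Who is logged on and what they are doing
w
```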
Process Management:
service --status-all
sar
fdisk -l
fdisk -l /dev/sda
mount 10.1.1.50:/home/nfs /home/nfs_local
The command rdist helps the system administrator install software or update
files across many machines. The process is launched from one computer.
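A minimal Distfile sketch for rdist (the host names and path are made up for illustration):

```
# Distfile: push /usr/local/bin to two (hypothetical) hosts
HOSTS = ( web1 web2 )
FILES = ( /usr/local/bin )

${FILES} -> ${HOSTS}
	install ;
	notify root ;
```

Launch it from the source machine with rdist -f Distfile.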
TASK SCHEDULING
To have a shell script run hourly, daily, weekly, or monthly, place it in the
appropriate directory:
/etc/cron.hourly/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/
The crond daemon executes scheduled tasks on behalf of cron, and it starts anacron once
every hour. crond looks for system cron job definitions in /etc/crontab or in
crontab-format files in /etc/cron.d.
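The crontab format referred to above, sketched (the script paths are hypothetical):

```
# /etc/crontab fields: minute hour day-of-month month day-of-week user command
# Run a (hypothetical) backup script at 02:30 every day as root:
30 2 * * * root /usr/local/sbin/backup.sh

# Per-user crontabs (edited with crontab -e) omit the user field:
30 2 * * * /home/ehsan/bin/backup.sh
```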
Network
/etc/hosts
/etc/nsswitch.conf /etc/resolv.conf
/etc/sysconfig/network
Bonding interfaces
http://docs.oracle.com/cd/E37670_01/E41138/html/ch11s05.html
ROUTING
Any changes that you make to the routing table using ip route do not persist across system reboots.
To permanently configure static routes, you can configure them by creating a route-interface file
in /etc/sysconfig/network-scripts for the interface. For example, you would configure a static
route for the eth0 interface in a file named route-eth0. An entry in these files can take the same
format as the arguments to the ip route add command. For example, to define a default gateway
entry for eth0, create an entry such as the following in route-eth0:
default via 10.0.2.1 dev eth0
The following entry in route-eth1 would define a route to 10.0.3.0/24 via 10.0.3.1 over eth1:
10.0.3.0/24 via 10.0.3.1 dev eth1
DHCP
client:
/etc/sysconfig/network-scripts/ifcfg-iface with BOOTPROTO=dhcp or none
DHCP server: /etc/dhcp/dhcpd.conf
DNS:
Install
nscd (name service cache daemon): Linux can run nscd, BIND, or dnsmasq as the name-service
caching daemon. Large and work-group servers may use BIND or dnsmasq as a dedicated caching
server to speed up queries.
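Name lookups can be exercised through the resolver stack (files, DNS, and nscd's cache when it is running) with getent:

```shell
# Resolve a host the same way applications do, via /etc/nsswitch.conf
getent hosts localhost

# Inspect nscd cache statistics (requires nscd to be installed and running):
# nscd -g
```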
netinet is the directory that stores the TCP and UDP protocol code.
netstat -t (TCP connections)
netstat -s (per-protocol statistics)
Installation Option:
startx
vim /etc/inittab
sar
Processor Tuning
Network Tuning
ttcp (testing tcp)
TCP tuning :
Time Wait
FIN_WAIT_2
WEBSERVER
SYN cookies
nstat
Memory:
vmstat -s
free -l
/proc/buddyinfo
pmap -d 1115 (the memory map of process 1115)
nmon
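The tools above ultimately read /proc/meminfo; the same figures can be pulled directly, e.g. with awk:

```shell
# Print MemFree as a percentage of MemTotal, straight from /proc/meminfo
awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} END {printf "%.1f%% free\n", 100*f/t}' /proc/meminfo
```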
Queues
Buffer queue limits: tcp_wmem limits the socket send buffer in bytes,
while the driver queue below it is sized in number of packets.
o Byte Queue Limits (BQL) is a feature in recent Linux kernels (>
3.3.0) which attempts to solve the problem of driver queue sizing
automatically. This is accomplished by adding a layer which enables and
disables queuing to the driver queue based on calculating the minimum
buffer size required to avoid starvation. The limits are exposed under
/sys/devices/pci0000:00/0000:00:14.0/net/eth0/queues/tx-0/byte_queue_limits
(still, BQL does not differentiate between flows and is FIFO), so above
the driver queue we have:
3. Queueing Discipline: the txqueuelen parameter controls the size of the queues in the
Queueing Discipline box for the default QDiscs
4. TCP Small Queues: limits queuing below the TCP layer (QDisc and
driver ring) per TCP session
o TCP Small Queues adds a per-TCP-flow limit on the
number of bytes which can be queued in the QDisc
o allowing no more than ~128KB per TCP socket in the qdisc/dev
layers at a given time
/proc/sys/net/ipv4/tcp_limit_output_bytes (default 131072)
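The per-socket limit is visible (and, as root, tunable) through procfs or sysctl; the 256KB value below is only an example:

```shell
# Current TCP Small Queues per-socket limit, in bytes (default 131072 = 128KB)
cat /proc/sys/net/ipv4/tcp_limit_output_bytes

# Raise it to 256KB (requires root):
# echo 262144 > /proc/sys/net/ipv4/tcp_limit_output_bytes
# or: sysctl -w net.ipv4.tcp_limit_output_bytes=262144
```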
KERNEL
lsmod
modinfo ahci
# modprobe -rv nfs
modprobe sch_netem
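Module options and blacklists persist through files under /etc/modprobe.d/; a sketch with placeholder names:

```
# /etc/modprobe.d/local.conf  (module and parameter names are placeholders)
# Set a module parameter whenever the module is loaded:
options <module> <parameter>=<value>
# Prevent a module from being auto-loaded:
blacklist <module>
```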
net.core.rmem_max
Specifies the maximum read socket buffer size. To minimize network packet loss, this buffer must be
large enough to handle incoming network packets.
net.core.wmem_max
Specifies the maximum write socket buffer size. To minimize network packet loss, this buffer must be
large enough to handle outgoing network packets.
net.ipv4.tcp_available_congestion_control
Displays the TCP congestion avoidance algorithms that are available for use. Use
the modprobe command if you need to load additional modules such as tcp_htcp to implement
the htcp algorithm.
net.ipv4.tcp_congestion_control
Specifies which TCP congestion avoidance algorithm is used.
net.ipv4.tcp_max_syn_backlog
Specifies the number of outstanding SYN requests that are allowed. Increase the value of this
parameter if you see synflood warnings in your logs, and investigation shows that they are
occurring because the server is overloaded by legitimate connection attempts.
net.ipv4.tcp_rmem
Specifies minimum, default, and maximum receive buffer sizes that are used for a TCP socket. The
maximum value cannot be larger than net.core.rmem_max.
net.ipv4.tcp_wmem
Specifies minimum, default, and maximum send buffer sizes that are used for a TCP socket. The
maximum value cannot be larger than net.core.wmem_max.
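The buffer-size parameters above can be set persistently in /etc/sysctl.conf and applied with sysctl -p; the values below are illustrative only, not recommendations:

```
# /etc/sysctl.conf -- example values only
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
# min, default, max in bytes; the max is capped by net.core.rmem_max/wmem_max
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
```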
vm.swappiness
Specifies how likely the kernel is to write loaded pages to swap rather than drop pages from the
system page cache. When set to 0, swapping only occurs to avoid an out of memory condition.
When set to 100, the kernel swaps aggressively. For a desktop system, setting a lower value can
improve system responsiveness by decreasing latency. The default value is 60.
Caution
This parameter is intended for use with laptops to reduce power consumption by the hard disk. Do
not adjust this value on server systems.
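For example, reading and (as root) lowering vm.swappiness; the value 10 is illustrative:

```shell
# Current swappiness (0-100, default 60)
cat /proc/sys/vm/swappiness

# Lower it for the running system (requires root):
# sysctl -w vm.swappiness=10
# Persist across reboots by adding to /etc/sysctl.conf:
# vm.swappiness = 10
```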
kernel.panic_on_oops
In an OCFS2 cluster, set the value to 1 to specify that a system must panic if a kernel oops occurs. If
a kernel thread required for cluster operation crashes, the system must reset itself. Otherwise,
another node might not be able to tell whether a node is slow to respond or unable to respond,
causing cluster operations to hang.
vm.panic_on_oom
If set to 0 (default), the kernel's OOM-killer scans through the entire task list and attempts to kill a
memory-hogging process to avoid a panic. When set to 1, the kernel panics but can survive under
certain conditions. If a process limits allocations to certain nodes by using memory policies or
cpusets, and those nodes reach memory exhaustion status, the OOM-killer can kill one process. No
panic occurs in this case because other nodes' memory might be free and the system as a whole
might not yet be out of memory. When set to 2, the kernel always panics when an OOM condition
occurs. Settings of 1 and 2 are intended for use with clusters, depending on your preferred
failover policy.