System Logging
Syslog is a centralized logging facility that records different classes of events to log files and can also alert users when certain events occur. Because syslogd is configured by root, it is very flexible in its operation: a separate log file can be maintained for each daemon whose activity is being logged, or a single log file can be used for everything. The syslog service is controlled by the configuration file /etc/syslog.conf, which is read at boot time or whenever the syslog daemon receives a HUP signal. This file defines the facilities (the system sources of logged messages) and the conditions under which messages are logged. Priority levels are also assigned to system events recorded in the system log, while an action field defines what is done when a particular class of event is encountered. These events range from normal system usage, such as FTP connections and remote shells, to system crashes.
The source facilities defined by Solaris include the kernel (kern), authentication (auth), daemons (daemon), the mail system (mail), print spooling (lp), and user processes (user). Priority levels are classified as system emergencies (emerg), conditions requiring immediate attention (alert), critical errors (crit), informational messages (info), debugging output (debug), and other errors (err). These priority levels are defined for individual systems and architectures in <sys/syslog.h>.
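A quick way to see these facilities and priorities in action is the logger command, which submits a test message to syslogd at a nominated facility.level. The following is a minimal sketch (the message text is hypothetical, and where it ends up depends entirely on the rules in your syslog.conf):
# logger -p user.info "nightly disk check completed"
# logger -p daemon.crit "daemon test message at critical priority"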
Tip | It is easy to see how logging applications, such as TCP wrappers, can take advantage of the different error levels and source facilities provided by syslogd. |
On the Solaris platform, the syslog daemon depends on the m4 macro processor being present. m4 is typically installed with the software developer packages and is usually located in /usr/ccs/bin/m4; it has been installed by default since Solaris 2.4. Note that the syslogd supplied by Sun has been error-prone in previous releases: with early Solaris 2.x versions, the syslog daemon left behind zombie processes when alerting logged-in users (for example, when notifying root of an emerg condition).
Tip | If syslogd does not work, check that m4 exists and is in the path for root, and/or run the syslogd program interactively by invoking it with a -d parameter. |
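A minimal troubleshooting session along these lines might look like the following (the paths assume a standard Solaris installation, and the running daemon should be stopped before syslogd is started in debug mode):
# ls -l /usr/ccs/bin/m4
# /etc/init.d/syslog stop
# /usr/sbin/syslogd -d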
Examining Log Files
Log files are fairly straightforward in their contents, and you can stipulate what events are recorded by instructions in the syslog.conf file. Records of mail messages can be useful for billing purposes and for detecting the bulk sending of unsolicited commercial e-mail (spam). The system log will record the details supplied by sendmail: a message ID, when a message is sent or received, a destination, and a delivery result, which is typically “delivered” or “deferred.” Connections are usually deferred when a connection to a site is down.
Tip | sendmail will usually retry deferred deliveries at 4-hour intervals. |
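The queue-run interval is set when sendmail is started, via the -q option; for example, invoking sendmail as a daemon with a 4-hour queue interval looks like the following sketch (the actual interval on a given system depends on how the startup script invokes sendmail):
# /usr/lib/sendmail -bd -q4h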
When using TCP wrappers, connections to supported Internet daemons are also logged. For example, an FTP connection to a server will result in the connection time and date being recorded, along with the hostname of the client. A similar result is achieved for telnet connections.
A delivered mail message is recorded as
Feb 20 14:07:05 server sendmail[238]: AA00238: message-id=<bulk.11403.19990219175554@sun.com>
Feb 20 14:07:05 server sendmail[238]: AA00238: from=<sun-developers-l@sun.com>,
size=1551, class=0, received from gateway.site.com (172.16.1.1)
Feb 20 14:07:06 server sendmail[243]: AA00238: to=<pwatters@mail.site.com>,
delay=00:00:01, stat=Sent, mailer=local
whereas a deferred mail message is recorded differently:
Feb 21 07:11:10 server sendmail[855]: AA00855: message-id=<Pine.SOL.3.96.990220200723.5291A-100000@oracle.com>
Feb 21 07:11:10 server sendmail[855]: AA00855: from=<support@oracle.com>,
size=1290, class=0, received from gateway.site.com (172.16.1.1)
Feb 21 07:12:25 server sendmail[857]: AA00855: to=pwatters@mail.site.com,
delay=00:01:16, stat=Deferred: Connection timed out during user open with
mail.site.com, mailer=TCP
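Deferred messages remain in the mail queue until sendmail retries them, so a quick way to cross-check the log is to list the queue contents with the mailq command (equivalent to sendmail -bp):
# mailq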
An FTP connection is recorded in a single line,
Feb 20 14:35:00 server in.ftpd[277]: connect from workstation.site.com
in the same way that a telnet connection is recorded:
Feb 20 14:35:31 server in.telnetd[279]: connect from workstation.site.com
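Because each entry names the daemon that accepted the connection, individual services can be extracted from the log with standard text tools. The following sketch assumes the default /var/adm/messages log file:
# grep in.ftpd /var/adm/messages
# grep in.telnetd /var/adm/messages | awk '{print $NF}' | sort | uniq -c
The second command produces a count of telnet connections per client host.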
Logging Disk Usage
For auditing purposes, many sites generate a df report at midnight or during a change of administrator shifts, to record a snapshot of the system. In addition, if disk space is becoming an issue and extra volumes need to be justified in a systems budget, it is useful to be able to estimate how rapidly disk space is being consumed by users. Using the cron utility, you can schedule a script through crontab to check disk space at set intervals and mail this information to the administrator (or even post it to a web site, if system administration is centrally managed).
A simple script to monitor disk space usage and mail the results to the system administrator (root@server) looks like this:
#!/bin/csh -f
df | mailx -s "Disk Space Usage" root@localhost
As an example, if this script were named /usr/local/bin/monitor_usage.csh, and executable permissions were set for the nobody user, you could create the following crontab entry for the nobody user to run at midnight every night of the week:
0 0 * * * /usr/local/bin/monitor_usage.csh
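To install the entry, root can edit the nobody user's crontab directly, or prepare the entry in a file and load it while assuming the nobody identity. This is a sketch only: crontab -e requires the EDITOR environment variable to be set, and /tmp/nobody.cron is a hypothetical file containing the entry above.
# crontab -e nobody
# su nobody -c "crontab /tmp/nobody.cron"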
Or, you could make the script more general, so that users could specify another user who would be mailed:
#!/bin/csh -f
df | mailx -s "Disk Space Usage" $1
The crontab entry would then look like this:
0 0 * * * /usr/local/bin/monitor_usage.csh remote_user@client
The results of the disk usage report would now be sent to the user remote_user@client instead of root@localhost.
You can find further information on the cron utility and submitting cron jobs in Chapter 8.
Another way of obtaining disk space usage information with more directory-by-directory detail is by using the /usr/bin/du command. This command prints the sum of the sizes of every file in the current directory and performs the same task recursively for any subdirectories. The size is calculated by adding together all of the file sizes in the directory, where the size for each file is rounded up to the next 512-byte block. For example, taking a du of the /etc directory looks like this:
# du /etc
14 ./default
7 ./cron.d
6 ./dfs
8 ./dhcp
201 ./fs/hsfs
681 ./fs/nfs
1 ./fs/proc
209 ./fs/ufs
1093 ./fs
...
2429 .
Thus, /etc and all its subdirectories occupy a total of 2,429 512-byte blocks (use du -k to report sizes in kilobytes instead). Of course, this kind of output is fairly verbose and probably not of much use in its current form. If you were interested only in recording the directory sizes, in order to collect data for auditing and usage analysis, you could write a short Perl script to collect the data, as follows:
#!/usr/local/bin/perl
# directorysize.pl: reads in directory size for current directory
# and prints results to standard output
@du = `du`;
for (@du)
{
    ($sizes,$directories) = split /\s+/, $_;
    print "$sizes\n";
}
If you saved this script as directorysize.pl in the /usr/local/bin directory and set the executable permissions, it would produce a list of directory sizes as output, like the following:
# cd /etc
# /usr/local/bin/directorysize.pl
28
14
12
16
402
1362
2
418
2186
...
Because you are interested in usage management, you might want to modify the script to display the total amount of space occupied by a directory and its subdirectories, as well as the average amount of space occupied. The latter is very important when evaluating caching or investigating load-balancing issues:
#!/usr/local/bin/perl
# directorysize.pl: reads in directory size for current directory
# and prints the sum and average disk space used to standard output
$sum=0;
$count=0;
@du = `du -o`;
for (@du)
{
    ($sizes,$directories) = split /\s+/, $_;
    $sum=$sum+$sizes;
    $count=$count+1;
}
$average = ($count > 0) ? int($sum/$count) : 0;
print "Total Space: $sum K\n";
print "Average Space: $average K\n";
Note that du -o was used as the command, so that the space occupied by subdirectories is not added to the total for the top-level directory. The output from the command for /etc now looks like this:
# cd /etc
# /usr/local/bin/directorysize.pl
Total Space: 4832 K
Average Space: 70 K
Again, you could set up a cron job to mail this information to an administrator at midnight every night. To do this, first create a new shell script to call the Perl script, which is made more flexible by passing the directory to be measured, and the user to which the mail will be sent as arguments:
#!/bin/csh -f
cd $1
/usr/local/bin/directorysize.pl | mailx -s "Directory Space Usage" $2
If you save this script to /usr/local/bin/checkdirectoryusage.csh and set the executable permission, you could then schedule a disk space check of a cache file system. You could include a second command that sends a report for the /disks/junior_developers file system, which is remotely mounted from client, to the team leader on server:
0 0 * * * /usr/local/bin/checkdirectoryusage.csh /cache squid@server
1 0 * * * /usr/local/bin/checkdirectoryusage.csh /disks/junior_developers team_leader@server
Tip | Tools may already be available on Solaris to perform some of these tasks more directly. For example, the du -s command will return the sum of directory sizes automatically. However, the purpose of this section has been to demonstrate how to customize and develop your own scripts for file system management. |
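For example, combining the -s and -k options reports the total size of a directory tree in kilobytes rather than 512-byte blocks:
# du -sk /etc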
EXAM TIP | You will be required to interpret scripts in the exam. |
The syslog.conf File
The file /etc/syslog.conf contains information used by the system log daemon, syslogd, to forward a system message to appropriate log files and/or users. syslogd preprocesses this file through m4 to obtain the correct information for certain log files, defining LOGHOST if the address of “loghost” is the same as one of the addresses of the host that is running syslogd.
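Because the configuration file is passed through m4, a useful check when debugging is to run m4 over it manually and inspect the expanded output. This is a sketch that assumes m4 is installed in the default location:
# /usr/ccs/bin/m4 /etc/syslog.conf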
The default syslogd configuration is not optimal for all installations. Many configuration decisions depend on the degree to which the system administrator wishes to be alerted immediately should an alert or emergency occur, or whether it is sufficient for all auth notices to be logged and a cron job run every night to filter the results for a review in the morning. For noncommercial installations, the latter is probably a reasonable approach. A crontab entry like this,
0 1 * * * cat /var/adm/messages | grep auth | mail root
will send the root user a mail message at 1:00 A.M. every morning with all authentication messages.
A basic syslog.conf should contain provision for sending emergency notices to all users, as well as alerting the root user and other nonprivileged administrator accounts. Errors, kernel notices, and authentication notices probably need to be displayed on the system console. It is generally sufficient to log daemon notices, alerts, and all other authentication information to the system log file, unless the administrator is watching for cracking attempts, as shown here:
*.alert root,pwatters
*.emerg *
*.err;kern.notice;auth.notice /dev/console
daemon.notice /var/adm/messages
auth.none;kern.err;daemon.err;mail.crit;*.alert /var/adm/messages
auth.info /var/adm/authlog
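Remember that syslogd expects the selector and action fields to be separated by tab characters rather than spaces, and changes to the file take effect only when the daemon re-reads it. A reload can be forced without rebooting (the pid file location assumes the stock Solaris syslogd):
# kill -HUP `cat /etc/syslog.pid`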