Section 5 - Local administration

Diarmuid O'Briain, diarmuid@obriain.com
28-04-2014, version 2.0

Last updated: 10-05-2014 23:20


  1. Linux Local Administration elements
  2. Boot and run levels
  3. Runlevel
  4. Monitoring system state
  5. Memory
  6. Disks and file systems
  7. File systems
  8. Users and groups
  9. Printing services
  10. Disk management
  11. RAID software
  12. Logical Volume Manager (LVM)
  13. Updating Software
  14. Batch jobs

1. Linux Local Administration elements

1.1. Boot Loader

The GRand Unified Bootloader (GRUB) boot loader has a text-mode configuration file: /boot/grub/menu.lst (or grub.conf) for GRUB legacy, and /boot/grub/grub.cfg, generated from /etc/default/grub, for GRUB 2.

1.2. Management of alternatives

/etc/alternatives: If there is more than one equivalent program present for a specific task, the alternative that will actually be used is selected through symbolic links in this directory. This system was borrowed from Debian, which uses it extensively in its distribution.
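On Debian-family systems the update-alternatives tool manages these links; for example (the editor group is standard, though the listed paths and output will vary by system):

$ update-alternatives --list editor

  /bin/nano
  /usr/bin/vim.basic

$ sudo update-alternatives --config editor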

1.3. TCP/IP - xinetd

The extended Internet daemon (xinetd) manages Internet-based connectivity. It is a more secure extension of inetd, the original Internet daemon, and most modern Linux distributions have switched to it. xinetd listens for incoming requests over the network and launches the appropriate service to handle each request. Requests are identified by port numbers, which are mapped to services in the /etc/services file; xinetd launches another daemon to handle each request.
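As a hedged illustration, a service definition in /etc/xinetd.d/ looks roughly like this (the built-in daytime service is used purely as an example):

  service daytime
  {
      disable      = no
      type         = INTERNAL
      id           = daytime-stream
      socket_type  = stream
      protocol     = tcp
      user         = root
      wait         = no
  }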

1.4. Configuration directories

1.5. Hardware management

udev is a device manager for the Linux kernel. Primarily, it manages device nodes in /dev. It has succeeded devfs and hotplug, which means that it handles the /dev directory and all user space actions when adding/removing devices, including firmware loading.
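As a sketch, a custom rule file such as /etc/udev/rules.d/81-backup.rules (the file name, serial number and symlink are hypothetical) can give a particular disk a stable name whenever it is plugged in:

  # Create /dev/backup whenever the disk with this serial number appears
  SUBSYSTEM=="block", ATTRS{serial}=="ABC123", SYMLINK+="backup"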

2. Boot and run levels

Boot and run levels determine the current working mode of the system and the services provided at each level.

A service is a functionality provided by the machine, based on background processes called daemons that handle network requests, hardware activity or other programs that perform assorted tasks.

The services can be activated or halted using scripts. Most standard services, which are usually configured in the /etc directory, tend to be controlled with the scripts in /etc/init.d/. Scripts named after the service to which they correspond appear in this directory, and they usually accept start and stop parameters. The following actions can be taken:

/etc/init.d/service start start the service
/etc/init.d/service stop stop the service
/etc/init.d/service restart stop and subsequent restart of the service

When a GNU/Linux system starts up, first the system's kernel is loaded, then the first process begins; this INITialisation process is called init and it executes and activates the rest of the system, through the management of different runlevels.

3. Runlevel

A runlevel is the state in which a system boots: single-user mode, multi-user mode, with networking, with graphics, etc. The following are the default runlevels expected in a GNU/Linux system.

Runlevel Function
0 Halt or shut down the system
1 Single-user mode
2 Multi-user mode, without networking
3 Full multi-user mode, with NFS (typical for servers)
4 Officially not defined; unused
5 Full multi-user mode, with NFS and graphics (typical for desktops)
6 Reboot
s, S or single Alternate single-user mode
emergency Bypass rc.sysinit

3.1. Initial System boot

After the system is turned on, the Basic Input/Output System (BIOS) or the Unified Extensible Firmware Interface (UEFI) takes control of the computer, detects the disks, loads the Master Boot Record (MBR) and executes the GRand Unified Bootloader (GRUB). The bootloader takes over, finds the kernel on the disk, loads it and executes it. The kernel is then initialised and starts to search for and mount the partition containing the root filesystem, finally executing the first program, init. Frequently, the root partition and init are located in a virtual filesystem that only exists in Random Access Memory (RAM) and is therefore called initramfs, formerly the initialisation RAM disk (initrd). This filesystem is loaded into memory by GRUB, often from a file on a hard drive or from the network. It contains the bare minimum required by the kernel to load the actual root filesystem: driver modules for the hard drive or other devices without which the system cannot boot, or initialisation scripts and modules for assembling Redundant Array of Independent Disks (RAID) arrays, opening encrypted partitions, activating Logical Volume Manager (LVM) volumes, etc. Once the root partition is mounted, initramfs hands over control to the actual system init, and the machine goes back to the standard boot process.

Currently there are a number of different init systems in use: Debian uses the original UNIX System V init, Ubuntu and Red Hat Enterprise Linux 6 use Upstart, and Fedora and Arch Linux use systemd.

3.2. System V

System V has been the mainstay of GNU/Linux and UNIX initialisation for a long time. Recently a transition has been underway: while System V is still the initialisation system in Debian, it is no longer used in Fedora and Ubuntu.

A runlevel is basically a configuration of programs and services that will be executed in order to carry out determined tasks. Currently there are a number of systems that are used depending on the operating system. Here is the traditional system known as System V which is provided by sysv-rc.

In the System V model, when the init process (PID 1) begins, it uses a configuration file called /etc/inittab to decide on the execution mode it will enter. This file defines the default runlevel at start-up (the initdefault entry; runlevel 2 in Debian) and a series of terminal services that must be activated so that users may log in.
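The relevant lines of a Debian-style /etc/inittab look roughly like this (an illustrative extract, not a complete file):

  # The default runlevel.
  id:2:initdefault:

  # /etc/init.d/rc runs the scripts of the runlevel passed as an argument.
  l2:2:wait:/etc/init.d/rc 2

  # Terminals (getty) on which users may log in.
  1:2345:respawn:/sbin/getty 38400 tty1
  2:23:respawn:/sbin/getty 38400 tty2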

Afterwards, the system, according to the selected runlevel, will consult the files contained in /etc/rcS.d followed by /etc/rcN.d, where N is the number associated with the runlevel (in Debian this is /etc/rc2.d); these contain the list of services that should be started or halted when we enter or leave that runlevel. Within the directory, we will find a series of scripts, or links to the scripts, that control the corresponding service.

Each script name consists of an initial S or K, indicating whether it is the script for Starting (S) or Killing (K) the service, a two-digit number that sets the order in which the services are executed, and the name of the service itself, as the listing below illustrates.
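For example, on a Debian system a listing of /etc/rc2.d might look like this (the service names and numbers will vary):

$ ls /etc/rc2.d

  README  S12rsyslog  S14cron  S17ssh  S20exim4  S99rc.local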

A series of system commands help us to handle the runlevels, for example:
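These are all standard System V tools:

$ runlevel                        # previous and current runlevel ("N" = none)

  N 2

$ sudo telinit 3                  # switch to runlevel 3

$ sudo update-rc.d ssh defaults   # install the S/K links for a service (Debian)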

3.2.1. The System V problem

The main problem with dependency-based init systems like System V is that they do not recognise the dynamic nature of modern Linux systems. If a dependency-based init system wished to start, say, MySQL, it would first start all the dependent services that MySQL needed, which appears reasonable. Now consider how such a system would approach a user plugging in an external monitor. We would probably like the system to display some sort of configuration dialogue so the user can choose how to use the new monitor in combination with the existing laptop display. This can only be hacked around in a dependency-based init system, since it is unknown when the new screen will be plugged in. So the System V choices come down to starting everything that might ever be needed at boot, or handling such events with ad-hoc scripts outside the init system.

What is really needed is a system that detects such events and, when the conditions are right for a service to run, starts that service. The GNU/Linux community was torn between two event-based init systems for a long time: Upstart from Canonical Ltd (Ubuntu) and systemd from Red Hat. The Debian committee swung in favour of systemd as the replacement init system for System V. Mark Shuttleworth, in his blog article "Losing graciously" (14/2/2014), indicated that Ubuntu would follow the Debian lead and transition Ubuntu to systemd.

3.3. Upstart

Upstart is an event-based replacement for the traditional init daemon. It uses configuration files in /etc/init to determine which daemons to start. It was written by Scott James Remnant of Canonical Ltd (Ubuntu). Upstart has been the startup daemon used on Ubuntu since Ubuntu 6.10 and is included in Ubuntu-family distributions like Linux Mint. Along with systemd, it was under consideration by Debian to replace System V; in February 2014, however, the Debian committee decided in favour of systemd and Canonical agreed to migrate Ubuntu to systemd as well.

Upstart emits events in which services can register an interest. When an event, or combination of events, is emitted that satisfies some service's requirements, Upstart will automatically start or stop that service. If multiple jobs have the same start on condition, Upstart will start those jobs in parallel. Upstart handles starting the dependent services itself; this is not handled by the service file, as it is with dependency-based systems.

With Upstart there is no concept of runlevels; everything is event-driven with dependencies. An Upstart job configuration is added to /etc/init and may source a configuration file in /etc/default to allow users to override the default behaviour, as in the sketch below.
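A minimal sketch of an Upstart job, e.g. /etc/init/myservice.conf (the job name and daemon path are hypothetical):

  description "My example service"

  start on runlevel [2345]
  stop on runlevel [016]

  respawn
  exec /usr/sbin/myserviced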

3.3.1. Upstart directories

3.3.2. Controlling Services
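Upstart jobs are controlled with initctl and its shortcuts start, stop and status; for example, using the hypothetical myservice job from above:

$ sudo start myservice
$ sudo status myservice
$ sudo stop myservice
$ initctl list                    # list all jobs and their current state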

3.3.3. Rebooting and Powering off the system

3.3.4. Misc Upstart Commands

3.4. Systemd

systemd is another potential replacement for System V, from the Red Hat stable. It has existed in Fedora since version 15 and is also in Arch Linux and SUSE Linux. systemd starts up and supervises the entire system and is based around the idea of units, each consisting of a name, a type and matching configuration files.

systemd provides aggressive parallelisation at boot, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, and keeps track of processes using control groups.

systemd uses a utility called systemctl to manage daemons.

Boot scripts in systemd are located in /etc/systemd/system/ and /lib/systemd/system/.

systemd requires the kernel to be compiled with certain options enabled (typically devtmpfs, cgroups and autofs4 support).

The kernel command line in GRUB will include init=/bin/systemd.

3.4.1. Control Groups (cgroups)

cgroups (control groups) are a kernel feature, central to the systemd init, used to limit, police and account for the resource usage of groups of processes. Compared with other approaches like the nice command or /etc/security/limits.conf, cgroups are more flexible.
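The control-group tree that systemd builds can be inspected with two bundled tools:

$ systemd-cgls    # show the cgroup hierarchy and the processes in each group
$ systemd-cgtop   # top-like live view of resource usage per control group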

3.4.2. systemd Units

systemd acts on units; these can be of type .service, .socket, .device, .mount, .automount, .swap, .target, .path, .timer or .snapshot.

When using systemctl commands, the complete name of the unit file, including its suffix, is specified, e.g. sshd.socket. If the suffix is not included, then systemd will assume it is a .service file. Mount points will be automatically translated into the appropriate mount unit, e.g. /home = home.mount, and devices will also be translated, e.g. /dev/hda1 = dev-hda1.device.

.service units

.service units are designed with dependencies and start-up behaviour defined. Certain variables are used in these files to indicate the dependencies:

Dependency Meaning
Requires=<x> This unit requires <x> to be running
Wants=<x> This unit optionally wants <x> to be running
After=<x> This unit is started after <x>; used for ordering with Requires and Wants

The example below means that this service requires <x> to be running before it is started.

  [Service]
  Requires=<x>
  After=<x>
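Putting these pieces together, a minimal sketch of a complete unit file, e.g. /etc/systemd/system/myservice.service (the unit name and daemon path are hypothetical):

  [Unit]
  Description=My example daemon
  Requires=network.target
  After=network.target

  [Service]
  ExecStart=/usr/sbin/myserviced
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

After editing the file, systemctl daemon-reload and systemctl enable myservice.service make it take effect and start at boot.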

3.4.3. systemd Commands

Action Command
Start a service systemctl start <service name>.service
Stop a service systemctl stop <service name>.service
Stop and then start a service (bounce) systemctl restart <service name>.service
Reloads the config file without interrupting pending operations systemctl reload <service name>.service
Restarts if the service is already running systemctl condrestart <service name>.service
Tells whether a service is currently running. systemctl status <service name>.service
Turn the service on, for start at next boot, or other trigger. systemctl enable <service name>.service
Turn the service off for the next reboot, or any other trigger. systemctl disable <service name>.service
Used to check whether a service is configured to start or not in the current environment. systemctl is-enabled <service name>.service
Used to list what levels this service is configured on or off ls /etc/systemd/system/*.wants/<service name>.service
Used when you create a new service file or modify any configuration systemctl daemon-reload
Used to list the services that can be started or stopped (Note 1) systemctl list-unit-files --type=service
Print a table of services that lists which runlevels each is configured on or off (Note 2) systemctl list-unit-files --type=service

Note 1: This is the same as the command:
$ ls /lib/systemd/system/*.service /etc/systemd/system/*.service

Note 2: This is the same as the command:
$ ls /etc/systemd/system/*.wants/

systemd has backwards compatibility with System V, and therefore the service and chkconfig commands will mostly continue to work.

3.4.4. systemd targets/runlevels

systemd has a concept of targets, which serve a similar purpose to runlevels but behave a little differently. Each target is named instead of numbered and is intended to serve a specific purpose. Some targets are implemented by inheriting all of the services of another target and adding additional services to it. There are systemd targets that mimic the common sysvinit runlevels, so you can still switch targets using the familiar telinit RUNLEVEL command.

runlevel0.target is the same as $ sudo telinit 0 or $ sudo init 0 (halts the system)
runlevel6.target is the same as $ sudo telinit 6 or $ sudo init 6 (reboots the system)
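Targets can also be switched at runtime, and the default boot target changed, for example:

$ sudo systemctl isolate multi-user.target    # roughly runlevel 3
$ sudo systemctl isolate graphical.target     # roughly runlevel 5
$ sudo ln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target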

However, systemctl also has power-management commands:

Action Command
Shut down and reboot the system $ systemctl reboot
Shut down and power-off the system $ systemctl poweroff
Suspend the system $ systemctl suspend
Put the system into hibernation $ systemctl hibernate
Put the system into hybrid-sleep state $ systemctl hybrid-sleep

4. Monitoring system state

One of the main daily tasks of the (root) administrator will be to verify that the system works properly and check for any possible errors or saturation of the machine's resources (memory, disks etc.).

4.1. System boot

When booting a GNU/Linux system, a great deal of interesting information is produced; as the system starts up, the screen usually shows the data from the processes detecting the machine's characteristics and devices, the system services booting, etc., and any problems that appear are mentioned.

In most distributions, this can be seen directly in the system's console during the booting process. However, either the speed of the messages or some of the modern distributions that hide the messages behind graphics can stop us from seeing the messages properly, which means that we need a series of tools for this process.

Basically, we can use: the dmesg command, which shows the messages in the kernel ring buffer; the system log files under /var/log; and the virtual file systems /proc and /sys, described below.

4.1.1. /proc

When booting up, the kernel starts a pseudo-file system called /proc, into which it dumps the information compiled about the machine, as well as many other internal data, during execution. The /proc directory is implemented in memory and is not saved to disk. The contained data are both static and dynamic (they vary during execution). Note also that, as /proc depends heavily on the kernel, the included structure and files can change from one kernel version to another.

File Description
/proc/bus Directory with information on the PCI and USB buses.
/proc/cmdline Kernel startup line
/proc/cpuinfo CPU data
/proc/devices List of system character devices or block devices
/proc/driver Information on some hardware kernel modules
/proc/filesystems File systems enabled in the kernel
/proc/ide Directory of information on the IDE bus, disks characteristics
/proc/interrupts Map of the hardware interrupt requests (IRQs) in use
/proc/ioports I/O ports used
/proc/meminfo Data on memory usage
/proc/modules Modules of the kernel
/proc/mounts File systems currently mounted
/proc/net Directory with all the network information
/proc/scsi Directory of SCSI devices or IDEs emulated by SCSI
/proc/sys Access to the dynamically configurable parameters of the kernel
/proc/version Version and date of the kernel

As of kernel version 2.6, a progressive transition of procfs (/proc) to sysfs (/sys) has begun, in order to migrate all the information that is not related to the processes, especially the devices and their drivers (modules of the kernel) to the /sys system.

4.1.2. /sys

The /sys file system is in charge of making the information on devices and drivers held in the kernel available to user space, so that other APIs or applications can access the information on the devices (or their drivers) in a more flexible manner. It is usually used by layers such as HAL and the udev service for monitoring and dynamically configuring devices.

4.2. Processes
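The usual commands for examining processes are, for example:

$ ps aux           # snapshot of all processes with owner, CPU and memory usage
$ ps -ef           # System V style listing, including parent PIDs
$ top              # continuously updated view of processes and system load
$ pstree           # processes displayed as a parent/child tree
$ kill -15 <PID>   # send SIGTERM to a process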

4.3. System Logs

Both the kernel and many of the service daemons, as well as the different GNU/Linux applications or subsystems, can generate messages that are sent to log files, either to obtain the trace of the system's functioning or to detect errors or fault warnings or critical situations. These types of logs are essential in many cases for administrative tasks and much of the administrator's time is spent processing and analysing their contents. Typically logs are stored in /var/log directory.

4.3.1. syslogd

The syslogd daemon is standardised by the Internet Engineering Task Force (IETF) in Request For Comments (RFC) 5424. It is in charge of receiving the messages sent by the kernel and other service daemons and sending them to a log file, typically /var/log/messages. Its function can be controlled using the /etc/syslog.conf file.

Messages are labeled with a facility code (one of: auth, authpriv, daemon, cron, ftp, lpr, kern, mail, news, syslog, user, uucp, local0 ... local7) indicating the type of software that generated the messages, and are assigned a severity (one of: Emergency, Alert, Critical, Error, Warning, Notice, Info, Debug).
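Selection rules in /etc/syslog.conf combine facility.severity with a destination; the lines below are a typical sketch, and the logger command can be used to generate a test message:

  auth,authpriv.*     /var/log/auth.log
  mail.*              /var/log/mail.log
  *.emerg             *

$ logger -p mail.info "test message to the mail facility"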

4.3.2. rsyslogd

Many GNU/Linux systems use rsyslogd, which implements RFC 3164, the BSD syslog protocol. It incorporates the syslogd functionality with additional extensions, such as timestamps with millisecond granularity and timezone information. Another important point is that systemd has its own event-logging system, the journal (a binary log), similar in purpose to rsyslogd. The administrator can choose to log system events with either systemd or rsyslogd.

5. Memory

Where the system's memory is concerned, we must remember that we have:

  1. The physical memory of the machine itself: Random Access Memory (RAM)
  2. Virtual memory that can be addressed by the processes, backed by the swap partition (or swap file)

When building a GNU/Linux system, this is a handy rule of thumb for deciding the swap partition size based on system RAM.

RAM Memory      Swap Size
Up to 4 GB      2 GB
4 GB to 16 GB   4 GB
16 GB to 64 GB  8 GB
64 GB to 256 GB 16 GB
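If more swap is needed after installation, a swap file can be added without repartitioning; a minimal sketch (the file name and 2 GB size are arbitrary):

$ sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ swapon -s                       # verify the new swap area is active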

To examine the information on the memory, we have various useful commands and methods:
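For example (standard tools; the exact output varies per system):

$ free -m                 # RAM and swap usage in megabytes
$ vmstat 2 5              # virtual-memory statistics, 5 samples every 2 seconds
$ cat /proc/meminfo       # the kernel's detailed view of memory usage
$ top                     # interactive view, including per-process memory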

6. Disks and file systems

To find out about the disks (or storage devices) present in the system, we can use the system boot information (dmesg), where the available devices are detected, such as /dev/hdX for IDE devices or /dev/sdX for SCSI devices. Other devices, such as hard disks connected by USB, flash disks (pen-drive types), removable units, external CD-ROMs, etc., may use some form of SCSI emulation, so they will appear as devices of this type.

To examine the structure of a known device or to change its structure by partitioning the disk, we can use the fdisk command.

$ sudo fdisk -l /dev/hda

  Disk /dev/hda: 20.5 GB, 20520493056 bytes
  255 heads, 63 sectors/track, 2494 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  
  Device		Boot	Start	End	Blocks		Id System
  /dev/hda1	*	1	1305	10482381	7 HPFS/NTFS
  /dev/hda2	*	1306	2429	9028530		83 Linux
  /dev/hda3		2430	2494	522112+		82 Linux swap

Information can be obtained in different ways:
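For example:

$ df -h                   # usage and free space of each mounted file system
$ du -sh /var/log         # total size of a directory tree
$ sudo fdisk -l           # partition tables of all detected disks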

Note: a file system must never be left with less than 10 or 15% free space (especially /). If file-system saturation is detected, typical remedies are to remove temporary and obsolete files (old logs, caches, /tmp), compress or move infrequently used data to another partition or disk, or enlarge the file system.

7. File systems

In each machine with a GNU/Linux system, we will find different types of file systems. It is typical to find the actual Linux file systems created in various partitions of the disks. The typical configuration is to have two partitions: that corresponding to root / and the swap partition. In more complex configurations, it is usual to separate partitions like:

/ /boot /home /opt /tmp /usr /var swap

Note: particularly /var, as this directory tends to grow with logs. /home, the users' directory, is also a good candidate for a separate partition.

Partition types in Linux are: Linux native file-system partitions (ext2, ext3, ext4, and alternatives such as ReiserFS, JFS or XFS) and Linux swap partitions.

Compatible types include: msdos (FAT16), vfat (FAT32) and NTFS, among others the kernel can handle.

7.1. Mount point

Apart from the / root file system and its possible extra partitions (/usr /var /tmp /home), it should be remembered that it is possible to leave mount points prepared for mounting other file systems, whether they are disk partitions or other storage devices.

In the machines in which GNU/Linux shares the partition with other operating systems, through the bootloader GRUB (or LILO), there may be various partitions assigned to the different operating systems. It is often good to share data with these systems, whether for reading or modifying their files. Unlike other systems (which only recognise their own file systems and, in some versions, do not even support some of their own), GNU/Linux is able to handle a significant group of file systems from different operating systems and to share information between them.

Example to mount an NTFS partition:

$ sudo mount -t ntfs /dev/hda2 /mnt/winXP
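To make such a mount permanent it can be added to /etc/fstab; a sketch using the mount point from the example above:

  # <file system>  <mount point>  <type>  <options>  <dump>  <pass>
  /dev/hda2        /mnt/winXP     ntfs    ro         0       0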

7.2. Permissions

Another subject that we will have to control in the case of files and directories is the permissions that we wish to establish for each of them, remembering that each file may have a series of permissions: rwxrwxrwx, where the triplets correspond to the owner (rwx), the group (rwx) to which the user belongs, and other users (rwx). In each one, we may establish the access rights for reading (r), writing (w) or executing (x). In the case of a directory, x denotes the permission for being able to access that directory.

Access rights commands: chown changes a file's owner, chgrp its group, and chmod its permission bits.

The commands also provide the -R option, which is recursive if affecting a directory.

7.2.1. SUID/SGID/Sticky bit

In addition to the User, Group and Other permissions, there are three other permissions that can be applied to a directory or file.

SSSrwxrwxrwx, where SSS represents SUID, SGID and the Sticky bit. These can be set with an extra leading octal digit, as follows:

$ sudo chmod 6751 Some_file

The 6 (4+2+0) enables the SUID and SGID but disables the Sticky bit.

The 751 apply to the User (7) rwx, Group (5) rx and Other (1) x.

Another method is to use + and - with the chmod command.

Implementation of the Sticky bit

  # ls -la |grep myFile.txt
  -rw-r--r--  1 dobriain dobriain       0 Apr 16 06:40 myFile.txt
  
  # chmod +t myFile.txt 
  # ls -la |grep myFile.txt
  -rw-r--r-T  1 dobriain dobriain       0 Apr 16 06:40 myFile.txt
  
  # chmod o+x myFile.txt 
  # ls -la |grep myFile.txt
  -rw-r--r-t  1 dobriain dobriain       0 Apr 16 06:40 myFile.txt
  

Note that in the ls -la output a lowercase t means that the execute permission is enabled as well as the sticky bit, whereas the capital T means that the execute permission is not enabled.

Implementation of the SUID

  # ls -la |grep myFile.txt
  -rw-r--r--  1 dobriain dobriain       0 Apr 16 06:40 myFile.txt
  
  # chmod u+s myFile.txt 
  # ls -la |grep myFile.txt
  -rwSr--r--  1 dobriain dobriain       0 Apr 16 06:40 myFile.txt
  
  # chmod u+x myFile.txt 
  # ls -la |grep myFile.txt
  -rwsr--r--  1 dobriain dobriain       0 Apr 16 06:40 myFile.txt

Note the capital S representing SUID set when user rights do not include x execute rights, and a lowercase s when execute rights are also configured.

Implementation of the SGID

  # ls -la |grep myFile.txt
  -rw-r--r--  1 dobriain dobriain       0 Apr 16 06:40 myFile.txt
  
  # chmod g+s myFile.txt 
  # ls -la |grep myFile.txt
  -rw-r-Sr--  1 dobriain dobriain       0 Apr 16 06:40 myFile.txt
  
  # chmod g+x myFile.txt 
  # ls -la |grep myFile.txt
  -rw-r-sr--  1 dobriain dobriain       0 Apr 16 06:40 myFile.txt

Note the capital S representing SGID set when user rights do not include x execute rights, and a lowercase s when execute rights are also configured.

Example: the crontab utility is used by ordinary users, but its actions need privileges that those users do not have. The SGID bit means the program runs with the rights of its group (crontab), which is just enough to create the user's crontab file. Note the SGID bit set (s in place of the group x, as in r-s).

  $ ls -la /usr/bin |grep crontab
     36 -rwxr-sr-x 1 root     crontab     35984 Feb  9  2013 crontab

7.2.2. umask

Linux uses a three-digit octal number to determine the file permissions for newly created directories and files. umask specifies the permissions you DO NOT want given by default to newly created files and directories. Most utilities specify a mode of 666 for files, which allows any user to read or write the file, and 777 for directories. The shell inherits its umask value from /etc/profile.

  # umask 022
  
  $ umask
  0022
  
  $ umask -S
  u=rwx,g=rx,o=rx 
              Files   Directories
default mode   666       777
umask value   -022      -022
Result         644       755

This gives the owner read and write permission and everyone else read permission (plus execute on directories).

8. Users and groups

The users of a GNU/Linux system normally have an associated account (defined with some of their data and preferences) along with an allocated amount of space on the disk in which they can develop their files and directories. This space is allocated to the user and may only be used by the user (unless the permissions specify otherwise).

Among the accounts associated with users, we can find different types: the root (administrator) account, ordinary user accounts, and special system or service accounts (daemon, lp, mail, etc.) used by services rather than people.

8.1. User account

A user account is normally created by specifying a name (or user identifier), a password and a personal associated directory (the account). The information on the system's users is included in the following files:

8.1.1. /etc/passwd

  dobriain:x:1000:1000:Diarmuid O'Briain,,,:/home/dobriain:/bin/bash
  root:x:0:0:root:/root:/bin/sh

where each line holds the following ':'-separated fields (if two :: appear together, the field is empty): user name, password (an x, see /etc/shadow below), user ID (UID), group ID (GID), comment or GECOS field, home directory and login shell.

8.1.2. /etc/shadow

The /etc/passwd file does not contain the passwords; it used to, but that was a security flaw, so now only an x appears, indicating that they are located in another file which can only be read by the root user: /etc/shadow. Its contents may be something similar to the following:

dobriain:a1gNcs82ICst8CjVJS7ZFCVnu0N2pBcn/:12208:0:99999:7:::

where the user identifier is located, along with the encrypted password. In addition, a series of ':'-separated fields appear: days (since 1/1/1970) of the last password change, days before the password may be changed, days after which it must be changed, days of warning before expiry, and fields for account inactivity and expiry.

8.1.3. /etc/group

/etc/group contains information on the user groups:

ireland:x:1000:

where we have:

name-group:password-group:identifier-of-group:list-users

The list of the users in the group may or may not be present; given that this information is already in /etc/passwd, it is not usually placed in /etc/group. If it is placed there, it usually appears as a list of users separated by commas. The groups may also possess an associated password (although this is not that common); as in the case of the user, there is also a shadow file: /etc/gshadow. An example where the user would be placed in the group is the sudoers group: if the user is not in the sudo group, then they do not have permission to use the sudo command.

sudo:x:27:dobriain

8.1.4. /etc/skel

The files located in the /etc/skel directory are skeletons: a template of files that is copied into each user account when it is created.
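For example, on a Debian-style system (the file list is typical, the user name hypothetical):

$ ls -A /etc/skel

  .bash_logout  .bashrc  .profile

$ sudo useradd -m -s /bin/bash newuser   # /home/newuser is seeded from /etc/skel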

8.2. User management commands

These commands are used for the administration of users: useradd (create), usermod (modify), userdel (delete), passwd (set or change the password), and the equivalent groupadd, groupmod and groupdel for groups.

With regard to the administration of users and groups, what is mentioned here refers to the local administration of one sole machine. In systems with multiple machines that the users share, a different management system is used for the information on users. These systems, generically called Network Information Systems, such as NIS, NIS+ or the Lightweight Directory Access Protocol (LDAP), use databases for storing the information on the users and groups, with servers where the database is stored and client machines from which this information can be consulted. This makes it possible to have one single copy of the user data (or various synchronised copies) and makes it possible for users to log in to any available machine of the set administered with these systems. At the same time, these systems incorporate additional concepts of hierarchies and/or domains/machine and resource zones, which make it possible to adequately represent the resources and their use in organisations with different internal structures for their personnel and departments.

It can be checked whether a system is in an NIS-type environment by seeing whether compat (local files) or nis/nisplus appears in the passwd and group lines of the configuration file /etc/nsswitch.conf. Generally, this does not involve any modification for the simple user, as the machines are managed transparently, more so if it is combined with files shared by the Network File System (NFS), which makes the account available regardless of the machine used. Most of the above-mentioned commands can still be used without any problem under NIS or NIS+, where they are equivalent, except for the command for changing the password: instead of passwd, it is either yppasswd (NIS) or nispasswd (NIS+), although it is typical for the administrator to rename them to passwd (through a link), which means that users will not notice the difference.

9. Printing services

GNU/Linux has had multiple printing systems over the years: the BSD LPD system, the System V print spooler, LPRng and CUPS.

Both CUPS and LPRng are higher-level systems, but they are not all that perceptibly different for average users with regard to the standard BSD and System V systems; for example, the same client commands (or compatible commands in the options) are used for printing. There are perceptible differences for the administrator, because the configuration systems are different. In one sense, LPRng and CUPS can be considered as new architectures for printing systems, which are compatible for users with regard to the old commands.


9.1. CUPS

CUPS is a new architecture for the printing system that is quite different; it has a layer of compatibility with BSD LPD, which means that it can interact with servers of this type. It also supports a new printing protocol called the Internet Printing Protocol (IPP), based on HTTP, but this is only available when both the client and the server are CUPS-type. In addition, it uses a type of driver called PostScript Printer Description (PPD) files that identify the printer's capabilities; CUPS comes with some of these drivers and some manufacturers also offer them (HP and Epson).

CUPS has an administration system that is completely different, based on different files: /etc/cups/cupsd.conf centralises the configuration of the printing system, /etc/cups/printers.conf controls the definition of printers and /etc/cups/classes.conf the printer groups.

PPD files for particular printers can be obtained at the Open Printing website.

It has a simple, effective management engine that is presented to the user via a web browser on the localhost at port 631 (http://localhost:631).

10. Disk management

Storage devices have a series of associated devices, depending on the type of interface:

IDE: /dev/hda (master disk, first IDE connector), /dev/hdb (slave, first connector), /dev/hdc and /dev/hdd (master and slave, second connector).

SCSI: /dev/sda, /dev/sdb, etc., following the numbering of the peripheral devices on the SCSI bus. USB drives and sticks are labeled as SCSI devices.

With regard to the partitions, the number that follows the device indicates the partition index within the disk and it is treated as an independent device: /dev/hda1 is the first partition of the first IDE disk, and /dev/sdc2 the second partition of the third SCSI device. IDE disks allow four partitions, known as primary partitions, and a higher number of logical partitions. Therefore, if in /dev/hdaN, N is less than or equal to 4, it is a primary partition; otherwise it is a logical partition, with N greater than 4 (logical partitions are numbered from 5).

With the disks and the associated file systems, the basic processes are: partitioning the disks (fdisk), creating file systems (mkfs), verifying their consistency (fsck) and mounting them (mount, umount, /etc/fstab).

$ mkfs

  mkfs           mkfs.ext2      mkfs.ext4dev   mkfs.msdos     mkfs.vfat
  mkfs.bfs       mkfs.ext3      mkfs.jfs       mkfs.ntfs      mkfs.xfs
  mkfs.cramfs    mkfs.ext4      mkfs.minix     mkfs.reiserfs 

$ fsck

  fsck           fsck.ext3      fsck.jfs       fsck.nfs       fsck.xfs
  fsck.cramfs    fsck.ext4      fsck.minix     fsck.reiserfs  
  fsck.ext2      fsck.ext4dev   fsck.msdos     fsck.vfat 
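Typical usage on a spare partition (here /dev/sdb1, purely as an example; mkfs destroys any existing data, and fsck should be run on an unmounted file system):

$ sudo mkfs.ext4 /dev/sdb1        # create an ext4 file system
$ sudo fsck.ext4 -f /dev/sdb1     # force a full consistency check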

11. RAID software

The configuration of disks using RAID levels is currently one of the most widely-used high-availability storage schemes, when various disks are available for implementing the file systems.

The main focus of the different existing techniques is fault tolerance, provided at the level of the device and the set of disks, against different potential errors, both physical and in the system, to avoid the loss of data or the lack of coherence in the system. Some schemes are also designed to increase the performance of the disk system, increasing the bandwidth available to the system and applications.

In general, this hardware is in the form of cards (or integrated with the machine) of RAID-type disk controllers, which implement the management of one or more levels (of the RAID specification) over a set of disks administered with this controller.

In RAID a series of levels (or possible configurations) are distinguished, which can be provided (each manufacturer of specific hardware or software may support one or more of these levels). Each RAID level is applied over a set of disks, sometimes called RAID array (or RAID disk matrix), which are usually disks with equal sizes (or equal to group sizes). For example, in the case of an array, four 1 TB disks could be used or, in another case, 2 groups (at 500 GB) of 2 disks, one 1 TB disk and one 250 GB disk. In some cases of hardware drivers, the disks (or groups) cannot have different sizes; in others, they can, but the array is defined by the size of the smallest disk (or group).

Here is a description of the basic concepts of some levels: RAID 0 (striping) splits data across the disks for performance but offers no redundancy; RAID 1 (mirroring) keeps a full copy of the data on each disk; RAID 4 uses a dedicated parity disk; RAID 5 distributes parity across all the disks and tolerates the loss of one disk; and nested levels such as RAID 10 combine mirroring and striping.

Some points that should be taken into account with regard to RAID in general: RAID protects against disk failure, not against accidental deletion or corruption, so it is not a substitute for backups; usable capacity and write performance depend heavily on the chosen level; and a failed disk should be replaced (or covered by a hot-spare) and the array rebuilt as soon as possible.

In GNU/Linux, hardware RAID is supported through various kernel modules, associated with different sets of manufacturers or chipsets of these RAID controllers. This permits the system to abstract itself from the hardware mechanisms and to make them transparent to the system and the end user. In any case, these kernel modules allow access to the details of these controllers and allow their parameters to be configured at a very low level, which in some cases may be beneficial for tuning the disk system that the server uses in order to maximise the system's performance.

The other option that can be analysed is that of carrying out these processes through software components, specifically GNU/Linux's RAID software component.

GNU/Linux has a kernel driver of the so-called Multiple Device (md) kind, which can be considered kernel support for RAID. Through this driver, RAID levels 0, 1, 4 and 5 can be implemented, as well as nested RAID levels (such as RAID 10), on different block devices such as IDE or SCSI disks. There is also the linear level, a linear combination of the available disks (it does not matter if they have different sizes), which means that the disks are written to consecutively.

In order to use RAID software in Linux, we must have RAID support in the kernel and, if applicable, the md modules activated. The preferred method for implementing arrays of RAID disks through the RAID software offered by Linux is either during the installation or through the mdadm utility, which allows for the creation and management of the arrays.

Let's look at some examples (we will assume we are working with some SCSI /dev/sda, /dev/sdb disks... in which we have various partitions available for implementing RAID):

11.1. Creation of a linear array

$ sudo mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sda1 /dev/sdb1

A linear array is created based on the first partitions of /dev/sda and /dev/sdb, creating the new device /dev/md0, which can already be used as a new disk. Assuming the mount point /media/diskRAID exists:

$ sudo mkfs.ext2 /dev/md0
$ sudo mount /dev/md0 /media/diskRAID

For a RAID 0 or RAID 1, we can simply change the level (--level) to raid0 or raid1. With $ sudo mdadm --detail /dev/md0, we can check the parameters of the newly created array.

The mdstat entry in /proc can be consulted to determine the active arrays and their parameters.
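For example (illustrative output for the linear array created above):

$ cat /proc/mdstat

  Personalities : [linear]
  md0 : active linear sda1[0] sdb1[1]
        976512 blocks super 1.2 0k rounding
  unused devices: <none>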

The mdadm utility provides many options that allow us to examine and manage the different RAID software arrays created (we can see a description and examples in man mdadm).

12. Logical Volume Manager (LVM)

There is a need to abstract from the physical disk system, its configuration and the number of devices, so that the (operating) system can take care of this work and we do not have to worry about these parameters directly. In this sense, the logical volume management system can be seen as a layer of storage virtualisation that provides a simpler and more flexible view of the storage.

In the Linux kernel, there is an LVM (logical volume manager), which is based on ideas developed from the storage volume managers used in HP-UX (HP's proprietary implementation of UNIX). There are currently two versions and LVM2 is the most widely used due to a series of added features.

The architecture of an LVM typically consists of three (main) components: physical volumes (PVs: disks or partitions), volume groups (VGs: pools built from one or more PVs) and logical volumes (LVs: the "partitions" carved out of a VG, on which file systems are created).

By using logical volumes, the available storage space (which may span a large number of different disks and partitions) can be treated more flexibly, according to the needs that arise, and the space can be managed by more appropriate identifiers and by operations that permit adapting the space to changing needs.

Logical Volume Management allows us to: resize logical volumes and the file systems on them, add or replace disks without downtime, take snapshots of volumes, and give volumes meaningful names.

12.1. Configure LVM
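A minimal sketch of building the LVM stack shown by the displays below (the physical volume /dev/sda4, volume group vg0 and logical volume lv0 match the output that follows; the mount point is arbitrary):

$ sudo pvcreate /dev/sda4               # initialise the partition as a physical volume
$ sudo vgcreate vg0 /dev/sda4           # create a volume group containing it
$ sudo lvcreate -L 20G -n lv0 vg0       # carve out a 20 GB logical volume
$ sudo mkfs.ext4 /dev/vg0/lv0           # create a file system on it
$ sudo mount /dev/vg0/lv0 /mnt/data     # and mount it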

12.2. Display LVM elements

$ sudo pvdisplay

    --- Physical volume --- 
    PV Name               /dev/sda4 
    VG Name               vg0 
    PV Size               134.89 GiB / not usable 3.00 MiB 
    Allocatable           yes 
    PE Size               4.00 MiB 
    Total PE              34532 
    Free PE               34532 
    Allocated PE          0 
    PV UUID               rrL4Q5-RWIz-wM5a-N7QL-M3to-B1zO-0wK0yF 
  

$ sudo vgdisplay

    --- Volume group --- 
    VG Name               vg0 
    System ID             
    Format                lvm2 
    Metadata Areas        1 
    Metadata Sequence No  1 
    VG Access             read/write 
    VG Status             resizable 
    MAX LV                0 
    Cur LV                0 
    Open LV               0 
    Max PV                0 
    Cur PV                1 
    Act PV                1 
    VG Size               134.89 GiB 
    PE Size               4.00 MiB 
    Total PE              34532 
    Alloc PE / Size       0 / 0   
    Free  PE / Size       34532 / 134.89 GiB 
    VG UUID               9gYEvb-Lhvy-BBry-vgBU-nqVs-fIeT-e00HaP
  

$ sudo lvdisplay

    --- Logical volume --- 
    LV Path                /dev/vg0/lv0
    LV Name                lv0
    VG Name                vg0 
    LV UUID                irpKXm-pzKb-yvQA-t8Vw-gZ19-Ir9k-vRLL0k 
    LV Write Access        read/write 
    LV Creation host, time OB-Xen, 2013-10-27 20:14:19 +0000 
    LV Status              available 
    # open                 0 
    LV Size                20.00 GiB 
    Current LE             5120 
    Segments               1 
    Allocation             inherit 
    Read ahead sectors     auto 
    - currently set to     256 
    Block device           254:4 
  

13. Updating Software

The administration of the installation or updating of software in a GNU/Linux system depends on the type of software packages used: RPM packages (Fedora/Red Hat, handled with rpm and yum), DEB packages (Debian/Ubuntu, handled with dpkg and apt) and plain .tgz tarballs (used by Slackware and generic upstream releases).

There are various graphical tools for handling these packages, such as Kpackage for RPM; Synaptic and Gnome-apt for DEB; and Kpackage, or the graphical file manager itself (in GNOME or KDE), for tgz. There are also package-conversion utilities; for example, in Debian we have the alien command, with which we can change RPM packages into DEB packages, although the appropriate precautions must be taken so that the package does not unexpectedly modify any behaviour or file, as it was built for a different destination distribution.

Depending on the types of packages or tools used, it will be possible to update or install the software on our system in different ways:

  1. From the actual system installation CDs; normally, all the distributions search for the software on the CDs. But the software should be checked to ensure that it is not old and therefore missing patches, updates or new versions with more features; consequently, if a CD is used for installation, it is standard practice to check that it is the latest version and that no more recent version exists.
  2. Through updating or software search services, whether they are free, as is the case with Debian's apt-get tool or yum in Fedora, or through subscription services (paid services or services with basic facilities), such as the Red Hat Network of the commercial Red Hat versions.
  3. Through software repositories that offer pre-built software packages for a determined distribution.
  4. From the actual creator or distributor of the software, who may offer a series of software installation packages. We may find that we are unable to locate the type of packages that we need for our distribution.
  5. Unpackaged software or with compression only, without any type of dependencies.
  6. Only source code, in the form of a package or compressed file.

14. Batch jobs

In administration tasks, it is usually necessary to execute certain tasks at regular intervals, either because it is necessary to program the tasks so that they take place when the machine is least being used or due to the periodic nature of the tasks that have to be performed.

There are various systems that allow us to set up a task schedule (planning task execution) for performing these tasks out-of-hours, such as periodic or programmed services:

14.1. nohup

nohup is perhaps the simplest command used by users, as it permits the execution of a non-interactive task that survives logging out from the account. Normally, when users log out, they lose their processes; nohup allows them to leave processes executing even though the user has logged out.
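For example (the script name is hypothetical; output is collected in nohup.out by default):

$ nohup ./long_task.sh &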

14.2. at

at permits us to launch a task later, programming a determined point in time for it to start, specifying the time (hh:mm) and date, or specifying whether it will be today or tomorrow. Examples:
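The following is a sketch (the script path is hypothetical):

$ at 10:00 tomorrow
  at> /usr/local/bin/backup.sh
  at> <EOT>                       # terminate input with Ctrl-D

$ atq                             # list the pending jobs
$ atrm 3                          # remove job number 3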

14.3. cron

cron permits the establishment of a list of tasks that will be performed.

To enable cron for ordinary users, either create an empty /etc/cron.d/cron.deny file or add the specific users to an /etc/cron.d/cron.allow file; otherwise only root can use cron.

$ sudo touch /etc/cron.d/cron.deny

cron configuration is saved in /etc/crontab; specifically, each entry in the file contains: minute, hour, day of the month, month, day of the week, the user to run as, and the command to execute.

Note: * is a wildcard indicating any.

To edit as a user, execute the crontab command with the -e (edit) switch. It will open the personal crontab file if the user is allowed by the cron.allow/cron.deny files. It will use either the default editor (vim) or the one defined by the environment variable $EDITOR or $VISUAL.

$ crontab -e

  # m h  dom mon dow   command
  30 * * * *	   /usr/bin/fetchmail
  :wq!
  crontab: installing new crontab

To review the crontab for the user use the -l switch.

$ crontab -l

  # m h  dom mon dow   command
  30 * * * *	   /usr/bin/fetchmail

Examples:
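A few illustrative crontab entries (the paths are hypothetical):

  # m h  dom mon dow   command
  30 2   *   *   *     /usr/local/bin/backup.sh      # every day at 02:30
  0  */4 *   *   *     /usr/local/bin/sync-mirror    # every four hours
  15 8   *   *   1-5   /usr/local/bin/report.sh      # weekdays at 08:15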

