Cheat Sheet

Emmett Dulaney

01 Aug 2005


Part Three of a Four-Part Series

During the past two issues of Novell Connection, I introduced the certifications from LPI, the Linux Professional Institute (www.lpi.org). As noted in those articles, Novell has embraced Linux in every way, and it supports, and more or less endorses, the Level 1 and Level 2 certifications from LPI. Holding these vendor-neutral certifications shows not only that you know and understand Linux administration basics, but also that you can apply them in practice. The last two articles offered an overview of the certifications and focused on the two exams you must pass in order to become certified at Level I (junior level administrator).

After you pass Level I, you can start the trek toward Level II (intermediate level administrator). Like Level I, it requires passing two exams. The exams are predominantly multiple-choice, but include enough fill-in-the-blank questions to ensure that you really know your stuff. If you struggled to reach Level I, you'll need to really buckle down and study for Level II, as these exams hit hard on real-world experience and your ability to work with complex commands. To obtain this certification, you are expected not only to have administrative experience with Linux, but to have gained it at medium-sized sites offering a number of different services, with supervision responsibilities included.

Like the other exams, the topics aren't evenly weighted. If you get in a pinch, take weighting into account and focus your study time accordingly. The following shows the weighting (roughly equivalent to percentages) of each of the eight topics on this exam:


Filesystem: 20

Hardware: 16

File and Service Sharing: 16

Troubleshooting: 12

Linux Kernel: 10

System Startup: 10

System Maintenance: 8

System Customization and Automation: 6

Note: The numbers add to 98 rather than to 100 because of an inexact correlation between the weighting and percentages.

Note: Many topics are similar to those on the Level 1 exams. For example, "System Startup" herein is similar to "Boot, Initialization, Shutdown and Runlevels" on Exam 102. Therefore, it's good to study for these exams right after passing the Level 1 tests.

Break the study areas down even further by weighting each objective and organizing them by importance. (See Table 1.)

Let's look at the concepts and utilities, and the purposes behind them, that you must know for this exam. They are presented in order of their overall weighting on the exam.

Filesystem

Filesystem is the most heavily weighted topic on this exam, so you must know the basics of the Linux filesystem and disk structure to pass. Some topics carry over from the introductory exams, but are now covered in more depth.

To install Linux on a hard disk, you need at least one partition on that disk. A partition is a portion of the disk (some or all of it) set aside for storing data. Just because a partition exists doesn't mean it's usable; it must also be properly formatted.

Partitions must be referenced in the /dev directory, and the first partition on the first disk is one of the following:

  1. hda1: The first disk on the primary IDE controller (the first disk on the secondary IDE controller is hdc)

  2. sda1: The first partition on the first SCSI disk

The device name can always be broken into four fields, one per character:

  1. The type of drive: h for IDE and s for SCSI

  2. The type of device: d for disk

  3. The number of the disk expressed in alphabetic format: a for the first, b for the second, and so on. This is on a per controller basis.

  4. The number of the partition. Numbers 1-4 are for use on primary partitions, whether or not you have that many, and the logical drives start numbering with 5.

If you run out of room in the swap partition (or haven't configured one), a swap file can be created for the same purpose, but it's preferable to use the partition. Don't run a Linux system without swap space unless you have a lot of memory (several GB), and even then you might have problems. Swap files are useful, for example, if you add memory to a machine but don't have free disk space to assign to additional swap partitions. If necessary, swap files should be created on primary partitions for performance reasons. Use the swapon and swapoff commands to enable and disable devices and files for swapping.
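For example, here is a minimal sketch of adding a 512 MB swap file (the path and size are purely illustrative):

    dd if=/dev/zero of=/swapfile bs=1024 count=524288   # create a 512 MB file of zeros
    mkswap /swapfile                                     # write a swap signature to the file
    swapon /swapfile                                     # start swapping to it
    swapon -s                                            # list the active swap devices and files
    swapoff /swapfile                                    # stop using it again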

Disks are partitioned during installation, and you should only need to create additional partitions if you add new disks to your system. The primary tool for creating disk partitions is fdisk. The fdisk utility divides the disk into partitions and writes the partition table into sector 0 of the disk (the master boot record). When run with just a device name, fdisk brings up an interactive menu. You can avoid the menu and run fdisk with these options:

  1. -l to just list the partition tables

  2. -v to print the version of fdisk only

The fdisk utility does not have a default device and displays a usage message if you try to start it without specifying one. Also, never specify a partition (such as hda1) to fdisk; you can only specify whole disks because it is a disk partitioner. Because no changes are committed until you write them, you can experiment freely and exit without breaking anything, as long as you don't write the changes to disk.
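A few typical invocations (the device names are illustrative):

    fdisk -l             # list the partition tables of all detected disks
    fdisk -l /dev/hda    # list the partition table of one disk
    fdisk /dev/hda       # start the interactive menu on the first IDE disk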

After all changes have been written to the disk, you can quit fdisk and then format any partitions you need to. If you write the changes, an alert will appear indicating that the partition table has been altered and the disks will be synchronized. You should reboot your system to ensure that the table is properly updated. Use the sync command to flush filesystem buffers before rebooting.

Format the partitions with the mkfs (make filesystem) utility or mkreiserfs. The mkfs utility is a wrapper program that drives other, filesystem-specific utilities named mkfs.ext2 (the default), mkfs.reiserfs and so on; the -t option tells it which one to execute. The mkreiserfs utility creates only ReiserFS partitions. Use the mkisofs utility to make ISO9660 filesystems or the mke2fs utility to make an ext2/ext3 filesystem.

When you run mkfs, use options to indicate the type of filesystem to make (-t), the device to format and any filesystem-specific options you want.
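For example (the devices are illustrative):

    mkfs -t ext3 /dev/hda3              # mkfs hands the work off to mkfs.ext3
    mkfs /dev/hdb1                      # no -t, so an ext2 filesystem is created
    mkreiserfs /dev/hdb2                # ReiserFS has its own front end
    mkisofs -o /tmp/image.iso /tmp/cd   # build an ISO9660 image from a directory tree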

As stated previously, you can choose from several different filesystems that are supported in Linux. Regardless of the number, you can create two types of local, physical filesystems: journaling and traditional. Journaling filesystems keep track of pending actions in a log so that integrity is preserved and recovery is fast if the system crashes. Journaling filesystems include ReiserFS (the Novell Linux Desktop default) and ext3.

Traditional filesystems include ext2, vfat and so on. You would only use vfat or MSDOS filesystems on systems where you are dual-booting Windows or embedded systems that only know about MSDOS.

Virtual filesystems are created only by the system itself, with the exception of loopback filesystems (which let you mount an ordinary file as a filesystem and are entirely different from virtual filesystems such as tmpfs and sysfs). Virtual filesystems are recreated each time the system boots and are used internally by the system for process, resource, shared memory, and hardware management and interfacing.

After the filesystem has been created, you can gather information about it and perform troubleshooting using fsck, the filesystem check utility. Not only does it check the filesystem, but if it finds errors, you can also use it to correct them. The utility uses entries in the /etc/fstab file to tell it which filesystems to check during startup if it is configured to run automatically. The -A option also tells the utility to use this file.

The fsck utility uses the sixth field in each /etc/fstab entry to identify the sequence in which filesystems are checked when a Linux system boots. If this field contains a 0, the filesystem will not be checked. (The /etc/fstab file is always read by fsck and related utilities but never written to. As an administrator, you need to update the file, placing each filesystem on its own line, when you want to make modifications to the operation of system utilities.)

When you run fsck, it acts as a wrapper program and runs the appropriate filesystem-specific utility based on the filesystem type information in /etc/fstab. (Filesystems should be unmounted before you run fsck on them.)
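A sketch of how the pieces fit together; the devices, mount points and options are illustrative:

    # /etc/fstab: device, mount point, type, options, dump flag, fsck order
    /dev/hda2   /       ext3   defaults   1  1
    /dev/hda3   /home   ext3   defaults   1  2
    /dev/hdb1   /data   ext3   defaults   0  0    # sixth field 0: never checked at boot

    fsck -A                  # check every filesystem /etc/fstab says to check
    fsck -t ext3 /dev/hdb1   # check a single (unmounted) filesystem by hand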

The first pass of fsck looks, among other things, at inodes. An inode is a table entry that contains information about a single file or directory, or about disk space allocated to a file or directory. Thousands of inodes exist on each partition, and inodes are filesystem-specific; for example, each filesystem has its own inodes numbered 1, 2, 3 and so on. Every item that appears in a directory listing has an inode associated with it, and directories also have associated inodes. The inode holds the following types of information:

  1. a unique inode number

  2. the type of entry that it is (file, directory, pipe and so on)

  3. permissions on the file in numerical format

  4. the physical size of the file

  5. the number of links to the entry

  6. the owner of the file

  7. the group owning the file

  8. times of creation, modification and access

  9. a pointer to the physical location of the data on the disk

The inode numbers begin with 1 and increment from there, so files copied during installation have small numbers and recently created files have much larger numbers. When files and directories are deleted, their associated inode numbers are marked as usable once more. A file has only one inode no matter how large it is; a large file simply has its inode point at more data blocks. Also, some filesystems store file data of less than 4K directly in the inode to simplify allocation.
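You can look at inode numbers and inode contents directly; the inode number in the last command is only an example:

    ls -i /etc/passwd              # show the inode number next to the file name
    stat /etc/passwd               # dump most of the inode fields: size, links, owner, times
    find /home -inum 49152 -print  # locate the file that owns a particular inode number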

When corruption occurs, files are dumped to the /lost+found directory, using their inode number as names. Files placed in /lost+found don't appear to have been deleted, but are not linked into any directory on the current filesystem.

There will always be a local filesystem; that's where Linux is installed. If the filesystem is large enough to hold everything, that's all you need. However, in most cases the local filesystem is not sufficient to hold everything you need. If you run out of space on your system, you can add another disk, partition it, and mount those partitions to enable your system to access the new space.

The mount command is used without parameters to show what filesystems are currently mounted (available). This reads from the dynamic file /etc/mtab and relays the device, the mount point, the type of filesystem and the mount options (rw is read/write). In addition to read/write, filesystems can be mounted read-only (ro), with ordinary users forbidden (nouser) or allowed (user) to mount them, with binaries allowed to run (exec) or not (noexec), with set-user-ID bits ignored (nosuid), and with special device files on the filesystem interpreted (dev) or not.

If you always want certain filesystems to be mounted, add their entries to /etc/fstab. The command mount -a reads the fstab file and mounts/remounts all entries found within it. Filesystems you don't want mounted all the time can be mounted on demand by giving the device name (and a mount point) to the mount utility.
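Typical uses (the devices and mount points are illustrative):

    mount                                  # show everything currently mounted
    mount -a                               # mount every entry in /etc/fstab
    mount -t ext3 /dev/hdb1 /mnt/data      # mount a new partition by hand
    mount -o remount,ro /dev/hda3 /home    # change options on an already mounted filesystem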

The /mnt directory contains mount points, which are Linux directories with meaningful names, such as /mnt/cdrom and /mnt/floppy for the CD-ROM and floppy drives. Options that can be used with the mount command are:

  1. -a to read through the /etc/fstab file and mount all entries

  2. -f to see if a filesystem can be mounted, but not mount it. If you get an error message, it means it can't be found. No error message means it was found in /etc/fstab or /etc/mtab.

  3. -n prevents the /etc/mtab file from being dynamically updated when a filesystem is added

  4. -o to apply additional arguments that are comma-delimited

  5. -r mounts the filesystem as read-only

  6. -t allows you to specify the type of filesystem being mounted

  7. -w mounts the filesystem as read/write (the default)

The opposite of mounting a filesystem when it is needed is unmounting it when it is no longer needed. Do this with the umount utility. Options that can be used with the umount utility are:

  1. -a unmount every entry in /etc/mtab

  2. -n unmount but do not update /etc/mtab

  3. -r if the unmount fails, remount the filesystem read-only

  4. -t unmount all entries of a specific filesystem type

In most cases, the only filesystems that you would unmount while a system is active are network filesystems or those associated with removable storage.
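For example:

    umount /mnt/cdrom      # unmount by mount point...
    umount /dev/hdc        # ...or by device name
    umount -a -t nfs       # unmount every NFS filesystem listed in /etc/mtab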

Other tools you should know about for this section include:

  1. badblocks: used to check the device for bad blocks

  2. dd: commonly used for cloning media or partitions, or for translating files from one format or blocksize to another

  3. debugfs/dumpe2fs/tune2fs: tools for examining ext2/ext3 filesystems and changing their parameters, such as the frequency of automatic checks
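A few illustrative invocations of the tools above (the devices and values are examples only):

    badblocks -v /dev/hdb1                        # scan a partition for bad blocks
    dd if=/dev/hda1 of=/tmp/boot.img              # clone a partition into an image file
    dd if=/dev/zero of=/dev/fd0 bs=512 count=1    # overwrite the first sector of a floppy
    dumpe2fs -h /dev/hda2                         # print the ext2/ext3 superblock summary
    tune2fs -c 30 /dev/hda2                       # force a full check every 30 mounts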

Hardware

Hardware-related information is stored in virtual files beneath /proc and can be viewed from there. Most filenames telegraph the information they hold; for example, /proc/dma holds information on the Direct Memory Access channels in use. In the 2.6 Linux kernel, block, bus, class, devices, firmware and power are in /sys and not /proc.

Use the hwinfo utility to see a list of information about the installed devices. If you use the --log option, you can specify a file for the information to be written to. Use the hdparm utility to see and change hard disk parameters. A plethora of options are available, and entering hdparm without any parameters will list them. Many of the hdparm parameters can be dangerous if misused; read the man page for this utility very carefully before using it.
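For example (the device and log file name are illustrative, and the exact hwinfo options available can vary by distribution):

    hwinfo --short             # one-line summary of each detected device
    hwinfo --log /tmp/hw.txt   # write the full report to a file
    hdparm /dev/hda            # show the current settings of the first IDE disk
    hdparm -i /dev/hda         # identification information reported by the drive itself
    hdparm -tT /dev/hda        # read-only timing test of the drive and its cache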

Loadable modules are kernel components that are not directly linked or included in the kernel. The module developer compiles them separately and the administrator can insert or remove them into the running kernel. The scope of actual modules available has grown considerably and now includes filesystems, Ethernet card drivers, tape drivers, PCMCIA, parallel port IDE and many printer drivers. Use loadable modules with the commands in Table 2.

The commands affect the modules in the currently running kernel and will also review information available in the /proc filesystem. The lsmod command is used to list the currently loaded modules.

The output of lsmod includes header text identifying different data columns. Column 1 is the loaded module. Column 2 is the module size in bytes. Column 3 is the number of references to the module. And column 4 specifies the modules that call this module.

Removing a module with rmmod requires that you specify the module to be removed. The modinfo command is used to query information from the module file and report to the user. Use the modinfo command to report the:

  • module author

  • module description

  • typed parameters the module accepts

Remember that many modules don't report any information at all. If the module author does not provide the information when the module is developed, there is nothing for modinfo to report. You can use the -a option to get as much information as possible, but some modules will still report none.

You can also use the depmod and modprobe commands when working with modules. Inserting modules one at a time with insmod can be tedious and frustrating, so it's important to understand the relationships between modules: modprobe loads a module together with the modules it depends on, and depmod determines and records those dependencies.
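A typical sequence, using the e100 network driver purely as an example:

    depmod -a          # rebuild the dependency file that modprobe consults
    modinfo -a e100    # report the module's author (if the author supplied one)
    modprobe e100      # load the module plus everything it depends on
    lsmod              # confirm that it is loaded
    rmmod e100         # remove it when it is no longer needed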

The tools and utilities in Table 3 are useful for working with hardware-related issues.

File and Service Sharing

This topic focuses on how Linux interacts with the non-homogeneous world around it. Samba and NFS are the two main topics, and you need to know that Samba includes both the smbd daemon and nmbd daemon to allow resources to be referenced and accessed. Windows users can click on the Network Neighborhood icon and see the resources that exist on Linux machines.

The smb.conf file is used for configuration of all Samba parameters including those for the NMB daemon. When the objects can be reached, they are mounted to make them accessible, and you use them as if they resided locally. SMB support is native to most current distributions, but consult www.samba.org for any information on configuration or troubleshooting of this service. You should also be familiar with the utilities listed in Table 4.
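A minimal sketch; the share name, path and user are purely illustrative:

    # fragment of smb.conf defining one writable share
    [projects]
        path = /srv/projects
        read only = no

    smbpasswd -a maria      # root adds a Samba user
    smbstatus               # show current connections to the server
    nmblookup fileserver    # resolve a NetBIOS name to an IP address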

NFS, the Network File System, is how partitions are mounted and shared across a Linux network. NFS is a Remote Procedure Call (RPC) service. NFS uses three daemons: nfsd, portmap and rpc.mountd, and it shares the directories configured in the /etc/exports file. This file exists on every host by default, but is empty (or contains nothing but comments pointing you to the correct man page for the syntax used within the file). Within the file, specify the directories to be exported and the rights to them. The exportfs command is used to maintain the list of exported file systems defined in this configuration file.

Start the NFS server with the /etc/init.d/nfsserver start command; the rpc.nfsd and rpc.mountd daemons are then spawned. The rpc.nfsd daemon is the service daemon, while rpc.mountd acts as the mount daemon. NFS problems usually fall into two categories: errors in the /etc/exports listings (fixed by editing and correcting them), and problems with the daemons. The most common problem with the daemons is that they don't start in the right order. The portmap daemon must start first, and it is the only daemon needed on a host that accesses shares but does not offer any. Shares can only be mounted through /etc/fstab or manually, using the mount command.
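For example, with host names and directories that are illustrative only:

    # /etc/exports on the server: directory, allowed clients, options
    /home      *.example.com(rw,sync)
    /srv/ftp   192.168.1.0/24(ro)

    exportfs -a                                # export everything listed in /etc/exports
    /etc/init.d/nfsserver start                # spawn rpc.nfsd and rpc.mountd
    showmount -e fileserver                    # from a client, ask what the server exports
    mount -t nfs fileserver:/home /mnt/home    # mount a share by hand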

The nfsstat utility is used to show statistics on NFS client/server activity, while showmount shows the mount information and stat information for an NFS server.

Troubleshooting

If there was ever a potpourri category, this is it. Many utilities in this category are discussed in other topics. Among those that aren't, know the tools in Table 5 and when and how to use them.

Linux Kernel

The /etc/modprobe.conf file is a text-based file used to store information that affects the operation of depmod and modprobe. When modifying or reading this file, remember:

  • All empty lines and all text on a line after a # are ignored.

  • Lines may be continued by ending the line with a backslash (\).

  • The lines specifying the module information must fall into one of the following formats:

    • keep

    • parameter=value

    • options module symbol=value ...

    • alias module real_name

    • pre-install module command ...

    • install module command ...

    • post-install module command ...

    • pre-remove module command ...

    • remove module command ...

    • post-remove module command ...

In the preceding list, all values in the parameter lines will be processed by a shell, which means that "shell tricks" like wildcards and commands enclosed in back quotes can be used during module processing. For example:

path[misc]=/lib/modules/1.1.5
path[net]=/lib/modules/`uname -r`

These entries set the paths in which the system's modules are searched for. Table 6 lists the legal/allowed parameters.

If the configuration file is missing, or if any parameter is not overridden, the defaults are assumed. SUSE LINUX-based systems always look in /lib/modules/kernel-version-default, then in /lib/modules/kernel-version-override, and then any other directories that you've specified using path commands in modprobe.conf.
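A short sketch of such a file; the driver name and option value are illustrative:

    # /etc/modprobe.conf
    alias eth0 e100          # requests for eth0 load the e100 driver
    options e100 debug=1     # pass a parameter to the module whenever it loads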

For the exam, use the patch utility to apply a diff file to an original file.
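For example, to apply a kernel patch (the version numbers are illustrative), work from the top of the source tree:

    cd /usr/src/linux-2.6.11
    zcat ../patch-2.6.12.gz | patch -p1 --dry-run   # test without changing anything
    zcat ../patch-2.6.12.gz | patch -p1             # then apply the diff for real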

System Startup

Two bootloaders are currently used in Linux: LILO and GRUB. The Linux Loader (LILO) lets Linux coexist on your machine with other operating systems: Up to 16 images can be swapped back and forth to designate what operating system will be loaded on the next boot. While GRUB is the default choice in many current distributions, the exam focuses on LILO because it is applicable to all distributions.

By default, LILO boots the default operating system each time, but you can enter the name of another operating system at the BOOT: prompt or force the prompt to appear by pressing Shift, Ctrl or Alt during the boot sequence. Entering a question mark or pressing Tab will show the available operating systems as defined in the /etc/lilo.conf file, which is a text file that can range from simple to complex, based on the number of OSs you have. Changes can be made to the file and are active when you run /sbin/lilo.
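A simple /etc/lilo.conf might look roughly like this (the devices and labels are illustrative); remember to run /sbin/lilo after editing it:

    boot=/dev/hda          # install the loader in the MBR of the first IDE disk
    prompt                 # always show the boot: prompt
    timeout=50             # wait five seconds (in tenths) before booting the default
    default=linux

    image=/boot/vmlinuz    # one bootable image
        label=linux
        root=/dev/hda2
        read-only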

You can use different options with the lilo command:

  • -b to specify the boot device

  • -C to use a different configuration file

  • -D to use a kernel with a specified name

  • -d to specify how long the wait should be in deciseconds

  • -I to be prompted for the kernel path

  • -i to specify the file boot sector

  • -m to specify the name of the map file to use

  • -q to list the names of the kernels, which are held in the /boot/map file

  • -R to set as a default for the next reboot

  • -S to overwrite the existing file

  • -s to tell LILO where to store the old boot sector

  • -t to test

  • -u to uninstall LILO

  • -v to change to verbose mode

In Linux, the operating system can run at seven different levels of functionality. These levels are shown in Table 7.

Runlevels 2, 3 and 5 are operational states of the computer, which means it is up and running and users can conduct business. All other runlevels except 4 involve some sort of maintenance or shutdown operation that prevents users from processing; runlevel 4 is unused by default and its meaning differs across implementations.

Two commands can be used to change the runlevel at which the machine is currently operating from the command line: shutdown and init. Both utilities reside in the /sbin directory. As a general rule, use shutdown to reduce the current runlevel to 0 or 1, and use init to raise it after performing administrative operations. You can use the telinit utility in place of init as it is just a link to init.
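For example:

    shutdown -h now                   # drop straight to runlevel 0 and halt
    shutdown -r +5 "Kernel update"    # warn users, then reboot in five minutes
    init 1                            # fall back to single-user mode for maintenance
    telinit 3                         # return to the full multiuser runlevel afterward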

The /etc/inittab (initialization table) file is the main file for determining what takes place at different runlevels. This colon-delimited text file is divided into four fields. The first field is a short ID, and the second identifies the runlevel at which the action is to take place (blank means all). The third field is the action to take place, and the last field is the command to execute. The si entry runs the system's initialization script before the runlevel scripts.
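A few representative lines (the exact scripts and default runlevel vary by distribution):

    id:5:initdefault:                       # the default runlevel is 5
    si::bootwait:/etc/init.d/boot           # system initialization script, run before the runlevel scripts
    l3:3:wait:/etc/init.d/rc 3              # entering runlevel 3 runs the rc script with argument 3
    ca::ctrlaltdel:/sbin/shutdown -r now    # what Ctrl+Alt+Del does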

The shell script rc (beneath /etc/rc.d) looks for other scripts within subdirectories of /etc/rc.d based on the runlevel. For example, /etc/rc.d/rc0.d and /etc/rc.d/rc1.d and so on. Within those subdirectories are script files that start with either an S or a K. Scripts that start with K identify processes and daemons that must be killed when changing to this runlevel. Scripts starting with S identify processes and daemons that must be started when changing to this runlevel. Startup and kill scripts are executed in numeric order.
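For example, a listing of one runlevel directory might contain entries such as these (the service names and numbers are illustrative):

    ls /etc/rc.d/rc3.d
    K20nfs  S05network  S12syslog  S20sshd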

System Maintenance

You should be able to modify syslog.conf to configure syslogd either to act as a central network log server or to send its output to such a server. Once the information is in a log file, you should know how to use standard tools (grep, cat and so on) to look for entries that need attention.
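For example, a client can forward everything to a central host with one line in syslog.conf, and on many distributions the receiving syslogd must be started with -r to accept network messages (the host name is illustrative):

    # on each client, in /etc/syslog.conf
    *.*    @loghost.example.com

    # on the central server (sysklogd)
    syslogd -r    # accept log messages arriving over the network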

Know how to build and work with a package in both RPM and DEB formats. Also know how to work with backups. It's important to understand the difference between backups and archives. Archives are files you copy from your system to store elsewhere and would not put back on the system if it crashes. Backups are files on the system that you need and would put back on if the system crashed.

Several strategies exist for backing up data to tape:

  • Daily: Copy all the files changed each day to a tape.

  • Full: Copy all files.

  • Incremental: Copy all files added or changed since the last full or incremental backup.

  • Differential: Copy all files added or changed since the last full backup.

Most real backup plans use some combination of these types. For example, full backups are the best, but require the most time to run. For that reason, you might run a full backup every Sunday, and an incremental backup every other evening of the week. The time it takes to run the incremental backups will be much shorter and might be the same each night. However, if the system crashes on Friday, it will take quite a while to restore as you restore the full tape from Sunday, then the incremental tapes from Monday, Tuesday, Wednesday, and Thursday (a total of five tapes).
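With GNU tar, a full-plus-incremental scheme can be sketched like this (the tape device, paths and snapshot file are illustrative):

    # Sunday: full backup; the snapshot file records what was saved
    tar --create --listed-incremental=/var/lib/backup.snar -f /dev/st0 /home
    # Monday through Saturday: the same command now saves only what changed since the last run
    tar --create --listed-incremental=/var/lib/backup.snar -f /dev/st0 /home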

Another possibility would be to do a full backup on Sunday and a differential each night. The amount of time to do the differentials will get longer each night, but if the system crashes on Friday, you only need two tapes: Sunday's full and Thursday's differential.

There are also two other types of backups recognized: copy and partial. A copy backup is simply a copy of a file to the media (think of copying one file to a floppy), whereas a partial backup is just a copy of all the files within a single directory.

Just as important as a good backup strategy and adherence to it is the knowledge that you can restore the data if you have to. This can only come from verifying that on a regular basis. Every so often, when a backup is completed, run a restore operation and verify that you can read back the data in its original form.

System Customization and Automation

While this is the least heavily weighted topic on the exam, it's diverse enough to make it difficult. Not only are you expected to know of the at/cron/crontab structure, but you should also be able to write Perl scripts (know perl -MCPAN -e shell) and be able to work with awk and sed. The sed editor appeared on the Level I certification (Exam 101), but awk makes its first appearance here.

One of the most powerful data processing engines in existence, not only in Linux but anywhere, is awk. The limits to what can be done with this programming and data-manipulation language are the boundaries of one's own knowledge. It allows you to create short programs that read input files, sort data, process it, perform arithmetic on the input and generate reports, among a myriad of other things. Be sure to peruse the man pages on awk before signing up for the exam.
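As a taste, here is a one-line awk program that reads /etc/passwd and reports two fields for the ordinary user accounts (the UID cutoff of 500 is a common convention, not universal):

    awk -F: '$3 >= 500 { print $1, $6 }' /etc/passwd    # login name and home directory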

Summary

This article focused on the first of the two exams you must pass to become certified as an intermediate-level administrator. You can find sample questions for the 201 exam on the LPI site.

A follow-up article will look at the second exam in LPI Level 2 and the topics on exam 202 that you must master to obtain that certification.

Emmett Dulaney is the author of the Novell Certified Linux Professional (CLP) Study Guide (ISBN: 0-672-32719-8) by Novell Press. He is also finishing the Novell Linux Desktop 9 Administrator's Handbook (ISBN: 0-672-32790-2) for Novell Press. He holds a number of other certifications in addition to LPI.

Table 1: The Weighting of Each Exam Objective


Objective | Topic Area | Weight (double it for an approximate percentage)

Configuring a Samba server | File and Service Sharing | 5
Maintaining a Linux filesystem | Filesystem | 4
System recovery | System Startup | 3
Operating the Linux filesystem | Filesystem | 3
Creating and configuring filesystem options | Filesystem | 3
Adding new hardware | Hardware | 3
Configuring an NFS server | File and Service Sharing | 3
Automating tasks using scripts | System Customization and Automation | 3
Patching a kernel | Linux Kernel | 2
Customizing system startup and boot processes | System Startup | 2
Configuring RAID | Hardware | 2
Software and kernel configuration | Hardware | 2
Backup operations | System Maintenance | 2
Kernel components | Linux Kernel | 1
Compiling a kernel | Linux Kernel | 1
Customizing a kernel | Linux Kernel | 1
Configuring PCMCIA devices | Hardware | 1
System logging | System Maintenance | 1
Packaging software | System Maintenance | 1
Creating recovery disks | Troubleshooting | 1
Identifying boot stages | Troubleshooting | 1
Troubleshooting LILO | Troubleshooting | 1
General troubleshooting | Troubleshooting | 1
Troubleshooting system resources | Troubleshooting | 1
Troubleshooting environment configurations | Troubleshooting | 1

Table 2


Command | Description

lsmod | List the loaded modules
insmod | Install a module
rmmod | Remove a module
modinfo | Print information about a module
modprobe | Probe and install a module and its dependents
depmod | Determine module dependencies

Table 3


Utility | Default Purpose

cardctl | Controls PCMCIA cards; allows you to suspend, restore and resume power to a socket as well as eject
cardmgr | Monitors PCMCIA sockets for events (insertion and removal)
lsdev | Shows resource information (I/O, IRQ, DMA) about installed hardware by looking in the /proc directory
lspci | Shows information about the PCI buses
mkraid | Sets up block devices in arrays for redundancy; uses the /etc/raidtab file as its configuration file
setserial | Shows (and can be used to set) information related to serial ports
sysctl | Used to configure kernel parameters at runtime
usbview | Not found in many distributions; shows the devices plugged in to the USB bus

Table 4


Utility | Default Purpose

nmblookup | Used to look up NetBIOS names and map them to IP addresses
smbpasswd | Used to change the SMB password for a user, or root can use it to add users to the local smbpasswd file (located in /etc/samba)
smbstatus | Displays the current status of Samba connections

Table 5


Command | Description

cat | View the contents of a file
chroot | Run a command with a different root directory
cron | Run jobs at a specified time in unattended mode
crontab | Work with the configuration files for cron jobs
dmesg | Print out the bootup messages
init | Change the runlevel
ldconfig | Updates and maintains the cache of shared library data and symbols for the dynamic linker
lilo | Configure the Linux Loader
ln | Create a link to a file
lsof | Display a list of files that are open
ltrace | Run a command and trace its library calls
rdev | Query or set an image root device (as found in /etc/mtab), RAM disk or video mode
rm | Remove a file or directory
strace | Trace the system calls and signals that are made by a command
strings | Display the characters in a file that are printable
uname | Display system information such as the kernel name, release and so on

Table 6


Parameter | Description

keep | If this word is found before any lines containing a path description, the default set of paths will be saved, and thus added to. Otherwise the normal behavior is to replace the default set of paths with those defined in the configuration file.
depfile=DEPFILE_PATH | This is the path to the dependency file created by depmod and used by modprobe.
path=SOME_PATH | The path parameter specifies a directory to search for the modules.
path[tag]=SOME_PATH | The path parameter can carry an optional tag. This tells us a little more about the purpose of the modules in this directory and allows some automated operations by modprobe. The tag is appended to the path keyword enclosed in square brackets. If the tag is missing, the tag misc is assumed. One very useful tag is boot, which can be used to mark all modules that should be loaded at boot time.
options | Define options required for specific modules.
alias | Provides an alternate name for a module.
pre-install module command... | Execute a command prior to loading the module.
install module command... | Install the named module.
post-install module command... | Execute a command after the module is loaded.
pre-remove module command... | Execute a command prior to removing the module.
remove module command... | Remove the named module.
post-remove module command... | Execute a command after the module has been removed.

Table 7


Runlevel | Description

0 | The system is down.
1 | Only one user is allowed in.
2 | Multiple users are allowed in, but without NFS.
3 | Multiple users and NFS.
4 | Is not used by default.
5 | Full multiuser environment with networking and X.
6 | Reboot.

* Originally published in Novell Connection Magazine

