
NetWare Workstation Security Architecture


RICH LEE
Senior Research Consultant
Systems Research Department

DOUG HALE
Consulting Engineer
NetWare Products Engineering

01 Mar 1995


In the September 1994 issue of the Application Notes, we discussed Novell's Global Security Architecture (NGSA). The April 1994 Cooperative Research Report on security and audit discussed the implications of applying security designs to networks in general, and the development of security threats in particular (pp. 15-60). In this Application Note, we get very specific about the need for designers and evaluators to keep pace with the evolving network industry and to supply customers with the security features that will meet corporate system objectives. This AppNote describes several workstation security architectures and the Novell/Cordant offering, which provides technical soundness in workstation security while promoting security in a commercial environment.

PREVIOUS APPNOTES IN THIS SERIES

  • Apr 94: "Building and Auditing a Trusted Network Environment with NetWare 4"

  • Aug 94: "An Introduction to Novell's Open Security Architecture"

  • Oct 94: "Understanding the Role of Identification and Authentication in NetWare 4"

Introduction

What is security? A single clear definition is difficult to obtain since security is a topic with different definitions for different audiences. In this AppNote we refine a definition based upon the underlying security of workstation operating systems. Here security is not just passwords or access control lists, but the underlying mechanisms in the operating systems which protect and secure these features from error or tampering.

We also look at security as a matter of operating system design - not just features and functions. This AppNote addresses the process Novell is involved in for what we believe to be a Class C2 evaluation of the workstation. It looks at the design considerations surrounding the criteria, and it addresses several of the concerns customers have about the security assurance provided by the operating system.

Security is a complex topic. Offerings in the marketplace often make it look and feel far less complex than it really is. Still, the scale of creating secure information processing environments can be staggering: implementing security in a network means examining existing systems, restructuring them where necessary, and then implementing the security measures themselves.

Evaluating the security of an entire operating system as deployed across a network is out of reach for a small company, and even a large corporation may find the cost hard to justify. This AppNote discusses the necessity for independently evaluated security, and the need for evaluation criteria not tied to the technology. The Cordant/Novell workstation security architectures and solution are discussed in particular.

Individuals and companies are faced with migrating to PC-based network infrastructures. Organizations are evolving critical processes (applications) to lower cost platforms and providing greater application functionality. All of this requires better security.

Additionally, we see Internet activity at an unparalleled high, which introduces new problems of mutually competitive (that is, hostile) software. With international organizations looking for more secure ways to transact business across international boundaries, it is essential to recognize the potential damage from hostile and malicious software.

While larger companies may seem to have more difficulty, smaller companies may not even have a budget for security. Both need security to protect their critical processes in a global communication structure, but communication security - what has traditionally been termed COMSEC - will not be enough, since it pertains specifically to protecting the communication channel and not the end-point systems involved.

While this might be seen as an immediate problem centered around large businesses with multiple platforms and heterogeneous operating systems - for which there are extremely limited solutions - it will soon be as great a problem for small businesses that use electronic communication channels to maintain their competitive edge.

The security problem boils down to one of cost. Where can one obtain an inexpensive solution to systems security in an Intel processor world with workstations running DOS and Windows applications? Yet designing and implementing an inexpensive solution which actually works can itself be an expensive undertaking.

Rationale for Doing the Class C2 Evaluation

An independently evaluated product is an advantage provided to all potential consumers by National Computer Security Center (NCSC) evaluations at every level from Class C1 to Class A1. No one wants to pay to repeat security evaluations; it is beneficial to have them done once by a central security clearinghouse such as the NCSC. It also helps to have a readily discernible classification system; in the case of the NCSC, there are six objective classes for evaluation. These classes correspond to the different levels of security a customer might want, and the differences between them do not require a high level of expertise to understand.

By having a common source for specific levels of security evaluation - each with an ascribed assurance level of performance - buyers, sponsors, vendors, and users can obtain standardized products which the NCSC has determined meet their security needs. This is important because it presents ready-made solutions to customer needs.

Where Do You Start?

Starting from scratch is a big problem and can be too overwhelming for companies to face. It is easier to follow the trade journals and do what the pack is doing. Understanding the rationale and depth of the evaluation program provided by the NCSC takes time. It is also hard to overcome the problem of not knowing how much security to apply and where to apply it.

Many system administrators opt for redundant backups with hope for a fast restore should "something" go wrong. Others may install a virus checker, or configure their systems to occlude sensitive information by setting up discretionary access controls. Even these (sometimes expensive) measures are insufficient to meet current and future network security needs, especially against the presence of malicious software.

Whatever you choose to do - proceed with your own people, or use outside experts - you are confronted with a separate knowledge base concerning what principles constitute computer security. You need to determine "new criteria" about the security needs of your systems. "New criteria," or unfamiliar evaluations, may lead to unproven implementations that you may not clearly understand. However, the independent objective criteria used by the NCSC may be to your advantage, since the result is a clearly defined architecture (configuration) with an assurance from the NCSC on the strength of mechanism and continued reliability.

It is important to recognize that someone trying to sell you a security system is going to present reasons why their system will fit your needs. This may be done by pointing out product features or stating results of a performed evaluation. These evaluations are often used to support individual product claims regarding their own features and functions.

While these systems often look good and may include significant test results, the sales approach behind them often lacks the objectivity needed in evaluating operating system security. Novell is trying to ensure objective independent evaluation by participating in the NCSC's Class C2 evaluation.

While obtaining a reliable system may be your primary goal, learning complex security concepts and cryptographic techniques may not. This AppNote does not attempt to explain the criteria, but focuses on the methods used in creating a secure workstation architecture. While the trusted evaluation criteria is not discussed at length, this AppNote does show the implications derived from applying this criteria and developing the security architecture.

Novell's Approach to Security Evaluation

It is important to understand the difference between the features presented for independent evaluations, and the criteria used to evaluate their effectiveness in both commercial and government systems. Novell is currently involved in evaluations with both the U.S. Government and the European ITSEC. Both of these organizations have defined evaluation criteria, and are seeking to create a common basis for reciprocating evaluations.

To that end, Novell is presenting NetWare 4.1 for evaluation against the various standards present in these organizations. Our purpose is to evaluate NetWare 4.1 using independently derived standards for evaluating the commercially relevant security present in a commodity system.

Novell is presenting a very network-oriented platform for evaluation, and this differentiates it from many previous evaluations. Rather than a monolith, Novell is presenting a system built of components; the NetWare 4.1 evaluation occurs at the level of those components rather than at the level of the network system as a whole.

Novell's approach is far more modular and allows for individual components to be evaluated in the context of an entire evaluated configuration. Novell presents the entire network as components based upon the U.S. Government's acceptance of the Trusted Network Interpretation (TNI) for network component evaluation. This allows vendors to submit alternative components in the systems architecture which can be evaluated without the entire architecture being re-evaluated.

While the European evaluation is just starting and the details of Novell's plan are not yet public, the U.S. evaluation utilizes the same criteria which the U.S. Government has used for years to evaluate dedicated high-level and multi-level security needs for various government departments. Major parts of the commercial sector also find this criteria to be applicable.

To that end, Novell will be providing a security evaluated product - Trusted NetWare 4 (including a workstation component) - and an overall architecture for evaluation. Novell's developers and partners may provide additional specific security products such as Identification and Authentication (I&A), and devices such as biometrics, tokens, and so on, as add-on features.

It is extremely important that vendors and developers have the ability to create security add-on products compatible with Trusted NetWare 4.1 security architecture. These products have to be designed to receive evaluated status from the NCSC without Novell having to be a direct participant in the process.

This assures that Trusted NetWare 4 will not have to be "re-built" for each vendor's add-on security product offering. Adding biometrics, such as retinal scanners, for user identification is one example. Producing a different workstation to attach to Trusted NetWare is another example, and that is the primary focus of this article.

Additional information on how to utilize the NetWare 4.1 Class C2 security architecture is available in Novell's Global Security Architecture document (NGSA). That architecture is directly linked to NetWare 4.1's evaluation as a network operating system built from components such as workstations, servers, and media. This concept is clearly defined in the TNI (the U.S. Government "Red Book") as a method for building additional structure on already evaluated parts.

Using the TNI interpretations while doing a standard Class C2 evaluation will provide the marketplace with a component basis for additional development, thus precluding the need for complete operating system re-evaluations.

It is the combination of the architecture and the criteria which enable development of different workstation architectures.

It is important to note that the criteria used for the evaluation is recognized and accepted throughout the information industry by evaluators. The evaluation criteria are available for public inspection, and they are readable. The criteria can be utilized by anyone with sufficient knowledge to use them, and they have withstood the tests of time and logic.

The defined criteria offer a generally accepted basis for evaluation. They are understood much like the generally accepted accounting practices (GAAP) in the accounting realm or the generally accepted audit procedures for internal and electronic data processing auditors (EDPA).

Most importantly, independent evaluation by the NCSC will clarify Novell's client workstation architecture in regard to the security at the server. This information will be available to implementers and administrators who may not be interested in developing a complete evaluation for network security on their own.

Why an Independent Evaluation

NCSC evaluation answers the basic assurance problem facing customers, vendors, sponsors, and users. Assurance is the confidence that security features will be preserved in the face of an identified threat. This is where the real gains in creating a secure network system are made, and it makes it necessary to evaluate system robustness down to the security mechanism level as the system evolves through new versions and platform changes.

Along with the advantages (and difficulties) of accepting an independent evaluation come the architectural representations Novell has created to meet the criteria and to support its proposed architecture for evaluation.

Figure 1: NGSA architecture diagram.

While the evaluation criteria (U.S. Government "Orange Book") promotes the evaluation of an entire system with one policy model, Novell has taken the initiative to separate the components and provide each with its own security policy which integrates with the network policy model. This is in accord with the TCSEC as interpreted by the TNI.

Since the trusted workstation is one of the newest components (from Novell's point of view), it is essential that the security policy for the workstation and the workstation's abilities are clearly understood. For that reason, the rest of this AppNote is dedicated to exploring workstation architecture and introducing Novell's initial offering. We expect the Cordant/Novell trusted workstation to successfully complete Class C2 evaluation.

Workstation Security

As workstation technologies replace terminal technologies, security problems become increasingly difficult. The primary reason for this is that workstations are not usually designed for security, and are definitely not designed to meet evaluation criteria.

This fact of architecture causes the workstation operating systems to be reviewed for changes in security assurance with each version of software or of hardware - a significant factor. If done as a complete system evaluation, the security of the entire network would need to be reviewed.

Here we see the need to examine what really is required for workstation security. The fastest way to do this is to look at the various methods of implementing workstation security.

Workstation Security Architectures

There are several design architectures which allow a generic secure network workstation. Each of these design architectures is dependent upon the establishment of the Trusted Computing Base (TCB) and upon the security features which exist in that particular architecture. Not all of these are commercially available today, but may be in the near future. Partners and developers may want to consider the feasibility of solutions outlined here. To understand secure workstation construction, it is necessary to see what parts of the criteria apply to the workstation.

To answer this we need to better define what we mean by:

  • Features

  • Assurance

We can define features as the identified requisites of a secure workstation. This would include some or all of the following:

  • Identification and Authentication (I&A)

  • Discretionary Access Controls (DAC)

  • Audit

  • Mandatory Access Controls (optional, and not required for Class C2).

The minimum requirement may be as simple as just presenting I&A.

The required assurance is arrived at by examining the integrity of those features, along with testing the specific code segments enabling them. These relate to the following:

  • Process Isolation - the ability to separate the activities of one user from another.

  • Object Reuse (OR) - preventing objects from persisting without protection.

  • Assurance - confidence of feature preservation.

Currently we find that while the first two items listed above are well understood, assurance - the confidence in the integrity of each feature - is usually not understood. This is a case where three classic security principles are most applicable:

  • Always invoked

  • Tamper proof

  • Verifiable

These three principles form the basis for feature integrity, and the basis for feature testing.

The Second College Edition of The American Heritage Dictionary defines integrity as "1. Rigid adherence to a code or standard of values. 2. The state of being unimpaired; soundness. 3. The quality or condition of being whole or undivided; completeness."

This definition applies to establishing integrity of the Trusted Computing Base (TCB) boundary between the TCB itself and untrusted elements of the system, as well as the proper functionality of the TCB. In order to do that, we need to discuss the definition of a TCB boundary and how it is established in a trusted way.

Defining the Trusted Computing Base Boundary

Software alone cannot establish a TCB boundary. Software (including its data) is changeable and modifiable; hardware is not, and that makes hardware self-protecting. Software can be relocated in memory - its locations can change, its operation can be influenced by the values of the data it uses, and new parts can be linked in. Because software is not static, hardware becomes the basis for boundary definition.

The hardware basis for the boundary definition is a logical choice. Hardware is physical. It operates in a certain manner predictably, and you can determine whether it is working properly (i.e., it can be tested). Hardware by its very nature must be reliable, testable, and maintainable. This gives rise to the fact that hardware can be separated into parts which are statically inspectable, unlike software which is not statically inspectable.

With a hardware boundary presented as the basis for enforcing the TCB boundary of the workstation, the next question is what software will be used on the hardware. Everyone thinks that they will be running safe, secure, reliable software, but everyone also knows that there is the chance of malicious software (similar to viruses) running unbeknownst to the user.

Since this is a possibility, it is necessary to know our operating system has not been corrupted, and to keep it segregated from "potentially dangerous" application software. It is important to do this before the two ever come in contact.

The workstation motherboard should not be considered a definite limit to the workstation TCB. For instance, the boundary could be enforced by non-CPU hardware, such as the add-on cards which Novell will employ in its initial Class C2 evaluation. Or it could rely on a different CPU (other than the 80386 or 80486), as in a Virtual Machine Monitor (VMM) solution.

To understand these two different approaches to implementing workstation security, it is necessary to back up and clarify what a Trusted Computing Base (TCB) is and what it does.

Purpose of a Trusted Computing Base. The basic purpose of a TCB is to isolate trusted features from untrusted areas of the computer, such as isolating trusted address space from untrusted address space. For Novell's evaluation, which comprises a full network - workstation, server, transmission media, and administrative console - this means completely detailing the structure, operation, and effectiveness of each feature present in each component, whether it be workstation, server, media, or administrative console, for the evaluation (Class C2).

For NetWare 4.1 at a Class C2 level, those features provided by the TCB are:

  • Identification and Authentication

  • Audit

  • Discretionary Access Controls

These features, as applied to a distributed network architecture (workstations and servers), are given in the Trusted Network Interpretation (Red Book). They are also discussed in the NetWare Global Security Architecture (NGSA - which is available to the public) and described in the NetWare Systems Architecture Design (NSAD - which is not publicly available).

The NSAD is presented to the government (for Class C2), and describes the integrated architecture of security features in providing assured protection of the network component.

The Trusted Network Interpretation (TNI) is a complete document interpreting all of the TCSEC criteria, as it applies to the construction of networks. While the TCSEC defines the actual criteria on which evaluations are performed, the TNI identifies how the criteria is applied when it is distributed across network components. Four major components of security policy are described in the TNI (and are sometimes referred to as MAID):

  • Mandatory Access Controls (M)

  • Audit (A)

  • Identification and Authentication (I)

  • Discretionary Access Controls (D)

All system components - servers, workstations, and so on - must, in aggregate, satisfy each evaluation class's criteria for object reuse, systems architecture, and assurance. Where the Orange Book (TCSEC) says all criteria must be met, the U.S. Government's Red Book (TNI) says that the features must follow the criteria but can be distributed among the various components making up the network system.

Not every feature needs to be present in a workstation at all times. There are instances in untrusted systems where the features are outside of the trusted area of the computer. It is the workstation TCB which participates in those policies - not the untrusted portion. At the Class C2 level, the Mandatory Access Controls (M in MAID) are not present.

The TNI discusses network implementation features such as Mandatory Access Controls (not required for the Class C2 evaluation). Interestingly, the TNI allows the NSAD, which is Novell's Systems Architecture Document, to be evaluated at Class C2 with additional features presented, even though these features may not be required as part of the document at the Class C2 level of evaluation.

This means that Novell can describe Mandatory Access Control documentation in the NSAD for Trusted NetWare without the necessity of having it evaluated at Class C2. It allows for the provision of constructs within Trusted NetWare, which developers can understand and work with.

Establishing a TCB. A TCB boundary means that each feature present in the operating system must be protected. For example, a Discretionary Access Control (DAC) check for file access control would require that the executable portion of the code, the resources it needs, and all downstream accesses pertaining to the operation of that code be protected for assurance at Class C2.

If just the DAC data and the check are protected, but not the downstream accesses to the DAC data, then protected files, along with their access control information, could still be accessible through direct access to the BIOS interfaces.

Note: Since DOS does not have any substantial assurance for the access controls built into the filing system, this is a moot point for an unsecured workstation. However, if several users are to use a single workstation with the need for access control to local files, this becomes an important issue. When the workstation becomes part of a network, then it is extremely important to protect the local resources from unauthorized access by users or other software.
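
To make the downstream-access concern concrete, here is a minimal C sketch. It is not Novell's or Cordant's actual code; the names tcb_read, dac_allows, and raw_disk_read are hypothetical. The point is that a DAC check protects nothing if untrusted code can still reach the storage layer directly, for example through a raw BIOS-style read.

    /* Hypothetical sketch: a TCB-mediated file read versus a direct
     * "downstream" access that bypasses the DAC check entirely.
     * All names are illustrative, not actual NetWare code. */

    #include <stdio.h>
    #include <string.h>

    struct file_entry {
        const char *name;
        const char *owner;      /* discretionary access control data */
        const char *contents;   /* what the raw storage layer holds  */
    };

    static struct file_entry disk[] = {
        { "payroll.dat", "admin", "salary records..." },
    };

    /* The DAC check: only meaningful if every path to the data goes through it. */
    static int dac_allows(const struct file_entry *f, const char *user)
    {
        return strcmp(f->owner, user) == 0;
    }

    /* Trusted path: the TCB mediates the request. */
    static const char *tcb_read(const char *name, const char *user)
    {
        for (size_t i = 0; i < sizeof disk / sizeof disk[0]; i++) {
            if (strcmp(disk[i].name, name) == 0)
                return dac_allows(&disk[i], user) ? disk[i].contents : NULL;
        }
        return NULL;
    }

    /* Untrusted "downstream" path: a direct BIOS-style read that never
     * consults the DAC data. If this path is reachable, the check is moot. */
    static const char *raw_disk_read(int sector)
    {
        return disk[sector].contents;
    }

    int main(void)
    {
        printf("mediated read as 'guest': %s\n",
               tcb_read("payroll.dat", "guest") ? "allowed" : "denied");
        printf("raw read, bypassing DAC:  %s\n", raw_disk_read(0));
        return 0;
    }

In a trusted workstation, the raw path must either not exist for untrusted code or must itself be mediated by the TCB.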

Other issues such as audit, the point at which the audit is cut, and all of the downstream access to audit files, are another example. Preventing audit information from being deleted from the disk or from being modified before it is written to disk are additional examples of how the integrity must be enforced to assure true and accurate function of the code.

Identification and authentication (I&A) is interesting because of the identity secrets (keys and passwords). When and if those secrets become visible, they can be used without the user knowing it - thus allowing authentication of the user without his or her knowledge. It is necessary that all of the identification and authentication (I&A) secrets be protected at the same time as the processing code and mechanisms which are utilized.

As shown previously, the security function has to make the feature "tamper proof." While the security feature of I&A may be preventing unauthorized users from accessing controlled objects on the server - thus protecting the server by identification of each user (I&A) - the execution of the feature is the target of tamper proofing.

This protection of the components is accomplished by following an architecture and mechanism for what is and what is not acceptable in accomplishing the required protection. For Novell's Class C2 evaluation, the systems architecture is designed by Novell to meet the TCSEC and the TNI.

Once a correspondence between the identified features and the criteria has been established for the workstation, it is possible to evaluate the assurance provided for features based on the system architecture.

Since a workstation may or may not have all of the features as outlined in the TNI, it is important to identify which components exist in an interface and protocol description of how those features present at the workstation interact with the other components of the network.

Note: While the actual NSAD document is currently Novell Confidential and not for general distribution, many of the interactions specified in it are mentioned in the AppNote titled "An Introduction to Novell's Open Security Architecture" in the August 1994 Novell Application Notes.

It is important to note that as more MAID features are added to a component, the number of security-relevant objects may grow (mostly in C2/B1). While there is no direct one-to-one relationship between the number of security objects present and the hardware to implement the TCB boundary, the number of downstream objects requiring examination may increase.

One ramification stemming from increased features is the increased need to protect those features and any data they depend upon. This applies to all of the security-related objects and creates a need to separate them from untrusted objects or untrusted code running in the workstation. One implication is that the security policy extends to the physical media where these objects are stored, and indicates a necessity to keep protected objects isolated from untrusted code.

This implication extends far beyond creating separate media for trusted and untrusted objects. It points out the need for a trusted operating system coming from a protected place, and operating in a trusted address space (an operating system of known content and without possibility of modification). It shows the need to isolate trusted code from the operating system once untrusted objects or code has been accessed.

The contention that a trusted process, one that is part of the TCB, can no longer be assumed to be trusted once untrusted objects have been accessed is a point described in two concepts:

  • Relinquished control

  • Temporal trust

Relinquished control means that once control is relinquished from the initialization of the processor to another process, you have lost control and therefore can potentially lose trust.

If you do not want to lose trust in the operating system, then the operating system must come from a protected area and can only be trusted until it relinquishes control to an untrusted process. Logically, you need a trusted (secure) place from which the operating system loads, a place which cannot be accessed by untrusted code once the OS and its data are in memory. The implication is that the loaded operating system is only trusted until control is relinquished unless it is part of a separate domain.

How the TCB Is Established

The goal in establishing a Trusted Computing Base (TCB) is similar to creating an unbreachable fortress wall. In the TCB, one-way gates become filtering interfaces that filter every request. This allows the TCB to act as an agent for untrusted code - doing in a trusted way what the untrusted code wants done.

Untrusted code requests must be subject to TCB mediation which is basically a filter, preventing untrusted operations from manipulating security objects or from modifying the TCB behavior. This keeps untrusted code from bypassing the TCB functions.

The TCB must also provide an interface for untrusted code to communicate, such as a one-way trusted gate. But how is this established?

Here is where the hardware boundary comes into play. The easiest way to establish a separation of trusted from untrusted code is to establish a separate domain - putting the TCB in one address space and the untrusted objects (code) in another. There can be no overlay of the two, and communication between them cannot be direct, if the trusted side is to remain trusted.

This constraint brings about the need for levels of indirection in the communication between the trusted and untrusted sides of the code. If the two sides are mutually exclusive, then there is no communication between them and further processing cannot take place. So it is necessary to form an indirect communication, in which the TCB resolves a request from the untrusted side, as okay to do or not okay to do. This means that the user's code (untrusted) cannot take any direct action on the TCB space. It can only request that the TCB do something for it.
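
The following is a small conceptual sketch, in C, of this one-way gate. It is not NetWare code; the request structure and function names are invented for illustration. The point is that untrusted code can only hand the TCB a request, and the TCB decides whether to act on it.

    /* Conceptual sketch of TCB request mediation ("one-way gate").
     * Untrusted code never touches TCB objects directly; it submits a
     * request and the TCB decides whether to carry it out on its behalf.
     * All types and names are hypothetical. */

    #include <stdio.h>
    #include <string.h>

    enum op { OP_READ, OP_WRITE };

    struct request {            /* the only thing untrusted code may hand over */
        enum op     op;
        char        object[32];
        const char *subject;    /* who is asking */
    };

    /* TCB-side policy: the filter every request must pass. */
    static int tcb_mediate(const struct request *req)
    {
        /* Stand-in for the real reference-monitor checks:
         * always invoked, tamper proof, verifiable. */
        if (strcmp(req->object, "audit.log") == 0 && req->op == OP_WRITE)
            return 0;               /* untrusted code may not alter audit data */
        return strcmp(req->subject, "authenticated-user") == 0;
    }

    /* The one-way gate: the sole entry point into TCB address space. */
    static void tcb_gate(const struct request *req)
    {
        if (tcb_mediate(req))
            printf("TCB performs %s on %s for %s\n",
                   req->op == OP_READ ? "read" : "write",
                   req->object, req->subject);
        else
            printf("TCB rejects request on %s from %s\n",
                   req->object, req->subject);
    }

    int main(void)
    {
        struct request ok  = { OP_READ,  "report.txt", "authenticated-user" };
        struct request bad = { OP_WRITE, "audit.log",  "untrusted-app" };
        tcb_gate(&ok);
        tcb_gate(&bad);
        return 0;
    }

In a real trusted workstation the gate would cross a hardware-enforced boundary (a separate processor or address space) rather than a function call, but the control flow is the same.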

We can look at this concept in terms of the Cordant workstation, which is the easiest model to conceptualize:

  • One processor with memory (which is the user)

  • Another processor (which is the TCB address space)

In the Cordant model, all TCB functions are performed by the TCB processor, with a communication channel between the TCB and the untrusted code address spaces. The user address space has the program and the memory of the user, and none of the user objects require access mediation. The TCB contains the security-related objects and the access mediation (DAC checks) to those objects.

In the Cordant example, the security-related objects are file system objects - files and directories - and the access control information about them - identification and authentication (I&A) info and the secrets, audit logs, info about users, and what they do. Then there is the physical storage media and the process of physically storing these objects on a hard disk.

In the Cordant example, we see two features implemented at the workstation: an "I" component (Identification and Authentication) and a "D" component (Discretionary Access Control). While there is also an "A" component (Audit), the Novell/Cordant architecture is "ID" only. In any case, the security-related objects for the "I" and the "D" components are stored locally, with a subsequent inclusion of the local hard disk into the TCB.

The Cordant architecture makes no use of the systems architecture of the untrusted system and uses a completely redundant system as the TCB. This is the ultimate in security. There is very little chance of penetration. There is no way around this TCB. The TCB is entirely in control of its own resources. However, this method requires an add-on card to implement the solution.

While providing the basis for establishing the TCB, this example shows the basic difficulty in handling trusted versus untrusted security objects at the workstation. Separation into components can cause increased hardware expense and additional CPU cycles when implemented. In the Cordant solution there is a requisite level of redundancy - two memories, two I/O systems, and the Cordant system disables some of the workstation operating system and replaces it with its own.

This solution establishes two complete computers - you end up with copies of things in two places. While this solution might introduce additional latency, the increased expense has been kept to a minimum. There are other methods by which a secure workstation can be created. One example is using the same processor as the workstation to create a virtual machine. This option will be discussed in a later section, after clarifying what the TCB must actually do, and how it must provide the required protection.

Reducing Cost of Securing Workstations

While some incremental expense to the workstation may be necessary in any solution, there are several alternatives which Novell developers and implementers may want to explore. These alternatives stem from two different paths to the expense problem created in the solution using two CPUs. While Cordant has done an excellent job of keeping expenses to a minimum while providing a very effective DOS and Windows capability, there may be partners and developers who will produce less expensive (and somewhat less secure) solutions which may include smaller TCBs within them. Smaller TCBs may not provide the depth of service at the workstation or be able to account for the local file system's activity.

There are three methods for reducing costs when providing alternative secure workstations. These three basic solutions are:

  1. Switch to a more integrated solution. This may increase engineering expense.

  2. Build the TCB into the LAN chip. A subset of this solution is to use a low-cost commodity-style controller inserted between the card edge and the connector.

  3. Reduce the cost without reducing the functionality. This can be done by reducing the redundancy - by having one processor do the context switching.

The Second Solution Is "I"

Of course, it is possible to implement a smaller TCB if some feature is removed from the workstation TCB. If one were to implement a smaller workstation with only the "I" component (Identification and Authentication), there are significantly fewer security objects in the TCB. This leaves the workstation with just a secret and the code to convince other members that the workstation knows the secret. This would result in a smaller TCB.

With a smaller TCB, the system boundary is smaller, and the cost to create such a workstation is significantly less.

From Figure 2, one can see hardware- and nonhardware-based implementations, yet in actuality there is always hardware. The major question is, "Can the workstation CPU (client) be utilized for the implementation of the TCB or is there another place for the TCB?" This question drives the majority of cost-based solutions for implementing Class C2 security and is discussed in the next section.

Figure 2: The Cordant solution.

Using the Cordant solution as the basis for implementing a trusted workstation, we have a point of comparison for the smaller TCB of the "I" policy machine. Initially it is important to note that each implements a different feature set at the workstation - this is not a feature-to-feature implementation comparison. It is an "apples to oranges" comparison of secure network architectures for the workstation.

One of the Cordant solutions implements an "ID" set of features, utilizing an "ID" policy for the workstation. The "I"-only workstation implements an "I" policy model. At this point, one recognizes that the other features are enabled somewhere else in the network, not gone entirely (see Figure 3).

Figure 3: Encrypted LAN Card.

It is less expensive to put more features in the server - a one-time cost - than in the workstations where a many-to-one relationship exists. However, one must also recognize that this Cordant solution provides for accessibility of the workstation even when there is no server present. This factor is often overlooked, and reducing the workstation policy model to an "I" (only) comes with the following additional considerations:

  • The workstation must be a diskless or "read-only" workstation.

  • The workstation will not boot without a server.

While reducing the total amount of information in the TCB, and thus decreasing the need for processing power, there can be no local file system present. By reducing the workstation policy to "I" only, there cannot be a trusted file system at the workstation. Since there is no file component at the workstation, there is no reason to audit. Without the ability to audit (A), no secret can be stored at the workstation. Thus if a secret is to be used at the workstation, it must exist only temporarily and be removed before it can be accessed by untrusted software (object reuse), or all subsequent software must be trusted. This also implies that the "I" component requires the server to participate in the identification and authentication cycle.

The concept introduced here is that of a temporally trusted TCB boundary, commonly called a TCB extension of the server TCB. Basically, the address space in the TCB is only trusted at a certain time. Here trusted code will remove any accessible security object or make security objects inaccessible (in an irreversible manner) and then give control to untrusted code outside of the TCB.
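
As a minimal illustration of this temporal trust and the object reuse requirement, the C sketch below acquires a secret during the trusted window, uses it, and scrubs it from memory before untrusted code runs. The function names are hypothetical; this is not the actual boot or authentication code.

    /* Illustrative sketch of the object-reuse requirement: the I&A secret
     * exists only while the workstation is temporally trusted, and is
     * scrubbed from memory before control passes to untrusted code. */

    #include <stdio.h>
    #include <string.h>

    static char secret[16];

    static void acquire_secret(void)
    {
        /* Stand-in for obtaining the secret during the trusted window. */
        strcpy(secret, "s3cr3t-key");
    }

    static void authenticate_to_server(void)
    {
        printf("authenticating with secret of length %zu\n", strlen(secret));
    }

    static void scrub_secret(void)
    {
        /* Object reuse: overwrite the storage so no residue remains for
         * untrusted software to read later. (A volatile pointer is used
         * so the clearing cannot be optimized away.) */
        volatile char *p = secret;
        for (size_t i = 0; i < sizeof secret; i++)
            p[i] = 0;
    }

    static void run_untrusted_code(void)
    {
        printf("untrusted code sees secret: \"%s\"\n", secret);  /* empty */
    }

    int main(void)
    {
        acquire_secret();            /* trusted window begins */
        authenticate_to_server();
        scrub_secret();              /* trusted window ends   */
        run_untrusted_code();        /* control relinquished  */
        return 0;
    }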

Establishing the Trusted Path (Hard Reset)

To accomplish this we need to start at a known state in the hardware. For the "I" machine this state would occur after Power-On Self-Test (POST) and before turning over the address space to an untrusted operating system.

Note: The POST state of the processor is considered the preliminary point of control from which the trusted path can be followed. Any relinquishment of control, like that of the Microsoft NT workstation (Ctrl+Alt+Del), would be unacceptable.

Following the order of events, a hard reset would enable a cold boot POST cycle, and BIOS extension code can then own the entire machine address space. This is a point which can be trusted, and it is the appropriate time at which the I&A secret can be passed in a trusted manner on an "I"-based machine.

In the "I" example above, the TCB is trusted and then the TCB is removed (protected from further access) relinquishing control to untrusted application code. The mechanism for this could be activated through BIOS extensions provided for a special LAN chip. Once the authentication secret is confirmed by the server, the LAN chip can be reset. At this point, untrusted processing can take place since no file operations will happen at the client while the server is providing the other appropriate MAID features.

The underlying mechanism in this approach is to protect the TCB when the TCB is finished with authentication. This is accomplished using a "set only" latch in the special LAN chip, such that the TCB portions are locked down irreversibly in the chip.
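
A conceptual model of such a "set only" latch is sketched below in C. The register names and layout are invented for illustration and do not describe the actual NIC hardware; the point is that once the lock bit is set, the key can be neither read back nor replaced until a hard reset.

    /* Conceptual model of a "set only" latch: once the lock bit is set,
     * the key register can no longer be read or rewritten from the
     * workstation side, and the bit cannot be cleared except by a hard
     * reset. Register names are hypothetical. */

    #include <stdio.h>
    #include <stdint.h>

    struct nic_regs {
        uint8_t  locked;      /* set-only latch: 0 -> 1 only            */
        uint32_t session_key; /* binding key placed during trusted boot */
    };

    static void nic_load_key(struct nic_regs *nic, uint32_t key)
    {
        if (!nic->locked)
            nic->session_key = key;   /* only possible before lock-down */
    }

    static void nic_lock(struct nic_regs *nic)
    {
        nic->locked = 1;              /* irreversible until hard reset  */
    }

    static int nic_read_key(const struct nic_regs *nic, uint32_t *out)
    {
        if (nic->locked)
            return -1;                /* key is no longer visible       */
        *out = nic->session_key;
        return 0;
    }

    int main(void)
    {
        struct nic_regs nic = { 0, 0 };
        uint32_t k;

        nic_load_key(&nic, 0xC0FFEE);     /* trusted boot phase          */
        nic_lock(&nic);                   /* before untrusted OS loads   */

        nic_load_key(&nic, 0xBADBAD);     /* ignored: already locked     */
        printf("key readable after lock? %s\n",
               nic_read_key(&nic, &k) == 0 ? "yes" : "no");
        return 0;
    }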

The problem overcome by this design is the risk of losing the authentication secret. Here the goal is to have the workstation temporally trusted while acquiring the authentication secret, then to establish a verifiable binding relationship with the server, while removing the authentication secret from any possibility of access, retransmission, or attempts to filter off the secret.

This process requires a lock-down mechanism in the NIC chip. The binding relationship is the NIC encryption key conveyed to the workstation during trusted boot and placed in the workstation NIC chip. Then the authentication secret is removed from the address space, and access to the encryption keys is locked down on the NIC chip.

No trusted address space remains accessible to the workstation, either on the NIC card or in the workstation itself. The NIC chip's trusted address space is protected from further access by the workstation, allowing an untrusted operating system with an untrusted protocol stack to boot.

Explanation of What Is Really Going On

What the workstation does is identify itself and obtain a connection number from the server. This is a communication in which the workstation says, "This is who I am. I am #3, because you said I was #3, and to prove who I am here is my IPX address."

The IPX source address and connection number reside in the untrusted protocol stack running in the untrusted workstation. A generic untrusted workstation could lie about the connection number and about its own IPX source address.

The reason the server can trust the IPX source address from the untrusted protocol stack is that the IPX source address was delivered (encrypted) by the "special" NIC chip with a key (known secret) associated with the trusted MAC source address (outside of the workstation's control). The server uses the trusted MAC address to select a decryption key and decrypts (compares) the IPX source address. If the MAC and the IPX do not match, the packet is dropped and is never seen by the NCP connection.
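
The server-side check can be pictured with the rough C sketch below. The XOR "cipher," the key derivation, and the structure names are stand-ins, not the actual NetWare mechanism; the sketch only shows the binding logic: the trusted MAC address selects the key, and a packet whose decrypted IPX source does not match the claimed IPX source is dropped.

    /* Rough sketch of the server-side binding check. The trusted MAC
     * source address selects a per-workstation key; the IPX source
     * address was encrypted by the special NIC chip with that key.
     * A mismatch means the packet is dropped before NCP sees it.
     * The XOR "cipher" and all names are illustrative only. */

    #include <stdio.h>
    #include <stdint.h>

    struct packet {
        uint64_t mac_src;     /* inserted by NIC hardware, outside workstation control */
        uint32_t ipx_src;     /* claimed by the (untrusted) protocol stack             */
        uint32_t ipx_src_enc; /* IPX source as encrypted by the special NIC chip       */
    };

    /* Hypothetical key table indexed by trusted MAC address. */
    static uint32_t key_for_mac(uint64_t mac)
    {
        return (uint32_t)(mac * 2654435761u);   /* stand-in key derivation */
    }

    static int server_accepts(const struct packet *p)
    {
        uint32_t key = key_for_mac(p->mac_src);
        uint32_t decrypted = p->ipx_src_enc ^ key;   /* toy cipher           */
        return decrypted == p->ipx_src;              /* bind IPX claim to MAC */
    }

    int main(void)
    {
        uint64_t mac = 0x00001B1234567890ULL;
        uint32_t ipx = 0x0000A001;

        struct packet honest = { mac, ipx, ipx ^ key_for_mac(mac) };
        struct packet forged = { mac, 0x0000A002,            /* lies about IPX */
                                 ipx ^ key_for_mac(mac) };

        printf("honest packet: %s\n", server_accepts(&honest) ? "accepted" : "dropped");
        printf("forged packet: %s\n", server_accepts(&forged) ? "accepted" : "dropped");
        return 0;
    }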

Other Considerations

If a different workstation using the special NIC chip created a source IPX address belonging to another workstation, the server would recognize this because it would not match that workstation's trusted MAC address.

If other software on the "I" workstation modified the protocol stack and changed the IPX address, it would not match the address at the server and the server would drop the packet. Therefore a trusted protocol stack would not be required.

This solution also handles the problem where the user is securely authenticated to the first server, but not to a second or third server. We overcome this by asking the first server to surrogate the authentication to the second server for us.

Additionally, where routers are present, the MAC-layer address does not survive past the router and so becomes invisible on the far segment. We solve this by having the router authenticate the packet (encrypted routers) and by trusting the router. Routers can additionally encrypt the MAC header themselves.

The Third Solution

Another solution in creating a commercially feasible secure workstation is to reduce the cost of the workstation without reducing the functionality by reducing the redundancy in the workstation. This requires one processor capable of context switching and I/O mediation.

Context switching means that a single memory system can be partitioned into trusted and untrusted address spaces, with access mediation applied to the I/O system.

This can be performed in the following ways:

  1. Using full virtual windows enhanced mode machines.

  2. Using an existing mode in the 486SLC chips.

  3. Using the emulation solution.

Full Virtual Windows Enhanced Mode Machines. The first alternative in this solution is to use a single processor with two memory spaces. This is the Virtual Machine Monitor (VMM) approach. The Intel 80486 can do this, but unfortunately it can only create virtual 8086 processors (which are not Windows capable). This solution therefore requires a different chip, one capable of establishing two virtual processors, supporting two address spaces, and providing a secured indirect communication channel capable of supporting independent operating systems in the two address spaces.

Figure 4: Virtual machine.

The job of the VMM is to context switch and to provide a virtual IPX network between the two virtual machines, similar to the interface between the two processors in the Cordant solution above.
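
A highly simplified sketch of those two jobs is given below in C. The "virtual IPX network" is reduced to a one-slot mailbox and the context switch to an alternating loop; all names are hypothetical and nothing here reflects an actual VMM implementation.

    /* Minimal sketch of the VMM's two jobs: switching between a trusted
     * and an untrusted virtual machine, and carrying messages between
     * them over a mediated channel (the stand-in for the "virtual IPX
     * network"). Everything is simplified and hypothetical. */

    #include <stdio.h>
    #include <string.h>

    struct vm {
        const char *name;
        char        channel[64];     /* one-slot mailbox owned by the VMM */
    };

    static struct vm trusted   = { "trusted VM",   "" };
    static struct vm untrusted = { "untrusted VM", "" };

    /* The VMM relays a message; the untrusted VM never touches the
     * trusted VM's memory directly. */
    static void vmm_send(struct vm *from, struct vm *to, const char *msg)
    {
        printf("[VMM] %s -> %s: %s\n", from->name, to->name, msg);
        strncpy(to->channel, msg, sizeof to->channel - 1);
        to->channel[sizeof to->channel - 1] = '\0';
    }

    static void run_untrusted(void)
    {
        vmm_send(&untrusted, &trusted, "request: open \\PUBLIC\\readme.txt");
    }

    static void run_trusted(void)
    {
        if (trusted.channel[0])
            printf("[%s] mediating: %s\n", trusted.name, trusted.channel);
    }

    int main(void)
    {
        /* The VMM alternates ("context switches") between the two machines. */
        for (int slice = 0; slice < 2; slice++) {
            run_untrusted();
            run_trusted();
        }
        return 0;
    }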

System Management Mode. An alternative to the VMM approach is to use an existing mode in the 486SLC chips called System Management Mode. System Management Mode has the ability to hide portions of the physical address space from the processor's normal mode. We place hardware in this portion of the address space that is capable of partitioning the memory and I/O address space into trusted and untrusted spaces.

Note: This alternative requires modification to the Intel architecture.

When System Management Mode allows normal mode to access that hardware, this is the trusted address space. The trusted address space uses that hardware to establish an untrusted work space, and then uses System Management Mode to hide the hardware and switch to the untrusted work space.

The Emulation Solution. The emulation solution is Windows running emulated on a PowerPC. The emulating address space is the trusted address space, and the emulated address space is the untrusted address space. The TCB is the environment that emulates the other. Here one environment is used to simulate the second, and the second is under the control of the first.

Figure 5: Emulated DOS/Windows solution.

Conclusion

While Novell developers and partners will be able to create secure workstation alternatives to work with Trusted NetWare, it is important to mention that customers will receive the immediate benefits from Novell's Class C2 evaluation with the NCSC.

Customers can obtain a very clear picture of the security provided by Trusted NetWare through independent evaluation. Customers will have the added benefit of being able to recognize alternative secure workstation architectures as they become available, and of knowing that this is part of Novell's plan and architecture.

Ultimately customers will receive some level of actual protection against malicious software, previously not available for many operating systems. Developers will benefit from a clearer understanding of Novell's Open Security Architecture and the enabling paradigm for development of secure clients.

* Originally published in Novell AppNotes

