
Security Issues for International Commerce


01 Nov 1997



Outlines various technologies that will be needed to create a secure infrastructure for upcoming security applications such as global electronic commerce

Introduction

The basic technology exists today for protecting secrets and sensitive data, and developers and vendors are generally capable of responding to security needs. However, the commercial products and services that will become widely available to users will be primarily the result of market forces. As the commercial developer of the world's most widely-used network operating system, Novell recognizes the needs of the user as the driving force for providing truly secure network services and security features. Customers, in turn, must continue to communicate their needs for security and for secure services if they are to reap the benefits of secure networking.

To that end, this AppNote covers some of the issues and challenges that will play increasingly vital roles as the network computing industry evolves over the next few years. It covers the following areas:

  • Accountability issues for electronic commerce

  • Standards for the future

  • An architectural rationale for a security infrastructure

Accountability Issues for Electronic Commerce

When performing electronic transactions, it is absolutely essential to have the technological means to provide rock-solid authentication, identification, accountability, and liability. Accountability involves both computers and people, and it is especially critical for electronic commerce. In a transaction environment, you need to know not only which computer ran the transaction, but also which person handled the transaction. However, getting user-access accountability across computer networks is a technical issue unlike those previously addressed by identification and authentication (I&A).

It is important to differentiate absolute accountability from I&A issues for the following reasons:

  • I&A is a gradable mechanism with regard to trust.

  • I&A, though it is the essence of accountability, is affected by temporal events.

  • I&A is not usually cognizant of application-driven events that occur after a user has been identified and authenticated.

Graded Authentication

In an interconnected network, end-users access the network from a variety of locations--not always with the same operating systems, protocols, or hardware. These variations in the path between the end-user and the point of identification lead to an invaluable observation: not all authentications are equal. I&A is gradable in nature. It looks only at the workstation and does not take into account the specifics of the server operating system--NetWare, Windows NT, or OS/2. Therefore, there must be some concern that identification "secrets" (such as passwords) will be passed from the end-user to the keyboard, relayed from the keyboard to the processor, and perhaps even stored in memory after the identification process has occurred.

Identification secrets that the end-user memorizes are definitely more secure than secrets that the end-user writes down. However, there is no basis for trusting the workstation operating system to maintain secrecy. Instances of secrets being duplicated, even for only a short period of time, are commonplace. Windows 95, for example, stores identification secrets at the workstation. UNIX, in many instances, passes passwords unencrypted across the wire. These are good examples of diminishing trust in an identification secret.

This points out the need for a graded quality in the authentication process. In graded authentication, different levels of authorization are granted, depending upon the user's entry location--such as from a local workstation, the Internet, or a dial-in connection. Ultimately a graded authentication provides a more realistic picture of how "good" the authentication was when performed. It corresponds to those factors which influence the verifiability of an end-user's identity as that user comes onto the network.

Graded authentication is not much of an issue in networks where information is not sensitive. Yet when there are levels of sensitive materials, these levels can correspond to the amount of trust in a graded authentication process. Certainly end-users who transmit login passwords to the network "in the clear" should not be given the same grade for authenticated status as end-users who operate from secure equipment in an approved manner.
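
To make the idea concrete, the following sketch (in Python, with invented grade names and policy entries) shows one way a network service might record the grade of an authentication and gate access to sensitive resources on it. It is illustrative only, not a description of any shipping product.

# Minimal sketch: map how a user authenticated to an authentication "grade,"
# then gate access to sensitive data on that grade. Grade names, entry points,
# and thresholds are illustrative, not drawn from any standard or product.
from enum import IntEnum

class AuthGrade(IntEnum):
    UNTRUSTED = 0   # e.g., password sent in the clear over a public link
    BASIC = 1       # password protected in transit
    STRONG = 2      # hardware token used from an evaluated (e.g., C2) workstation

# Illustrative policy: how "good" each entry path is considered to be.
ENTRY_GRADES = {
    ("dial_in", "cleartext_password"): AuthGrade.UNTRUSTED,
    ("internet", "encrypted_password"): AuthGrade.BASIC,
    ("local_lan", "token_on_trusted_workstation"): AuthGrade.STRONG,
}

def grade_of(entry_point: str, secret_handling: str) -> AuthGrade:
    """Return the grade assigned to this login, defaulting to the lowest."""
    return ENTRY_GRADES.get((entry_point, secret_handling), AuthGrade.UNTRUSTED)

def may_access(resource_grade: AuthGrade, session_grade: AuthGrade) -> bool:
    """Sensitive resources require an authentication at least as 'good'."""
    return session_grade >= resource_grade

session = grade_of("dial_in", "cleartext_password")
print(may_access(AuthGrade.STRONG, session))   # False: payroll-class data refused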

Temporal Nature of Authentication

While some degree of accountability is established in most systems through I&A, it is affected by events which can and do occur prior to the I&A process. If you have no idea what is going on before authentication takes place, you really cannot assert with any level of confidence that authentication mechanisms are occurring in proper order or that secrets are being handled correctly. For example, password-scavenging software that copies identification secrets and stores them has been available on the Internet for some time. Such software allows someone else to forge an end-user's identity by password.

On the other hand, if you know (from evaluation) that the Power On Self Test (POST) and BIOS load are the only two events that can occur before I&A, then you are relatively assured (by evaluation) that the I&A mechanism will load and perform as tested. This is why providing NetWare secrets on a Class C2 trusted workstation is considered a reliable mechanism.

So the second point is that I&A is effective only up to the point where it occurs, and then only if you know how the identification secrets are really handled, and if you ascribe to those methods some level of trust. If some unevaluated or unknown process is allowed to operate where identification secrets are present, you have considerably diminished the value and reliability of those secrets.

Full-Session Authentication

Identification is good, but only if we know what has gone on before. This means that after running I&A, we expect to know who is at the other end. If any unknown process occurs after I&A, you cannot be certain of the identity of the end-user. This is the third point: Accountability is based on knowing who the user is throughout the session, from the beginning right up through to the end. The threat of malicious software and maliciously inclined users intervening in an established session might give you cause for a decreased level of trust in accountability even after a positive ID by I&A.
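
One common way to extend accountability beyond the moment of login is to derive a session key during I&A and then authenticate every subsequent request with it, in the same spirit as NetWare's NCP packet signatures. The following sketch, using only the Python standard library and invented function names, illustrates the idea.

# Sketch: bind every request in a session to the identity established at I&A
# time by MAC'ing each message with a key derived at login. The sequence
# number prevents replay of an earlier signed request. Details are illustrative.
import hashlib
import hmac
import secrets

def start_session(user_id: str) -> bytes:
    """At successful I&A, both ends share a fresh session key for this user."""
    return secrets.token_bytes(32)   # in practice derived from the login exchange

def sign_request(session_key: bytes, seq: int, payload: bytes) -> bytes:
    return hmac.new(session_key, seq.to_bytes(8, "big") + payload,
                    hashlib.sha256).digest()

def verify_request(session_key: bytes, seq: int, payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_request(session_key, seq, payload), tag)

key = start_session("jsmith")
tag = sign_request(key, 1, b"TRANSFER 100 TO ACCT 42")
print(verify_request(key, 1, b"TRANSFER 100 TO ACCT 42", tag))   # True
print(verify_request(key, 1, b"TRANSFER 900 TO ACCT 99", tag))   # False: tampered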

While microcomputer networks allow a broad range of activities, network managers and corporate administrators should give consideration to developing levels of accountability commensurate with the company's need for controlled or regulated access. Only by providing networks with the ability to prove or disprove accountability will companies be able to successfully use them for electronic commerce.

Standards for the Future

While significant work is being done by various standards bodies today, no standards currently exist to support solutions to the international problems of implementing electronic commerce. Electronic business cannot be confined to the country of origin. As we move into the future, e-commerce must transcend national boundaries. Clearly, government and business sectors must work together to develop practical security standards that are viable for the next century.

Labeling

These standards should include a labeling standard for data and systems. Such a standard must support wide communication about the security attributes of data and systems. It would allow users to designate the sensitivity of their information to disclosure or destruction, with the expectation that recipients of the labeled information would have a common understanding of how to protect the information and of which other parties may see or alter it.

A labeling standard is a very desirable alternative to having all data treated as an undifferentiated mass with respect to security and protection. It might be coupled with prototypical policies and operational standards for systems and networks, along with technical standards for trusted system security measures to enforce the labels. Trusted systems that enforce labeling offer the primary means for greater assurance of security against malicious software. Ultimately, it is the owners of data who must determine the protection their data requires. A labeling standard would provide a convenient tool for users to categorize their protection requirements and for the implementers of the global information infrastructure to distinguish the sorts of protection it would provide.
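
As a rough illustration, the following Python sketch shows one possible shape for such a label and the kind of "dominance" comparison a system could use to decide whether labeled data may be released to a recipient. The level names and categories are invented, not drawn from any published standard.

# Sketch: a possible sensitivity label and a dominance check deciding whether
# a recipient may see labeled data. Levels and categories are illustrative.
from dataclasses import dataclass, field

LEVELS = {"public": 0, "internal": 1, "confidential": 2}

@dataclass(frozen=True)
class Label:
    level: str
    categories: frozenset = field(default_factory=frozenset)   # e.g., {"payroll"}

def dominates(subject: Label, data: Label) -> bool:
    """The subject may read the data only if its level and categories cover it."""
    return (LEVELS[subject.level] >= LEVELS[data.level]
            and data.categories <= subject.categories)

memo = Label("confidential", frozenset({"payroll"}))
print(dominates(Label("confidential", frozenset({"payroll", "legal"})), memo))   # True
print(dominates(Label("internal", frozenset({"payroll"})), memo))                # False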

Public Key Infrastructure (PKI)

Public Key Infrastructure, or PKI, is rapidly becoming a significant issue for corporations desiring authentication and privacy in e-mail, electronic commerce, and other Internet/intranet applications. PKI is actually a collection of services which may be integrated into a single product structure. PKI services are used to implement X.509 certificates, which use public key cryptography. PKI uses a directory services infrastructure to provide authentication, authorization, encryption, and key management for digital signatures in a variety of applications.

It is important that interoperability standards be established to enable a global PKI infrastructure that can support the public Internet as well as private intranets and extranets. Novell's NDS has a PKI for use with Novell products. In fact, the NDS PKI produces certificates for use within the system.
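
In greatly simplified terms, the core service of a PKI is a Certificate Authority binding a subject name to a public key with its signature, which any relying party can then verify. The sketch below illustrates that binding in Python; it uses the third-party cryptography package, and a dictionary stands in for real X.509/ASN.1 encoding.

# Simplified sketch of a PKI's core certificate service: a CA binds a subject
# name to a public key by signing the pair. Requires the third-party
# 'cryptography' package; a dict stands in for real X.509/ASN.1 encoding.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw(public_key) -> bytes:
    """Raw public-key bytes, for embedding in the 'certificate'."""
    return public_key.public_bytes(serialization.Encoding.Raw,
                                   serialization.PublicFormat.Raw)

ca_key = Ed25519PrivateKey.generate()       # the CA's signing key
user_key = Ed25519PrivateKey.generate()     # the end-user's key pair

tbs = json.dumps({"subject": "cn=jsmith,o=Acme",
                  "public_key": raw(user_key.public_key()).hex()}).encode()
certificate = {"tbs": tbs, "signature": ca_key.sign(tbs)}

# Anyone holding the CA's public key can check the binding.
try:
    ca_key.public_key().verify(certificate["signature"], certificate["tbs"])
    print("certificate binding verified")
except InvalidSignature:
    print("certificate rejected")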

An Architectural Rationale for a Security Infrastructure

The key issue that needs to be addressed before electronic commerce can deliver on its promise is the formation of a global infrastructure that all of the major players can agree upon. In most circles this infrastructure is called the Global Information Infrastructure (GII). This section outlines six key architectural aspects of such an infrastructure.

Modular Cryptography

Novell's NetWare operating system was one of the first in the industry to make use of public-key cryptography for authenticating users, for providing packet integrity for the NetWare Core Protocols (NCPs), and for product license control functions. In fact, with approximately 15 million licensed servers and approximately 60 million users, there is reason to believe that Novell software has created more public-key certificates than all of the rest of the computer industry put together.

However, times change and solutions that were valid in the past must be continuously revised and improved. When networks were operated within the secure confines of a single enterprise and under the control of an operating system which enforced user access controls, the previous functionality was generally sufficient to meet our customers' security needs. But as enterprise networks are extended to include connections to the Internet and a wider range of potentially hostile systems, those assumptions are no longer valid. Additional means of providing data security (confidentiality and integrity) are required. Because access to the customer's physical network can no longer be assumed to be reasonably secure, and since information may pass through untrusted processors and switches as it is routed to its ultimate destination, it is imperative that encryption techniques be used to guarantee confidentiality, and that message authentication and digital signature techniques be used to guarantee integrity, even in a hostile environment.

In addition to the steadily evolving state of the art in encryption technology, there have also been significant developments in the field of cryptanalysis--that branch of cryptography which seeks to "break" encryption algorithms and systems. In cryptographic systems, keys (variable length numbers encoded as bits) are used to change plain text into encrypted messages. A key size that may have been considered sufficiently strong to resist yesterday's threat may be barely adequate today, and far too weak to meet tomorrow's challenges. For instance, 40-bit encryption was once considered to be at least passably strong, but it has recently been broken within 3.5 hours of a public challenge having been issued. Unfortunately, one cannot assume that hostile attackers of computer systems will broadcast their successes. Rather, many will quietly capitalize on them, and so an additional measure of security is required to compensate for the unknown.
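
Some back-of-envelope arithmetic makes the point; the search rate assumed below is purely illustrative, since real attack speeds vary enormously with hardware and algorithm.

# Back-of-envelope key-size arithmetic. The assumed search rate is hypothetical.
GUESSES_PER_SECOND = 10_000_000_000   # illustrative: 10 billion keys per second

for bits in (40, 56, 128):
    keyspace = 2 ** bits
    years = keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)
    print(f"{bits}-bit key: {keyspace:.3e} keys, about {years:.2e} years to exhaust")
# At this rate a 40-bit keyspace falls in roughly two minutes, a 56-bit keyspace
# in a few months, while a 128-bit keyspace remains far out of reach.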

From an architectural perspective, the ability to prevent unauthorized disclosure via cryptography means that encryption technology must be able to be replaced easily. It must be possible to cleanly separate the cryptography from applications or the operating system to reduce interdependencies and complexity. Additionally, some customers may need to replace the supplied algorithms or use proprietary algorithms for particular applications. This requires independent sources of cryptography--that is, cryptography capable of being developed and incorporated independently from the infrastructure--so that the application is independent of the specific cryptography it uses.

These considerations result in a generally accepted requirement for all commercial uses of cryptography where international regulations and restrictions apply, especially for electronic commerce:

(1) It must be possible for third-party developers, including foreign developers, to develop and "plug" cryptography into the commercial infrastructure at any time, with minimum disruption to or knowledge of existing infrastructure concepts, operating systems, and applications. The creator of the infrastructure must not need to know the contents of those engines or to provide specific assistance to those developers.
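
The sketch below suggests one way requirement (1) might be realized: the infrastructure programs against an abstract engine interface, behind which an independently developed module can register itself without the infrastructure knowing its internals. The class and function names are invented for illustration; a passthrough "null" engine is included to show how the interfaces can still be exercised where no real cryptography may be shipped (see the import/export discussion below).

# Sketch of a pluggable cryptography interface in the spirit of requirement (1).
# The infrastructure calls an abstract engine; third parties register their own
# implementations. All names are illustrative.
from abc import ABC, abstractmethod

class CryptoEngine(ABC):
    name: str
    @abstractmethod
    def encrypt(self, key: bytes, plaintext: bytes) -> bytes: ...
    @abstractmethod
    def decrypt(self, key: bytes, ciphertext: bytes) -> bytes: ...

class NullEngine(CryptoEngine):
    """Passthrough engine: lets applications exercise the interfaces even where
    no real cryptography may be shipped (see the import/export section)."""
    name = "null"
    def encrypt(self, key, plaintext):  return plaintext
    def decrypt(self, key, ciphertext): return ciphertext

_REGISTRY: dict[str, CryptoEngine] = {}

def register_engine(engine: CryptoEngine) -> None:
    """Called by an independently developed module to plug itself in."""
    _REGISTRY[engine.name] = engine

def get_engine(name: str) -> CryptoEngine:
    return _REGISTRY[name]

register_engine(NullEngine())
print(get_engine("null").encrypt(b"", b"hello"))   # b'hello'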

Import/Export Controls

Another important factor underlying confidentiality and its subsequent use in cryptography is the fact that governments, including the United States, have traditionally considered the use of cryptography to be their exclusive province, especially as applied to the secure communications for the military and the diplomatic corps.

Within the United States, the export of cryptography is strictly controlled under the International Traffic in Arms Regulations (ITAR), where cryptographic hardware, software, and technical data are considered to be munitions. With a few specific exceptions, the export of cryptography that is considered even reasonably strong requires a case-by-case export license from the Department of State. Even with recent announcements by the U.S. administration transferring the responsibility for export license issuance to the Bureau of Export Administration of the Department of Commerce, there is still a need for government review whenever cryptography is shipped out of the country.

Although not as well known as the U.S. export requirements, a number of foreign countries impose various restrictions on the import and/or use of cryptography within their country. For instance, the French government's position is well-known internationally in this regard. However, Russia and several Asian countries also impose import restrictions on the type of algorithm and/or the key length as well. Several other countries may also be pursuing import restrictions.

For this reason, there is a need for a construct which allows applications to be shipped internationally without any embedded cryptography affecting their ability to be imported. Such applications must do nothing more than allow the functionality of cryptography to be exercised transparently, even when there is no real cryptography within them.

For product developers, it is important to recognize that it is impossible to sell applications which contain cryptography worldwide in one version, as long as the right of foreign governments to be sovereign (i.e., to pass such laws and regulations as they see fit) exists. Exporting a product containing cryptography from the U.S. in violation of the U.S. export laws, or importing such a product into a country in violation of its import restrictions, would immediately subject the developer and/or the importer to stiff fines, debarment from future sales, and even criminal prosecution of the individuals involved. Developers have an obligation to know their customers and must take the appropriate steps to ensure that such export/import violations do not occur.

One way to deal with these various import and export restrictions would be to develop multiple independent versions of every application which required cryptography--one for each different regime. However, in addition to the enormous cost of both developing and maintaining separate versions of all that software, this would impose a significant interoperability burden on those customers who operate international or multinational businesses. It would be difficult for all of these implementations to keep track of which algorithms and key lengths were permitted for what countries and destination users. As a result, it would be almost impossible for an office in the United States to converse with offices in other countries, and vice versa.

Of course, the implementation of cryptography within each application requiring data confidentiality or integrity also represents a significant expenditure of development resources for the developer and their partners. In addition, it adds to disk storage and memory requirements in servers and workstations, which increases the cost of those systems. And the direct incorporation of cryptography within each application would mean that every such application would require an export and/or import license, as well as incur the additional costs of preparing legal documents and of tailoring the application to country-specific requirements. As a result, developers might be inclined to develop applications for only a specific regional market, to everyone's detriment.

Obviously if cryptography is implemented in a core piece of infrastructure, it is beneficial to end-users, application developers and infrastructure providers. Having cryptography available in the infrastructure could free applications from cryptographic development costs and most or all governmental controls. However, it would still appear to require the development and support of multiple different versions of cryptography in the infrastructure to comply with the various export/import restrictions.

However, there are additional difficulties in providing the necessary assurance to the export/import authorities that their controls cannot be bypassed without a very considerable amount of effort--approximately the amount of effort that would be required to develop comparable cryptographic capability from scratch in each different version. Novell's basic licensing mechanisms provide this type of functionality and have been accepted world-wide. However, there are some major differences between the licensing technology and what is acceptable when the same approach is applied to cryptography. In practice, non-substitutable and un-modifiable cryptography must be provided to comply with government regulations. This gives rise to a second international requirement for cryptography in commerce:

(2) Infrastructure-based cryptography must have no substitutable cryptography, and must proactively manage cryptographic policies that control which cryptographic algorithms and key sizes apply on a country-by-country basis.

This requirement implies that access to low-level cryptography is enforced by a cryptographic manager, which is also responsible for abstracting the capabilities of the cryptography and which presents a low-level Application Programming Interface (API). Cryptographic APIs formed in this manner should provide a sufficiently well managed and non-substitutable basis for constructing virtually any kind of cryptographic functionality that might reasonably be required.
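
A toy illustration of what this implies follows: every request for a cryptographic operation passes through a manager that consults a per-country policy table before dispatching to any engine. The policy entries below are invented and do not reflect any actual country's regulations.

# Toy illustration of requirement (2): a cryptographic manager checks a
# per-country policy table before permitting an algorithm/key-size combination.
# Policy entries are invented for illustration only.
class PolicyViolation(Exception):
    pass

CRYPTO_POLICY = {
    # country code: {algorithm: maximum permitted key size in bits}
    "US": {"RC4": 128, "DES": 56, "RSA": 2048},
    "XX": {"RC4": 40, "DES": 40, "RSA": 512},   # hypothetical restrictive regime
}

class CryptoManager:
    def __init__(self, country: str):
        self._policy = CRYPTO_POLICY[country]

    def check(self, algorithm: str, key_bits: int) -> None:
        limit = self._policy.get(algorithm)
        if limit is None or key_bits > limit:
            raise PolicyViolation(f"{algorithm}/{key_bits} not permitted here")

    def encrypt(self, algorithm: str, key_bits: int, key: bytes, data: bytes) -> bytes:
        self.check(algorithm, key_bits)   # every request passes the policy gate
        raise NotImplementedError("dispatch to the registered engine goes here")

CryptoManager("US").check("RC4", 128)     # permitted under the illustrative policy
# CryptoManager("XX").check("RC4", 128)   # would raise PolicyViolation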

Cryptographic Function Libraries

Creating a cryptographic infrastructure which is sufficiently tough to avoid unauthorized alteration and sufficiently flexible to allow for incorporation of new approved cryptographies generally gives rise to another set of problems. Some of the capabilities provided by a low-level API may provide more capability than is required for certain applications. In particular, the export/import agencies which must review software are often concerned about what is called "Crypto With A Hole." Given a controlled environment such as the one we have outlined, with an API used by applications, the concern is that the application software end-user might try to replace the infrastructure-based cryptography with an uncontrolled system that would make use of the already implemented application interface.

There are certainly groups who would substitute the entire cryptographic base with stronger algorithms than are permitted, and thus circumvent any country-by-country controls that were imposed. However, if the API calls that a particular application uses are demonstrably safe (e.g., they only do digital signature verification), or if they never deal with "raw" keys but only with keys that have previously been "wrapped" or encrypted, or if they only support specific protocols such as the Secure Electronic Transaction (SET) protocols being standardized for electronic credit card transactions world-wide by MasterCard and Visa, then such applications might be subject to a less stringent examination, or might not require export review at all.

This gives rise to a third international commerce cryptography requirement:

(3) Application programs are not allowed to access the infrastructure's cryptography APIs or the underlying cryptographic implementations directly. Instead, all such interfaces must be mediated by support libraries.

Only support libraries are allowed to call the cryptographic management functions directly. Since a support library must be dynamically invoked by name, it is a simple matter to inspect a given application to see what libraries it references, simply by looking at the module map. The libraries can also be segregated by function according to the APIs they contain, further enhancing the ease with which the application interface can be inspected.
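
The following sketch illustrates that mediation: an application imports only a narrow support library, here one that exposes signature verification and nothing else, and that library is the only code path into the controlled cryptography. The third-party cryptography package stands in for the low-level mechanism; all names are illustrative.

# Sketch of requirement (3): applications never call low-level cryptography
# directly; they import a narrow support library whose exported surface is easy
# to audit (here, signature verification only). Uses the third-party
# 'cryptography' package; names are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

# --- the support library: only verify_document() is meant for applications ---
def _managed_verify(public_key_bytes: bytes, data: bytes, signature: bytes) -> bool:
    """Internal: the sole path into the controlled cryptographic manager."""
    try:
        Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(signature, data)
        return True
    except InvalidSignature:
        return False

def verify_document(public_key_bytes: bytes, document: bytes, signature: bytes) -> bool:
    """Exported API: can check a signature, but cannot encrypt or touch raw keys."""
    return _managed_verify(public_key_bytes, document, signature)

# --- application side: its module map would show only the library import -----
signer = Ed25519PrivateKey.generate()        # stands in for some remote signer
document = b"purchase order 4711"
signature = signer.sign(document)
public_key = signer.public_key().public_bytes(serialization.Encoding.Raw,
                                              serialization.PublicFormat.Raw)
print(verify_document(public_key, document, signature))   # True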

As both general-purpose and specialized, protocol-specific libraries are developed, application developers can use them to provide both confidentiality, through the use of encryption, and integrity, through the use of digital signatures, with the confidence that properly-written applications will not have to be rewritten for export purposes, or as newer algorithms are invented and modifications to key lengths are approved. Developers do not then have to develop cryptography.

Key Backup and Recovery

The widespread use of encryption within an organization is not without its dangers. If critical messages or files were encrypted in a key known only to a single individual, the death or disability of that individual could conceivably jeopardize the continuity of business of the entire organization. In addition, it is not beyond the realm of possibility that a misguided individual might attempt to hold an organization for ransom by threatening to destroy the key to one or more encrypted files. Likewise, individuals have been known to commit various kinds of fraud or misuse of an organization's resources, and/or disclose company proprietary information to unauthorized individuals, etc., while attempting to cover their tracks through the use of encryption.

Organizations therefore require the ability to access the encryption keys to encrypted data which they own and control, should such circumstances arise, without allowing a well-intentioned but naive system administrator to defeat such provisions. However, organizations are also justifiably sensitive to even the remotest possibility of misuse of privilege by individuals, and would be very hesitant to allow even the organization's system administrator to access the confidential files of the President or the Human Resources department, for example, without appropriate controls being in place.

This concern for an organization's continuity of business leads to a fourth requirement for international cryptography:

(4) Duly authorized individuals, but only duly authorized individuals, must be able to recover an organization's or user's cryptographic keys when required. While the infrastructure provider's assistance in this process may be necessary, that assistance must not be the sole means of access for decrypting a customer's encrypted information.

To achieve this goal, certain encryption master keys must be periodically written to a protected key backup or archive file on the server's disk. Keys used in this manner must not be recorded in the clear, but instead encrypted in the public key of a Key Recovery Center, for eventual storage. While commercially viable Trusted Third Parties in the computer cryptography industry are emerging as the eventual recipients for these keys, there is still need on behalf of customers to be able to avail themselves of these managed keys under strictly controlled operational security practices. Thus, as appropriately requested (e.g., a notarized request from a duly-authorized representative of an organization), the private key(s) necessary to decrypt encrypted key backup files must be available to those individuals, together with the tools necessary to recover the encrypted keys stored on the organization's server.
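
A minimal sketch of the archive step follows: the master key is written to the backup record only after being encrypted under the Key Recovery Center's public key, so it never appears on the server in the clear. It uses the third-party cryptography package; the function names and the recovery procedure shown are illustrative only.

# Minimal sketch of the key-archive idea: the master key is wrapped under the
# Key Recovery Center's (KRC) public key before it is ever written to disk.
# Requires the third-party 'cryptography' package; all names are illustrative.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

krc_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
krc_public = krc_private.public_key()     # published by the recovery center

def archive_master_key(master_key: bytes) -> bytes:
    """What the server periodically writes to its protected backup file."""
    return krc_public.encrypt(master_key, OAEP)

def recover_master_key(wrapped: bytes) -> bytes:
    """Run only by duly authorized individuals, e.g., after a notarized request."""
    return krc_private.decrypt(wrapped, OAEP)

master_key = os.urandom(32)               # an organization's encryption master key
backup_record = archive_master_key(master_key)
assert recover_master_key(backup_record) == master_key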

Key Distribution

In the past, cryptographic algorithms such as the Data Encryption Standard were often used to provide data confidentiality and/or message authentication between two locations. Such algorithms depended on the secure physical distribution of a secret key to the two (or sometimes more) locations, typically by a highly trusted courier. Assuming that the distribution process was not compromised and that the hardware (or software, in some cases) and the operational environment could be trusted to protect the key from disclosure, the secret-key method could be quite secure--at least as secure as the strength of the algorithm and the key sizes used would permit.

However, as the number of locations that are required to communicate securely begins to grow, the security of this technique begins to diminish rapidly if the same key is shared by all. If a separate key is used for each pair of locations, the difficulty of distributing the keys securely increases significantly, since n locations require n(n-1)/2 pairwise keys.

Public-Key Certificate Validation. The use of public-key cryptography can substantially ease this problem, because the originator of a message only has to reliably know the public key of the recipient. Only the recipient, who knows the corresponding private key, can decrypt the message. However, anyone could have created a public/private key pair and distributed it in the form of a certificate, which is typically in the format specified by the X.509 standard. In order to have confidence that the recipient really is who he or she claims to be, it is generally necessary to make use of a trusted third party called a Certificate Authority (CA), who can be relied upon to bind the user's public key to a claimed identity and/or other attributes used to authorize access to information.

Only the public key of the top-level Certificate Authority (often called the trusted root) has to be distributed by a reliable, trusted, and normally non-electronic method. Of course, the same problem exists with respect to validating a digital signature as in the case of encryption--in this case, it is the recipient of the message who needs to be able to verify the bona fides of the alleged originator of a document.
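
The sketch below walks such a chain conceptually: starting from the pinned trusted-root key, each certificate is verified with the public key certified one level above it. Dictionaries stand in for real X.509 structures, the third-party cryptography package supplies the signatures, and all field names are invented.

# Conceptual sketch of validating a certificate chain up to a pinned trusted
# root. Dicts stand in for X.509 structures; field names are illustrative.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

def raw(private_key) -> bytes:
    return private_key.public_key().public_bytes(serialization.Encoding.Raw,
                                                 serialization.PublicFormat.Raw)

def make_cert(subject: str, subject_pub: bytes, issuer_key) -> dict:
    tbs = json.dumps({"subject": subject, "pub": subject_pub.hex()}).encode()
    return {"tbs": tbs, "pub": subject_pub, "sig": issuer_key.sign(tbs)}

def validate_chain(chain: list, trusted_root_pub: bytes) -> bool:
    """chain[0] is the end-user certificate; chain[-1] was signed by the root."""
    issuer_pub = trusted_root_pub             # distributed by a non-electronic method
    for cert in reversed(chain):              # start with the root-signed certificate
        try:
            Ed25519PublicKey.from_public_bytes(issuer_pub).verify(cert["sig"], cert["tbs"])
        except InvalidSignature:
            return False
        issuer_pub = cert["pub"]              # this cert certifies the next key down
    return True

root, ca, user = (Ed25519PrivateKey.generate() for _ in range(3))
ca_cert = make_cert("cn=Example CA", raw(ca), root)       # root signs the CA
user_cert = make_cert("cn=jsmith", raw(user), ca)         # CA signs the end-user
print(validate_chain([user_cert, ca_cert], raw(root)))    # True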

An increasing number of public Certificate Authorities are now going into business. However, it is somewhat difficult to establish the competency of such CAs without having a detailed understanding of their principles and practices as described in a Certification Practice Statement. To date, few CAs have published such a statement. The principles to be used for writing this document are still evolving, and the terminology and approach are likely to vary considerably.

In addition, it is likely that numerous semi-public CAs will arise, operated by organizations of varying sizes and credibility for the benefit of those organizations and their employees or members. Although intended for use within the closed user group (the intranet) of that organization, in some cases such a certificate could be used to validate a signature on correspondence that was sent outside of that group, and must therefore be validated by an external individual or organization.

The practices of all of these different kinds of CAs may be expected to vary widely, depending on the requirements of the users they are serving. One CA might require a full-blown due-diligence investigation of a company or individual that would almost be the equivalent of a security clearance, while another might be satisfied with a cursory examination of a driver's license or other form of identification. Some CAs may offer anonymous certificates, where the "name" that is included is either completely fictitious or selected by the individual as a "handle". Other CAs may offer certificates which bind the user's e-mail address to his public key, but provide little assurance that the person who controls the corresponding private key is the registered user of that e-mail address.

Ideally, there would be an industry-wide standard that would reduce all of the variables involved in issuing a certificate to one standard set of practices, so that certificates could be easily compared in terms of quality. Unfortunately, such an industry-wide consensus has not yet emerged. In the absence of some well-accepted form of accreditation, licensing, bonding, or other standards setting mechanisms, whether imposed by legislative act as in the case of the trail-blazing Utah Digital Signature Act, or by the voluntary adoption of such standards together with appropriate auditing, it would be useful to have at least some form of de facto standards which could be applied to public CAs.

In addition to these issues of certificate quality (i.e., the quality of the due-diligence applied to the binding of the identity and/or other attributes to the user's public key), there are two other factors which ought to be taken into account in evaluating the reliability of an X.509-style certificate, and hence the believability of a digital signature and/or the confidence that ought to be accorded a recipient's certificate before trusting that public key to be used to encrypt sensitive information.

Computer Security Rating. The next issue which must be taken into account in evaluating the quality of a certificate, the meaning of a digital signature, and the security of an encryption system, is the computer security rating of the underlying hardware and software, often called the Trusted Computing Base (TCB). Although it may not be obvious initially, it does little good to encrypt a message in a strong algorithm if the computer system used by the recipient for that purpose could be compromised by a virus or Trojan Horse program which leaks the encryption keys to an outsider or other unauthorized individual. The bits of the message may be securely protected by the originator and while in transit, only to have them spill all over the floor, so to speak, at the destination end.

Likewise, although a Certificate Authority may implement the strongest possible processes and procedures to ensure that the individual identified in a certificate really is who he or she appears to be, if the computer security of the platform used to create the certificate is inadequate, the confidence in the entire system is compromised because of the possibility that other certificates may have been issued to unauthorized individuals, or that the identity and authorization attributes that were presented to the individual responsible for issuing a certificate were not precisely what was included within the certificate. Ideally, a competent accrediting authority would review not only the computer security of the operating system, but the implementation of the application(s) used in issuing certificates, processing encrypted information, or digitally signing documents.

Cryptographic Module Implementation Quality. The third issue that must be dealt with in order to reasonably evaluate the overall security of the system, whether with respect to the security of encryption or the trustworthiness of a digital signature, has to do with the technical competency of the implementation of the cryptography itself.

Unfortunately, it is not sufficient to merely implement a given cryptographic algorithm correctly. It is perfectly possible to implement an algorithm such that the correct encrypted bits appear on the wire or in storage, and yet the system is highly vulnerable to a number of different classes of attack. The kinds of attacks that may be possible range from being able to guess or partially guess the encryption keys because of a faulty random-number generator used for key generation, to implementations of the algorithms that leave a residue of the key exposed, to incorrectly implemented protocols that may contain an exploitable weakness despite the correct implementation of the cryptographic algorithm itself, to various physical vulnerabilities, and so on.
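
The first of these pitfalls is easy to illustrate: a key produced from a predictable generator can be reproduced exactly by anyone who guesses the seed, no matter how strong the cipher that later uses it, whereas a key drawn from the operating system's cryptographic randomness source cannot. A standard-library Python sketch follows.

# Illustration of the key-generation pitfall: Python's 'random' module is
# deterministic given its seed, so a key built from it is reproducible by an
# attacker who guesses the seed (for example, a timestamp).
import random
import secrets

def weak_key(seed: int) -> bytes:
    """Flawed: fully determined by the seed."""
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(16))

def strong_key() -> bytes:
    """Uses the operating system's cryptographic randomness source."""
    return secrets.token_bytes(16)

print(weak_key(1_020_304) == weak_key(1_020_304))   # True: reproducible
print(strong_key() == strong_key())                 # False (overwhelmingly likely)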

To date there has been relatively little discussion or focus on this problem within the industry, much less a worldwide consensus as to what the appropriate quality standards should be. However, an excellent standard in this area has been promulgated by the National Institute of Standards and Technology of the United States government in the form of Federal Information Processing Standard FIPS 140-1, Security Requirements For Cryptographic Modules. FIPS 140-1 defines four increasing levels of security, and addresses various security-relevant factors including:

  • The basic design and documentation

  • Module interfaces

  • Authorized roles and services, including operator authentication

  • Finite state machine models

  • Physical security

  • Environmental failure protection/testing

  • Software security and operating system security

  • Key management

  • Cryptographic algorithms

  • Electromagnetic interference/electromagnetic compatibility

  • Self-testing

Both software and hardware implementations are addressed at the various security levels. Although FIPS 140-1 is not intended to apply to classified, military-use cryptographic systems, it is comprehensive in its coverage and well-suited for use as a standard for commercial purposes until some worldwide standard is defined to supplement or supplant it.

We therefore have three independent measures of quality which must be addressed at a minimum in order to have a reasonably firm basis for making a decision regarding the security of a particular system, whether we are concerned with encryption for confidentiality or a digital signature for integrity purposes. These are:

  • The Certificate Quality

  • The Computer Security Rating

  • The Cryptographic Module Rating

All but the Certificate Quality measure apply to the end-user's system, and all three apply to the entire chain of certificates issued by a hierarchy of Certification Authorities. Although it might appear that only the end-user's system is of particular concern, the fact of the matter is that if a Certification Authority's certificate-issuing procedures, or the computer security or cryptographic module implementation of the certificate-issuing system, are insufficient or somehow compromised, then any and all of the certificates issued by that CA and any subordinate CAs and/or end users must be regarded as potentially suspect. In other words, a chain is only as strong as its weakest link.

This brings us to the fifth design requirement for international commerce:

(5) An overall measure of the security of the entire distributed information processing system shall be provided by calculating a Greatest Lower Bound of Certificate Quality, Computer Security Rating, and Cryptographic Module Rating, respectively, from the end-user's system to the trusted root, for those end-users and Certification Authorities who agree to conform to the copyrighted and/or trademarked quality rating definitions.
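
Computationally, this amounts to taking, for each measure, the lowest rating found anywhere along the chain from the end-user's system to the trusted root, as the short sketch below illustrates. The numeric scales are invented for illustration.

# What requirement (5) amounts to: for each measure, the chain's overall rating
# is the lowest rating found anywhere along it. Rating scales are illustrative.
MEASURES = ("certificate_quality", "computer_security", "crypto_module")

def greatest_lower_bound(chain_ratings: list) -> dict:
    """chain_ratings: one dict of ratings per link, end-user first, root last."""
    return {m: min(link[m] for link in chain_ratings) for m in MEASURES}

chain = [
    {"certificate_quality": 3, "computer_security": 2, "crypto_module": 3},  # end-user
    {"certificate_quality": 2, "computer_security": 3, "crypto_module": 3},  # issuing CA
    {"certificate_quality": 3, "computer_security": 3, "crypto_module": 2},  # trusted root
]
print(greatest_lower_bound(chain))
# {'certificate_quality': 2, 'computer_security': 2, 'crypto_module': 2}
# A chain is only as strong as its weakest link, measure by measure.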

Global and Persistent Registry of Attributes and Names

In addition to the architectural criteria defined above, there is an emerging requirement that has not yet firmly materialized but which may become very important in the relatively near future. Version 3 of the X.509 standard has been well-accepted throughout industry as a means of associating or binding the identity of an entity (e.g., a corporation, individual, or an application process or machine), and/or a set of attributes associated with such an entity (whether specifically identified or not), with the public-key component of a public-private key pair, where the private key is held by, under the effective control of, or otherwise closely associated with the actual entity itself.

However, although the relevant standards associations (the International Standards Organization (ISO) and the International Telecommunications Union (ITU), formerly the CCITT) have ratified the standard, including the definition of certain so-called "useful attributes" and various extension attributes, those standards associations lack any enforcement mechanisms, and there may be significant legal issues associated with claiming conformance with the "standard" definition of some particular attribute. In particular, the standards are considerably more concerned with the syntax representation of the various attributes than they are with their precise and unambiguous semantic definition.

As a result, we believe that until legislative initiatives such as the Utah Digital Signature Act and similar laws are passed in various states, or comparable regulatory initiatives are undertaken in other jurisdictions, and/or the case law in the common law jurisdictions becomes more solidified, the legal status of some of these "standard" attribute definitions (and the right of some organization to claim that they apply) may be uncertain.

But even more importantly, X.509 version 3 also allows the inclusion of "private" attributes; that is, attributes which are defined by, and presumably used locally to or by, the defining organization. The only technical requirement is that such attributes be identified by an "object identifier" which is globally unique, so as to prevent confusion.

Unfortunately, no standards-based mechanism has yet been proposed which would grant the exclusive control or right to use such an attribute definition to the defining organization. The problem, then, is that one organization might grant an entity some particular privilege or capability as specified by a particular attribute, yet be unable to enforce exclusive control over such an attribute to prevent it from being misused or misappropriated.

As a hypothetical example, consider a private attribute extension which gives an employee the right to enter the workplace during normal working hours. Access control software might be written which would check for the presence of that attribute in a user's certificate, and if the user's smart card, for example, could digitally sign an appropriate challenge, and the certificate containing that attribute is appropriately validated, the access control software would open the door.

Unfortunately, once such an application had been written, the likely turn of events would be for some other organization which required that same functionality to make use of the same attribute definition, even though it was created by the other organization in a manner which at least implicitly meant, "Open the doors to our building, for our employees."

So long as the certificate chain which is used to validate the employee's certificates terminates in a trusted root which is under the exclusive control of that organization, and assuming that the access control software validates the certificate chain up to and including that root, then the system should be secure.

But what if the organization decides against implementing its own Certification Authority, and instead chooses to rely on some public CA? And suppose some other organization does likewise, and as a result they both share a common trusted root key? If both organizations instruct the CA to enable the access-granting attribute for their employees, then the attributes are essentially interchangeable, and an employee of one organization could potentially gain access to another organization's facilities. (This is a fictitious example for illustrative purposes only. Presumably any real access control software would validate the organizational name of the user, and would apply name subordination rules or other tests to prevent exactly this kind of misuse.)
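
The safeguard mentioned in the parenthetical can be made concrete with a short sketch: access-control software that honors the door-opening attribute only when the signed challenge verifies, the certificate chain validates to the organization's own trusted root, and the certificate names the expected organization. The object identifier, organization names, and certificate fields below are all invented.

# Sketch of the safeguard described above. The door opens only when all of the
# checks pass, including a name check tying the attribute to our organization.
# The OID, names, and certificate fields are invented for illustration.
DOOR_ATTRIBUTE_OID = "1.3.6.1.4.1.99999.1.1"   # hypothetical private attribute

def may_open_door(certificate: dict, expected_org: str,
                  chain_validates_to_our_root: bool, challenge_ok: bool) -> bool:
    return (challenge_ok                                   # smart card signed our challenge
            and chain_validates_to_our_root                # e.g., via chain validation as above
            and DOOR_ATTRIBUTE_OID in certificate.get("attributes", ())
            and certificate.get("organization") == expected_org)   # name check

employee_cert = {"subject": "cn=jsmith", "organization": "o=Acme Corp",
                 "attributes": (DOOR_ATTRIBUTE_OID,)}
print(may_open_door(employee_cert, "o=Acme Corp", True, True))   # True
print(may_open_door(employee_cert, "o=Other Co", True, True))    # False: wrong organization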

This is a simple example of what may become an increasingly common problem. The same sort of thing may happen with respect to information security labels, to membership in a particular group, and so on, as either unintentional or intentional collisions in the attribute name space occur. (In a sense, we are already seeing a manifestation of this name-space collision problem in the increasingly contentious arguments as to who "owns" or has the right to use a given domain name or web site name. In the case of an international colossus such as International Business Machines vs. a fictitious "Itty-Bitty Movers," the ruling of the courts can easily be anticipated. But should a question arise in a case such as Apple Records (founded by the Beatles) vs. Apple Computer, which company would have the right to use "www.apple.com" might not be nearly so obvious.)

As another example, suppose that an organization such as General Motors decides to implement an information security policy that would be enforced at all of the gateways to their network. Information security labels might be devised that would allow certain information to be released to the public, while proprietary information would be restricted to employees of GM and their subsidiaries. That is already a fairly difficult problem, because some subsidiaries (for example, EDS) may be bought or sold over time, and some of those subsidiaries themselves have subsidiaries and other complex ownership arrangements. It may be necessary to subdivide the information so that it is withheld from subsidiaries which do not have a need to know. EDS employees presumably don't need to know the details of new-car roll-out dates that employees of GM itself might need to know, but maybe some of the more senior executives within EDS do need to know that information--perhaps because it is included in a memo from the Chairman of the Board, which addresses other high-level corporate issues as well.

It isn't clear how such subsidiaries would be identified as such in a certificate. As a rule, subsidiaries are corporations, and the generally-accepted legal view (or perhaps legal fiction, depending on whether you are the plaintiff or the defendant) is that subsidiaries are completely separate legal entities. In fact, one of the more important reasons why such subsidiaries are established is to provide a liability firewall to shield the parent company from a suit for damages against the subsidiary. For this reason, it is highly unlikely that a subsidiary would be named as an organization unit of another organization (to use X.520 terminology), so that name subordination rules would apply. Likewise, it seems somewhat unlikely that the parent corporation would operate a Certification Authority which would issue certificates to the subsidiary, for that might imply more of a liability relationship than would be desirable. Instead, the various subsidiaries will probably obtain their organizational certificates from some public CA, and perhaps even different public CAs. How then can the affinity relationship be confirmed between these organizations?

As if this problem weren't difficult enough already, there may exist a gray area of need-to-know information that lies between strictly public information and proprietary data and is even more difficult to quantify. Presumably GM needs to inform its dealers, its vendors, the automotive press, and so on, regarding plans which they would prefer (for advertising and competitive reasons) their competition not be aware of. Maybe they need to prepare their dealers for a potentially significant safety defect so they can make preparations to deal with it, but they don't want to alarm potential customers before they have a solution in hand. How can all of these individuals and organizations be identified without having to have GM issue them all GM certificates?

The solution to this problem would seem to be a trusted Registry of affiliations. Under appropriate contractual arrangements, GM in our example could presumably instruct the Registry to issue a "GM-affiliate" attribute to various organizations and individuals. But what would prevent some other CA from issuing a certificate containing such an attribute to someone whom GM had not authorized?

Although the registered-trademark approach described in the section dealing with certificate quality issues is one possible approach, we believe that the only really strong way to enforce ownership and control over such intangible assets as an attribute which is only identified by an object ID (which is basically a string of numbers, perhaps separated by periods) is through contractual agreements with a global Registry. Whether such a registry is a for-profit or non-profit private organization or a governmental agency doesn't matter too much, so long as the registry organization is able to contractually control the definition and use of any attributes which it registers.

In the absence of any other mechanism, and assuming that the attribute contains a trademarked identification phrase or brand, then presumably the Registry could sue for infringement and/or obtain a cease-and-desist order in order to enforce its right to uniquely control those attributes. However, if the Registry were to also act as a top-level Certification Authority, it could use contractual arrangements rather than relying on infringement suits to control the usage of its attributes. And unlike the case of a printed document, where it would be very difficult to tell by just looking at the document itself whether a trademarked phrase was used by permission or not, the Registry could control such usage by refusing to issue a certificate to any subordinate Certification Authority with which they did not have a legally binding agreement, including the appropriate flow-down provisions to control yet lower-level CAs.

Alternatively, if the Registry does not serve the dual function of acting as a Certification Authority (perhaps because of the potential liability and other considerations that might be involved in that role), it might be possible for the Registry to digitally sign a particular attribute value, and that digitally signed attribute could then be given to an authorized user for whatever purpose is required. In particular, the authorized user of that attribute could present it to his or her CA for incorporation within the user's X.509 certificate. In effect, it would be a mini-certificate within a certificate. Although the concept of a digitally-signed attribute which is contained within a certificate has not even been proposed within the standards community to the best of our knowledge, there does not seem to be anything in the existing standards which would prohibit such a concept or make it particularly difficult to implement.
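
The following sketch shows what such a Registry-signed attribute might look like: the Registry signs the attribute value, the user's CA embeds the signed blob in the certificate, and a relying party checks the Registry's signature independently of the CA's. It uses the third-party cryptography package; as noted above, this construct is not part of any published standard, and every name in the sketch is invented.

# Sketch of the "mini-certificate within a certificate" idea: a Registry-signed
# attribute embedded in a user certificate, verifiable against the Registry's
# public key. Requires the third-party 'cryptography' package; illustrative only.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

registry_key = Ed25519PrivateKey.generate()

def registry_grant(attribute_oid: str, value: str, grantee: str) -> dict:
    """The Registry signs an attribute value for a specific grantee."""
    blob = json.dumps({"oid": attribute_oid, "value": value,
                       "grantee": grantee}).encode()
    return {"blob": blob, "registry_sig": registry_key.sign(blob)}

def attribute_is_genuine(signed_attr: dict, registry_pub: bytes) -> bool:
    """A relying party checks the Registry's signature, independent of the CA."""
    try:
        Ed25519PublicKey.from_public_bytes(registry_pub).verify(
            signed_attr["registry_sig"], signed_attr["blob"])
        return True
    except InvalidSignature:
        return False

signed_attr = registry_grant("1.3.6.1.4.1.99999.2.7", "GM-affiliate", "cn=Dealer 123")
user_certificate = {"subject": "cn=Dealer 123", "signed_attributes": [signed_attr]}

registry_pub = registry_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
print(attribute_is_genuine(user_certificate["signed_attributes"][0], registry_pub))  # True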

This rather lengthy and perhaps tedious justification is the basis for the sixth and final architectural requirement:

(6) The infrastructure must provide for the existence of one or more top-level, global and persistent Registries of certificate attribute syntax and semantic definitions, including the ability if necessary to reject or ignore attributes which are included within a certificate but not validated as a proper use by the global Registry.

Conclusion

This AppNote has dealt with some of the key issues and challenges the networking industry faces as organizations embrace electronic commerce across the Internet. Knowing what issues lie ahead and what some of the envisioned solutions are can help in determining a long-term security strategy.

* Originally published in Novell AppNotes

