Network Security for the 21st Century: Concepts and Issues
01 Nov 1997
Covers basic security concepts and summarizes the critical issues that the industry is currently facing with regard to network security
- Introduction
- What Is Network Security?
- Cryptography Terminology
- Security Policy: Environment, Risk, and Assurance
- New Issues in Network Security
- New Threats: Paranoia Becoming Reality
- Barriers to Effective Security
- A Prescriptive Approach to Network Security
- Conclusion
Introduction
Network security exists because of the vital business need to electronically communicate and share information. If information only needs to be accessed by one person, simple isolation techniques alone would eliminate the need for any further levels of security. However, networks are all about the sharing of programs and data, both internally and externally. In this environment, implementing and maintaining security is a prerequisite to protect the data and systems. Of course, the security should not get in the way of the sharing. Its primary goal is to prevent unexpected loss or unauthorized access while allowing all necessary business processes to take place with a minimum impact on the users.
Various forms of computer security are familiar to many in the networking industry. Password protection, virus checkers, and firewalls are a few examples. While these are important aspects of implemented security procedures, they are only methods and devices belonging to a much greater endeavor--that of establishing a sufficient level of security in the overall network environment.
This AppNote begins with a discussion of basic security concepts to provide a common context for understanding the various aspects of security covered within this issue. It also discusses some of the key issues that must be understood in order to design and implement secure networks in today's openly connected business environment.
What Is Network Security?
Defined in its simplest form, security is "freedom from danger or risk of loss; safety." Thus network security can be any effort made to protect a network from danger or risk of loss; or in other words, to make the network safe from intruders, errors, and other threats. While this definition may be an over-simplification, it establishes two underlying assumptions about network security: (1) that the network contains data and resources necessary for the successful operation of the organization; and (2) that the network's data and resources are of sufficient value to be worth protecting.
Confidentiality, Integrity and Availability
Computer security is concerned with three main areas: maintaining the confidentiality, integrity, and availability of electronic data.
Systems have confidentiality when only those who have been authorized to access particular electronic data are able to.
Systems have integrity when the electronic data can only be modified by those who are allowed to do so.
Systems have availability when the electronic data can be accessed when needed by those authorized to do so. Adequate backup and disaster-recovery plans play a big part here, as does ensuring that a system is not susceptible to denial-of-service attacks.
Identification and Authentication
To ensure that only authorized persons or computers can access or modify data on a network, there must be a method for establishing a user's identity on the system, along with a means to verify the identity. This is where identification and authentication come in.
Although these two terms are closely interrelated and are often referred to jointly by the acronym I&A, they describe two separate functions:
Identification refers to the process that occurs during initial login whereby a person provides some type of security token, typically a unique username or user ID, to identify that user on the system. In effect, identification is akin to telling the system, "This is who I am."
Authentication is a verification process that requires the user to provide another token, typically a password known only to the user, to affirm that the identity is being assumed by its rightful owner. The user is essentially telling the system, "Here is some private information to prove that I am who I claim to be."
While other forms of authentication have been proposed, such as smart cards and biometrics (fingerprint or retinal scanners, for example), passwords continue to be used almost exclusively in today's networks. Thus an important part of security is protecting the authentication secrets (passwords) themselves so that they cannot be misappropriated by any unauthorized user.
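To make the I&A exchange concrete, here is a minimal sketch in Python of how a system might protect stored authentication secrets: the password itself is never kept, only a salted one-way hash of it. The function names and the choice of hash function are illustrative assumptions, not a description of any particular product.

```python
import hashlib
import os

def make_password_record(password):
    """Store a random salt and a one-way hash of the password--never
    the password itself."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def authenticate(attempt, salt, stored_digest):
    """Recompute the hash from the attempted password; a match affirms
    that the claimed identity is being assumed by its rightful owner."""
    return hashlib.sha256(salt + attempt.encode()).digest() == stored_digest

salt, digest = make_password_record("s3cret!")
print(authenticate("s3cret!", salt, digest))  # True
print(authenticate("wrong", salt, digest))    # False
```

Because only the hash is stored, even an intruder who manages to read the account database cannot directly recover the passwords themselves.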
Accountability
Based on the user identity provided during the login, the system can determine which of the network's resources the user is authorized to access and at what permission level. A key goal of I&A is to enable individual accountability: the ability of the system to reliably determine who you are when you log in and to track your actions while you are logged in. When the I&A mechanisms are sufficiently strong, you can be reasonably assured that any actions taken under a particular user account were in fact performed by the authorized user of that account.
Access Controls
In a network, access controls are applied to the data based on the users' system identities. These controls rely on identification and authentication, since users must be identified and have their identities authenticated before the system can enforce any access controls.
In security discussions you may frequently hear access controls categorized as either discretionary or mandatory. Discretionary access controls restrict access to files or directories based upon the identity of the user. They are discretionary in that the owner of a file (or any user with sufficient rights) is free to assign or revoke the access rights of other users to that file. Mandatory access controls, on the other hand, restrict access by means of special attributes that are set by the security administrator and enforced by the operating system. These controls cannot be bypassed or changed at the discretion of non-privileged users.
Mandatory access controls are typically based on labeling, where the administrator labels each item of information with a classification name such as Unclassified, Confidential, Secret, and so on. Each user is assigned a clearance level from the same set of classifications. The operating system then controls access to information based upon these classifications.
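As a simple illustration, the following Python sketch shows the "no read up" rule that labeling systems typically enforce. The level names follow the classifications mentioned above; the function itself is a hypothetical simplification, not any particular operating system's implementation.

```python
# Classification levels, ordered from least to most sensitive.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_read(user_clearance, item_label):
    """Mandatory rule enforced by the operating system: a user may read
    an item only if the user's clearance is at least as high as the
    item's classification label."""
    return LEVELS[user_clearance] >= LEVELS[item_label]

print(may_read("Secret", "Confidential"))  # True
print(may_read("Confidential", "Secret"))  # False: cannot "read up"
```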
Audit
Being able to restrict unauthorized access is important, but a secure system must also be able to audit authorized access, which means keeping a record of who did what and when, and who was prevented from doing what and when. Auditing is typically done by maintaining audit trails, or logs of significant events and user actions that take place within the system. Of course, sufficient protection must be provided for the auditing information itself to prevent intruders from being able to erase all traces of their actions.
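One common way to protect the audit trail itself is to chain each entry to its predecessor with a cryptographic hash, so that erasing or altering any entry invalidates everything recorded after it. The following Python sketch illustrates the idea; the chaining scheme is a generic illustration, not a description of any particular product's audit mechanism.

```python
import hashlib

def append_entry(log, event):
    """Chain each entry to its predecessor: the entry's hash covers the
    previous entry's hash plus the new event text."""
    prev_hash = log[-1][1] if log else "0" * 64
    entry_hash = hashlib.sha256((prev_hash + event).encode()).hexdigest()
    log.append((event, entry_hash))

def verify(log):
    """Recompute the chain; any erased or altered entry breaks it."""
    prev_hash = "0" * 64
    for event, entry_hash in log:
        if hashlib.sha256((prev_hash + event).encode()).hexdigest() != entry_hash:
            return False
        prev_hash = entry_hash
    return True

log = []
append_entry(log, "10:02 ALICE logged in")
append_entry(log, "10:05 ALICE read PAYROLL.DAT")
print(verify(log))                            # True
log[0] = ("10:02 BOB logged in", log[0][1])   # intruder edits the trail
print(verify(log))                            # False: tampering detected
```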
A Note About MAID
When dealing with secure systems, you might encounter the acronym "MAID" or portions thereof. In security nomenclature, system components are characterized by which of the following functions they provide:
M = Mandatory access controls
A = Audit
I = Identification and authentication
D = Discretionary access controls
A network server, for example, generally provides all of the A, I, and D features and can therefore be referred to as an IAD component. Other components may provide all, some, or none of these security functions. A component that provides none of these functions, such as network cabling, is considered a "nil" component.
Non-repudiation
Occasionally, it might be necessary to prove that a particular user made a particular transaction long after it happened. Since the ability to modify data often includes the ability for users to remove their own access rights, it is possible for an unscrupulous user to change a piece of data, remove his or her rights to that data, and then deny any involvement in the change. A security system must therefore provide something beyond mere access control. With non-repudiation, irrefutable evidence of the transaction is created and stored. The entity which made the access or modification cannot falsely deny (or repudiate) its actions later. (Non-repudiation is common in secure e-mail systems, where it is used to prove that a particular user did in fact send a message purporting to come from him, and that a user did get a message he might deny having received.)
Cryptography Terminology
For security functions, many applications use cryptography, the art of converting a legible message into some unintelligible form to keep its contents safe from prying eyes. Cryptography has a jargon all its own that can be confusing to the uninitiated. Before we proceed further into our discussion of network security, it might be helpful to review some of the basic terms and concepts surrounding cryptography and data encryption.
Cryptographic Algorithms
The use of codes is nearly as old as writing itself, and various forms of "secret writing" were used by the ancient Egyptians, Hebrews, and Indians. While the modern science of cryptography had its roots in the military arena, encryption technology is now widely used in computer systems to maintain confidentiality and privacy in electronic communications.
If you wanted to protect some private data using cryptography, you could design your own code (in the form of a mathematical algorithm) to encrypt the data, and then provide a corresponding algorithm to decrypt the data. The next time you wanted to send some private data, you might worry that someone had already figured out your first code, so you would design a different algorithm and use that to encrypt the data. The problem with this approach is that devising secure cryptographic algorithms is not easy. Professional cryptographers spend their lives studying difficult mathematics and poring over other people's algorithms, and they still have trouble coming up with sufficiently crack-proof algorithms. It would be much better if you could design and implement an algorithm once, and somehow have it run differently every time you used it.
Cryptographers have found a way to do exactly that. All it requires is an algorithm that takes an additional input (besides the data to be processed), and then varies what it does to the data depending on the value of this extra input. This additional input is called a key. In much the same way as a single lock design (on a particular model of automobile, for example) can be implemented so that one owner's key will not open another owner's lock, keys provide a value to the cryptographic system which identifies data as belonging to a particular user and allows the locked (encrypted) information to be opened.
What Exactly Is a Key?
In computer systems, a key is simply a number with a specified length in bits. As explained above, the key is used as an additional input that is factored into the mathematical encryption algorithm to produce a unique result for a particular user. In key-based encryption, the key is required for both the encryption and decryption processes; an encrypted message can be decrypted only if the key being used to decrypt it matches the encryption key.
In the same way that you could eventually open a combination lock if you try all possible combinations, almost any encryption method can be cracked simply by trying enough of the possible keys. All it takes is a "brute force" approach where a computer (or network of computers) tries each of the possible numerical combinations until the right one is found. This is why key length is such an important consideration. The longer the key, the more time and computing power it takes to crack the encryption method. The difficulty increases exponentially as you add bits, with each additional bit doubling the number of keys an attacker must try.
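A back-of-the-envelope calculation in Python makes the exponential growth vivid. The assumed search rate below is an arbitrary illustration; only the ratios between key lengths matter.

```python
# Assume an attacker testing one billion keys per second (an arbitrary
# illustrative rate; only the ratios between key lengths matter).
RATE = 1_000_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (40, 48, 56, 64, 128):
    keyspace = 2 ** bits
    avg_seconds = (keyspace / 2) / RATE   # on average, half the keys are tried
    print("%3d-bit key: %.2e years on average" % (bits, avg_seconds / SECONDS_PER_YEAR))
```

At this hypothetical rate, a 40-bit key falls in minutes and a 56-bit key in about a year, while a 128-bit key would require a span of time vastly exceeding the age of the universe.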
To demonstrate the effect of key length on the strength of encryption, RSA Data Security, Inc., an encryption software provider, issued a series of challenges in early 1997 and offered cash rewards for cracking the algorithm. Here are the results so far:
The first challenge used a 40-bit key and was cracked in 3 hours.
The second challenge used a 48-bit key and was cracked in 13 days.
The third challenge used a 56-bit key and the U.S. government's Data Encryption Standard (DES) algorithm. It took about 6 months to crack.
The fourth challenge also used a 56-bit key but with RSA's stronger RC5 algorithm. In March of 1997, a world-wide effort consisting of 4,000 teams using tens of thousands of computers linked over the Internet began trying the more than 72 quadrillion possible combinations. They succeeded in cracking the code within 7 months, after trying only 47 percent of the possibilities.
It might seem an obvious solution to simply increase the key length to deter hackers. After all, the whole idea behind encryption is not to make the algorithms crack-proof, but to make it more costly to break the encryption than the information being protected is worth. While using keys thousands of bits long would provide an exceptional amount of security, it would require far too much time and processing power to perform all of the necessary algorithmic computations. In balancing security and practicality, the goal is to have keys that are long enough to be secure but short enough not to require excessive time to compute.
Secret Key vs. Public Key Encryption
Secret-key or symmetric encryption uses encryption and decryption functions together with a single key to reversibly transform plain text input into ciphertext. A simple example of secret-key encryption is the Caesar cipher, where the encryption function is to move each letter a certain number of positions up in the alphabet (rolling around to A after Z). The decryption function rolls each letter back the same number of positions. For instance, if the number chosen is 2 for this cipher, the word SECRET would be encrypted as UGETGV. Obviously, this encryption system is relatively easy to break.
Another simple secret-key system is to exclusive-OR the plain text with a randomly chosen value. In this case, the encryption and decryption functions are the same. Again, this is not a very secure system. More practical secret-key encryption systems include the government's DES and RSA's RC2 algorithms.
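Both toy ciphers are easy to express in code. The Python sketch below reproduces the SECRET/UGETGV example and shows why the exclusive-OR function is its own inverse; neither is remotely secure, which is exactly the point.

```python
def caesar(text, shift):
    """Roll each letter `shift` positions up the alphabet, wrapping
    around to A after Z; decrypt by calling with -shift."""
    return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A")) for c in text)

print(caesar("SECRET", 2))    # UGETGV
print(caesar("UGETGV", -2))   # SECRET

def xor_cipher(data, key):
    """Exclusive-OR every byte with the key; because XOR is its own
    inverse, the same function both encrypts and decrypts."""
    return bytes(b ^ key for b in data)

ciphertext = xor_cipher(b"SECRET", 0x5A)
print(xor_cipher(ciphertext, 0x5A))   # b'SECRET'
```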
By contrast, public-key or asymmetrical encryption systems use two keys: one which can be made public, and the other which must be kept secret (private). The public key, which may be freely given out to anyone, is used to encrypt messages but not to decrypt them. Only the intended recipient can decrypt the message using his or her private key (see Figure 1).
Figure 1: Public-key encryption.
Probably the most well-known public-key encryption system is the RSA system, named for its inventors Rivest, Shamir, and Adleman.
Keeping Track of Keys. Both secret-key and public-key encryption systems have advantages and disadvantages. In secret-key encryption, the biggest problem is key distribution. If two users want to communicate without anyone else eavesdropping, they need a secret key which only they know. As you add more users to the community, the number of keys needed for any two members to be able to communicate securely grows as the square of the size of the community (n users require n(n-1)/2 pairwise keys).
The usual solution to this problem is to have a Key Distribution Center (KDC), which shares a certain number of secret keys with each member of the community. If two of these members want to communicate, the KDC will generate a temporary random session key, used only for this communication session, and send it (encrypted) to the two members. They can then encrypt and decrypt all of their conversations with this key. But there are several problems with using a KDC: first, it must be online whenever two members want to establish session keys; and second, if the KDC is compromised, all communication within the community is also compromised.
Public-key systems have advantages over private-key systems when communities of users want to communicate securely. Instead of needing an on-line KDC to issue secret keys, the users' public keys are published to the community. Anyone wanting to send a message securely to a user simply obtains the user's public key and encrypts the message with it. Only the destination user is able to decrypt the message. Public-key algorithms have withstood scrutiny for almost 20 years and have proven themselves quite solid.
Secret-key algorithms, on the other hand, are generally much faster to execute on a computer, making bulk encryption and decryption practical with today's hardware and software. Public-key systems tend to use much slower operations, such as raising large numbers to large powers using modular arithmetic. As a result, it is common for two systems to agree on a symmetric session key, which is encrypted with a public-key system and then transmitted between the two. All subsequent communications are then encrypted with the much faster session key.
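The hybrid pattern can be sketched end-to-end with "textbook" RSA. The primes below are deliberately tiny, and the one-byte XOR merely stands in for a real symmetric cipher such as DES; everything here is an illustration of the pattern, not a usable cryptosystem.

```python
import os

# "Textbook" RSA with deliberately tiny primes -- illustration only.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (needs Python 3.8+)

# 1. The sender picks a random symmetric session key...
session_key = os.urandom(1)[0]      # one byte, small enough for the toy modulus

# 2. ...wraps it with the recipient's PUBLIC key and transmits it.
wrapped_key = pow(session_key, e, n)

# 3. Only the recipient's PRIVATE key can unwrap it.
assert pow(wrapped_key, d, n) == session_key

# 4. Bulk traffic then uses the fast symmetric key (a one-byte XOR
#    stands in for a real symmetric cipher such as DES).
message = b"WIRE 100 TO ACCT 42"
ciphertext = bytes(b ^ session_key for b in message)
print(bytes(b ^ session_key for b in ciphertext))   # b'WIRE 100 TO ACCT 42'
```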
Digital Signatures. Public-key algorithms can be used to create digital signatures that are the electronic equivalent of physical signatures on paper documents. Digital signatures are used to verify that a message did in fact come from the claimed sender, and to certify that a public key belongs to a particular individual. The process uses public and private keys together with signing and verification functions to transform a message into a signature in such a way that only the holder of the private key could have created the signature.
A digital signature for a document is usually created by computing a message digest of the document's contents using a cryptographic hash function. A hash function is a one-way operation that compresses the bits of a message into a fixed-size hash value in such a way that it is extremely difficult to reverse the process. Information about the signer is added, along with a time stamp and other data. The resulting string is then encrypted using the signer's private key. This allows the recipient of the message to verify who signed it without the signer ever having to reveal the private key. Well-known message digest or cryptographic hash functions include MD2, MD4, and MD5.
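Sign-and-verify follows the same arithmetic with the key roles reversed, as the following Python sketch shows. MD5 is used here because the text mentions it; the digest is truncated to fit the toy modulus, which a real implementation would never do.

```python
import hashlib

# Same textbook RSA key pair as before: tiny primes, illustration only.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def digest(message):
    """Hash the message with MD5, then truncate the value so it fits
    the toy modulus -- a real system would never truncate."""
    return int.from_bytes(hashlib.md5(message).digest(), "big") % n

def sign(message):
    # Signing applies the PRIVATE key to the digest...
    return pow(digest(message), d, n)

def verify(message, signature):
    # ...so anyone holding the PUBLIC key can check it.
    return pow(signature, e, n) == digest(message)

sig = sign(b"Pay 500 to Alice")
print(verify(b"Pay 500 to Alice", sig))     # True
print(verify(b"Pay 500 to Mallory", sig))   # False: message was altered
```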
Certificates and Certificate Authorities. One problem with public-key encryption is that it's hard to be sure a person's public key is not a forgery. By replacing a genuine public key with his own counterfeit version, a malicious user could trick someone into sending him confidential data, which the malicious user could then decrypt. However, if a trusted authority digitally signs the user's public key, the sender can be sure he has the right public key. The public key is stored in a structure called a certificate. The trusted authority is called a Certificate Authority, or CA. Certificates could also be used to verify the authenticity of a piece of downloadable code.
Security Policy: Environment, Risk, and Assurance
One of the most crucial factors in implementing a secure network is the creation of a security policy. This is a written document stating what the organization's policies are with regard to information security. It contains general statements of the goals, objectives, beliefs, and responsibilities that will guide the actual implementation of security products and procedures. An organization's security policy thus becomes an important part of the network operation, and it should be referenced and updated often.
Note: The term "security policy" is also used to refer to a statement of the access control rules enforced by a computer system. It is important to distinguish between the written information security policy and the electronic security policies that are implemented within a system component.
There are three main areas to consider in writing a security policy: environment, risk, and assurance (sometimes referred to by the acronym ERA). The potential for security breaches exposed through your network's operating environment (environment) and the consequences of compromising valuable information (risk) are critical factors in choosing system software. Assurance of protection can only come through an objective, comprehensive testing and evaluation of a product's security features.
Environment: Network Connectivity
When devising a security policy for your organization, the first area of concern is the environment in which the data exists and the users work. This involves getting a clear picture of your network's connections, both internally and externally. NetWare allows interconnection among any number of individual networks, thus forming an internetwork. The operating system itself can function as a connecting point in the internetwork through its built-in routing capabilities. Various add-on services are available to provide external communications via modem pooling, Internet gateways, and mainframe attachment devices.
Before you embark on any security implementation, you need to understand what the implications are of providing access to your network, in the context of both existing and future connectivity.
Risk: Threats and Vulnerabilities
The second area to consider in devising a security policy is risk. In the distributed environment of networks, data stored on computers is exposed to a variety of threats. As part of the planning process, you should identify which data is potentially at risk and what constitutes a loss. Risks need to be assessed in terms of how much damage would be incurred to the organization, in a worst-case scenario, if a particular threat were to materialize. It is a fact of human nature that the evolution of threats into real risks accelerates as the value of the information being protected increases. The more your information is worth, the greater the potential reward to attackers.
You also need to identify the vulnerabilities or weaknesses within your system that would allow any of the potential threats to take place. Only then can you determine what the appropriate security policies and countermeasures should be.
Types of Threats. The general threat in a networked system is the unauthorized disclosure or modification of data stored on network servers and clients. However, there are numerous other types of threats you should be aware of. Following are the threats listed in the NetWare Enhanced Security Administration manual:
Viruses, Worms, and Bombs. A virus is code that replicates itself within other programs (boot sector code, applications, and so on). Viruses are commonly spread by booting with infected diskettes, or by copying and running infected programs from an electronic bulletin board. A worm is an independent program similar to a virus, but it has the ability to replicate itself from one site to another. A bomb is malicious code that is planted within an application or operating system and triggered by a clock time or an external event.
Trojan Horses. A Trojan Horse is malicious code that hides within a legitimate program and performs a disguised function.
Trap Doors. A trap door is a mechanism built in to a system by its designer or administrator to give that person subsequent unauthorized access to the system.
Spoofing. Packet spoofing is a classic problem in distributed systems. In a packet spoofing attack, a user at a network node generates packets that purport to be from an authorized network user.
Wiretapping. An intruder performs a wiretap attack by connecting to the network and reading (passive wiretapping) or writing or modifying packets in transit (active wiretapping).
Browsing. Novell Directory Services (NDS) performs the necessary role of a name server within a multiple-server environment. During login, NDS provides essential name services to unauthenticated users so they can find trees or servers. Depending upon the configuration of the NDS [Public] object, users can browse names in the NDS Directory or can query NDS for the presence of objects having the specified name. Authenticated users can also browse NDS and file system objects according to the user's rights to those objects. The ability for unauthenticated users to see object names becomes a security threat in that it allows potential attackers to gather information that may be useful in another type of attack.
Unauthorized Use. The various resources on a network are subject to use by unauthorized persons if sufficient safeguards are not put in place to prevent it. Physical access to the server console is an especially serious concern, as it leaves the server wide open for numerous types of attacks.
Denial of Service. Denial of service is a consideration for any shared resource, such as disk space, communications bandwidth, or printers. This threat comes in many forms, ranging from physical destruction of the resource to flooding the resource with so many requests that it is unable to service legitimate users.
Effective Countermeasures. A countermeasure is any process or device instituted to counter a security threat to a component in the network. For instance, network countermeasures could include methods of authentication or the use of cryptography when communicating and storing information. Additional countermeasures could include audit trails, intruder detection mechanisms in the operating system, and internal consistency checks. Countermeasures can extend all the way from non-technical to esoterically complex technical methods.
Whenever you employ security technology, you need to consider the value of what you're protecting, the threats you're protecting it from, what countermeasures will be effective against those threats, and how long you expect your protective measures to resist attack. It's no use putting expensive security measures in place to protect against certain types of attacks if you leave your network wide open to attacks in other areas. Security is a "weakest link" technology: the overall strength of protection is equal to that of its weakest component. Thus, no aspect of the system design should be overlooked when choosing security solutions.
New Issues in Network Security
Reliable communication systems are vital to the successful flow of information in business. Yet just as vital is the ability to protect valuable information and intellectual assets. As commercial networking extends beyond normal service LANs, as we have known them, into broadly interconnected and distributed networks, security becomes an even greater concern. For those who are tasked with protecting valuable data in the interconnected environment, it is increasingly vital to understand key security issues, to recognize how to protect information assets through an effective security policy, and to know which solutions exist to enforce the security policy.
The Evolution of Network Security
Computer security has long been a concern for both government and business. In the late 1960s, mainframe system suppliers such as IBM and Digital Equipment Corporation launched a major effort to implement active security within their operating systems. Representatives from industry, government, and academia proposed features needed to build secure mainframe operating systems, and billions of dollars were spent trying to achieve the elusive goal of creating secure systems to store, process, and exchange information. These efforts resulted in the traditional mainframe data center, a highly controlled environment in which dumb terminals were attached to secure hosts that were located in fortress-like rooms with carefully guarded access. For more than twenty years, these "big iron" systems served as the chief platform for government and military data operations, as well as servicing the majority of commercial financial transactions.
The introduction of PC-based networks into the computing picture was initially disdained by those who were accustomed to these secure "fortress" systems. Given the diversity of hardware, operating systems, applications software, and physical media used in local area networks, few believed that client-server systems could ever offer the levels of security that had been achieved in fortress systems. It quickly became clear that any migration to an "open" computing environment would require operating systems with security capabilities equivalent to those provided by host systems.
Despite these initial doubts, PC-based networks proved themselves to be reasonably secure for many general uses. As a result, networks have grown in popularity until they are now the "backbone" of many companies. Virtually all aspects of everyday work are dependent on communications such as voice, data, or imaging. Networks help meet these communication needs by moving and processing information, and by providing such services as messaging, multimedia, and application development. Thanks to a symbiotic relationship between the computing and telecommunications industries, individual and workgroup LANs have been connected across great distances via a wide variety of means, including leased lines, switched public networks, and satellite links. Technological advances in the areas of network capacity, bandwidth, and functionality have made it possible for PC-based products to become major players in enterprise and global networks. This has given cost-conscious businesses the ability to "downsize" from mainframe-based hardware systems to microcomputer-based systems and to "rightsize" their operations with the proper software base.
Of course, the use of network computing technology is not without danger. The information sharing environment in which data is stored, retrieved, processed, and transmitted is not inherently secure--especially at the workstation level. The components of networks (microcomputers, connectivity devices, and so on) were not initially built with integrated security features and thus do not provide an acceptable level of trust "out of the box." The need for network security services is clear as network operating systems are employed to fulfill an ever-increasing number of business requirements.
NetWare's Evolution
From its very first release in the early 1980s, the NetWare network operating system has included a wide variety of security features to control access to network data and resources. The NetWare 3 operating system built on the foundation of its predecessors to provide increased security capabilities, along with connectivity and interoperability features that elevated LANs to a new level of respectability in corporate environments.
NetWare 4, with its X.500-like Novell Directory Services (NDS) component and extended file system security controls, raised network security to the level of most host implementations. In bringing the NetWare 4 operating system to market, Novell recognized that customers wanted systems that were easy to implement, provided enterprise networking and excellent performance, and offered mainframe-like computing control features in a distributed network computing environment. With the advent of NetWare 4, Novell has made major strides in helping network and security administrators, auditors, and users in their quest for a secure system. NetWare 4 offers extensive fault tolerance, security, and audit features which provide a strong line of defense against threats that arise from common daily errors as well as focused malicious attacks and network intrusions.
NetWare's security and control features provide the basis for building a secure network computing system. NetWare 4 offers flexible, multilevel security that controls access to both NDS and file system resources. These features allow the system administrator to determine who can access the network, what resources users can access, how users can utilize the resources, and who can perform tasks at the server console. Current developments in cryptography, PKI services, and authentication services make NetWare a leader in network security.
New Security Challenges
Up until recently, the physical and logical isolation of NetWare LANs generally protected them against easy invasion from the outside world. NetWare servers ran only the standard IPX protocol and were thus incapable of communicating over the TCP/IP-based Internet. Each local area network in a large enterprise was essentially a self-contained entity, connected to other self-contained networks by means of routers and dedicated WAN links. Connections to mainframe hosts were achieved through traditional devices that had strict access controls (see Figure 2).
Figure 2: The LANs of the past were "islands" of security with controlled connections to other systems.
From a security perspective, LANs were essentially "islands" of security within globally distributed networks. Inside each island, sensitive data was protected from unauthorized internal access by isolating it on specific servers, by instituting login and password restrictions, and by implementing access controls such as file and directory permissions.
As networks have become more distributed, information storage has been concentrated at strategic locations, while data processing has been distributed throughout the organization. This shift has increased the reliance on data communications and moved much of the responsibility for the information to the desktop and to the individual user. Along with this client-oriented focus has come new levels of connectivity, as businesses provide end-users with greater communications capabilities by connecting their existing LANs to "open" public networks such as the Internet. The effects of this migration are becoming even more evident as users gain the ability to remotely connect from home or elsewhere via the Internet. Security protections in the form of firewalls and Virtual Private Networks are being widely deployed as a means to control access through public networks (see Figure 3).
Figure 3: Today's LANs are being connected to public networks such as the Internet, requiring new forms of security protection.
There are numerous aspects of today's Internet-connected, client-focused networks that should cause concern for the security-minded. We will summarize these under four main areas:
The additional opportunities for indirect connections
The possibility of "unknown" users accessing your network
The transmission of data across unregulated processors and switches
The increased use of "off-the-shelf" operating systems and applications
This list is not exhaustive, as there are undoubtedly many other areas of concern that can and will emerge. However, these four points are illustrative of the general considerations that should be weighed in any serious discussion of security in open networks.
More Indirect Connections. Any way you look at it, connectivity to the "outside" is risky. Whereas previously businesses had their employees directly accessing business information, the Internet provides access to networks for any user who can attach. Organizations need to review the increased risks associated with "open" accessibility to their networks before they establish this level of interconnectivity.
Even with firewall protection, there are several potential vulnerabilities. Common devices such as modems, for example, can provide an access point to your network that completely bypasses the firewall, as shown in Figure 4. What's worse, well-meaning but misinformed users may be convinced they need a modem connection to the Internet precisely because of the access restrictions imposed upon them by the firewall.
Figure 4: The addition of a modem can completely bypass existing security measures such as firewalls.
Even access to internal intranets and web servers by browser technology should be a concern. While many companies are looking toward using technologies like the Internet and intranet browsers to decrease the cost of servicing both customers and employees, they need to understand the security risks involved in providing web services. Web servers are among the easiest targets for hackers, as evidenced by the recent attacks on well-known government and industry web sites that have received much coverage in the media. The increasing trend toward having web servers interact with in-house databases necessitates a link to internal networks, making the web site another access point into your network.
The "Unknown" User. Interconnectivity through the Internet introduces a new class of "unknown" or "anonymous" user into your system through the indirect access points described above. With current network controls, it is possible to inadvertently give this user the same rights as any other user on the network. Responsible companies will want to know whether they have information of value that needs to be protected from even accidental disclosure to this new group of non-company users. You may need to think in terms of an "us" versus "them" distinction as your network's potential user community expands beyond company borders.
Unregulated Processors and Switches. In the past, even the most far-flung enterprise networks were typically connected by routers across dedicated WAN links that were controllable. The Internet, by contrast, is composed of unregulated processors and switches that are out of the control of internal IT departments. Data sent across the Internet passes through many different systems, ranging from local Internet Service Providers to host servers on college campuses to big-name telecommunications providers such as Sprint or MCI (see Figure 5).
Figure 5: Data sent across the Internet passes through many unregulated processors and switches.
Without special software and encryption, all Internet traffic travels in the clear and can be easily monitored by anyone with freely-available "packet sniffing" software. This amplifies the level of uncertainty and risk as data travels from site to site across the Internet, and especially when confidential information, such as credit card numbers or personal details, is exchanged between a business and its customers or partners.
Off-the-Shelf Software. The complexity of the new computing paradigm is compounded by the fact that PC-based networks have grown up as "technology in a box" and wires in the wall. "Off-the-shelf" products have become the norm for high tech business environments, replacing the highly controlled and customized systems of the past. Companies routinely purchase and install low-priced commercial applications out of the box, heedless of the fact that the software's operating properties have never been verified by any independent and objective body. This issue becomes increasingly critical in light of the recent publicity about the potential for harm that can be facilitated with controls such as Microsoft's ActiveX and, to a lesser extent, Java applets.
Real solutions to prevent actual information attacks are not incorporated into commercial off-the-shelf (COTS) hardware and operating systems for several reasons, including cost, historical oversight, and the time lag in introducing hardware features where they are needed. While there are many products that were designed with security in mind, the lack of industry-wide security standards makes it difficult to regulate and unify multi-vendor products in a networked environment.
Electronic Commerce and a Global Information Infrastructure
Today's information infrastructure, which is focused on individual workgroups and enterprises, seems destined to evolve into one that is national, multinational and, ultimately, truly global in scope. The concepts being espoused by the U.S. government for the National Information Infrastructure, or "Information Superhighway," include extensive use of distributed computing to transmit and manipulate sensitive data between organizations. When this paradigm is extended to the global level as a global information infrastructure, it is illustrative of the potential environment for electronic commerce (e-commerce) that Novell's customers, partners, and competitors will be dealing with as we move into the 21st century.
Today, businesses of all sizes are looking to establish electronic commerce as a means of reaching new customers, increasing convenience for their existing customers, and decreasing the costs of product packing and distribution. There are many opportunities and advantages for electronic financial transactions and electronic software distribution within a global information infrastructure. Such an infrastructure also promises a new level of electronic interaction between businesses and their partners, suppliers, and customers.
Adequate security is essential to the success of these endeavors. The need to protect financial assets such as bank accounts and major business transactions is obvious. Other examples of the types of sensitive commercial information being shared among the offices of multinational corporations include financial data, customer lists, sales projections, product development plans, and patent applications. It is also vital that private information such as credit card numbers and medical case files be protected as fiercely as commercial financial data and trade secrets.
New Threats: Paranoia Becoming Reality
Some skeptics dismiss the concerns of security-minded people as paranoid delusions. However, those responsible for protecting valuable corporate information would do well to heed the popular computer industry sentiment, "only the paranoid survive." Few companies can afford to put their business at risk by not preparing for the potential threats that are becoming more and more real every day. As the value of the information processed by networks increases, so will the threats to the availability, integrity, and confidentiality of that information.
Following are some of the "new" threats to intellectual and personal property as distributed computing becomes the norm, and especially as businesses connect to the Internet and pursue electronic commerce.
Increasing Financial Motivation for Attacks
When asked to identify the primary threat to the security of networked computer systems, many system administrators point the finger at high school-aged "hackers" armed with personal computers and modems. While many such cases have been reported by the media, the biggest threat does not come from amateurs who try to break into your system for the fun of it. As the value of the information stored on networked systems increases, a far greater threat comes from dedicated individuals and groups seeking ill-gotten financial gain.
The commercial value of the intellectual property (products and trade secrets) that will become available electronically will provide a strong motivation for new classes of hostile attackers. As companies embrace the Internet, some of these deliberate, malicious attacks will undoubtedly originate with agents external to the organizations that own and use the networked systems and resources.
Existence of Malicious Software
In a distributed computing environment, there is significant opportunity for malicious software to intervene between user and data, thus subverting the intent of well-meaning users. Although much attention has been focused on various computer viruses that have caused noticeable damage to personal computers, the greatest threat today comes from the more sophisticated form of virus known as a "Trojan Horse" (see Figure 6).
Figure 6: A Trojan Horse is the most worrisome type of malicious software, as it hides within what otherwise appears to be useful software.
By definition, a Trojan Horse is a virus-like piece of code hidden within what otherwise appears to be useful software such as a spreadsheet program or library utility. It is present without the knowledge of the system's user, and once activated it operates with the user's authorizations to perform functions that the user neither desires nor intends.
In networked environments where information is shared and communicated over wide areas, Trojan Horses are easily distributed and have ample opportunity to cause harm. For example, a Trojan Horse could be introduced as an authorized user prepares to download intellectual property such as a computer program or entertainment product. The intruding software could surreptitiously transmit the program to a third party, thus undermining the integrity of the licensed transaction and making it impossible to tell exactly who has gained access to the information and what actions an individual actually took and is accountable for.
In many cases, the threat of malicious software is doubted, especially where there is no first-hand experience with such an attack. Of course, this skepticism makes a malicious software attack even more appealing to an interested group. The reality is that malicious software attacks do happen, whether they are detected or not.
Easy Indirect Access Leads to More Opportunity
Many companies are rushing to connect to the Internet without realizing the ramifications of this "open" connectivity and therefore failing to provide adequate protection against the increased opportunity for attack through indirect access. Unprotected computer systems and networks will prove easy prey to those seeking unauthorized access to valuable information. As shown in the Trojan Horse example above, the legitimate purchasers of intellectual property may serve as unwitting conduits through which hostile software is able to hijack the property and forward it on for further unauthorized distribution without the legitimate owner ever being compensated.
Download at Your Own Risk
Another concern is that there is no effective way to protect against malicious logic when downloading information across the Internet. There are few restrictions on what types of software users can download, and even fewer means of detecting whether this software is doing something other than what it was intended to do. With out-of-the-box interconnectivity to the Internet, you can set up a system in which users can easily seek out programs originating on untrusted servers, transfer them across unregulated network connections outside of IT control, and load these unknown executables on their local hard disks. Thus what has long been a security concern for diplomatic and military branches of government--the problem of unknown software operating autonomously at the client--is now an almost everyday occurrence.
The E-Mail Security Hole
E-mail programs have great potential for bypassing security mechanisms altogether. Currently the majority of network administrators rely on commercial network operating systems such as NetWare for the interconnectivity between users. This interconnectivity allows data to move from one part of the network to another. With e-mail packages, one user on the network can send another user information from directories which the other user may not normally have access to.
In most organizations using LAN technologies, this transmission of company information in the form of e-mail is done without much consideration of the sensitivity of the information. For that matter, organizations expect that the products they use adequately and reliably identify where information is coming from (who sent it) and where it is going (to whom). Yet for the majority of current networks, this is not necessarily true, nor can it be substantiated.
Barriers to Effective Security
While many technological advances have been made in the area of network security, effective protection has lagged behind the new challenges that are continually coming to light. This section describes some of the detrimental attitudes and technological barriers that must be overcome in order to provide effective security solutions for open networks.
Old Protections Are No Longer Adequate
In the brave new world of open networks, many of the traditional protections that the industry has come to rely on will prove largely ineffective. For example, it is nearly impossible to physically isolate data when pervasive interconnection is the basis of commercial activities. Likewise, it is difficult to manually review information before it is released, when most information is being distributed in an electronic form that is not readable by humans. Those responsible for security solutions must recognize that new protections are necessary to meet new challenges.
Low Level of Security Awareness
Most organizations are only barely aware of the information security needs that are evolving with every new generation of software. The lack of information disseminated about the relative insecurity of corporate information has been a major limiting factor in the production and placement of real security features. Many companies have not encountered any of the current information threats, and therefore falsely assume that their existing security countermeasures must be working. They have never really considered the fundamental differences in electronically mediated digital exchanges versus the paper-based analog exchanges used in the past. Some have even begun conducting electronic commerce from unevaluated components in open environments. These factors all point to the critical need for increased security awareness.
Futility of "Penetrate and Patch"
One of the most naive and dangerous (yet persistent) beliefs is that one can provide meaningful security against the growing hostile threat by having "experts" try to penetrate the security controls, and then patching any holes that are found. This is a fundamentally flawed method, especially in the face of malicious software. The scale is seriously unbalanced in favor of the attacker, since the attacker need find only one hole while the defender must essentially find all possible holes.
Reliance on Cryptography as a "Magic Pill"
Encryption is a known technology that has been demonstrated to work for many applications. Yet just because a product features encryption doesn't necessarily mean it is secure. There are many weak cryptographic implementations where the intended protection can be easily compromised by malicious software. For example, a Trojan Horse could be interjected into a financial transaction to display one dollar amount to the user while digitally signing another amount, thus undermining the integrity of the transaction.
Malicious software can also subvert confidentiality by leaking cryptographic keys. In this case, the loss of a few tens of bits can completely undermine the security of an entire system. Such leakage is especially devastating since the user is likely to assume that his or her actions are "secure" because they are protected by encryption.
Reliance solely on cryptography can lead to a false sense of security. As computers advance in power and connectivity, technologies that used to provide adequate protection no longer do. A good example is encryption with 40-bit keys. Three years ago, organizations were using 40-bit encryption that was intended to keep information secure for 20 to 30 years. However, recent advances in the science of cryptanalysis and dramatic increases in computing power have led to vastly improved methods of compromising the security of computer systems. In the last year, a public challenge to break a 40-bit cryptographic construct was successfully met within hours. Thirty-year confidentiality of 40-bit encrypted information can no longer be guaranteed.
Adverse Impact of Public Policy
Tradeoffs among national security, law enforcement, and information security may well result in a major impediment to the ability of suppliers and users to provide adequate security. Current governmental export policies on encryption in the United States and in other countries prevent suppliers from integrating commercial-quality cryptography into products that will be sold worldwide. Without the ability to provide strong encryption internationally, vendors' hands are tied. Valuable information exchanged on a global basis will be at risk, and businesses will not be able to transmit and receive the various classes of sensitive information across national borders (see Figure 7).
Figure 7: Governmental import/export policies with regard to encryption are a major impediment to security in multinational organizations.
Lack of International Standards
There must be continued development of standards and processes for the independent, technically-based security evaluation of trusted system products. Existing government standards offer some of the information that system designers and integrators need to select and configure products. These standards also support end-users who could use them to guide their decisions as to which systems should be allowed to process their sensitive data.
However, to be useful to vendors, developing standards must build on the strength of standards that have proven to be successful in the past. Too many draft "Common Criteria" proposals lack essential elements such as a fundamental (reference monitor) foundation and incremental evaluation. Worse, they include complex and untried approaches and have a weak common basis for comparison across products.
The Trouble with "Single Point" Solutions
Some companies will persist in the belief that their information loss and disclosure problems can be adequately handled with "single point" or point-based solutions. These solutions typically involve a server which is "secure" only as a standalone component. While this may be true in certain environments, all it takes to put the company's information assets in danger is the simple addition of a modem on one computer. Worse, such solutions may have serious flaws: they may not scale, or they may prove difficult to maintain.
In light of these stark realities, it is extremely risky for any company to make the decision to implement an inter-company security strategy based on single point solutions from one vendor or another. Deploying such a solution now may lock you into a particular technology and may have serious ramifications in the future as you attempt to further leverage your electronic solutions. There is the possibility you may have to completely redo your security infrastructure at some point down the road.
Corporate developers in particular must carefully assess the feasibility of deploying potentially unscalable and unmaintainable technologies in an electronic business environment. Internet commerce techniques which were designed for mass-market use are not necessarily good business tools for intra- and inter-business activity, especially where corporate developers must create these inter-business products themselves. Creating applications for an "open" environment from single point solutions is difficult enough. In fact, it may lead to more difficulties in the form of additional communications and security requirements needed to maintain privacy while meeting international laws and regulations.
A Prescriptive Approach to Network Security
With this background on the concepts and issues behind network security, you are better prepared to tackle the challenges of designing and implementing your own system. As a summary, here is a suggested prescriptive approach.
1. Promote security awareness within the organization.
The escalating potential for information loss, disclosure or alteration, especially in an interconnected open network environment such as the Internet, should be a primary concern for any organization with valuable information. Managers, administrators, auditors, and end-users must all make security a top priority in product purchases and deployment.
2. Increase your understanding of risks and the critical need for security protections.
While the technologies used for protecting confidentiality have been making rapid advances, so have the technologies used for unauthorized access and disclosure. It is important that the community of users--individuals, managers, and policy makers--understand the threats to the confidentiality, availability, and integrity of their information, as well as the limitations of most currently available countermeasures.
3. Devise an information security policy that is meaningful and implementable.
The formation of corporate information security policy is a proactive, rather than a reactive, approach to enabling information security in your organization. An effective security policy not only considers the value of information assets, but also people and philosophies of protection.
4. Choose products and applications that help enforce your security policy.
Security problems need solutions that enforce your security policy; your policy should not be adapted to the limitations of products. A properly planned and implemented security architecture that covers the entire network--servers, workstations, and cabling--is of immense benefit.
Here are some additional points to consider in choosing products:
Recognize what technology can and can't do. It is a misconception to assume that the electronic world offers protections like those of the paper world, even though the majority of electronic forms and documents are still reproduced on paper. The broadly held belief that networks can be protected by technology alone is inaccurate, if not outright ludicrous. "Real" security services and features can only be obtained by building from a trusted base.
Demand security from vendors and developers. Affordable, secure commercial products will become widely available only when meeting a particular security level is made a significant purchase criterion among the customer base. Customers must clearly state to operating system manufacturers, as well as to application developers, their need for a clean and user-friendly interface to security services. A vendor's intended security directions should also be examined and compared with other offerings in the interconnected environment.
5. Seek assistance from qualified auditors.
The audit function has traditionally been relied upon to protect corporate executives in matters of security, and auditing personnel are commonly found in larger organizations. Yet because very little information is currently available in the area of information security in interconnected environments, most auditors are barely up to speed on the technologies they will have to confront as LANs become more interconnected. Electronic auditing in a distributed network is fundamentally different from previous forms of audit used for LANs, WANs, MANs, and mainframes. Electronic audit is a technology whose details are best understood by highly-trained IT auditors who specialize in distributed database integrity.
Help is available from the Information Systems Audit and Control Association (ISACA), a worldwide organization dedicated to improving audit, control and security practices, at http://www.isaca.org. While there, be sure to obtain information about the CobiT (Control Objectives for Information and Related Technology) project, a cooperative effort to link information technology and control practices to consolidate and harmonize standards from 18 dissimilar, prominent global sources into a critical resource for management, users, and IS auditors.
Conclusion
In this AppNote, we have endeavored to enlighten users concerning the security-related issues that must be dealt with as we near the beginning of another century. In doing so, we recognize that the public in general does not care to understand the precise nature of the security attacks we have discussed, nor is it likely to comprehend the intimate details of the protective mechanisms being developed. Neither is it our intent to hold up the specter of malicious software and unscrupulous hackers as scare tactics to bully customers into becoming security-minded. As far as end-users, product purchasers, and system administrators are concerned, most of the underlying security mechanisms should remain abstract and undisclosed. They should be reliable, testable, maintainable, and usable, without the user having to completely understand what's going on behind the scenes.
Customers need to actively participate in discussions about security and vocally support the implementation of security services through account representatives, buyers, and vendor contacts. Users must communicate clearly to product manufacturers their need for real, economical security--that if a product does not have security services and features that are implementable, maintainable and affordable, they will not buy it. User-friendly applications which employ security services and features coming directly from a secure base need to be at the top of customers' "must have" lists. Customers can only benefit when they include as a purchase criterion that security must be "real" in the environment it is intended for.
To remain competitive, companies must expand their communication systems by incorporating a trusted network environment. The continued shift to network computing in an enterprise model requires funding and major changes to the way data is handled. It also requires education. Thus, management and security personnel must form an integrated working partnership to provide the policy, procedures, and education necessary for secure business systems.
* Originally published in Novell AppNotes
Disclaimer
The origin of this information may be internal or external to Novell. While Novell makes all reasonable efforts to verify this information, Novell does not make explicit or implied claims to its validity.