Controlling Access to Open Systems with IntranetWare BorderManager
Chief Scientist & Vice President
Advanced Development Group
Advanced Development Group
Senior Software Engineer Consultant
Advanced Development Group
Consulting Software Engineer
Advanced Development Group
Senior Software Engineer
Advanced Development Group
01 Jun 1997
Frustrated about how to control or improve access to the Internet or your corporate intranet? This AppNote provides the inside scoop on Novell's hot new border services product, from the engineers who developed it.
Information has no intrinsic value unless it's shared. In fact, the roots of the word information mean to share knowledge. Whatever it is, it's not information unless you can also use it. Sharing information has been the stock-in-trade of publishers for centuries. At its core, Web publishing is no different from the classical (paper) publishing paradigm, other than the fact that the Internet (standards and infrastructure) makes distribution so inexpensive that almost anyone can publish almost anything anywhere in the world.
This AppNote discusses the intrinsic values of the Internet (information) and the inherent problems that ensue (managing value), and Novell's unique approach to managing such information via BorderManager. The AppNote covers these topics:
Issues involved with connectivity (a customer's physical access to the information resource) and policy (does the customer have rights to that resource?).
Concerns related to access control in open systems
IntranetWare BorderManager as Novell's foundation for open systems access control, emphasizing the following services: Proxy Cache Server, IP Gateway, Internet Security Services, and Virtual Private Network (VPN).
For more information on BorderManager, visit the Products section of Novell's site on the World Wide Web:
Managing Internet Content
As the novelty of inexpensive, universal distribution wears off, Internet content publishers are searching aggressively for means of generating revenue through timely access to privileged information. The value point for Internet software vendors is to facilitate that timely, managed access. We have reached the point in the information age where access is no longer constrained by physical or financial barriers; there will soon only be policy barriers. As a vendor of networking software, Novell's opportunities for growth lie in building products that enable our customers to manage the value of their information through access control (the policy for sharing that information).
Access control policy is not strictly a function of content. Rather, it's a function of content in combination with the consumer. This makes it impractical to embed access control into the content itself. A book does not have implicit access control even though its content may not be appropriate for all audiences. Books may be locked in a vault, a special collections library, an adults-only section, or placed in general distribution according to policies derived from content and audience (and usually, revenue objectives). In each case, the access control mechanism is naturally separate from the information within the book itself.
A Web service mixing access control with content violates object normal form because the behavior dictated by the access control policy is functionally dependent on more than just content. A complete list of functional dependencies for Web services must include:
Object identity - the universal identifier for the object
Consumer identity - the universal identifier for the consumer requesting the object
Content - the content of the object
Meta-content - peripheral information related to the content; for example, access control lists (ACLs), PICS (Platform for Internet Content Selection) labels, or time-to-live (TTL) values
Access policy - the specifications declaring who may access which objects and under what constraints (the crucial piece of meta-content that is the focus of this paper)
Performance - the consumer must be able to access the object in an efficient, timely manner
Host-centric identity mechanisms are an inadequate attempt to encapsulate (read: truncate) consumer and object identities to the Web server's host domain. Unfortunately, most vendors' Web servers force publishers to employ host-centric identity. Deploying these truncated identity mechanisms constantly torments intranet administrators because both content and identity move and replicate independently. Practical policy administration for Internet publication is vastly more complex since movement and replication are effectively infinite.
Host-centric access control is similarly impractical. Specifically, as content migrates closer to the consumer (through strategic proxy caching), host-centric products leave consumer identity and meta-content at the original host--effectively prohibiting proxy caching. Replicating host-centric identities and meta-content among web and proxy servers quickly breeds an administrative quagmire.
We envision a future for the commercial Internet where authors develop content and business managers (publishers) tune access control to meet enterprise objectives (revenue, advertising, and so on). How could it be otherwise? This model has worked successfully for centuries. For their part, consumers will be able to locate and access information at historically low cost and high availability. This remarkable, low-cost information sharing is itself the catalyst of the information age. Yet, the administrative friction imposed by technologies that fail to isolate the functional dependencies of Web service artificially inflates the cost of information sharing, impeding the growth of this new epoch.
The key to a vital Internet is to isolate the functional dependencies governing information sharing over the Internet and to allow each to mature independently.
Uncoupling Content from Access Control
The current path of Internet evolution provides insight into strategies for uncoupling content from access control. However, before delving into technologies, let's take a brief look at the dimensions of access control. Specifically, we must be concerned with:
Connectivity - Does the consumer have effective physical access to the resource?
Policy - Is the consumer entitled to the resource?
The connectivity dimension is expressed in today's Internet. Innovative technologies and standards have produced a dynamic medium of open connectedness. Often, demand is so heavy that performance and availability suffer, giving rise to questions of effective connectivity: Is the connection available, and will it perform?
The policy dimension is an expression of business objectives. It's crucial to understand that policy implementation is not a technical matter. Mechanisms implementing business policy must be fluid enough to follow the course of intensely competitive business dynamics. All too often, the opposite is true: business policy must bend to the limitations of vendors' access control implementations. Administering these policies (and verifying their implementation) is a substantial portion of the cost of intranet/Internet access. Sophisticated yet intuitive user interfaces for access control policies are among the most difficult to create. Even more crucial, most vendors implement these mechanisms as host-based services, forcing an impedance mismatch between host and intranet/Internet environments and leaving the user interface a confused mixture of host and network metaphors.
Remember: neither of these dimensions is a content issue. And more importantly, neither is a local issue.
Access Control in Open Systems
Practical implementations of access control must provide multiple levels of control: host, session, application, and content.
Host control determines which hosts can be accessed.
Session control determines which clients may establish sessions with which host processes.
Application-level control determines which applications can be accessed. For example, organizations may allow access to Web and FTP applications but deny access to news group applications.
Content control determines which resources may be accessed by which users and groups.
Furthermore, where the intranet/Internet boundary is concerned, access control implementations must be effective in two directions: inbound and outbound. Inbound access control protects intranet servers against unauthorized access from the Internet. Outbound access control constrains the view that intranet clients have of the Internet, limiting their access to resources considered necessary for enterprise objectives. For example, an organization may want to prohibit access to Internet sports sites during business hours.
Open system architectures recognize a three-layered access control model: packet filters, circuit-level proxies, and application-level proxies.
Packet filters use routers to filter traffic entering and leaving specific network segments. Packet filters check packet header information (source and destination addresses, protocols, and port numbers), only routing packets conforming to the access control policy.
Circuit-level proxies provide a virtual circuit between the Internet and intranet desktop applications, implementing access control at the session layer. Circuit-level proxies implement routing policies based on higher-level protocols encapsulated within the packet (for example, HTTP). These proxies maintain state information necessary to determine whether the next packet in a session dialog conforms to its protocol.
Application-level proxies intercept transmissions between Internet services and intranet desktop applications. These proxies apply application-level semantic access controls prior to relaying the data. Application protocol elements that don't conform to the application-layer access control policy are not forwarded.
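The lowest of the three layers, packet filtering, can be illustrated with a minimal sketch of first-match rule evaluation against packet header fields. The rule format, field names, and sample addresses below are hypothetical, not BorderManager's actual configuration syntax:

```python
# Sketch of packet-filter rule matching: check header fields against an
# ordered rule list, deny by default. Illustrative structures only.
from dataclasses import dataclass

@dataclass
class Rule:
    action: str     # "allow" or "deny"
    src: str        # source network prefix, e.g. "10.1."; "" = any
    dst: str        # destination network prefix; "" = any
    protocol: str   # "tcp", "udp", ...
    port: int       # destination port; 0 = any

def filter_packet(rules, src, dst, protocol, port):
    """Return the action of the first matching rule; deny by default."""
    for r in rules:
        if (src.startswith(r.src) and dst.startswith(r.dst)
                and protocol == r.protocol and r.port in (0, port)):
            return r.action
    return "deny"

rules = [
    Rule("allow", "10.1.", "", "tcp", 80),   # outbound HTTP permitted
    Rule("deny",  "10.1.", "", "tcp", 119),  # NNTP (news) blocked
]
print(filter_packet(rules, "10.1.2.3", "198.51.100.7", "tcp", 80))   # allow
print(filter_packet(rules, "10.1.2.3", "198.51.100.7", "tcp", 119))  # deny
```

Circuit- and application-level proxies layer protocol state and content semantics on top of this same basic match-and-decide pattern.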
Access control features can be classified in terms of their relation to the Open Systems Interconnection (OSI) model, as shown in Figure 1.
Figure 1: Open systems access control layers in relation to the OSI model.
In addition to sophisticated access control, intranet technologies should provide end-to-end information encryption. This is particularly important for Virtual Private Networks (VPN) and online transactions (see Figure 2). The encryption mechanism should support leading encryption standards for virtual networks and electronic commerce.
Figure 2: Virtual Private Networking with open systems.
VPNs enable organizations to use the Internet as a backbone for their enterprise networks. Enabling flexible and secure VPNs requires the following:
Configure multiple VPNs on a single enterprise network. Allow the network administrator to configure multiple VPNs consisting of different combinations of network resources, users, and groups, all on the same physical network.
Control who participates in each VPN. The administrator should be able to specify which users and user groups participate in each VPN.
Secure data sent over the public network. The solution should protect data sent over the Internet from unauthorized access.
Hide intranet topology from non-VPN users. To prevent unauthorized users from breaking into the intranet, hide the topology of the intranet from outside observers.
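The first two requirements above (multiple VPNs on one network, per-VPN membership control) amount to a membership check keyed by VPN name. A toy sketch, with invented VPN and user names:

```python
# Sketch: several VPNs sharing one physical network, each with its own
# set of authorized users. Names are illustrative only.
vpns = {
    "finance-vpn": {"alice", "bob"},
    "eng-vpn": {"carol"},
}

def may_use(vpn, user):
    """True if the user is configured as a participant in the named VPN."""
    return user in vpns.get(vpn, set())

print(may_use("finance-vpn", "alice"), may_use("finance-vpn", "carol"))  # True False
```

In a directory-based deployment, the membership sets would be NDS user and group objects rather than literal name lists.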
Finally, access control implementations should not degrade performance or impede productivity. Administrators should be able to manage global access control policies from a single point of access anywhere in the network. This makes access policies easier to manage and reduces the risk that administrators will inadvertently leave holes in their implementation. The only practical means of effecting this administration is through a global, scalable directory such as Novell Directory Services (NDS).
BorderManager: The Foundation of Open System Access Control
To this point, we have painted our vision for open systems access control. How will this vision be implemented? IntranetWare BorderManager will emerge as the foundation of access control for open enterprise networking composed of intranet and Internet technologies. BorderManager includes a set of border services: Proxy Cache Server, Novell IP Gateway, Internet Security Services, and Virtual Private Network (VPN).
Proxy Cache technology enables a browser to direct requests at the Proxy Cache, which first determines the client's access rights and then looks for the requested object in its cache. If the object is already in cache, it is returned to the requesting browser at lightning speed. If the Proxy Cache doesn't have the requested object, the Proxy Cache obtains the object from its source, stores a copy in its cache, and then returns the object to the requesting browser. Newly cached objects can be shared among many browsers. This process reduces traffic to the object's source, reduces workload on the source's host platform, and delivers the object to other browsers at cached speeds.
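The request flow just described (access check, then cache lookup, then fetch-store-return) can be sketched as follows; check_access and fetch_from_origin are illustrative placeholders, not Novell APIs:

```python
# Sketch of the proxy-cache request path: verify access rights first,
# serve from cache on a hit, otherwise fetch from the origin server,
# store a copy, and return it. Hypothetical helper functions throughout.
cache = {}

def check_access(user, url):
    # Placeholder policy: everyone may read everything.
    return True

def fetch_from_origin(url):
    # Stand-in for an HTTP GET to the origin Web server.
    return f"<html>content of {url}</html>"

def proxy_request(user, url):
    if not check_access(user, url):
        return None, "403 Forbidden"
    if url in cache:
        return cache[url], "cache hit"
    body = fetch_from_origin(url)
    cache[url] = body   # newly cached object is shared by later clients
    return body, "cache miss"

_, status1 = proxy_request("alice", "http://example.com/")
_, status2 = proxy_request("bob", "http://example.com/")
print(status1, status2)  # cache miss cache hit
```

Note that the second client's request never reaches the origin server, which is precisely the traffic-reduction effect described above.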
Web servers normally provide aging information to their clients (browsers) indicating how long pages should be cached. The Novell Proxy Cache uses the same cache aging information provided by the Web server. Even for sites that dynamically generate HTML pages, the HTML text is only a small part of the transmitted data. The majority of the data consists of static information such as images and Java applets (which are cacheable). The Proxy Server provides additional mechanisms for fine-tuning aging policies.
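The freshness test implied by server-supplied aging information reduces to comparing an expiry timestamp against the clock. A minimal sketch, assuming expiry times have already been parsed out of the server's headers:

```python
# Sketch: honoring server-supplied cache aging. An entry is fresh while
# its expiry timestamp lies in the future; stale entries must be
# revalidated or refetched. Structures are illustrative only.
import time

def is_fresh(entry, now=None):
    """entry = (body, expires_at); fresh if expiry is in the future."""
    now = time.time() if now is None else now
    return entry[1] > now

# Server indicated the object may be cached for one hour.
entry = ("<html>...</html>", time.time() + 3600)
print(is_fresh(entry))  # True
```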
The proxy promises dramatic improvements in effective access through:
Minimizing Internet bandwidth consumption by filling outbound web client requests from shared caches on the intranet/Internet border (web client accelerator)
Reducing web server load by filling incoming Internet requests from shared caches on the intranet/Internet border (web server accelerator)
Hierarchically replicating information across intranet/Internet domains to minimize request latency (network acceleration / ICP hierarchical caching)
Minimizing bandwidth consumption by having the local proxy reject requests for unavailable Internet resources (negative caching)
Filling requests when the source is off-line (high availability)
Ultra-efficient cache performance and scalability from blending the best performing file and communication engines
In addition to these crucial performance advantages, the proxy plays an important role in addressing the dimensions of access control detailed earlier. In conjunction with the proxy cache, other border services (such as the IP/IP and IP/IPX gateways) provide a sophisticated implementation of access control. Both outbound and inbound rules combine to determine access control policy. (The term outbound pertains to traffic leaving the intranet for the Internet; inbound means the opposite.)
In its first implementation, the proxy server enables administrators to define outbound application-level access control policy. The circuit-level proxy and packet filter implementations enable administrators to define both inbound and outbound access control policies at those layers. The term "firewall" is commonly applied to these access control mechanisms, either separately or in combination (for instance, firewall may mean a packet filter, or a packet filter combined with a circuit-level proxy, or the combination of packet filter, circuit-level proxy, and application-level proxy).
Examples of IntranetWare BorderManager access control include:
Administrators can restrict intranet clients from accessing URL sites through the Internet protocols HTTP, FTP, Gopher, and SSL. Through NDS, these restrictions can be conveniently applied to users or groups globally defined within the intranet (application-level proxy, outbound access control).
Administrators can restrict intranet clients from downloading executables through the intranet firewall. These restrictions can be conveniently applied to users or groups (application-level proxy, outbound access control).
Administrators can configure access control policies that discriminate among Internet content based on industry-standard ratings such as PICS, as well as third-party criteria such as CyberPatrol (application-level proxy, outbound access control).
Administrators can restrict intranet clients from establishing any session with Internet hosts. These restrictions are specified according to IP packet header information: source and destination addresses, protocols, and port numbers (circuit-level proxy, outbound access control).
Administrators can restrict intranet clients from sending packets through the firewall to Internet hosts. These restrictions are specified according to IP packet header information (packet filter, outbound access control).
Administrators can restrict Internet clients from establishing HTTP, FTP, Gopher, and SSL sessions with intranet hosts. These restrictions are specified according to IP packet header information (circuit-level proxy, inbound access control).
Administrators can restrict Internet clients from sending packets through the firewall to intranet hosts. These restrictions are specified according to IP packet header information (packet filter, inbound access control).
In the first release, circuit-level and packet filter access control are based on conventional IP host identity (source and destination addresses, protocols, and port numbers). Intranet managers may browse the NDS tree to select host identity information (based on user or group) for configuring circuit-level access control.
Future releases will extend access control to support directory identity beyond the application-level proxy into the circuit-level and packet filter dimensions. Similarly, as global directory standards become commonly deployed (NDS, LDAP, X.500), inbound access control may take advantage of global identities.
IntranetWare Proxy Services extend the policy dimension of application-level outbound access control in a fundamental way. Resource identity, consumer identity, and access policy may be specified universally, rather than as host-centric characteristics. Identity is normalized with respect to the host platform. A relatively small collection of policies can be applied universally, making it practical to verify access control policy.
In the future, Novell will introduce other application-level proxies to provide sophisticated access control for services such as audio, multimedia, teleconferencing, and so on.
Managed Topologies: Channeling the Flood of Internet Traffic
Introducing IntranetWare BorderManager into the intranet enables administrators to establish control points within their open environments. (The IntranetWare proxy cache's vigorous support of the Internet standards HTTP, FTP, Gopher, and SSL makes it possible to effectively cache content from any vendor's web server.)
In Figure 3, content publishers isolate Web servers on a non-routable intranet segment. The Web servers are configured to accept traffic from the BorderManager only. Intranet clients can only access these Web services through the proxy (whose address is the publicly advertised access point for these Web servers). All intranet clients must go to the BorderManager to fill their requests.
In this scenario, the BorderManager grants access to Web resources based on NDS directory identity. Access may be universal, constrained by group, or limited to a few individuals. For example, the Legal department could post sensitive information, conveniently restricting its availability to the (minimal) necessary audience. Non-directory authenticated clients can still use the proxy as an application gateway. However, access control defaults to conventional host identity mechanisms.
Figure 3: Managed intranet web access.
In this type of configuration, web servers can be inexpensive, underpowered host platforms. The service speed is determined by the performance of the cache platform and the speed of the LAN connecting clients with the proxy.
Figure 4 places a BorderManager at the intranet/Internet border. (The name "border services" is derived from this configuration.) Intranet clients get LAN-based, cached-performance accessing static information from the Internet. Intranet managers may conveniently determine access to Internet resources through NDS-based administration.
Figure 4: Managing the intranet/Internet border.
Figure 5 details a configuration where BorderManagers create a Virtual Private Network (VPN) through which local intranets form an enterprise network.
Figure 5: Managing the Internet as a Virtual Private Network (VPN).
NDS-style administration determines who has access to which VPN channels. As in the previous examples, these BorderManagers determine which users may access information from the Internet. Finally, the BorderManagers provide cached access for their local intranets.
Figure 6 illustrates how the BorderManager may be deployed as a firewall.
Figure 6: Intranet firewall configuration.
In this case, Web content is managed and accessed safely within the intranet. The BorderManager has privileged access to the web server for Internet clients. (No Internet traffic can penetrate this BorderManager boundary.)
A single BorderManager may be configured to implement both inbound and outbound access control policy. In Figure 6, the BorderManager may safely provide proxy caching for intranet clients requesting content from Internet servers while also playing the role of firewall.
Figure 7 illustrates how a corporation may deploy a hierarchical collection of BorderManagers to improve performance and access control throughout the enterprise.
Figure 7: Enterprise control points for accessing intranet/Internet services.
In this configuration, the corporate cache at the top of the hierarchy is the primary cache for information inbound from the Internet. Each building has its own cache; these building caches are children of the corporate (parent) cache. Cache misses at the building level cause requests to be forwarded to the parent (corporate) cache, which may either return information from its own cache or fetch it from the origin Web server out on the Internet. Similarly, individual departments within each of the buildings have their own caches. They may request service from their parent (or peer) caches as necessary.
In addition to being caching points in the enterprise topology, these BorderManagers are also control points. The uppermost border server may implement the most general rules for Internet access. For example, this server may filter requests for content from entertainment sites. Each border server down the hierarchy may focus access control more tightly, as determined by department policy. Remember, policies and identities are global in this administrative implementation, so managers need to specify only a relatively small number of rules to manage the whole network of border services.
Caches cooperate with one another through the Internet Cache Protocol (ICP). Administrators define a hierarchical cache topology by designating the peers and parents of each cache. When a proxy fails to locate a request in its own cache, it may forward the request to its peers. This is very useful in cases where peer caches are close (in response time) to the requesting proxy. Similarly, a proxy may request that its parent return the request from its parent cache. The subtle difference between parent and peer is that should the parent fail to locate the request in its cache, it may proceed to locate the request in either its peer or parent caches, proceeding hierarchically up the chain until the request is sent to the origin Web server (if it cannot be found in any cache). Peer caches simply return the requested object or notify the requestor of a cache miss.
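The parent/peer distinction can be sketched in a toy model: peers answer hit-or-miss only, while a parent that misses recurses toward the origin server. The class and names below are illustrative and do not model the actual ICP wire protocol:

```python
# Sketch of ICP-style hierarchical cache resolution. Peers are consulted
# hit-or-miss; a parent that misses continues up the chain, ultimately
# fetching from the origin server, and each cache stores the object on
# the way back down. Hypothetical structures only.
class Cache:
    def __init__(self, name, parent=None, peers=()):
        self.name, self.parent, self.peers = name, parent, list(peers)
        self.store = {}

    def lookup(self, url):
        if url in self.store:
            return self.store[url], self.name
        for peer in self.peers:           # peers: hit or miss, no recursion
            if url in peer.store:
                return peer.store[url], peer.name
        if self.parent:                   # parent recurses up the hierarchy
            body, source = self.parent.lookup(url)
        else:
            body, source = f"body of {url}", "origin"
        self.store[url] = body            # cache the object on the way back
        return body, source

corporate = Cache("corporate")
building = Cache("building", parent=corporate)
dept = Cache("dept", parent=building)

_, src1 = dept.lookup("http://example.com/a")   # miss all the way up
_, src2 = dept.lookup("http://example.com/a")   # now a local hit
print(src1, src2)  # origin dept
```

After the first request, every cache on the path holds a copy, so subsequent requests anywhere in the hierarchy are satisfied closer to the consumer.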
Distributed Access Control: The Only Practical Choice
The evolution of intranet/Internet technology and standards leaves little doubt that content will eventually uncouple from access control, and that host-centric identities and policies will give way to their universal counterparts. Universal connectivity mediated through effective, universal policy implementation is the great attractor towards which intranet/Internet technology is moving. This is the prime user interest regarding networks (intranet or Internet). The crucial question is: How will this transition take place? What are the major obstacles affecting the course of this transition?
Host-centric Web server access control is a curse. The controlled content on these servers cannot be cached. Any request for privileged information must be serviced by the origin Web server, negating the performance benefits of caching. Vendors who prefer this arrangement, not surprisingly, embrace host-centric identities, leaving their customers to grapple with expensive policy management headaches. The solution to this headache is a sophisticated directory implementation.
A directory is a database of key/value associations. In its simplest form, a directory may be used to store email addresses where the key/value associations are user name/email address pairs. However, this directory concept may be generalized to become the essential administrative core for entire networks where the directory is a database of network service names (key) and their service address (value).
Novell Directory Services (NDS) implements a sophisticated vision of directory services, extending the directory paradigm in several important dimensions. NDS is an object-oriented, global, replicated administrative repository. The more obvious use of that repository is to implement an intranet naming service but not a flat, host-domain-oriented naming server. NDS implements a naming service that mirrors the hierarchical structure of the intranet itself.
The NDS name service encompasses all servers and resources in the network, defining the name space (syntax and semantics) for the entire network. A name space is a set of rules governing how network users and resources are named and identified. NDS implements an n-level hierarchical name space instead of the traditional, flat name space found in host-centric administrative contexts. A hierarchical object name resembles a complete file name in a hierarchical file system.
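Resolving such a hierarchical name resembles walking a path in a file system. A toy sketch, with an invented tree and NDS-style least-significant-part-first naming:

```python
# Sketch: resolving a hierarchical directory name by walking a tree,
# in contrast to a single flat, host-centric lookup table. The tree
# contents and the sample name are invented for illustration.
tree = {"ACME": {"Engineering": {"WebTeam": {"jsmith": "user-object"}}}}

def resolve(name):
    """Resolve a dotted hierarchical name, least-significant part first,
    e.g. 'jsmith.WebTeam.Engineering.ACME'."""
    node = tree
    for part in reversed(name.split(".")):
        node = node[part]
    return node

print(resolve("jsmith.WebTeam.Engineering.ACME"))  # user-object
```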
NDS is a distributed database because pieces of the database are dispersed among directory servers according to operational objectives such as network latency, pattern of use, access time, and so on. These pieces are called partitions; they save server disk space since individual servers are not forced to store the entire directory. Furthermore, these NDS partitions may be replicated on different servers, providing fault tolerance and low-latency access to network resources. Cooperation among directory servers allows users to access the directory as a single resource from anywhere within the intranet. The client (human or process) need not be aware of the directory's partition topology.
NDS is a foundation service for IntranetWare. It provides the name service used to translate network or resource names to network addresses. Administrators can modify attributes of directory objects (such as the service address of a network resource) without any user being aware of the change. The updated resource is simply accessed by its name.
NDS is the natural place to store and manage meta-content associated with the proxy cache itself and Web objects stored in the cache. For example, identity and authentication information for users and groups, when stored in NDS, are accessible everywhere within the intranet. Administrators define policies for proxy caches (such as default time-to-live or site restriction values) as NDS objects and apply those policies uniformly to multiple proxies as easily as they do for an individual proxy. This is a classic example of uncoupling meta-content from content, as described earlier. Meta-content (such as access control lists) is free to replicate throughout the intranet independently of content (such as HTML pages), which itself is free to move throughout the cache topology. Global policies are consequently very easy to define and enforce uniformly. Specializing access policy is similarly simple, as administrators need only relate NDS proxy objects to more specialized access policy objects.
Enhanced Effective Access: Accelerating Intranet Services
The danger in introducing middle-tier points of control within the intranet is the potential for negatively affecting performance (effective access). In reality, the opposite is true. Ultra-efficient proxy cache performance and scalability are produced by blending Novell's industry-leading file service mechanism with the best communication engine available. Research indicates that in many cases Novell's Proxy Cache begins returning requested objects to browser clients even before other vendors' web servers can make a context switch to service the task. This extraordinary performance enables BorderManagers to support sophisticated access control and content services on the Internet border and throughout the intranet.
Performance comparisons demonstrate the ability of these border service technologies to turbocharge intranet Web services. As shown in Figure 8, Novell's Proxy Cache and Border Services are three to 10 times faster than other solutions. This is consistent with Novell's heritage of combining high performance with industry-leading access control.
Figure 8: Proxy performance measurements for Novell, Microsoft, and Netscape.
The server configuration for the Netscape and Microsoft performance comparison was:
Compaq Prolinea 4000 (Pentium Pro 200)
Four 100-Mbit LAN cards
Microsoft NT 4.0, Service Pack 2.0
The Novell server configuration for the same hardware ran IntranetWare 1.0 with service pack 2.0.
Ninety-six clients participated in the performance comparison; the client configuration was:
Microsoft Windows 95
The performance analysis consisted of client requests for HTML files ranging in size from 256 bytes to 128KB.
This technology enables administrators to implement sound access control policies without penalizing users with poor response times. BorderManager also has the effect of reducing the workload on WAN connections and Web server hardware. Administrators can consolidate or even replace expensive Unix-based Web server platforms with industry-standard hardware while providing turbocharged intranet/Internet access.
The unique character of the IntranetWare communication engine enables it to dramatically improve on the performance and scalability found in other vendors' implementations. The IntranetWare proxy cache is implemented as a multi-threaded, non-blocking service (as opposed to process). The distinction between process and service is fundamental. A process is a relatively heavy-weight entity with built-in overhead (context-switch time and memory footprint). IntranetWare is an integrated set of kernel services, a finite state machine built of automata for handling sophisticated collections of services such as file, network, proxy, and so on. IntranetWare spawns transactions to service states in the finite state machine, where work-to-do threads are bound to transactions during state processing with dramatically less overhead than in process-based context switching. The consequence is that IntranetWare proxy service is at least an order of magnitude more efficient in using threads than application-server based implementations.
Remarkably, Novell Proxy cache services provide important acceleration advantages even in the case where there is only one client (for example, a home office configuration). The reason for this acceleration is that Web browsers are implemented to receive data from Web servers in small 4K or 8K blocks. Worse yet, the receive window is opened and closed as each block is requested, and the browser takes time to paint data on the screen. This is slow enough by itself. Moreover, browsers usually communicate directly with origin Web servers over slow (high-latency) Internet connections, so single-client service is very slow.
In the case where a proxy server is deployed between the client and the origin Web server, Novell proxy cache services open a 64KB receive window between the proxy cache server and the origin Web server. The window remains fully open for the entire transmission, even as the proxy begins to update the requesting client. (The proxy cache can read data from the origin server in real time.) By the time the browser opens its receive window for the next block of data, it's almost certain that the information is already cached at the local proxy. And the browser/proxy dialog occurs at local intranet speeds.
In the normal course of browsing pages, the user may select a new link before the current page is completely fetched. At this point, the browser stops fetching the first page and immediately requests the page associated with the new link. The first page remains partially fetched, and may need to be refetched entirely if the user returns to it. Novell Proxy Cache back fills the cache with entire pages even though the user (browser) may continue to read other pages. Should the user return to a partially fetched (on the client) page, the proxy can supply the complete page immediately from its local cache.
With the advanced proxy caching technology employed in Novell's Proxy Cache, organizations can achieve tremendous performance enhancements. By caching data on a LAN-based proxy server, the proxy cache reduces the request load across the WAN, typically by more than 60 percent. That means organizations can realize two and a half times as much throughput over the same physical WAN connection, without purchasing expensive, higher bandwidth WAN connections. Because proxy caching lowers and balances loading on both Internet and intranet Web servers, organizations also save the cost of adding Web servers to handle increased demand.
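The arithmetic behind these figures: if caching removes a fraction r of requests from the WAN link, the same link supports 1/(1 - r) times the effective load, so a 60 percent reduction yields the two-and-a-half-times figure cited above:

```python
# The throughput arithmetic behind the 60% / 2.5x claim: removing a
# fraction hit_rate of requests from the WAN link multiplies the
# effective capacity of that link by 1 / (1 - hit_rate).
def effective_multiplier(hit_rate):
    return 1.0 / (1.0 - hit_rate)

print(effective_multiplier(0.60))  # 2.5
```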
IntranetWare BorderManager is a key product in Novell's strategy to make open systems more manageable. Proxy Cache Server, a major component of BorderManager, handles requests from the browser at fast speeds, reducing traffic to the object's source and reducing workload on the host platform. BorderManager's other components, IP Gateway, Internet Security Services, and Virtual Private Network, all work together to provide an excellent foundation for access control of open enterprise networking.
* Originally published in Novell AppNotes
The origin of this information may be internal or external to Novell. While Novell makes all reasonable efforts to verify this information, Novell does not make explicit or implied claims to its validity.