Removing IPX from WAN Segments During an Upgrade to NetWare 5: A Case Study
Articles and Tips: article
01 Sep 1999
Follow along as a Novell Consultant takes you through the process of using SLP and the Compatibility Mode Driver to phase out the IPX protocol across your network.
A number of general migration strategies are available to achieve the final goal of a Pure IP network with NetWare 5. These strategies are summarized in the article entitled "Migration Strategies for Upgrading IPX and NetWare/IP Networks to Pure IP" in the June 1999 issue of Novell AppNotes. However, the migration strategy chosen by an organization is not always a purely technical decision. It is heavily influenced by the organization's network infrastructure, its functional requirements, its workflow habits, and its general organizational structure.
This AppNote illustrates a case study in removing IPX from the WAN links of a fictional company, Migration Enterprises, Inc., to facilitate more efficient use of WAN link bandwidth. This case study follows the design and implementation of the migration strategy through the following steps:
Determining an appropriate migration strategy after considering the functional requirements and the documentation of the physical network
Planning and setting up the Service Location Protocol (SLP) infrastructure required to facilitate the removal of IPX from the WAN segments, given the organization's functional requirements
The placement and impact of the Migration Agents (MAs) on the network
This AppNote is intended for those who are planning a migration from IPX to Pure IP with NetWare 5. Readers should already be familiar with SLP as defined in IETF RFC 2165. It is also helpful to have an understanding of Novell's Compatibility Mode Driver (CMD), as portions of both technologies will be discussed in detail where necessary to provide insight into the design process.
Migration Enterprises' IPX to Pure IP Migration Strategy
Migration Enterprises, Inc. is an organization whose main business function is to propose IPX to Pure IP migration strategies for clients who want to take advantage of the TCP/IP features of NetWare 5. And what better way to begin providing this service than to propose and build a solution for their own internal network.
As an organization, Migration Enterprises has roughly 300 offices located around the country. Each office maintains its own hardware infrastructure and IT services, which include at least one NetWare file server per location. Each remote office site is connected to the main corporate network through a set of WAN links. This infrastructure leads to a very distributed administration environment where local administrators in each remote office generally want to have autonomous control over their portion of the network.
Migration Enterprises has established central IT groups that are responsible for setting standards and proposing technology direction for the entire organization. One group serves a systems administration function, being responsible for such things as the organization's single NDS tree. Another group is solely responsible for the data communications of the enterprise network, and for the physical connectivity including the routers and switches.
The main problem plaguing Migration Enterprises is that the WAN links between remote sites and the corporate network are being needlessly cluttered with RIP/SAP information, as IPX traffic is passed across the WAN links untouched. The central IT groups realize that something needs to be done to address this, and NetWare 5 appears to be a good solution. The big question to be answered is this: Given the unique characteristics of their organization and network, what is the best strategy for accomplishing this migration?
Determining the Migration Strategy
The distributed nature of Migration Enterprises' network administration does not lend itself well to the dictation of and strict adherence to standards. While standards can be proposed, they cannot necessarily be enforced company-wide. However, the driving force behind the migration strategy comes from the central IT groups. Although they do not have the authority or backing to dictate desktop standards (including client versions) or LAN protocol standards, they can control which protocols traverse the WAN links, and they do control the key NetWare servers that make up the overall network infrastructure.
Again, the main concern for the central IT groups is the bandwidth consumed on the WAN links by the RIP/SAP traffic required to advertise IPX-based services on the network. This plays right into the areas the central IT groups control. They can choose a migration strategy that will eliminate the RIP/SAP traffic from the WAN links, and implement this strategy without impacting the distributed departmental administrators. The migration strategy they chose is to remove IPX from the WAN segments using NetWare Migration Agents (MAs).
It should be noted that the removal of IPX from the WAN segments is only a first step in the process of achieving a Pure IP network. Once IPX has been eliminated from the WAN segments, Migration Enterprises' regional offices can begin converting their local LANs to Pure IP. Once all regional offices have accomplished this transition, the MAs required to provide the connections across the WAN links will no longer be needed.
Listing the Functional Requirements
Once the overall migration strategy has been chosen, the functional requirements for Migration Enterprises must be gathered. These functional requirements will greatly impact the actual design and implementation of the migration strategy. Some requirements will dictate specific methods of implementation, as will be explained.
The central IT groups at Migration Enterprises met to draw up a list of functional requirements for the migration. Their requirements were documented as follows:
The visibility of services must remain intact. Currently, all regional offices can see all of the available NetWare servers on the network. After IPX is removed from the WAN segments, all NetWare servers must remain visible from all locations.
The amount of static configuration of servers required to implement the migration strategy must be minimized. The central IT groups do not want to be responsible for maintaining individual settings on hundreds of servers across the enterprise.
Multicast will not be enabled end-to-end on the enterprise network. The group responsible for the data communications is rightly concerned about opening up multicast on the network. Multicast may be enabled for portions of the network, but end-to-end use of multicast is not an option.
Migration Enterprises' Physical Network
With the functional requirements established, the next step is to look at the physical layout of the network. This will provide more insight into the actual design for the migration strategy at Migration Enterprises. The physical network will dictate the placement of the infrastructure servers and help the strategy implementors understand the new traffic flow on the network.
Fortunately, Migration Enterprises has a current map of their network infrastructure, which makes the design process much easier. If a network map had not been available, one would have had to be created.
As mentioned before, Migration Enterprises has about 300 regional offices that connect to the corporate network through WAN links. The speed of these WAN links varies depending on the size of the regional office and the traffic patterns associated with that office. No link is slower than 56Kbps, and some links between regional offices and the corporate network are of T-1 speed. The links are managed through a Switched Multimegabit Data Service (SMDS) cloud. Each SMDS cloud services 25 regional offices.
There are three main sites where these links return from the regional sites through the SMDS clouds. These main sites (called POP sites) organize the SMDS clouds and are in turn redundantly connected to one another. No POP site has more than four SMDS clouds connected to it. These backbone connections between POP sites are T-3 links. Figure 1 shows the general layout of the Migration Enterprises network as it has been described.
Figure 1: Migration Enterprises network diagram.
From this network diagram, it is easy to see how network traffic flows between regional office sites. At this point, it is not important to know specifically which offices are connected to which SMDS clouds; however, the groupings of the clouds themselves are important. Network traffic will be isolated based upon the POP to which the SMDS cloud is connected.
From the diagram, the WAN links targeted for IP-only are also clearly seen. IPX can still exist on the LAN at the regional offices; however, all WAN links shown in the diagram are to be converted to IP-only. Thus, all SMDS clouds will be routing IP-only traffic, and IP-only traffic will be passing from POP site to POP site.
Migration Strategy Design Plan
At this point, with the goal of the first step in the migration strategy decided, along with the list of functional requirements and the network diagram, a design plan can be formulated. For Migration Enterprises, the design plan is going to incorporate the use of Novell's Compatibility Mode Driver (CMD) and the Service Location Protocol for dynamic service discovery.
Figure 2 shows the general placement of MAs on the network, along with the placement of the Directory Agents (DAs) required for SLP. Each regional office will have an MA acting in Backbone Support mode (not shown in Figure 2). Each POP site will have a DA and a number of MAs in Backbone Support mode to facilitate the removal of IPX from the WAN links.
Figure 2: Placement of MAs and DAs in multicast "islands."
Multicast will also be enabled on a per-POP basis. This means that multicast will be supported between the regional offices through the SMDS cloud up to the POP site itself. Multicast packets will not be forwarded from POP site to POP site. In effect, each POP site and its associated SMDS clouds will become a multicast "island." These islands are indicated by the shaded areas in Figure 2.
This migration strategy addresses the needs defined in the functional requirements as follows:
The use of MAs in Backbone Support mode will provide for the end-to-end visibility of IPX-based services on the network. Users in one regional office will be able to see the NetWare servers in all other regional offices.
Using multicast will minimize the amount of manual configuration of servers required. The use of multicast with respect to SLP will provide for the dynamic discovery of MAs on the network.
Multicast will not be enabled end-to-end on the network. Multicast packets will be contained per POP site. No multicast packet generated on the network can pass beyond its local "island."
This design strategy also impacts the technology which must be deployed to make it possible. For example, the use of MAs will require an SLP infrastructure. Because multicast is not enabled end-to-end on the network, DAs will be required as well.
Now that the general migration strategy has been developed, it is important to delve into the technical details surrounding the implementation of the DAs and MAs. These will become an integral part of the communications infrastructure at Migration Enterprises.
Building the SLP Infrastructure
SLP is one of the primary methods for service discovery and resolution in a NetWare 5 environment. For most NetWare 5 implementations, the use of SLP is requisite. Certain modules have inherent dependencies upon SLP. The CMD module is one such module. SLP will also be required as the service resolution mechanism to allow the MAs to dynamically discover each other on the network.
Strictly speaking, DAs are not a required component of SLP. SLP user and service agents can communicate with each other even when DAs are not present; however, this requires the use of multicast packets end-to-end on the network. Because Migration Enterprises is not going to support multicast end-to-end on their network, another method to facilitate user and service agent communication is required. This is where the DA comes into play and why it is required to complete their migration strategy design.
SLP Infrastructure Overview
The SLP infrastructure at Migration Enterprises is going to consist of user agents, service agents, and DAs. Because the initial purpose of the migration strategy is to remove the IPX protocol from the WAN links, only servers contributing to that end goal will need to participate in the SLP infrastructure. This means that initially only the DAs and MAs will make use of the SLP infrastructure. As more NetWare 5 servers are brought onto the network, they too can make use of it.
The central IT group responsible for the NDS tree has decided to create a separate Organizational Unit (OU) just below the Organization object to hold all SLP-related information. This OU will hold the NDS objects representing the DAs and the Scope Unit objects that will contain the SLP service objects. For the purposes of this migration strategy, only one SLP scope will be used in the Migration_Enterprises NDS tree.
The use of scopes is generally required for large scale environments where the number of SLP services available on the network is over 1000. For the initial migration strategy, one scope will be sufficient; however, as the presence of NetWare 5 grows at Migration Enterprises, new scopes will surely need to be created to handle the number of newly available SLP services. (Scoping strategies and techniques will be discussed in a future AppNote.)
For the purposes of this case study, the name of the SLP scope will be ME_SCOPE. While this will require a manual configuration of all servers participating in the SLP infrastructure, it is being done for future compatibility. When SLP version 2 is eventually implemented, it will make for easier compatibility between the current version of SLP (version 1) and the future version as defined in RFC 2608.
The strategy behind the planned SLP infrastructure is to keep SLP traffic local to the multicast island of the Migration Enterprises network, while still providing end-to-end visibility of all SLP-enabled services on the network. It will be the responsibility of the DAs to exchange information to make the SLP services from one multicast island available to the other multicast islands on the network. This may seem a bit cryptic right now, but once the inner workings of the Directory Agent are explained, it will make more sense.
Directory Agent Basics
Although the DA is an optional component of the SLP infrastructure, it will probably be the most useful part of Migration Enterprises' network environment. SLP is a passive protocol driven by User Agent requests and Service Agent registrations. When a User Agent sends a query to the network, all Service Agents holding pertinent information must respond. Thus, the User Agent obtains service information from many different nodes on the network. With a DA on the network, the User Agent will get service information from a limited number of nodes, thus reducing the amount of network traffic.
Note: SLP makes use of TCP and UDP packets addressed to port 427. SLP requests can be identified in a packet trace through this unique characteristic.
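Because all SLP traffic shares this port, it is easy to isolate in a capture. As an illustration, a filter like the following could be used (tcpdump syntax shown; any analyzer's equivalent port filter works just as well):

```
tcpdump -n '(tcp or udp) and port 427'
```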
The DA basically acts as a central repository of SLP service information. All Service Agents that come up on the network register their list of available services with the DA. All User Agents making use of DAs direct their requests to the DA instead of making the request through a general multicast packet. This makes for more efficient use of the network bandwidth because packets tend to be directed unicast packets instead of multicast. Also, traffic can be isolated based upon the physical network topology instead of having to enable multicast packets end-to-end on the network.
In the presence of a DA, the communications behavior of User and Service Agents changes drastically. Instead of relying heavily upon multicast for the discovery and resolution of services, they rely upon the DA for more efficient communications. At this point, the question of how the User and Service Agents discover the DAs needs to be addressed.
Discovering Directory Agents
User and Service Agents can discover DAs in one of three ways:
Through multicast addressed packets
Through DHCP configuration information
Through static configuration
Multicast Discovery. The first method of DA discovery is through multicast addressed packets. If multicast is enabled on a network, this is a good option. Multicast traffic for DA discovery is minimal: each User and Service Agent multicasts once upon loading, and each DA multicasts once upon loading.
When a DA is loaded on a NetWare 5 server, it sends out an IGMP multicast request to join the DA multicast group. The address for this multicast group is 224.0.1.35. Joining this group allows routers to forward DA discovery packets to the DAs on the network. Joining the multicast group is a required function if User and Service Agents are going to discover DAs on the network through multicast.
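The group join itself is standard IP multicast machinery. As a sketch (Python sockets stand in for the NLM's internals; this is illustrative, not Novell's implementation), the join request targets 224.0.1.35, the IANA-assigned svrloc-da group:

```python
import socket
import struct

SLP_DA_DISCOVERY_GROUP = "224.0.1.35"  # IANA svrloc-da group (RFC 2165)

# The ip_mreq structure a host hands to IP_ADD_MEMBERSHIP when joining
# the DA discovery group on the default interface (0.0.0.0).
mreq = struct.pack(
    "4s4s",
    socket.inet_aton(SLP_DA_DISCOVERY_GROUP),
    socket.inet_aton("0.0.0.0"),
)

# A real DA would now open a UDP socket and join the group:
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
# which triggers the IGMP membership report seen in the packet trace.
print(len(mreq))  # 8 bytes: group address + interface address
```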
Figure 3 shows a sample packet trace from loading a DA on a NetWare 5 server.
Figure 3: Trace of packets sent out from the DA upon loading SLPDA.NLM.
The IGMP packets for the multicast group registration are shown in addition to a multicast DA advertisement. Packet 3 is a periodic packet sent out by the DA to ensure the activation of the DA on boxes running a Service Agent. This is called the "directory agent heartbeat" packet. The interval for distribution of this packet is configurable through the Server Parameters option of MONITOR.NLM.
When SLP User and Service Agents load on servers configured to discover DAs through multicast, they will perform a multicast request to the DA multicast address (224.0.1.35). Any DAs on the network will respond to the request using a unicast packet. Once the DA has been "activated" by the User or Service Agent, that DA is used for all SLP service requests. The User or Service Agent will no longer use multicast to discover services on the network.
Figure 4 shows the general traffic pattern on the Migration Enterprises network when migration agents are configured to discover DAs through multicast.
Figure 4: Dynamic discovery of DAs on the Migration Enterprises network.
From the MA perspective, the DA that first responds to the multicast request for a given scope will be the primary DA for that scope. This means that all future SLP queries will be sent to that DA through a unicast SLP packet. In the event the primary DA for a User Agent is unavailable, the User Agent will try the next DA in the list for that particular scope.
The diagram in Figure 4 illustrates one multicast island on the Migration Enterprises network. In this example, there are two DAs in the multicast island (presumably located at the POP site). Because the multicast packet from the MA reaches both DAs, both respond to the migration agent. Thus, the MA builds its list of known DAs on the network. The first response the MA receives will be the primary DA. In the event that DA is unavailable, the MA will try the other DA in the DA list.
Migration Enterprises will clearly need to use multicast discovery of DAs to achieve its functional requirements. Through this multicast discovery within the local multicast island, the MAs will be able to dynamically locate the DAs and thus retrieve a list of other MAs on the network, which is required for removing IPX from WAN segments.
DHCP Discovery. A second method by which information about DAs can be disseminated is through the Dynamic Host Configuration Protocol (DHCP). Two DHCP option tags can be used to dynamically deliver DA configuration information: option 78 allows the DHCP client to obtain DA address information; option 79 allows the DHCP client to obtain SLP scope information.
Once the DA address is configured through DHCP, the agent should not use multicast to discover DAs on the network. This is because the IP address of the DA is already present and has been "discovered." However, the preferred discovery mechanism can be set on a per-server basis, as will be discussed later in this AppNote.
Both NetWare clients and NetWare servers can use DHCP to obtain directory agent and SLP scope information. The client will obtain an IP address at the same time. The NetWare server will make a DHCP request to a DHCP server to retrieve option tag information, but it will not get an IP address from a DHCP server. NetWare servers still must have statically configured IP addresses.
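The wire format of these two option tags is defined in RFC 2610. A minimal Python sketch of how a DHCP server would encode them follows; the function names and addresses are illustrative, not part of any Novell API:

```python
import socket

def slp_da_option(addresses, mandatory=False):
    # DHCP option 78 (SLP Directory Agent), per RFC 2610:
    # code, length, mandatory byte, then one 4-byte IPv4 address per DA.
    body = bytes([1 if mandatory else 0]) + b"".join(
        socket.inet_aton(a) for a in addresses)
    return bytes([78, len(body)]) + body

def slp_scope_option(scopes, mandatory=False):
    # DHCP option 79 (SLP Service Scope), per RFC 2610:
    # code, length, mandatory byte, then the comma-separated scope list.
    body = bytes([1 if mandatory else 0]) + ",".join(scopes).encode("ascii")
    return bytes([79, len(body)]) + body

print(list(slp_da_option(["10.1.1.5"])))  # [78, 5, 0, 10, 1, 1, 5]
print(slp_scope_option(["ME_SCOPE"]))
```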
Migration Enterprises is going to need to use DHCP discovery of scope information if the functional requirements are going to be realized. This is because of the custom scope information (ME_SCOPE). In the event DHCP services are not available on the network, this scope information must be manually configured on all NetWare 5 servers functioning as MAs. This configurable parameter is the SET SLP SCOPE LIST parameter, and it can be set through MONITOR.NLM.
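Where the scope must be set by hand, the console command takes the scope name directly. For the scope used in this case study, it would look like this:

```
SET SLP SCOPE LIST = ME_SCOPE
```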
Static Discovery. The final method of DA discovery is through static configuration. With static configuration, the addresses of the DAs on the network are manually entered so agents can make use of the information. With static configuration of DAs, all multicast traffic can be removed from the network because agents will not multicast to discover DAs, and through the use of DAs, User Agents will not multicast to retrieve SLP service information. They will use directed unicast SLP packets.
Static configuration of agents is accomplished rather simply in the NetWare environment. On each NetWare 5 server is a file called SLP.CFG contained in the SYS:ETC directory. This file is used to hard-code DA information. To specify a DA in this file, use the following format:
DA IPV4, <ip_address_of_DA>
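For example, an SLP.CFG that statically points a server at two DAs might look like this (the addresses are hypothetical):

```
DA IPV4, 10.1.1.5
DA IPV4, 10.2.1.5
```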
Any number of addresses can be contained in the SLP.CFG file. For each DA listed, the Service Agent will register with that DA. However, the User Agent will only use the first activated DA in the list to retrieve SLP service information. Thus, a Service Agent may register with numerous DAs; however, service resolution will only be done against a single DA. Of course, fail-over mechanisms exist so that if the primary DA is not available, the User Agent will look to the next DA configured in the static list.
Currently, it is possible to use fully qualified DNS names in the SLP.CFG file to specify DAs; however, only the first DA specified in the list can be identified in such a manner. This is a limitation of the WINSOCK module on the NetWare 5 server, and it is expected to be fixed with NetWare 5 Service Pack 3.
Looking back at the Migration Enterprises network, static discovery of DAs is going to be required for certain servers. The DAs in the three multicast islands will need to be able to discover one another, but won't be able to use multicast to do so. The DAs on the Migration Enterprises network will need to be statically configured to point to one another to facilitate DA discovery and exchange of SLP information. However, this is minimal static configuration of servers for their network. Of the 300 or so servers on the network, only 3-6 servers will need to be statically configured to discover DAs.
Note: There is a fourth method by which DAs can discover other DAs. This is through NDS. When a DA loads, it will look for other DA server objects located within the same NDS context. If it finds other DAs, it will attempt to activate them. Make sure the latest NetWare 5 service pack has been applied before attempting to use this discovery mechanism.
To recap, Migration Enterprises will actually need to use a hybrid of all three DA discovery options to achieve their functional requirements with this migration strategy:
Multicast discovery is going to be used by migration agents in a multicast island to discover the directory agent(s) local to that island.
DHCP discovery is going to be used to disseminate SLP scope information instead of having to statically configure all servers with the custom scope (ME_SCOPE).
Static configuration of the directory agents on the network is required because of the multicast island constraint. The most stable solution for this case is manual configuration.
By default, a NetWare 5 server will attempt to discover directory agents on the network using the three discussed discovery mechanisms. However, the discovery mechanisms can be controlled on a per-server basis. This is accomplished using the SET SLP DA DISCOVERY OPTIONS parameter.
Setting DA Discovery Mechanisms
How agents discover DAs on the network can be set on a per-machine basis. For instance, one NetWare server might use multicast to discover DAs, while another might be statically configured. It is generally a good idea to have the same discovery mechanism for all agents, but it is flexible depending upon the environment.
To force a NetWare 5 server to discover directory agents through one or more of the three mechanisms, a SET parameter is used. This SET parameter is the SET SLP DA DISCOVERY OPTIONS parameter. It can be set at the server console, or it can be set through MONITOR under the Service Location Protocol option. The values for this parameter are shown in the table below.
Value    Meaning
-----    -------
1        Discover DAs through multicast
2        Discover DAs through DHCP
4        Discover DAs statically
8        Disable dynamic discovery of DAs if DHCP discovery is successful or if static files are present
The SET SLP DA DISCOVERY OPTIONS parameter setting is based upon a bit comparison test. If the least significant bit of the parameter is set to 1, the server will attempt to discover DAs through multicast. If the second least significant bit of the parameter is set to 1, the server will attempt to discover DAs through DHCP. The server can be set to use multiple methods of discovery by performing a bit-wise Boolean OR of the desired values. Thus, if both multicast discovery and static discovery are desired for a NetWare 5 server, the calculation of the value would be as follows:
00000001 Discovery through multicast
00000100 Discovery through static configuration
00000101 Discovery through multicast and static configuration
Converting that number to decimal, the SET SLP DA DISCOVERY OPTIONS parameter would be set to 5 to achieve DA discovery through both multicast and static configuration. The default value for the SET SLP DA DISCOVERY OPTIONS parameter is 15, which enables all forms of discovery listed above.
Because Migration Enterprises is going to use a hybrid solution for the discovery of DAs on the network, the SET SLP DA DISCOVERY OPTIONS parameter is going to be set to 7 for all servers. Migration Enterprises cannot use the default value of 15 because the MA servers on the network are going to need to retrieve information from both DHCP and multicast discovery. The value of 15 indicates dynamic discovery will be disabled if DHCP or static information has been used.
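The bit arithmetic behind these settings is easy to verify. A short Python sketch (the constant names are illustrative, not Novell's):

```python
# SLP DA discovery option bits, per the values listed above.
MULTICAST  = 0b0001  # 1: discover DAs through multicast
DHCP       = 0b0010  # 2: discover DAs through DHCP
STATIC     = 0b0100  # 4: discover DAs through static SLP.CFG entries
NO_DYNAMIC = 0b1000  # 8: disable dynamic discovery if DHCP/static succeeds

print(MULTICAST | STATIC)                      # 5
print(MULTICAST | DHCP | STATIC | NO_DYNAMIC)  # 15 (the default)
print(MULTICAST | DHCP | STATIC)               # 7 (Migration Enterprises)
```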
The SLP Service Registration Process
When DAs are enabled and Service Agents are configured to use the DAs, the Service Agents must register their services with the DA. It is important to understand this service registration process and how its traffic will impact the network. The Service Agent registers all SLP-enabled services loaded on the server with the DA. When the DA receives this information, it sends an acknowledgement to the Service Agent indicating the services have been registered.
When the Service Agent registers a service, it attaches an associated time-to-live (TTL) to the service. This is the lifetime of the service. When the DA receives the service, it looks at the service's TTL and begins a countdown based on the time since the service was registered. Once the TTL gets to zero, the service is presumed to be no longer valid, and the DA will not advertise the service anymore. It is the responsibility of the Service Agent to re-register the service with the DA before the TTL of the service expires. If the Service Agent fails to do so, an interruption in service can occur because the service will no longer be advertised by the DA.
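The countdown behavior can be sketched with a toy cache. This is purely illustrative: the service URL and entry structure below are hypothetical, not Novell's internal format.

```python
import time

# A DA entry records when the service registered and its lifetime (TTL).
# This one registered 4000 seconds ago with the default 3600 s lifetime.
registrations = {
    "service:example///MA1": {"registered": time.time() - 4000,
                              "lifetime": 3600},
}

def active_services():
    # The DA advertises only services whose TTL has not yet run out.
    now = time.time()
    return [url for url, entry in registrations.items()
            if now - entry["registered"] < entry["lifetime"]]

print(active_services())  # [] -- the lifetime expired, so it is not advertised
```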
The default TTL for Novell Service Agents is 3600 seconds (or one hour). This is a configurable parameter that is set on the Service Agent itself. Issuing the following command at the server console will set the TTL for all services registered by that Service Agent:
SET SLP SA DEFAULT LIFETIME = <value>
The value for this parameter can be between 129 and 65535 seconds. Care should be taken in how this value is set. Too small of a TTL setting can lead to increased network traffic as Service Agents will need to contact DAs more frequently to re-register services. Too large of a TTL can lead to the advertisement of services that are no longer available because of a server Abend or other disruption of service.
Service Agents will also register services with all known DAs. Thus, if a Service Agent knows of five DAs, it will place five separate registration requests—one for each DA. While this is good for fault tolerance, having too many DAs for each Service Agent can lead to increased network traffic due to SLP overhead.
When a DA receives a registration request, it will create an entry for the service in its own database. This entry will contain the name of the service, the type of service, the attributes of the service, and the TTL. As mentioned previously, it will send an acknowledgement to the Service Agent that the service has been registered. Figure 5 shows the summarized registration process.
Figure 5: The SLP registration process.
There are some important points to note in Figure 5. First, without a method for DA-to-DA synchronization, the list of services retrieved by an MA on the network is directly dependent upon the DA used for the lookup. If the MA in Figure 5 had looked to DA2 to find a list of SLP services, it would only know about MA 1. Second, after the DA discovery process, all SLP transactions are accomplished using directed unicast SLP packets. There is no multicast in the resolution of services when a DA is used.
Service Agents are supposed to deregister their services when they are no longer available. When a service deregisters with a DA, the DA will no longer advertise the service to User Agents. NetWare servers that are brought down gracefully will send a deregistration packet to the DA to remove their services.
Migration Enterprises is going to use the default TTL value of one hour in the implementation of the migration strategy. This is a fair compromise after weighing the cost of re-registration traffic versus the bandwidth on the WAN links between the regional offices and the POP sites. Users will never really know about SLP services because they will be using IPX-dependent clients. It is only the MAs on the network that are going to be using the SLP infrastructure to discover other MAs for exchanging IPX information.
Understanding Novell's Implementation of the Directory Agent
While the RFC defining SLP is very clear with regard to the User Agent and Service Agent, some areas of interpretation are left open with regard to the DA. For instance, the RFC indicates the methods by which the DA can communicate with the Service and User Agents. However, it does not address the issue of DA-to-DA synchronization. It also does not address the structure of the database used by the DA to store SLP information.
In NDS, Novell already has one of the best object-oriented, distributed databases on the market, and it already takes care of such critical issues as synchronization. It's therefore not surprising that in the development of Novell's implementation of SLP, the choice was made to use NDS as the back end for an SLP object data store and as a method of DA-to-DA synchronization. This fits right into the strategy surrounding NDS and definitely plays to its features and benefits.
In Novell's implementation of the DA, the communication between User, Service, and Directory Agents works as previously described in this AppNote. However, when a DA receives a service registration from a Service Agent, it goes through a process to create an object in its local SLP cache. Additionally, it creates or modifies an object in NDS to update the service. When a DA is loaded, configuration information for that DA is stored in NDS, including which NetWare file server the DA is running on and which scope the DA is servicing.
For each scope defined in the SLP infrastructure, an SLP Scope Unit object is defined within NDS. Also in NDS, each DA object is configured to service a particular set of scopes. When the Novell DA receives an SLP service registration, it checks its local SLP cache. From there, one of two things can happen:
If the service does not exist in the local SLP cache, an entry is created for the service with the attributes defined in the SLP registration packet. The DA then creates an NDS SLP service object in each scope for which it is responsible.
If the service does exist in the local SLP cache, the entry is modified in the local SLP cache. The DA also modifies the associated SLP service object in all of the scopes the DA services.
When a DA modifies the NDS SLP service object, it triggers an NDS synchronization event. The synchronization trigger forces all replicas of the partition containing the NDS SLP service object to exchange information to make the partition consistent. The other DAs on the network see the NDS synchronization event and reread the SLP service information from NDS to update their local SLP caches.
Figure 6 illustrates the registration of an SLP service and the propagation of that information through NDS to other DAs on the network.
Figure 6: The DA-to-DA synchronization process.
The NDS SLP service objects contained in each scope reflect the SLP information for the entire network, even though Service Agents belonging to a scope do not necessarily register with every DA servicing that scope. As seen in Figure 6, the Service Agent only needs to register with one DA. Assuming that all the DAs are in the same scope and all hold a replica of the partition containing the NDS SLP service objects, the other DAs on the network will learn of the service without the Service Agent having to register with them directly via SLP.
This is a method that can be used to isolate SLP registration traffic. Service agents need only register with their local DA. NDS synchronization will take care of propagating the registered SLP service to the other DAs servicing the SLP scope. The result is that any User Agent on the network can retrieve the same list of SLP services regardless of which DA is queried.
When a Service Agent de-registers with a Novell DA, the DA does not remove the service from NDS or from its local SLP cache. It simply sets the TTL of the object to zero. This prevents the DA from advertising the service to User Agents. When the service re-registers itself in the future, the overhead of object creation in cache and NDS is not incurred; the TTL of the object is simply renewed. This method of SLP object management creates efficiency in registration and deregistration of SLP services.
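The cache behavior described above can be modeled in a few lines of Python. This is purely an illustrative sketch of the logic (the class and method names are ours, not Novell's); the real DA also creates or updates the corresponding SLP service objects in NDS on each register and deregister, which is what triggers the DA-to-DA synchronization.

```python
class DirectoryAgentCache:
    """Illustrative model of a Novell DA's local SLP cache behavior.
    Hypothetical names; not Novell's actual implementation."""

    def __init__(self, default_ttl=3600):        # default TTL: one hour
        self.default_ttl = default_ttl
        self.cache = {}                          # service URL -> entry

    def register(self, url, attributes):
        """Create the entry on first registration, modify it thereafter.
        In the real DA, either path also updates the NDS SLP service
        object, triggering NDS synchronization to the other DAs."""
        if url not in self.cache:
            self.cache[url] = {"attrs": dict(attributes),
                               "ttl": self.default_ttl}
        else:
            self.cache[url]["attrs"].update(attributes)
            self.cache[url]["ttl"] = self.default_ttl

    def deregister(self, url):
        """The Novell DA does not delete the entry; it sets the TTL to
        zero so the service is no longer advertised to User Agents."""
        if url in self.cache:
            self.cache[url]["ttl"] = 0

    def advertised_services(self):
        """Only entries with a nonzero TTL are returned to User Agents."""
        return [url for url, entry in self.cache.items()
                if entry["ttl"] > 0]
```

Note that a deregistered service that later re-registers simply has its TTL renewed; the cost of recreating the cache entry and NDS object is never incurred.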
Because of Migration Enterprises' network configuration, the NDS method of SLP service propagation is going to be required. All of the Service Agents (MAs) in a local multicast island are going to register with the dynamically discovered local DAs. The DAs at each POP site will register known services to a common scope object in NDS. Because all of the DAs are depositing their information to a common scope, they will learn about all of the SLP services on the network through the NDS synchronization process.
Impact of the Migration Strategy on the NDS Tree
Because the DAs and the NDS tree are so closely intertwined, the registration of SLP services will directly impact the Migration_Enterprises NDS tree, affecting both the design of the tree and its general performance.
From a design standpoint, each NDS Scope Unit object should be made its own NDS partition. The reason for this isolation is that the Scope Unit object is a very active area of the NDS tree. Each time an SLP service is registered in NDS by a DA, a partition synchronization event is issued. Compounded over the 300 registrations that will occur per hour on the Migration Enterprises network, these synchronization events produce a significant amount of NDS traffic.
In addition to making the scope unit object its own partition, each DA on the network servicing that scope should hold a replica of that partition. This is the only way the DA can efficiently update the SLP service object information in NDS. If a DA is running on a server that does not have a replica of the partition containing the SLP scope it services, it only updates its local cache once per day. This is not an effective strategy for DA deployment.
For the Migration Enterprises network, the ME_SCOPE container object in NDS should be made its own partition. This partition should be replicated only to the DAs servicing the scope at the POP sites. This will minimize the size of the active NDS partition and make the synchronization process more efficient.
While the increased traffic between the POP sites due to NDS synchronization may seem like a lot of overhead, it is much less than the current strategy with IPX flowing freely everywhere. From a WAN link perspective, the only overhead traffic associated with SLP on the slow WAN links in the SMDS clouds is the initial discovery of the DAs, the periodic registrations of SLP services, and the occasional SLP lookup by the MAs. These are all done with unicast packets.
Compared to the once-per-minute broadcast of RIP/SAP information required for IPX communications, the benefit of the SLP implementation is clear. There is a great reduction in the amount of traffic traversing the WAN links (although there will be some more overhead associated with the MAs, as will be explained in the next section). The spike in traffic comes between the POP sites, which are linked by T-3 speed connections. These connections are certainly capable of handling this increased traffic.
With Migration Enterprises, then, it is a matter of shifting the load of the network to the links that can handle it the best. The planned SLP infrastructure for the migration strategy has certainly done just this. The T-3 links, which are the fastest in the entire organization, are taking on the bulk of the traffic required to deploy the migration strategy.
Setting Up the Migration Agent Infrastructure
Now that the SLP infrastructure has been designed, the MAs can be deployed on the Migration Enterprises network. It is the MAs that will actually be responsible for the exchange of information between non-contiguous IPX segments, facilitating the removal of the IPX protocol from the WAN links, which is the desired goal of the first step in the migration strategy. When IPX is removed, the Migration Enterprises network will really be a set of IPX-based LANs that need to communicate with each other over IP-only links. The best way to facilitate that communication is through the use of Novell MAs working in Backbone Support mode.
To the IPX-based devices on the network, the IP-only WAN links will look like a virtual IPX network (also known as a "CMD network"). It is the responsibility of the MAs to encapsulate the IPX packets destined for a non-contiguous IPX segment in an IP packet and to forward the packet to the MA serving the remote IPX segment. The destination MA will un-encapsulate the packet and place the undisturbed IPX packet out on the IPX segment where it will be able to get to its intended destination.
IPX-based nodes on the LANs also require RIP/SAP information about remote IPX segments in order to maintain end-to-end visibility of IPX services, which is a functional requirement. The MAs in Backbone Support mode are therefore also responsible for propagating this RIP/SAP information between the non-contiguous IPX segments.
For more information on the CMD module for NetWare servers, refer to the article entitled "Understanding SCMD Mechanics and Processes" in the August 1999 issue of Novell AppNotes.
Backbone Support Basics
It is easy to have a server function as a Backbone Support server. You simply load SCMD.NLM on the server with the /G, /MA, or /BS switch. Also, the server must be enabled to use NLSP routing; this is a critical step in enabling this service. The configuration of NLSP routing can be accomplished through INETCFG.NLM.
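As a concrete sketch, enabling Backbone Support at the server console might look like the following (shown with the /BS switch; /G or /MA work the same way, and NLSP routing must already be enabled for IPX through INETCFG.NLM):

```
LOAD SCMD.NLM /BS
```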
Once Backbone Support has been enabled, the server will attempt to find other Backbone Support servers. It will do this in one of two ways:
The Backbone Support server will query SLP for services of type mgw.novell. These services indicate other Backbone Support servers available on the network.
The Backbone Support server will look to a statically configured list of other Backbone Support servers available on the network.
Note: If one Backbone Support server is statically configured to look for other Backbone Support servers, then all Backbone Support servers on the same CMD network must be statically configured as well. It is an "all or nothing" proposition.
Backbone Support servers must be part of the same CMD network in order to communicate and exchange bindery information with one another. Backbone Support servers belonging to separate CMD networks will not communicate with each other to exchange bindery information or encapsulated data packets.
After Backbone Support servers discover one another, they exchange their RIP/SAP information. They will continue to do so to keep the services advertised on their respective IPX networks current. Figure 7 shows a simple example of two Backbone Support servers exchanging RIP/SAP information.
Figure 7: Backbone Support servers exchanging bindery information.
As noted in Figure 7, the RIP/SAP information from each IPX network is encapsulated in IP packets. This exchange between the Backbone Support servers is accomplished using UDP packets addressed to port 2654, as all IPX-encapsulated traffic is. (This figure assumes that the Backbone Support servers belong to the same CMD network.)
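To make the encapsulation concrete, here is a minimal Python sketch of the idea: an IPX packet is carried as the payload of a UDP datagram sent to port 2654 on the peer Backbone Support server. This is an illustrative model only (the function name and parameters are hypothetical); the real SCMD driver performs this work inside the NetWare server.

```python
import socket

CMD_UDP_PORT = 2654  # UDP port used for all IPX-encapsulated CMD traffic

def forward_ipx_packet(ipx_packet: bytes, peer_ma_ip: str,
                       port: int = CMD_UDP_PORT) -> None:
    """Wrap an IPX packet (RIP/SAP or data) in a UDP datagram and send
    it to the peer Backbone Support server. Illustrative sketch only."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(ipx_packet, (peer_ma_ip, port))
    finally:
        sock.close()
```

The destination MA would read the UDP payload and place the untouched IPX packet on its local segment, which is the un-encapsulation step described earlier.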
In order to meet the functional requirements of Migration Enterprises, the Backbone Support MAs are going to rely upon the SLP infrastructure, which was created for this purpose. Remember, the manual configuration of 300 MAs is not an option for the central IT groups at Migration Enterprises.
The MA-to-MA Communication Protocol
When MAs in Backbone Support mode exchange bindery information, this is called the MA-to-MA communication protocol. There are some special considerations to be made when enabling a large number of servers in Backbone Support mode, because NLSP routing is enabled on those servers. This NLSP routing is what facilitates the efficient exchange of bindery information between the non-contiguous IPX networks. However, there is an overhead cost involved in using NLSP.
After Backbone Support servers on the same CMD network discover each other, they become NLSP neighbors in the virtual IPX network that has been created for CMD. Because these servers are NLSP neighbors, they must exchange the normal overhead packets required for NLSP routing. These administrative packets (or Hello packets) are required by NLSP to maintain the NLSP routing tables. If the Hello packets are not received within a specific timeout window, the NLSP router marks its neighbor as being unavailable. Figure 8 illustrates the NLSP overhead in an IPX environment.
In Figure 8, server NLSP1 sends out its Hello packet to its neighbors via broadcast (because it is an IPX environment). The neighbor NLSP routers receive this packet and update their tables to reflect the refresh of NLSP1 as their active neighbor. They update the time-to-live of NLSP1 in their NLSP neighbor table. In Figure 8, this is shown in bold, along with their other neighbors and their TTLs. Should the TTL of an NLSP neighbor reach zero, the neighbor will be marked as unavailable, and it cannot be used as a route to reach destination services.
Figure 8: The NLSP Hello packet updates neighbor routing tables.
By default, NLSP Hello packets are sent out every 20 seconds by NLSP routers with a multiplier of three. This multiplier determines the TTL for the NLSP router in the NLSP neighbor table. This makes the default TTL for the neighbor router 60 seconds. By modifying the multiplier and Hello packet interval, the NLSP overhead traffic can be controlled.
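The relationship between the Hello interval, the multiplier, and the neighbor TTL can be expressed in a couple of lines. A minimal sketch (the function name is ours, not NLSP terminology):

```python
def nlsp_neighbor_ttl(hello_interval: int = 20, multiplier: int = 3) -> int:
    """Seconds a router will keep an NLSP neighbor marked available
    without hearing a Hello packet. Defaults are the NLSP defaults
    described above: a 20-second interval with a multiplier of three."""
    return hello_interval * multiplier
```

With the defaults, a neighbor is marked unavailable 60 seconds after its last Hello; stretching the interval or multiplier trades failure-detection speed for less overhead traffic.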
In the example in Figure 8, one packet is required from every server at least every 60 seconds to update the NLSP neighbors table on neighbor NLSP routers. The Hello packet is a small packet, and it is much more efficient than the server broadcasting its entire RIP/SAP tables once per minute. However, because NLSP is an IPX-based routing protocol, NLSP packets cannot be placed natively on an IP-only segment. Something has to be done to encapsulate the NLSP packets in IP and to send them to the NLSP neighbors. The SCMD driver is the tool by which this is accomplished. When the IPXRTR module on a Backbone Support server passes the Hello packet to the virtual IPX network, the SCMD module takes over. The NLSP packet is encapsulated in IP and sent to all other Backbone Support servers on the same virtual CMD network.
From an NLSP perspective, all Backbone Support servers that belong to the same virtual IPX network are NLSP neighbors. These servers must exchange NLSP Hello packets to keep their IPX routing tables updated. If the TTL of the NLSP neighbor reaches zero, the route will no longer be available. This will lead to the disappearance of services through the IP-only segment of the network.
Because NLSP is a broadcast-based protocol and IPX encapsulation is not, SCMD has its work cut out for it in mimicking the NLSP environment. SCMD is responsible for getting this originally broadcast information to all required destinations. Figure 9 shows how SCMD handles this.
Figure 9: NLSP Hello packets are encapsulated and individually addressed.
When the NLSP Hello packet is passed from the IPXRTR module of the NetWare server to the virtual IPX network, SCMD takes over. It encapsulates the Hello packet and individually addresses the packet to each known Backbone Support server on the same CMD network.
For the example shown in Figure 9, one NLSP Hello packet requires three encapsulated packets sent across the IP-only network to accomplish the update of NLSP neighbor tables. Figure 10 illustrates the same update of NLSP neighbor tables as Figure 8, but with SCMD. Notice that three packets are required to update the NLSP neighbor tables instead of the one broadcast packet in Figure 8.
Figure 10: Updating NLSP tables across the virtual IPX network.
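The broadcast-to-unicast fan-out shown in Figures 9 and 10 can be sketched as follows; this is an illustrative model, not the actual SCMD code:

```python
def scmd_fan_out_hello(hello_packet: bytes, peer_mas: list) -> list:
    """SCMD cannot broadcast on the virtual IPX network, so one NLSP
    Hello becomes one encapsulated unicast per known Backbone Support
    server on the same CMD network. Returns (destination, payload)
    pairs representing the packets to be sent."""
    return [(peer_ip, hello_packet) for peer_ip in peer_mas]
```

For the three peers in Figure 9, one Hello yields three encapsulated packets; the cost of each Hello grows linearly with the number of peers, and the total across the whole CMD network grows quadratically.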
As the number of Backbone Support servers on the same CMD network increases, the number of packets required for NLSP overhead also increases. All Backbone Support servers on the same CMD network must communicate with all other Backbone Support servers in this manner. This must be factored into any Backbone Support server design. Besides the overhead NLSP traffic, there will also be the normal bindery information flowing between the servers as well. However, this is where the benefit of NLSP is seen.
Because NLSP is a link state routing protocol, it only needs to communicate changes in its routing tables to its neighbors. Instead of the entire RIP/SAP table of a server being broadcast out on the network once per minute, only the changes the router receives are sent out. This drastically reduces the bindery information that needs to pass across the CMD network from MA to MA because all of them are NLSP neighbors of each other. Therefore, the encapsulated NLSP traffic on the CMD network due to bindery information exchange is significantly smaller than the amount of RIP/SAP traffic that would have to be sent on the same wire in the normal IPX environment.
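The link-state advantage can be illustrated with a small sketch: rather than re-sending its whole service table every minute, a router computes and sends only the difference between snapshots. (The function below is hypothetical, for illustration only.)

```python
def nlsp_delta(old_services: dict, new_services: dict) -> dict:
    """Return only what changed between two snapshots of a router's
    service table: entries that were added or modified, and entries
    that disappeared. Only this delta crosses the CMD network."""
    updated = {name: info for name, info in new_services.items()
               if old_services.get(name) != info}
    removed = [name for name in old_services if name not in new_services]
    return {"updated": updated, "removed": removed}
```

If nothing has changed, nothing beyond the Hello packets needs to be sent, which is where the bandwidth savings over per-minute RIP/SAP broadcasts come from.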
Filtering Considerations with Backbone Support
Before implementing the MA-to-MA protocol on a network, it is important to consider the implications it may have. This protocol is a virtual routing environment that sits on top of an existing physical routing environment. It is possible to route information across this virtual network that would not normally be permitted across the physical network.
Figure 11 shows an example where the physical IPX network is intact, with a set of routers between two IPX network segments. Between these IPX network segments is a router that filters SAP information. IPX services on network segment 1 are not visible on network segment 2 and vice versa. A Backbone Support server is installed on both network segments, and they discover one another through SLP.
Figure 11: The virtual IPX network can pass bindery information not passed by the physical network.
Through the MA-to-MA protocol, IPX network 1 can see service information about IPX network 2. The RIP/SAP information has come across the virtual IPX network, and the filters on the router are bypassed. If the MA-to-MA protocol is to be used in an environment, the filtering on the physical network needs to be applied to the virtual network.
Fortunately, this can be done on a NetWare server with SCMD. In order for filters to be enabled and enforced by a NetWare server, a LAN driver must be loaded. This LAN driver represents an interface through which traffic passes. These LAN drivers are loaded and configured with INETCFG.NLM. Once configured, FILTCFG.NLM can be loaded to configure filters. In other words, SCMD needs to be loaded as a LAN driver before filtering across the virtual IPX network can occur.
The latest version of SCMD (version 2.02d) includes such a LAN driver that will allow the enforcement of filters on the virtual IPX network. With this latest version, it is possible to prevent the MA-to-MA protocol from bypassing filters enabled on the physical network.
The Migration Agent Infrastructure
Having reviewed all of the technology and inner workings of the MAs and understanding how Backbone Support mode works, the central IT groups are now ready to plan an MA infrastructure that will fit the needs of their corporate network. The critical task is to properly deploy the MAs across a number of CMD networks to control the amount of NLSP overhead, while still maintaining the end-to-end visibility of IPX services.
As far as the migration strategy is concerned, it is best to think of the Migration Enterprises network as 300 non-contiguous IPX segments that have to be connected across an IP-only backbone. With the MAs able to function in Backbone Support mode, this connectivity can be achieved, but it is going to take careful planning to make sure the network is not saturated with overhead traffic to maintain the connectivity. (The required overhead to make this happen has been discussed in the previous sections.)
Effectively, Migration Enterprises is going to need to employ at least 300 MAs to achieve the first step in the migration strategy. Fortunately, there is already a NetWare server at each regional office that can function as an MA. It is a matter of convincing the regional administrators to load the SCMD.NLM module in Backbone Support mode on their local servers. This shouldn't take too much convincing because if the local department does not comply, their users won't be able to see any of the IPX services outside of the regional office.
Unfortunately for Migration Enterprises, it's not going to be as easy as loading SCMD.NLM on each server. Think about the NLSP overhead traffic of having 300 migration agents in Backbone Support mode all belonging to the same CMD network. That means each migration agent will be sending an encapsulated NLSP Hello packet to the other 299 servers at least once per minute. This amounts to almost 90,000 packets of overhead traffic per minute, not including the bindery information transfer between networks or the normal data traffic.
The central IT groups must come up with another solution that will allow for the end-to-end advertisement of IPX services on the network but also minimize the amount of overhead traffic required to keep the non-contiguous IPX networks connected. The solution is the implementation of multiple CMD networks on the network.
Implementing Multiple CMD Networks
As has been mentioned previously, only MAs belonging to the same CMD network will need to exchange NLSP Hello packets. MAs belonging to separate CMD networks will not exchange this information because they are not considered to be NLSP neighbors. From an IPX routing perspective, they are a whole network away from one another.
Given this fact, it is just a matter of reviewing the network infrastructure and determining how to implement the CMD networks. Figure 12 is simply a reproduction of Figure 1 in this AppNote for convenience. After looking at Figure 12, the multiple CMD network structure virtually draws itself.
Figure 12: Migration Enterprises network diagram.
The logical subdivision in this network diagram is the SMDS cloud. It is a manageable unit to which 25 sites are connected, and 25 Migration Agents in a single CMD network is certainly an acceptable configuration. The overhead traffic required for 25 servers is significantly less than the traffic required for 300 servers in the same CMD network.
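The arithmetic behind this comparison is simple. Every MA unicasts an encapsulated Hello to each of the other MAs in its CMD network, so the per-interval overhead grows as n * (n - 1):

```python
def hello_unicasts_per_interval(num_mas: int) -> int:
    """Encapsulated Hello packets generated per Hello interval when
    num_mas Backbone Support servers share one CMD network."""
    return num_mas * (num_mas - 1)

one_big_network = hello_unicasts_per_interval(300)  # all MAs in one CMD network
per_smds_cloud = hello_unicasts_per_interval(25)    # one CMD network per cloud
```

The 89,700 packets per interval for the single network is the "almost 90,000" figure cited earlier; splitting the MAs into 25-server CMD networks cuts each network's overhead to 600 packets per interval.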
It was easy to see how bindery information is exchanged between migration agents in Backbone Support mode on the same CMD network. It is a little more difficult to understand how separate CMD networks will be able to propagate their bindery information to other CMD networks. The trick is to have a shared IPX segment where MAs belonging to different CMD networks can exchange their RIP/SAP information.
Figure 13 shows an example with two non-contiguous IPX segments being serviced by MAs belonging to separate CMD networks. However, those MAs also happen to share a third IPX segment between them. The sole function of this common IPX segment is to facilitate a per-minute broadcast of RIP/SAP information between MAs just like all IPX-based NetWare servers do. This RIP/SAP broadcast will force each MA to become aware of the services available through the other CMD network. In turn, that information will be propagated to the remote IPX segments at each end.
Figure 13: Using a common IPX segment to exchange RIP/SAP information between CMD networks.
The logical common point for the exchange of RIP/SAP information between the CMD networks (SMDS clouds) is the POP sites. For each CMD network that needs to exchange RIP/SAP information, there has to be a separate Migration Agent at the POP site. In the case of Migration Enterprises, this means there need to be five Migration Agents per POP site to handle the four CMD networks corresponding to the SMDS clouds and the one CMD network corresponding to the links between the POP sites themselves.
The reason for the separate MAs is that currently only one instance of SCMD.NLM can be loaded per NetWare server, and each SCMD.NLM driver can only accommodate one CMD network. A future version of SCMD.NLM will support multiple CMD networks. When this version is available, Migration Enterprises will only need one MA per POP site to handle the exchange of RIP/SAP information between CMD networks.
Figure 14 shows the CMD network structure required for Migration Enterprises. Note the CMD network corresponding to each SMDS cloud and the CMD network required between the POP sites to exchange RIP/SAP information between the multicast islands; both are shaded in gray. From each IPX segment, the CMD network looks like just another IPX segment. Thus, all CMD network numbers must be unique across the entire network, just as IPX network numbers must be unique.
Figure 14: The CMD network structure of Migration Enterprises.
While having multiple CMD networks is the most elegant solution for Migration Enterprises, it will require some manual configuration of servers. On each server where the MA is loaded, it will have to be configured to belong to a specific CMD network. The CMD network specification is just another SET parameter on the NetWare 5 server (SET CMD NETWORK NUMBER), which can be set at the server console or through MONITOR.NLM.
Note: DHCP can be used to dynamically distribute CMD network information to NetWare servers, which will reduce the manual configuration required. The CMD network option for DHCP is option tag 63, suboption 12. Migration Enterprises will need to implement this technology to achieve their functional requirements.
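As a sketch, assigning a server to a CMD network manually at the console might look like this (the network number shown is a hypothetical example value; like an IPX network number, it must be unique on the network):

```
SET CMD NETWORK NUMBER = C90D0001
```

Alternatively, the same value can be delivered dynamically via the DHCP option noted above (option tag 63, suboption 12), avoiding per-server manual configuration.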
This solution is also best for Migration Enterprises given their physical network, because traffic routing between the CMD networks follows the same paths that traffic would take through the physical network. In the current IPX-based network, all traffic between regional offices must come back through the POP sites. When IPX is removed from the WAN links with the strategy described, the traffic patterns will remain the same.
Now that the first step of the migration strategy has been planned for Migration Enterprises, it is just a matter of implementation for the central IT groups. They are confident in knowing they have a well-planned infrastructure to begin the migration of their corporate network to a Pure IP environment.
In planning any strategy to migrate from an IPX to a Pure IP environment, it is always a good idea to gather all of the functional requirements of the organization first along with the physical layout of the network. This will provide for a design that is pertinent to the network in question. Once that information has been gathered, the general plan should come together nicely.
Once the initial goals have been set, it is just a matter of understanding the technology required and the impact it will have on the corporate network. This understanding is key to fleshing out the details of the migration strategy. Without this understanding, it is quite possible to create a migration environment that is much less efficient than the current environment, which usually has IPX flowing freely from end-to-end.
* Originally published in Novell AppNotes
The origin of this information may be internal or external to Novell. While Novell makes all reasonable efforts to verify this information, Novell does not make explicit or implied claims to its validity.