
Consumers Gas Company, Ltd. Migration to IntranetWare from NetWare 3.12: A Case Study


PETER KUO
President
DreamLAN Network Consulting, Ltd.

01 Jan 1998


Those who are contemplating a move from NetWare 3.12 to NetWare 4.11 will benefit from this first in a two-part series describing a successful migration within a large Canadian utility company.

Introduction

With the release of IntranetWare in late 1996, many NetWare 3 sites have upgraded or are upgrading to IntranetWare to take advantage of features such as Novell Directory Services (NDS), system-wide login, superior security, and simplified support and management. Consumers Gas Company, Ltd. (CGC) is no exception. Consumers Gas installed its first NetWare 3.11 Token Ring network in late 1992. By mid-1997, the company had over 120 NetWare 3.12 servers distributed across a wide-area network spanning more than 20 locations across Ontario, Canada.

In early 1997, Judy Olivier, senior Network Planning Analyst of the Computer Resource Planning department, took on the responsibility of selecting and implementing the network operating system for the next generation of Consumers Gas' enterprise network. Careful evaluation and lab-based performance comparisons were performed on IntranetWare and Windows NT Server 4.0 using two Ziff-Davis Benchmark Operations tools: NetBench 5.0, which tests the performance of a file server by measuring how well it handles I/O, and ServerBench 3.0, which measures the performance of application servers by running tests that place different types of load on the server. From the test results, it was clear to Judy that IntranetWare offered better performance, easier use and management, and, above all, lower cost of ownership and superior security compared to Windows NT Server.

It is interesting to note that a team of just two people needed only four months, from management approval through the completion of the NDS tree design (including testing and internal certification), to complete the migration preparatory phases.

This AppNote is the first in a series of two articles. This first article describes the planning and testing processes in detail, including NDS tree design, time synchronization, and data migration strategy. The second part will provide first-hand information and experience on the actual migration phase of the project.

Preparation

Judy Olivier, senior Network Planning Analyst of the Computer Resource Planning department, headed up the IntranetWare project. Lee Arsenault, Manager, Computer Resource Planning, provided the necessary logistical support and served as the liaison between the project team and management. June Morrison, Network Security Analyst, assisted in specifying and testing NDS and NetWare file system security. William Iun and Paul Wong of Network Services provided additional network infrastructure support and participated in the testing of in-house applications to ensure compatibility with IntranetWare. In addition, an outside consultant with extensive NDS and NetWare expertise was hired to assist in designing and testing the NDS tree, the NDS partitioning and replication strategy, and the migration methodology, and to provide technical support during the Consumers Gas internal certification process for IntranetWare.

Key hardware elements of the network and WAN included the following:

  • Compaq Proliant 2500 and 5000 file servers with an average of 64MB of RAM and 4GB of disk space.

  • A variety of Compaq and HP 386, 486, and Pentium workstations running Windows 3.x and Windows NT.

  • IBM 4/16 (operating at 16 Mbps) Token Ring network cards on workstations; Compaq NetFlex Token Ring cards on servers.

  • Cisco 4000, 4500 and 7000 routers and Centillion ATM switches.

  • Bay Network Token Ring hubs.

  • CAT 5, twisted-pair cabling to desktops.

  • ATM backbone between the floors at the Victoria Park and Parkway sites.

  • A 100 Mbps FDDI backbone linking the Victoria Park, Parkway, and Atria sites at the main campus.

  • A variety of 512 Kbps, 384 Kbps, 128 Kbps, and 56 Kbps WAN links to remote sites throughout Ontario.

  • HP K series UNIX database servers running HP/UX v10 and v9.04.

  • IBM MVS mainframe 9672-R64.

The LAN and WAN carried IPX, IP, and SNA traffic.

Steps Completed Prior to the Planning Stage

Several key steps were completed by the Network Support group prior to the planning phase of the project.

  1. Rolled out VLMs to all Windows 3.x workstations. Updated the NET.CFG file to include:

     NetWare Protocol = NDS BIND
     VLM = AUTO.VLM
     VLM = RSA.VLM
     Name Context = "CGC.CGEI"
  2. Installed the IntranetWare Client for NT on all Windows NT workstations.

  3. Checked and updated router configurations to ensure the following NetWare 4-specific SAP types were not filtered: 0x004 (File Server), 0x107 (RSPX/RConsole), 0x26B (Time Sync), and 0x278 (Directory Services).

  4. Allocated hardware resources to create a project testing lab environment.

These steps, especially the early implementation of NDS-aware client software and the availability of a testing lab, greatly simplified the project planning and rollout phases.

When schedules permitted, selected staff members from Help Desk Services, Security Services, Network Support, and Computer Resource Planning were sent to various Novell training classes. The classes included:

  • Course 520, NetWare 4.11 Administration

  • Course 525, NetWare 4.11 Advanced Administration

  • Course 526, NetWare 3.1x to NetWare 4.11 Update

  • Course 532, NetWare 4 Design and Implementation

  • Course 555, IntranetWare: Integrating Windows NT

  • Course 804, NetWare 4.11 Installation and Configuration Workshop

  • Advanced Novell Directory Services (from Novell Premium Services)

Designing the NDS Tree

A number of preliminary NDS tree designs were examined and mocked up using DS Standard v3. Using DS Standard, different test trees were easily created, destroyed, and then recreated. After a number of iterations and some fine-tuning, a "final" design was chosen and a pseudo-production tree was put in place. This pseudo-production tree was then used to develop, plan, and test security, partitioning and replication, and user and data migration procedures.

In addition to the production tree, two other NDS trees were created on the network: one for the in-house software developer group, and one for the in-house Pre-Production testing lab (PPL). Although the developers did not write any NDS-aware applications during this time, in the future they will take advantage of the distributed nature of NDS; therefore, a separate, simple NDS tree was made available to the application developers.

The PPL was instituted a couple of years ago at CGC as a staging area where PC- and LAN-based in-house developments, as well as shrink-wrapped applications, are checked for adherence to CGC policies and procedures. The PPL provides a near-production environment to test installation and migration procedures and to ensure that new applications are compatible with the CGC standard PC and file server configurations. To test the migration processes and software compatibility in a near-production environment, a separate NDS tree was created for PPL use.

Hybrid Design. Similar to many large NDS trees, the final design of the Consumers Gas NDS tree was a hybrid design (see Figure 1).

Figure 1: A high-level view of the Consumers Gas NDS tree.

Due to the existing WAN infrastructure, the company's business structure, and the dispersed locations of various functional business units (for example, the Energy Services group is located on the first and third floor in VPC), this NDS tree's design is based mainly on geographical locations. The top level of the tree is divided into the three main regional locations: Eastern Ontario (OU=East), the Greater Toronto Area (OU=GTA), and the Niagara region (OU=Niagara).

In each of the two main locations, the VPC (Victoria Park) and PKWY (Parkway) containers, there was a PRINTERS subcontainer. All printing-related objects, such as print queues and print servers, resided in the respective PRINTERS containers. Placing all print queues in one container simplified print queue management for the Help Desk team, since they did not have to change context constantly to locate a given queue; it also made it easy for users to capture to print queues. At Parkway, all servers and their Volume objects were placed in a container called SERVERS; at VPC, the Server and Volume objects were placed directly under OU=VPC, while users and other objects were placed in the lower container levels. Keeping servers and volumes in containers separate from User objects made NDS security assignment and management easier. For example, NDS rights could easily be given to the Help Desk staff to change passwords (which requires the Write right to a User object's ACL) without the risk of accidentally granting the Help Desk staff unwarranted rights to the servers and volumes.

Generally, one would place the company at the Organization level, such as O=CGC. This was not done here because of expandability considerations. CGC's parent company, Consumers Gas Energy, Inc., owns several other subsidiary companies. To facilitate future expansion of the NDS tree to include these other companies, an O=CGEI level was introduced (instead of O=CGC) so other higher-level OUs (corresponding to other CGEI subsidiaries) could be easily added at a later time. If O=CGEI had not been put in place, future integration with new companies (such as tree merges) would require much work on the NDS tree structure.

Naming Standards. Consumers Gas has been using a three-letter userid naming standard, based on their mainframe userid naming convention, ever since the first NetWare server was installed. The userid is derived from the first letter of a user's first name, middle initial, and last name. For example, the userid for John Alan Doe is JAD. This userid-naming convention was carried over to the new NetWare 4 environment.

All NDS tree names start with "CGC_", followed by a three-letter abbreviation to indicate the purpose of the tree. For example, the production NDS tree is called CGC_PRD, the NDS tree for the PPL lab is called CGC_PPL, and the NDS tree for the developers is called CGC_DEV.

One change that the team made to the naming convention in the new environment was the printer and print queue names. In the past, the printers and queues were named using location, department, and model of printer. This created some problems when two or more departments shared the same printer; therefore, the team implemented a new naming convention based on the company-standard regional location code, printer model, and a number. The number would represent the first, second, third, etc., printer of a given model. All print queue names would end with "Q"; all print server names with "PS"; and all NDS printer object names with "P". As an example, the first HP 4Si printer in VPC would have the following NDS object names:

Print queue: VPC_HP4SI_1_Q
Print server: VPC_HP4SI_1_PS
Printer: VPC_HP4SI_1_P

In order for all names (in both the IntranetWare and NT environments) to appear consistently, all object names appear in lowercase, and underscores are used instead of embedded spaces to separate multiple words. The exception is container names in NDS, which are kept in uppercase.
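The naming convention above is mechanical enough to express as a small helper. The following is an illustrative sketch (not a CGC tool); the function and its argument names are invented, but the names it emits match the HP 4Si example given earlier:

```python
def printer_object_names(location, model, number):
    """Build the three NDS object names for one printer, per the CGC
    convention: location code, printer model, sequence number, plus a
    per-object-type suffix (Q for queues, PS for print servers, P for printers)."""
    base = f"{location}_{model}_{number}"
    return {
        "print_queue": base + "_Q",
        "print_server": base + "_PS",
        "printer": base + "_P",
    }

# The first HP 4Si printer at VPC:
printer_object_names("VPC", "HP4SI", 1)
# → {'print_queue': 'VPC_HP4SI_1_Q', 'print_server': 'VPC_HP4SI_1_PS',
#    'printer': 'VPC_HP4SI_1_P'}
```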

Restricting Help Desk Access. In the NetWare 3 environment, the Help Desk has Supervisor rights to all servers so that they can change user passwords and manage files. Although it is possible to set up Workgroup Managers to accomplish the same tasks, that approach is not feasible in a large enterprise environment such as Consumers Gas. Granting Supervisor rights, however, also means the Help Desk staff has complete, unrestricted access to every file on every server in the enterprise.

In the NetWare 4 environment, however, access rights available to the Help Desk can be easily tailored to their specific needs, because of the granularity available in NDS security. After meeting with Erwin Chong, Team Leader, Help Desk Services, the NetWare team decided that the day-to-day tasks of Help Desk Services included the following items:

  • Changing client passwords

  • Resetting intruder lockouts

  • Resetting grace logins

  • Salvaging deleted files

  • Clearing users' server connections

  • Managing print queues

Other security-related tasks, such as setting account expiration dates and disabling accounts, would be handled by Security Services. Staff members of Security Services and Network Services would have Admin-equivalency in order to function efficiently.

A Help Desk Organizational Role (OR) was granted the following NDS property rights to each User object:


Property Name                      Property Rights
Account Locked                     RWC
Account Reset Time                 RWC
Days Between Forced Changes        RWC
Grace Logins Allowed               RWC
Incorrect Login Attempts           RWC
Last Intruder Address              RWC
Maximum Connections                RWC
Object Trustee (ACL)               RWC
Remaining Grace Logins             RWC

The Help Desk OR was granted the Browse object right to each container, and the following NDS property rights were granted to each container that held User objects:


Property Name                      Property Rights
Detect Intruder                    RWC
Incorrect Login Count              RWC
Intruder Attempt Reset Interval    RWC
Intruder Lockout Reset Interval    RWC
Lock Account After Detection       RWC

The Help Desk OR was granted Read and File Scan file system rights to the SYS volume, and RWCEMF rights to other volumes on each server. The Help Desk OR was also made Console Operator on each server so that the Help Desk staff could easily clear user connections. All Help Desk staff were made occupants of the Help Desk OR. Furthermore, the Help Desk staff were made Print Queue operators so they could manage the print queues.

When a new user account was created by Security Services, the Help Desk OR was granted the NDS rights assignment (through the use of Profile objects) to the new User object. The Help Desk OR has no user management scope over Admin or any Admin-equivalent accounts; these accounts are managed by Security Services and Network Services.

Using Containers as Global Groups. NDS trustee assignment inheritance works the same way as it does in the file system. For example, when you grant a User object NDS rights (such as Browse and Create) to container O=Two, the granted rights flow down to the other container objects in that branch (such as OU=Sub1 and OU=Sub2) until a new assignment is made or an Inherited Rights Filter (IRF) stops the flow. In the example shown in Figure 2 (see below), the User object Peter is granted the Browse and Create NDS rights to O=Two and the Browse NDS right to OU=Sub3. By inheritance, Peter has Browse and Create NDS rights in OU=Sub1 and OU=Sub2, but only the Browse right in OU=Sub3 and below, because of the new assignment there. On the other hand, Peter has no rights to O=One (unless rights were granted at a level higher than O=One). File system rights behave the same way.

Figure 2: NDS rights inheritance.
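The inheritance behavior in Figure 2 can be modeled in a few lines. This is a simplified, hypothetical sketch (it ignores IRFs and security equivalence, and writes container paths root-first for readability, the reverse of NDS distinguished names); the rule it implements is that the deepest explicit assignment along the path replaces anything inherited from above:

```python
def effective_rights(path, assignments):
    """Walk from the root down to `path`; at each container, an explicit
    trustee assignment replaces the rights inherited so far."""
    rights = set()
    prefix = []
    for container in path:
        prefix.append(container)
        name = ".".join(prefix)       # root-first, e.g. "Two.Sub3"
        if name in assignments:
            rights = set(assignments[name])
    return rights

grants = {"Two": {"Browse", "Create"},   # Peter's assignment at O=Two
          "Two.Sub3": {"Browse"}}        # new assignment at OU=Sub3

effective_rights(["Two", "Sub1"], grants)  # {'Browse', 'Create'} by inheritance
effective_rights(["Two", "Sub3"], grants)  # {'Browse'} -- the new assignment wins
effective_rights(["One"], grants)          # set() -- nothing granted under O=One
```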

The other inheritance property of NDS is the use of containers as groups, a concept less known to system administrators. Any object in a container is automatically considered a "member" of that container and its parent container(s). Any rights (NDS or file system) assigned to a container flow down to its subcontainers, and any objects within the container inherit the same rights. Some Novell documentation refers to this feature as "ancestor inheritance." The main difference between normal inheritance and ancestor inheritance is that you cannot prevent an object (for example, a user) in a container from receiving the rights assigned to that container, not even through an IRF on the object. You can, however, place an IRF on a container below that object to block rights from flowing further down; from there, the normal inheritance and filtering rules apply.

Therefore, by granting file system rights to containers high in the NDS tree, all User and Group objects in those containers and below automatically gain the same rights. This allows you to use containers as "global" or "super" groups, a much more powerful concept and tool than the EVERYONE group of previous NetWare versions. The container OU=CGC was given Read and File Scan file system rights to the directories that hold enterprise-common applications, such as Lotus AmiPro, Lotus Organizer, and Lotus cc:Mail. With one single assignment, all users in the NDS tree have access to the common applications; this is not possible in a NetWare 3 environment.

By using the global group concept and granting RF file system rights to OU=CGC, all users in the NDS tree can roam freely within CGC's internetwork and access the common applications from a local server, instead of having to launch applications from their home servers, which may be located across a WAN link. If the normal group approach were used, extra steps would be needed every time a new user was added to the network to make that user a member of this special group, and NDS would carry the added overhead of tracking a huge group membership list.
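Ancestor inheritance, in which every object receives the rights granted to the containers above it, can likewise be sketched. This is a hypothetical model, not a Novell API: a user's inherited file-system rights are the union of the grants made to every container on its path:

```python
def inherited_file_rights(user_context, container_grants):
    """Union the file rights granted to each ancestor container of a user.
    Contexts are written leaf-first, as in NDS (e.g. 'VPC.CGC.CGEI')."""
    parts = user_context.split(".")
    rights = set()
    for i in range(len(parts)):
        ancestor = ".".join(parts[i:])  # VPC.CGC.CGEI, CGC.CGEI, CGEI
        rights |= container_grants.get(ancestor, set())
    return rights

# RF rights on the common application directories, granted once to OU=CGC:
grants = {"CGC.CGEI": {"R", "F"}}
inherited_file_rights("VPC.CGC.CGEI", grants)  # {'R', 'F'} -- every user under CGC
```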

Using NDS Attributes. During the initial implementation of NDS, the various NDS attributes are not heavily exploited. As part of the security policy, all User and Group objects must have their full-name attribute filled in; therefore, at this time, only the Full Name attribute of User objects and the Description attribute of Group objects are used. However, there are plans to update some NDS attributes (such as Account Expiration) on User objects with information from the MVS mainframe in the near future.

NDS Partitions and Replicas

As part of the initial rollout of the NDS tree, a skeleton tree with all the containers in place is created. The tree is then partitioned as illustrated in Figure 3.

Figure 3: NDS partition strategy.

Three P-166 IntranetWare servers are dedicated as NDS replica servers. These servers, located at the Head Office location (VPC) where Network Services is based, hold the Master replica of all partitions. One replica server holds the Master replicas of all remote sites, one replica server holds the Master replicas of all partitions under the VPC container, and the last replica server holds the Master replicas for all other containers under the CORP container. On average, each replica server holds about a dozen replicas.

A number of placeholder containers, such as OU=CORP.OU=GTA, helped to "deepen" the tree and to distribute subordinate reference replicas across more IntranetWare servers. Partitioning at OU=VPC instead of at its lower layers also reduced the number of subordinate reference replicas on servers at VPC; however, it resulted in a somewhat large replica ring because of the number of servers involved. It was decided that this partition would be closely monitored to ensure NDS synchronization could complete within 30 minutes. Further partitioning of OU=VPC may be necessary in the future.

In order to provide fault tolerance for the NDS tree and to provide good performance for NDS access, two Read/Write replicas of each partition were created and one of the Read/Write replicas was placed on the server local to the users. In remote offices where there is only one IntranetWare server, a Read/Write replica was put in place. If the remote office was connected to a WAN hub site (see Figure 4), the other Read/Write replica was placed on a server at the WAN hub location. If the remote office was connected directly back to VPC, the other Read/Write replica was placed on a server at VPC.

Figure 4: A simplified view of CGC's WAN infrastructure.

Time Synchronization Strategy

Due to a number of in-house client/server-based applications, consistent (not necessarily accurate) time-stamping is important to the integrity of the data. Consistent time information also facilitates troubleshooting in a client/server environment: the timed sequence of events is invaluable in determining the cause of faults, as well as in providing application and network performance information. Therefore, it is paramount that all systems, not only IntranetWare servers and client workstations but also UNIX hosts, NT servers, and even the MVS mainframe, are synchronized in time.

External Time Sources. As all workstations, servers, and even the MVS mainframe were TCP/IP-equipped, the logical method for keeping all these devices in time synchronization was to use a TCP/IP-based protocol. Today, one of the time protocols of choice is SNTP (Simple Network Time Protocol, RFC 2030). After much investigation and testing, the SNTPCLNT software from Neatech was chosen (for more information, visit http://www.neatech.ch/sntpclnt).

SNTPCLN4.NLM was loaded on the Reference Time Server located at VPC. This time server polled one of the HP/UX systems once an hour and updated its own clock if the time difference was more than 0.5 second. The HP/UX host in turn polled a number of time sources on the Internet.
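The polling rule (correct the clock only when the drift exceeds half a second) is simple enough to express directly. The following is an illustrative sketch of that decision, not the actual SNTPCLN4.NLM logic:

```python
def correction_needed(server_time, reference_time, threshold=0.5):
    """Return the adjustment (in seconds) to apply to the server clock,
    or 0.0 if the drift against the SNTP reference is within the threshold."""
    drift = reference_time - server_time
    return drift if abs(drift) > threshold else 0.0

correction_needed(1000.0, 1000.25)  # 0.0 -- within 0.5 s, leave the clock alone
correction_needed(1000.0, 1002.0)   # 2.0 -- step the clock forward
```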

Because there was neither (S)NTP client nor server software available for the MVS mainframe, the MVS Sysplex Timer was used. Its time source was WWVB, the low-frequency (60 kHz) radio time signal from the National Institute of Standards and Technology (NIST) of Boulder, Colorado, USA. Since all the official time sources in the world agree with each other to less than 1 picosecond, all network time within CGC is essentially synchronized to the same time reference.

Time Provider Groups. The Reference Time Server and seven Primary Time Servers are configured to be time sources for the IntranetWare network. Figure 5 below illustrates the placement of the Reference Time Server and the Primary Time Servers and their relationships to each other; Secondary Time Servers are not shown.

Note: The Reference Time Server and all production Primary and Secondary Time Servers used configured sources, so they do not listen to SAP. This protects these official time servers from any renegade time server (such as one brought up for testing). At the same time, the Reference Time Server and Primary Time Servers still send SAP advertisements so they can provide time to any unconfigured Secondary Time Servers.

A number of time provider groups were set up. For example, because of the large number of servers, two Primary Time Servers were placed in the Parkway location to service all the local Secondary Time Servers. One of the Primary Time Servers at Parkway formed a time provider group with one of the two Primary Time Servers and the Reference Time Server in VPC, as shown in Figure 5. Because all Primary Time Servers use configured sources, the second Primary Time Server in Parkway did not poll and exchange time information with any other time servers, thereby reducing network traffic.

Figure 5: Time provider group configurations.

Located at each of the major network hub sites (with the exception of Richmond Hill and Whitby) is a Primary Time Server. This time server serves its local Secondary Time Servers, as well as any Secondary Time Servers at remote sites connected to that hub site. For example, the Primary Time Server in Mississauga provides time to the Secondary Time Servers at that site plus those at the three remote sites connected to Mississauga.

In the case of Richmond Hill and Whitby, there are not many sibling sites connected to them. For example, Richmond Hill has only one remote site connected to it. At the same time, these sibling sites are small in size. Therefore, the team decided that the servers at those locations would obtain their time from one of the Primary Time Servers at VPC.

Through the TIMESYNC.CFG file, one of the Primary Time Servers at VPC was designated as the time source for all directly connected remote sites outside of the immediate Toronto area, such as Whitby and Tecumseh. The other Primary Time Server was designated as the time source for all directly connected remote sites inside the immediate Toronto area. Although Parkway is just down the road from VPC and is linked with a FDDI link, they have their own Primary Time Servers, because of the large number of servers at Parkway.

All time servers, especially the Secondary Time Servers in the production NDS tree, have their Directory Tree Mode set to ON. This ensures the time servers get their time from the production NDS tree and not from other test trees.
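Pulled together, the settings described in this section would appear in a server's TIMESYNC.CFG along these lines. This is a hypothetical fragment for illustration only; the server names are invented, and CGC's actual files are not reproduced here:

```
# Hypothetical TIMESYNC.CFG for a Primary Time Server at Parkway
# (server names are invented for illustration)
Type = PRIMARY
# Use only the listed sources; ignore SAP from renegade time servers
Configured Sources = ON
# Still advertise via SAP so unconfigured Secondaries can find this server
Service Advertising = ON
# Accept time only from servers in the production NDS tree
Directory Tree Mode = ON
Time Sources = CGC_REF;CGC_PRI_VPC1;
```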

Workstation Name Context

The team decided from the outset that, from a user's perspective, the new IntranetWare environment would look as much like the NetWare 3 environment as possible. Of particular importance was the login process: users would continue to log in using their userids as they did in NetWare 3. This objective was easily met by setting each workstation's Name Context to the container in the tree where the workstation owner's userid resides. However, this does not address ease of login for mobile users, so the team decided to implement a contextless login process, using third-party utilities, to accomplish both objectives.

DOS/Windows Workstations. There are a number of third-party solutions available to provide contextless login in the DOS/Windows environment. After some features and cost comparison, NDSLogin from DreamLAN Network Consulting Ltd. was implemented. (For more information, visit http://ourworld.compuserve.com/homepages/dreamlan.)

NDSLogin is a DOS-based front-end utility to the standard NetWare LOGIN.EXE. NDSLogin takes the same arguments as LOGIN but parses out the username portion. Using the contextless name, NDSLogin searches the NDS tree for User objects with that name. If there are duplicates, the first 10 are displayed and the user can choose from the list. Once the User object is located and its context determined, NDSLogin passes control to NetWare's LOGIN.EXE, so all login script commands are processed as usual. Unlike some of its counterparts, NDSLogin uses a tree-based license (one copy for the entire NDS tree regardless of the number of users) instead of per-node licensing, which can be very costly for large sites.
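The lookup step described above can be modeled roughly as follows. This is an illustrative sketch of the behavior, not NDSLogin's actual implementation; the toy tree and userids are invented:

```python
def contextless_lookup(userid, tree, limit=10):
    """Search every container for User objects matching `userid`
    (case-insensitively); return up to `limit` (name, context) pairs,
    from which the user picks one if there are duplicates."""
    matches = []
    for context, users in tree.items():
        for name in users:
            if name.upper() == userid.upper():
                matches.append((name, context))
    return matches[:limit]

# A toy tree mapping container contexts to the userids they hold:
tree = {"VPC.CGC.CGEI": ["JAD", "PKU"],
        "PKWY.CGC.CGEI": ["JAD"]}

contextless_lookup("jad", tree)
# → [('JAD', 'VPC.CGC.CGEI'), ('JAD', 'PKWY.CGC.CGEI')]
```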

NT Workstations. At the time of writing, there were no third-party solutions for contextless login for NT workstations. During BrainShare '97 at Salt Lake City, Novell demonstrated the contextless login feature for the IntranetWare Client for NT. At that time, the contextless login made use of NDS Catalog Services. Novell indicated that the contextless login feature would be part of a future version of the IntranetWare Client for NT. Fortunately, there is one feature available in the current versions of the IntranetWare Client for NT that can provide a near-contextless login environment.

The IntranetWare Client for NT "remembers" the user name and context of the last logged-in user, so the user only needs to enter the password. The current NT workstation configuration procedure at CGC requires manual intervention at the end to enter some workstation-specific information, so an additional step was added to the installation procedure to enter the context information as well. This does not address the mobile-user situation; however, at CGC only a small group of users use NT workstations, and none of them are mobile users.

Server Configuration

Although IntranetWare requires only a 15MB DOS partition, the team decided that all IntranetWare servers at CGC would have a DOS partition whose size equals the amount of RAM in the server plus 50MB. This allows core dumps to be directed to the DOS partition in the event of a server abend, and the extra 50MB provides room for a copy of the VLM client software, minimal DOS utilities (such as EDIT, XCOPY, and FORMAT), and NetWare patches.
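The sizing rule is simple arithmetic; as a sketch (the function name is invented for illustration):

```python
def dos_partition_mb(ram_mb, overhead_mb=50):
    """CGC rule: DOS partition = server RAM + 50MB of room for core dumps,
    the VLM client, minimal DOS utilities, and NetWare patches."""
    return ram_mb + overhead_mb

dos_partition_mb(64)   # 114 -- a typical 64MB server gets a 114MB DOS partition
```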

A standard IntranetWare server configuration at CGC consists of the SYS: volume and a VOL1: volume, both using 64KB disk blocks. The SYS: volume is a minimum of 750MB and file compression is disabled (but suballocation is enabled) on the SYS: volume. All remaining disk space is allocated to VOL1:. Only NetWare operating system files are placed on SYS:. All user files and non-NetWare files and utilities (such as ARCserve and PowerChute), including print queues, are placed on VOL1: whenever possible.

The base software configuration of a standard IntranetWare server consists of the following:

  • Latest released IntranetWare Service Pack

  • Compaq Insight Manager (Compaq server management software)

  • Enterprise Desktop Manager (mainframe-based software distribution)

  • ARCserve 6.1 (not all servers will have ARCserve installed)

  • APC PowerChute v4.2.4

  • MakeSSI PK-2.11, a utility to create server-specific information data for NDS disaster recovery purposes (see the "Disaster Recovery Planning" section below)

Disaster Recovery Planning

ARCserve has been the standard backup software at CGC since the first NetWare server installation. With the new IntranetWare environment, ARCserve 6.1 was certified for production use; however, a project is currently underway at CGC to review other backup/restore solutions. Using Auto-Pilot jobs, incremental backups are made on weekdays and a full backup is performed every Friday evening. Unlike NetWare 3, where the bindery files are server-centric, NDS is distributed across servers; therefore, a new procedure for NDS backup and recovery had to be developed and adopted.

The October 1996 issue of Novell Application Notes contained an excellent article entitled "Backing Up and Restoring Novell Directory Services in NetWare 4.11." The article described how to devise a comprehensive backup plan for NetWare 4 networks and outlined the procedures for properly restoring NDS information in various example scenarios. Using the information presented in that AppNote, especially the "Loss of a Server in a Multi-Server Network Where Replicas Exist" scenario, the team created and tested an NDS disaster recovery plan with favorable results. At the heart of the NDS disaster recovery procedure in a multi-server, multi-partition environment is the regular backup of the server-specific information (SSI) files.

The TSA410.NLM shipped with IntranetWare provides a new resource called "Server Specific Info." This SSI resource packages critical server-centric information into the following five files that can be backed up and subsequently used for recovery purposes:

  • SERVDATA.NDS. This file contains server-specific NDS information. It is used by the INSTALL.NLM to recover trustee assignments and other NetWare information.

  • DSMISC.LOG. This ASCII text file contains a list of replicas, including replica types, that the server held at the time of backup. It also lists the other servers that were in the failed server's replica ring.

  • VOLSINFO.TXT. This ASCII file contains information about the server's volumes (name space, compression, and data migration settings) at the time of backup.

  • STARTUP.NCF. This is the server's STARTUP.NCF file from the DOS partition.

  • AUTOEXEC.NCF. This is the server's AUTOEXEC.NCF file.

There are two ways in which the SSI resource can be backed up. The first is to select "Server Specific Info" from the Resource list when performing a file system backup. The other is simply to select a full file system backup of the entire NetWare server. With ARCserve, however, no "Server Specific Info" resource is available for selection in the ARCserve Manager. After numerous phone calls to Cheyenne Technical Support in Toronto, it was still not clear whether ARCserve would back up and restore the SSI information. As a result, DreamLAN Network Consulting Ltd. developed the MakeSSI NLM utility at the request of CGC.

MakeSSI is a NetWare NLM that uses Novell's SMS (Storage Management Services) API calls to create the above five SSI-related files. In addition, MakeSSI creates two more files, DOSDRIVE.LST and PARTINFO.LST. The DOSDRIVE.LST file contains a directory listing of the server's DOS partition--very handy if you have installed third-party drivers not included on NetWare's CDs. The PARTINFO.LST file contains partition information, which is only useful for NetWare 4.10 servers.

During development of MakeSSI, it was found that the updated TSA410 module for NetWare 4.10 had two shortcomings. First, it did not create a DSMISC.LOG file as was the case for NetWare 4.11; instead a DSMAINT.LOG was created. Second, no useful partition information was recorded in this DSMAINT.LOG file. Therefore, to bridge this gap for NetWare 4.10 servers, MakeSSI creates a PARTINFO.LST file containing the same necessary partition information as found in DSMISC.LOG on NetWare 4.11 servers.

Part of the standard backup procedure is to back up the SSI files. In the Auto-Pilot script, MakeSSI is launched as part of ARCserve's pre-execution script. So, when ARCserve runs a few minutes later to back up the file system, the most current set of SSI files is available and ready to be backed up.
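As an illustration of this arrangement, a pre-execution script along these lines could be used (the script name and the exact MakeSSI load syntax are assumptions; the utility's own documentation governs):

```
; Hypothetical ARCserve pre-execution script (e.g., PREBACK.NCF)
; Regenerates the SSI files so the backup job that follows picks up
; current copies of SERVDATA.NDS, DSMISC.LOG, VOLSINFO.TXT,
; STARTUP.NCF, and AUTOEXEC.NCF (plus DOSDRIVE.LST and PARTINFO.LST).
LOAD MAKESSI
```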

Documentation

Throughout the planning phase of the project, the team kept detailed meeting notes and documentation. Each time they encountered and resolved an issue or problem, they carefully documented the symptoms and solutions. This information was then distributed to the rest of the project team, and formed part of a "solutions database" that could be referenced in the future. Fortunately, most of the roadblocks were easily resolved using the Novell Support Connection Knowledgebase available at http://support.novell.com.

Only a small percentage of the issues required detailed research and workarounds; the SysOps of Novell Support Connection forums on CompuServe were most helpful.

All of the NDS tree and security designs and migration procedures described in this AppNote were first implemented and tested in the project development lab. The development lab environment of two locations separated by a WAN link allowed the study of NDS traffic and performance across WAN links. As a result of the testing, detailed step-by-step procedures were developed and documented. Of particular importance were the step-by-step procedures for:

  • Installing and configuring IntranetWare servers

  • Across-the-wire migration from NetWare 3.12 to IntranetWare

  • Modifying the workstations' NET.CFG files

  • Server-recovery after failure

These documents were then field-tested by the staff at PPL. Improvements on procedures were made based on the PPL experience, before a full-fledged rollout took place using these documents.

The details of the installation and upgrade procedures are discussed in the next section. (Any changes necessitated during the rollout phase will be noted in Part 2 of this AppNote.)

Pre-Production Lab Testing

The Pre-Production Lab (PPL) served as a staging and final testing area for any PC- and LAN-based application rollout projects. For example, in order to roll out a new application to the enterprise, the project leader must document the exact procedures needed for the rollout. With the assistance of the PPL staff, the project leader then goes through the documented procedures in the PPL lab, where "standard" workstations are used. If a given step in the procedure fails, a solution or workaround must be found and the procedure modified. Until PPL certifies the project, no new applications or workstation configuration changes (including any patches to the desktop operating system, such as NT's Service Packs) will be made available to the general user community.

Installing the PPL Tree

In order to test the migration processes and software compatibility in a near-production environment, another separate NDS tree was created for PPL use. Because of the isolated environment in the PPL lab, a much simpler PPL tree (see Figure 6) was created for the certification process.

Figure 6: A high-level view of the PPL tree.

Testing Server Installation Procedure

From the outset, the team decided to use the across-the-wire migration method. This would provide a rollback option to the old NetWare 3.12 server if necessary.

In the PPL, a brand new IntranetWare server was created using the standard NetWare server installation procedure. First, a 150MB DOS partition was created: the standard server configuration (see the Server Configuration section above) called for a DOS partition equal to the amount of server RAM plus 50MB, and this server had 96MB of RAM. The INSTALL program was launched from the NetWare CD, using a locally attached CD-ROM drive. The English language module was selected. In order to exercise full control during the installation, the Custom Install of NetWare 4.11 option was used.

Of the 21GB of disk space, 750MB was allocated to SYS: while the rest was assigned to the DATA: volume. Both volumes used 64KB disk blocks. Suballocation was enabled on both volumes but compression on SYS: was disabled.

As this server was the first IntranetWare server in the PPL tree, a new tree was created as part of the installation process. The tree was named CGC_PPL. The server was a Single Reference Time Server (being the first server in the tree) and was placed in the Eastern Standard Time Zone. The context of the server within the tree was PPL.CGC.CGEI.

To facilitate monitoring by HP OpenView, TCP/IP was enabled on the server.

No major problems were encountered in following the documented installation procedure; however, the documentation was corrected with a few notes regarding SMP (Symmetric Multiprocessing) support drivers and the order of a couple of steps.

Testing Data Transfer Procedure

The data transfer consisted of two phases: the transfer of user-related information and the transfer of files. In order to preserve user passwords, the bindery files from the NetWare 3.12 server were imported into the NDS. The following steps were used to prepare the bindery files:

  1. A list of existing print servers, print queues, users, and groups was printed out using PRTINFO:

    PRTINFO servername -pq -r      (print queues)
    PRTINFO servername -ps -r      (print servers)
    PRTINFO servername -user -r    (users)
    PRTINFO servername -group -r   (groups)

    Sample output from PRTINFO follows:

    F:\HOME>prtinfo lab312 -user -r

         PrtInfo v1.20 [s/n: DLAN/970914-DLAN]
         (Extracts Bindery User/Group/PrintQ/PrintServer Info)
         Copyright (C) 1997, DreamLAN Network Consulting Ltd.  All Rights Reserved.
         Licensed to Consumers Gas Corporation

         Report generated on 25 Sep 97 at 00:56:49.

         All known Users found on server LAB312:
         CSADMIN
         GUEST
         JWO (Judy Olivier)
         KUOP (full name from syscon)
         SUPERVISOR
         UNIX_SERVICE_HANDLER
         TEST
         USER1
         USER2
         USER3

         Report file [USER.LOG] generated.
  2. Identified all groups and users in the current bindery that already existed in the NDS destination container. Resolved any conflicts. Renamed the EVERYONE group to include the name of the source server; for example, EVERYONE to EVERYONE_ACCT1.

  3. Used DUPBIND to identify any objects that had the same name but were of different object types; for example, a group and a user with the same name. Resolved any duplicate names. Sample output from DUPBIND follows:

    F:\HOME>dupbind

          Duplicate bindery object scanner, v1.0
          Copyright 1993. Novell, Inc.
          by Morgan B. Adair
          Novell Systems Research Department
          Distribute freely.

          Scanning bindery on server LAB312 for duplicate static bindery objects
          TEST
           Object ID: 2A00001Ch
           Object Type: User group
          Test
           Object ID: 12000010h
           Object Type: User
  4. Ran BINDFIX to remove obsolete MAIL directories, trustee assignments, and to compress the bindery files. Ran BINDFIX a second time to obtain a set of "clean" NET$*.OLD files.

  5. Ran VREPAIR on the NetWare 3.12 server to correct any errors on the DATA: volume; no data from SYS: was transferred.

  6. The system login script (SYS:PUBLIC\NET$LOG.DAT) was printed out and used to update the NDS container login script as needed.
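The duplicate-name check that DUPBIND performs in step 3 can be sketched in a few lines of Python; the function name and data shapes here are illustrative, not DUPBIND's actual implementation:

```python
# Flag bindery object names that collide case-insensitively across object
# types (e.g. a user "Test" and a group "TEST"), as in the DUPBIND output.
from collections import defaultdict

def find_duplicates(objects):
    """objects: iterable of (name, object_type) pairs.
    Returns {lowercased name: [(name, type), ...]} for names that appear
    under more than one object type."""
    by_name = defaultdict(list)
    for name, obj_type in objects:
        by_name[name.lower()].append((name, obj_type))
    return {
        key: entries
        for key, entries in by_name.items()
        if len({t for _, t in entries}) > 1
    }
```

For the LAB312 example above, `find_duplicates([("TEST", "User group"), ("Test", "User")])` reports the `TEST`/`Test` collision.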

Importing NetWare 3.12 Bindery Files into NDS

Once the above steps were completed, the SYS:SYSTEM\NET$*.OLD files from the NetWare 3.12 server were transferred to the SYS:SYSTEM directory on the new IntranetWare server. The following steps were used to import the NetWare 3.12 bindery files into NDS:

  1. The NET$*.OLD files were renamed to NET$*.SYS.

  2. A Read/Write replica of the partition containing the container in which the User and Group objects would reside was placed on the IntranetWare server.

  3. The bindery context of the IntranetWare server was set to the container into which the objects would be imported.

  4. At the IntranetWare server console, the INSTALL NLM was used to import the bindery information into NDS, using the Directory Options > Upgrade NetWare 3.x Bindery Info To Directory option.

At this point, the NetWare 3.12 bindery was migrated into the NDS. The files were then copied from the NetWare 3.12 server to the new IntranetWare server, followed by some NDS cleanup.

The file copying was done using ARCserve 6.1's Server-to-Server Copy option. This allowed trustee, name space, IRM, and directory restriction information to be transferred. One major lesson the team learned here was that ARCserve 6 must be installed on the NetWare 3.12 server instead of on the IntranetWare server; otherwise, the trustee information would not transfer correctly, and many of the trustee assignments would show up as [Unknown].

After the team migrated the user information and data files, some NDS cleanup was done. For example, a utility called CLEANUP was used to convert all object names to lower case, to provide proper case syntax in the Full Name field of User objects, and to replace any spaces in object names with underscores. Then the GRPNAME utility was used to populate the Full Name field of group objects. All bindery objects (any object with a "+" and a number at the end of the object name, such as @0b410000nwda+586) were deleted from the NDS.
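The kind of normalization CLEANUP performed can be illustrated with the following Python sketch (the exact rules the utility applies may differ, and the function names are illustrative):

```python
# Illustrative version of the post-migration cleanup described above:
# lower-case object names, replace embedded spaces with underscores,
# title-case the Full Name field, and detect leftover bindery objects
# (names ending in "+" followed by a number).
import re

def clean_object_name(name):
    """Lower-case the object name and replace spaces with underscores."""
    return name.lower().replace(" ", "_")

def clean_full_name(full_name):
    """Give the Full Name field proper case, one capital per word."""
    return " ".join(w.capitalize() for w in full_name.split())

def is_bindery_artifact(name):
    """Leftover bindery objects carry a '+' and a number at the end of
    the name, e.g. '0b410000nwda+586'."""
    return re.search(r"\+\d+$", name) is not None
```

For example, `clean_object_name("JOHN DOE")` yields `john_doe`, and `is_bindery_artifact` flags names like `0b410000nwda+586` for deletion.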

The system login script from the NetWare 3.12 server was then cut-and-pasted into the container login script, and edited as necessary to ensure drive mappings were made to the new IntranetWare server. Old print queues and print server objects were deleted and then recreated. This allowed the JetDirect cards to be reconfigured for NDS mode and to redirect the print queues to the DATA: volume.

And lastly, standard server-related applications, such as EDM and PowerChute, were reinstalled to ensure that the NetWare 4 versions of their NLMs were used.

The utilities mentioned in this section are available either from Novell or DreamLAN Network Consulting Ltd.:

  • DUPBIND.EXE was developed by Novell Research, and is available from Novell's FTP server (ftp.novell.com) in a file called AN304X.ZIP.

  • PRTINFO.EXE, CLEANUP.EXE, and GRPNAME.EXE were developed by DreamLAN, and are part of their "NDS Migration ToolKit". You can obtain the latest version from DreamLAN's Website at:

    http://ourworld.compuserve.com/homepages/dreamlan.

Testing Workstation Upgrade Procedure

Since the current workstation configuration used the VLM client, no software additions to the Windows 3.1x environment were needed to connect to NDS. The only changes required were to the NET.CFG file located in the C:\NWCLIENT directory on each workstation. The changes were implemented via the NetWare 3.12 system login script prior to migrating the server to IntranetWare. The following was added to the end of the system login script:

IF NETWORK_ADDRESS = "360D7936" OR NETWORK_ADDRESS = "97DAB80A" THEN BEGIN
     GOTO NEXT
END

IF OS <> "WINNT" THEN BEGIN
     #NDS
END

NEXT:

The NDS.COM resides in SYS:PUBLIC and, for security reasons, is a compiled version of a batch file that makes updates to the NET.CFG. PC Magazine's BAT2EXE utility was used to convert the batch file into an executable.
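The NDS.BAT source itself was not published, but its effect can be sketched as follows: insert the NDS-related lines into the "NetWare DOS Requester" section of NET.CFG, skipping any that are already present. The function name and the in-place insertion strategy in this Python sketch are assumptions for illustration:

```python
# Sketch of the NDS.BAT/NDS.COM logic: add the NDS client lines to the
# "NetWare DOS Requester" section of NET.CFG if they are not already there.
NDS_LINES = [
    "     NETWARE PROTOCOL = NDS BIND",
    '     NAME CONTEXT = "PPL.CGC.CGEI"',
    "     VLM = AUTO.VLM",
    "     VLM = RSA.VLM",
]

def add_nds_lines(net_cfg_text):
    """Insert the NDS lines after the 'NetWare DOS Requester' header,
    skipping any that already exist (case-insensitive match)."""
    lines = net_cfg_text.splitlines()
    existing = {l.strip().lower() for l in lines}
    to_add = [l for l in NDS_LINES if l.strip().lower() not in existing]
    out = []
    for line in lines:
        out.append(line)
        if line.strip().lower() == "netware dos requester":
            out.extend(to_add)
            to_add = []
    return "\n".join(out)
```

Because existing lines are skipped, running the update a second time leaves the file unchanged, which matters when the login script runs at every login.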

A couple of minor programming logic errors were identified while updating the workstations in PPL. The NDS.BAT/NDS.COM was updated accordingly. Here is the updated NET.CFG:

;UPDATE COMPLETE
Link Driver LANSUP
     FRAME TOKEN-RING
     FRAME TOKEN-RING_SNAP
     PROTOCOL IPXODI E0 TOKEN-RING
     NODE ADDRESS
     MAX FRAME SIZE 4216

PROTOCOL IPXODI
     BIND LANSUP
     IPX PACKET SIZE LIMIT 4160
     IPX RETRY COUNT 80          ;NORMAL PC
     ;IPX RETRY COUNT 5500       ;REMOTE TRAM USER

;*********Parameters for Remote TRAM User**********
     ;SPX Connections 30
     ;SPX Abort Timeout 5500
     ;SPX Listen Timeout 2000
     ;SPX Verify Timeout 1000

NetWare DOS Requester
     FIRST NETWORK DRIVE = F
     NETWARE PROTOCOL = NDS BIND
     PREFERRED SERVER = PKWY_CLEAN_APP
     NAME CONTEXT = "PPL.CGC.CGEI"
     VLM = AUTO.VLM
     VLM = RSA.VLM
     SIGNATURE LEVEL 0
     SHOW DOTS ON
     AUTO RETRY = 5
     BIND RECONNECT = ON
     FORCE FIRST NETWORK DRIVE = ON

Testing NDS Security Configuration. An Organizational Role object called Help Desk was created in the PPL.CGC.CGEI context of the PPL tree. The OR was granted the various rights outlined in the "Restricting Help Desk Access" section over a number of User objects in the same context. A User object called Victor (for Victor Ward, a Help Desk Representative) was then made an occupant of the OR. Victor then logged in under his userid and performed various day-to-day Help Desk functions on the managed users--such as changing passwords, resetting intruder lockouts, and resetting grace logins--without any problems. Victor was not able to modify any NDS attributes outside the management scope of Help Desk Services.

Software Compatibility Testing

CGC has been running VLMs for quite some time with NetWare 3.12 and found no compatibility issues with any of their business applications--such as Lotus AmiPro, Lotus Organizer, Lotus cc:Mail, Microsoft Excel, and Attachmate 3270--or their in-house developed applications. The only concern was whether the introduction of the two extra VLM modules--NDS.VLM and RSA.VLM--would cause any compatibility problems.

Testing Business Applications. Because PPL is used to test many different applications, many of the testing procedures have been automated using scripts and macros. Therefore, it was easy to put the updated workstations, with different hardware configurations, through their paces in running the various business applications. No incompatibilities with any of the applications were detected when using the updated workstations.

Testing NetWare 3.12 Coexistence. Because of the number of servers at CGC, it was not feasible to upgrade all of them to IntranetWare overnight. Therefore, it is vital that the IS support staff be able to access NetWare 3.12 and IntranetWare servers simultaneously. In PPL, coexistence with NetWare 3.12 servers was tested, and access was found to be "transparent" to the users. The only thing for the support staff to remember is that once attached to an IntranetWare server, one must use the /B parameter with the LOGIN command in order to log into a NetWare 3.12 server.
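For example, a technician already attached to an IntranetWare server would log into a remaining NetWare 3.12 server with a command along these lines (the server and account names are illustrative):

```
LOGIN LAB312/CSADMIN /B
```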

Summary

This AppNote is Part 1 of 2 regarding the migration project from NetWare 3.12 to IntranetWare at Consumers Gas Company, Ltd. in Ontario, Canada. This AppNote described the planning and testing processes in detail, including NDS tree design, time synchronization, and the data migration strategy. With a two-person team dedicated full-time to the project, the NDS tree design, testing, and certification were completed within a four-month time frame. The second part of the AppNote will provide first-hand information and experience on the actual migration phase of this project, scheduled to start during the last quarter of 1997.

The utilities discussed in this AppNote may be obtained from the following sources:

  • The SNTPCLNT software can be downloaded from Neatech's web site at http://www.neatech.ch/sntpclnt.

  • DUPBIND.EXE is available from Novell's FTP server (ftp.novell.com) in a file called AN304X.ZIP.

  • PRTINFO.EXE, CLEANUP.EXE, and GRPNAME.EXE can be found at DreamLAN's Website at http://ourworld.compuserve.com/homepages/dreamlan. (A copy of AN304X.ZIP is also available for download from the DreamLAN web site.)

* Originally published in Novell AppNotes

