
Design Rules for NDS Replica Placement


JEFFREY F. HUGHES
Senior Consultant
Novell Consulting Services

BLAIR W. THOMAS
Senior Consultant
Novell Consulting Services

01 Jan 1997


Discusses seven basic rules for placing NDS replicas to provide fault tolerance and reduce synchronization traffic in a NetWare 4.1x network.

Excerpted from Chapter 5, "Replicate for Fault Tolerance and to Reduce Synchronization Traffic" in Novell's Four Principles of NDS Design, by Jeffrey F. Hughes and Blair W. Thomas (Novell Press, 1996, ISBN 0-7645-4522-1). To order this book, call 800-762-2974 or 317-596-5200.

Introduction

Novell Directory Services (NDS) is a global distributed name service that can be split into partitions, which are logical sections of the NDS tree or database. The partitions can then be distributed or placed on any number of NetWare 4.1 servers. An NDS replica is the physical copy of a partition that is stored on a specific NetWare 4.1 server. Any NDS partition can have multiple replicas or copies.

One of the most important responsibilities of the network and NDS administrators is to design the placement of the NDS replicas. Specific replica design rules have been created to assist you in meeting the design objectives for replicas and in understanding how the replicas should be implemented throughout the NetWare 4.1 servers on the network. The following are the design rules for replica placement:

  • Always have three replicas for fault tolerance.

  • Replicate locally.

  • Limit the number of replicas per partition to a maximum of 7 to 10.

  • Limit the number of replicas per server to a maximum of 15.

  • Replicate to reduce subordinate references.

  • Replicate to provide bindery service access.

  • Replicate to improve name resolution.

These design rules must all be considered when organizing the replicas for your network. As we discuss each of the design rules in the following sections, we will stay focused on the two replica design objectives: providing fault tolerance and reducing synchronization traffic.

Always Have Three Replicas for Fault Tolerance

NDS replicas increase the availability (fault tolerance) of a partition. A replica increases the availability because the partition now has more than one copy of its information. For example, if a server holding a replica of the partition becomes unavailable, then the users can use another copy or replica for authentication and updates.

A primary goal of NDS replicas is to eliminate the single point of failure for an NDS partition. Having multiple replicas of a partition on separate servers increases the availability of object information if one of the servers should become unavailable. In Figure 1, the NYC and SFO partitions have been replicated to multiple servers to provide fault tolerance for each partition. If one of the servers holding a replica becomes unavailable, another server can respond to the NDS requests.

Figure 1: The NYC and SFO partitions have been replicated to multiple servers on the network to provide fault tolerance for the respective NDS partitions.

We strongly recommend that you always have at least three replicas for each of the NDS partitions in your tree. This means you should always have one master and two read/write replicas for every partition. Three replicas will provide you with adequate fault tolerance and still minimize the synchronization traffic between the servers.

The NetWare 4.1 installation program automatically creates up to three NDS replicas for fault tolerance. When you install an additional server into the tree, the installation program checks the partition in which the server object is being placed. If that partition has fewer than three replicas, the installation program places a read/write replica on the new server.

If the server being installed is an upgrade from a server with an existing bindery, such as NetWare 3.12, the installation program automatically converts the bindery to an NDS replica of the partition where the server is being installed. A NetWare 3.12 server being migrated or upgraded receives either a master or read/write replica of the partition even if there are already three other replicas for that partition.

For example, in Figure 2 the NYC partition has been replicated automatically to the servers NYC-SRV1, NYC-SRV2, and NYC-SRV3 by the installation program. Since NYC-SRV1 was the first server installed into the NDS tree and has the master replica for the [ROOT] partition, it will also receive the master replica for the NYC partition. The others will receive read/write replicas until there are at least three replicas for the NYC partition. Notice that a fourth server, NYC-SRV4, has been installed into the partition but did not receive a replica automatically. It is assumed that NYC-SRV4 is a new server and was not migrated from NetWare 3.12 (meaning there was no bindery to convert to a replica).

Figure 2: The NYC partition is replicated automatically to the first three servers NYC-SRV1, NYC-SRV2, and NYC-SRV3 by the installation program. A fourth server, NYC-SRV4, was installed into the partition but did not receive a replica automatically.

These replicas are created automatically only for the partition where the server object is being installed. The other partitions in the tree are not affected. The installation program performs this automatic replication to support fault tolerance of the NDS tree information. If you are comfortable with where the three automatic replicas were placed during installation, you do not need to change any of the replicas for a partition.
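To make the installer's automatic placement behavior concrete, here is a minimal Python sketch of the decision rule described above. The function and data structures are our own illustration, not anything taken from NetWare itself:

def place_replica_on_install(partition_replicas, new_server, upgraded_from_bindery):
    # partition_replicas: list of servers already holding full replicas
    # of the partition where the new server's object is being installed.
    if upgraded_from_bindery:
        # A migrated NetWare 3.x bindery always becomes a replica,
        # even if three replicas already exist for the partition.
        partition_replicas.append(new_server)
        return "read/write"
    if len(partition_replicas) < 3:
        # New servers receive a read/write replica only until the
        # partition has three replicas (one master, two read/write).
        partition_replicas.append(new_server)
        return "read/write"
    return None  # no replica placed automatically (the NYC-SRV4 case)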

Replicate Locally

You can increase the performance of NDS access through the physical placement of the replicas. Since replication enables you to have multiple replicas of a partition, it makes sense to place those replicas local to the users who need the information. Having a replica locally decreases the time needed to authenticate, make changes, perform searches, and extract NDS information.

Thus, the best method to guarantee these efficiencies in your design is to replicate each of the NDS partitions locally. This means that you would place all the replicas of a partition on the NetWare 4.1 servers that are in the same location as the partition. You should never place a replica across a WAN link if you have available servers that are local. A local server is defined as a server that is on the same side of your WAN link as each of the other servers holding replicas.

Remember, one of the objectives for proper placement of the replicas is to reduce the synchronization traffic between servers holding copies of a partition. Therefore, if you place a replica on a server that is physically located across a WAN link then you have increased the synchronization traffic. Each server with a copy or replica of the same partition will now have to communicate across the WAN link to keep all of the information in the partition synchronized. This increases both the time and effort for each of the servers.

Another drawback of placing a replica across the WAN link is that most, if not all, partition operations need to communicate with each of the replicas during the operation. For example, if a server is not available during a partition operation because a WAN link is temporarily down, the operation will wait until it can communicate with that server before completing. We highly recommend that, before starting an operation, you verify communication between the servers holding the replicas of the partition and the servers holding the master replicas of any other partitions involved.

Figure 3 shows an example of placing a replica that is not local to the other replicas of the NYC partition. The servers NYC-SRV1, NYC-SRV2, and NYC-SRV3 all have replicas and are all local servers to the partition NYC and each other. On the other hand, the server ATL-SRV1 should not be holding a replica of the partition because it is physically located across a WAN link in the ATL location.

On the other hand, following the rule of placing replicas locally guarantees that users will always retrieve their personal information from servers that are physically close to them. Ideally, the replica should be placed so that the user's information is stored on the server that also stores the user's home directory. This may not always be possible, but it does improve users' access to the NDS objects and properties. For example, during the login process the user will map drives to volumes, capture to print queues, and access several of the user properties. NDS will execute each of these requests regardless of where the replicas are stored. However, login speed will increase if users find the requested information on their preferred server.

Figure 3: You should always try to place replicas on local servers in the partition. This means that the replicas for a partition are placed on the servers that are in the same location or site. Do not place a replica across a WAN link from the original partition.

Replicas for Remote Offices with One Server

As stated, two of the primary design rules are that you always have three replicas of a partition and that you always replicate the partition on local servers. However, meeting these criteria may not always be possible. For example, suppose a partition is created for a small remote office location on your network that has only one server. The master replica is held on the central server, which holds the parent partition. Although it is recommended that you place at least two more read/write replicas for fault tolerance, it is also recommended that you place those replicas on local servers or servers that are in the same physical location as the master replica.

This type of situation, a small field or remote office with only one server, is an exception to the design rules. The following questions are frequently asked:

  • What happens if the location or site has only one server?

  • How do I create three replicas for fault tolerance?

  • Do I still partition and replicate locally?

In the case of a remote site with just one server, you should still partition at that site and place a replica of the partition on the local server. In order to replicate for fault tolerance, a second replica of the partition must be placed on a server in the nearest location. Although the design rule states that you should not place replicas across WAN links, it is more important to provide fault tolerance for NDS. It is better to replicate a partition at least twice, even if across a WAN link, than to lose the NDS information if the server ever goes down.

Typically, the remote office or site contains a small number of users and objects. Therefore, although we generally recommend against replicating across your WAN, a small replica across the WAN is your only alternative for fault tolerance in this case. Novell Technical Support has stated that it is easier to recover or restore the NDS information for a partition if the partition has a current replica. Thus, for the remote office or site, replicating for fault tolerance is more important than replicating locally.

However, we do not necessarily recommend that you place two replicas on servers across the WAN link, making three replicas altogether. Although three replicas for fault tolerance is the ideal, it is better to reduce the synchronization traffic across the WAN by having only two replicas for the remote office partition.

Figure 4 illustrates how a small remote office in the ACME tree should be replicated. For this example only, assume that there is a small remote site in Boston called BOS, which is connected to the NYC hub. There is only one server (BOS-SRV1) in the remote office of Boston. You should create a small partition for BOS and replicate it to the server BOS-SRV1. You should also place a replica on the nearest server, which would be NYC-SRV1 in the New York City hub location.

Figure 4: There is only one server (BOS-SRV1) in the remote office site of Boston. You should create a small partition called BOS and replicate it to the local server BOS-SRV1 and to the server NYC-SRV1 in the New York City location. It is more important to replicate for fault tolerance than to avoid replicating across the WAN links.

Replicas for Other Types of Remote Offices

When you are developing a replica design for your company and there are remote or field offices that have been partitioned, you also need to consider a couple more situations. The following situations can cause some confusion about how to design replication properly according to the rules for fault tolerance and local replica placement.

Remote Offices that Have Two NetWare 4.1 Servers. In this case, the remote office has two NetWare 4.1 servers. You should create and place a replica of the partition on each of the local servers. This establishes two replicas for the partition defined for the remote office. Since there are already two local NetWare 4.1 servers at the remote office, you may decide that this replication is sufficient for fault tolerance. A third replica placed across a WAN link will cause additional WAN traffic that may not be needed. We typically do not recommend that you create a third replica and place it across the WAN link. However, depending on the speed of the links, it may be feasible; if the WAN link is T-1 or greater, a third replica may be appropriate.

Remote Offices that Have No Servers. The case of a remote office that does not have a local server is easier. Since the remote office does not have a server, the users are reaching across the WAN infrastructure in order to log in to servers at the hub location. The remote office location should have an organizational unit (OU) or container defined in the tree to provide better access to resources. However, the container should not be its own partition.

Since the remote office is not a partition, it will be part of the partition defined at the hub location. All the replicas for the partition defined at the hub will be stored on local servers which are the same servers that the remote office users log in to.
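Pulling the remote-office cases together, the following Python sketch summarizes the placement guidance in this section. It is an illustration of the decision logic only; the thresholds and recommendations are taken from the discussion above:

def remote_office_replicas(local_server_count):
    # Suggest replica placement for a remote office, following the
    # guidelines discussed in the preceding sections.
    if local_server_count == 0:
        # No local server: do not partition; the office belongs to the
        # partition defined at the hub location.
        return "No local partition; use the hub partition and its replicas."
    if local_server_count == 1:
        # One local server: partition locally, then place a second
        # replica on the nearest hub server for fault tolerance.
        return "Replicas on the local server and the nearest hub server."
    # Two or more local servers: two local replicas are usually enough;
    # add a third across the WAN only if the link is fast (e.g., T-1).
    return "Replicas on two local servers."

print(remote_office_replicas(1))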

Maximum Number of 7 to 10 Replicas per Partition

The maximum number of replicas per partition should be 7 to 10. This range minimizes the total amount of network traffic generated by the replica synchronization process between servers. Reducing the replica synchronization traffic meets one of the replica design objectives.

You may end up creating more replicas than you initially planned because you may need to enable bindery services on additional NetWare 4.1 servers. Bindery services requires that each server hold at least a read/write replica of the partition where the server is setting its bindery context. Although the installation program will only try to create up to three replicas per partition for fault tolerance, you may need to create more replicas simply to support bindery services on more servers.

Bindery services is also required during a NetWare 3 to NetWare 4.1 server upgrade: the upgraded server receives a read/write replica of the partition where its server object is placed, regardless of whether there are already three replicas. (This is discussed further in the "Replicate to Provide Bindery Service Access" section later in this AppNote.)

The rule of no more than 7 to 10 replicas per partition applies to full NDS replicas. A full replica is either a master, read/write, or read-only replica; it does not include subordinate reference replicas. If you have more than 10 replicas, consider splitting the partition so you can reduce the total number of replicas per partition and decrease the overhead of the replica synchronization process.

If any of the object information for a partition changes in one of the replicas, it needs to change in all the other replicas for that partition. Remember that the NDS database is loosely consistent, which means that all the replicas are not instantaneously updated. For example, each user object's information changes when the user logs in to NDS. The change is written to the replica on the server that logged the user in. These changes are then synchronized to each of the other replicas of the partition. You can see that replica synchronization increases simply by having more objects in the partition and more replicas per partition.

The replica synchronization process runs in the background and works to maintain consistency across the replicas of a partition and the entire NDS database. Although the information passed between the replicas during a synchronization exchange is limited to just the updates, it can impact traffic on the network, both local LAN and WAN, depending on where you place the replicas. Also, since the LAN infrastructure is typically faster than the WAN, we recommend placing all the replicas locally.
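As a rough illustration of why the replica count matters, consider this small Python sketch. The numbers are hypothetical; the point is simply that outbound synchronization updates grow with the number of replicas:

def daily_sync_updates(changes_per_day, replica_count):
    # Every change accepted by one replica must be propagated to
    # each of the other replicas of the same partition.
    return changes_per_day * (replica_count - 1)

print(daily_sync_updates(1000, 10))  # 9000 updates to send
print(daily_sync_updates(1000, 3))   # 2000 updates to send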

Maximum Number of 15 Replicas per Server

One of the primary objectives of the replica design is to minimize the impact of synchronization between replicas. The replica synchronization process executes on each individual NetWare 4.1 server and manages the updates to the other replicas. The process runs for each of the individual replicas held by that server. A single server cannot hold more than one replica of the same partition; a server can only store multiple replicas if the replicas are from different partitions in the NDS tree. In order to reduce the overall impact of the replica synchronization process on any one server, you should avoid having more than 15 replicas per server.

The best method to reduce the synchronization workload of the servers and the network traffic is to maintain only a few replicas per partition and just a few replicas per server. These two factors go hand in hand: keep the total number of replicas on a single server down, and avoid having more than a total of 7 to 10 replicas per partition.
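If you track replica placement in a simple data structure (see the Partition and Replica Matrix at the end of this AppNote), you can check both limits programmatically. Here is a minimal Python sketch, with the limits from this section as defaults:

def check_replica_limits(replica_map, max_per_partition=10, max_per_server=15):
    # replica_map: {partition_name: [server_name, ...]} listing the
    # servers that hold a full replica of each partition.
    warnings = []
    per_server = {}
    for partition, servers in replica_map.items():
        if len(servers) > max_per_partition:
            warnings.append("%s: %d replicas (limit %d); consider splitting"
                            % (partition, len(servers), max_per_partition))
        for server in servers:
            per_server[server] = per_server.get(server, 0) + 1
    for server, count in per_server.items():
        if count > max_per_server:
            warnings.append("%s: holds %d replicas (limit %d)"
                            % (server, count, max_per_server))
    return warnings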

Plan the placement of the file servers into the tree so that a single container does not hold servers that are on different WAN segments. A partition and its replicas should contain only file server objects that are physically close on the network.

We always recommend that you place the NDS replicas on high-end servers that will keep up with the other servers holding the other replicas. The synchronization process between all the replicas is only as fast and efficient as the weakest link. Do not place replicas on a low-performance server because it will affect the entire process.

The total number of replicas per server may vary depending on your specific network environment, server hardware, and the role of the server. For example, the number of replicas on a main file and print server, which holds the home directory for each user, should not exceed 7 to 10. An application or e-mail server could possibly hold a few more, such as 20 to 25 replicas. A NetWare 4.1 server dedicated as an NDS replica server can possibly hold up to 100 replicas. Again, the network infrastructure and the services that each server provides will affect the total number of replicas per server.

Using Dedicated Servers to Hold NDS Replicas

A dedicated NDS replica server is a high-end NetWare 4.1 server whose sole purpose is to maintain NDS replicas in the tree. If the dedicated NDS replica server is a super server with extremely fast hardware, then it could be possible to store 100 replicas or more on the server. The major dependencies would still be the network infrastructure, the speed of the links between servers, and the speed of the hardware where each of the other replicas is placed. Again, remember that each of the replicas placed on the server must be from a different partition. This assumes a very large NDS tree that has at least 100 or more separate NDS partitions.

Replicate to Reduce Subordinate References

When designing the placement of the NDS replicas, you should consider the distribution of subordinate reference replicas. You should design your replica placement to reduce the number of subordinate reference replicas for a partition. Subordinate partitions that cause subordinate reference replicas should be limited to 10 to 15 per parent. This means that a parent OU or container that is a partition should not have more than 10 to 15 child OUs beneath it that are also partitions.

It is possible to have more than 10 to 15 child partitions, which create the subordinate references. However, in order to go beyond 10 to 15 child partitions, you should understand the impact and ramifications of the placement of each of the subordinate references. Most NDS tree and partition designs will be able to meet the recommendation of 10 to 15 child partitions. If you must have more, it should be implemented in only a small portion of the tree. Although you can have more than 10 to 15 child partitions, you should never have more than 35 to 40 child partitions for any one partition. Having more child partitions causes each server holding the parent to store more subordinate reference replicas.

A subordinate reference replica provides the connectivity between the parent partitions and child partitions. A subordinate reference is essentially a pointer from the parent down the tree to the next layer or child partitions.

Subordinate reference replicas are a design issue because they participate in the replica synchronization process for each of the partitions. The major difference between a subordinate reference replica and the other replica types is the amount of information passed during synchronization. The subordinate reference replicas exchange timestamps with the other replicas and update only the top-most object of the partition, as needed. Thus, the amount of information exchanged between servers for the subordinate reference replicas is very little.

Before we discuss the subordinate reference replica issues, it is comforting to know that if you have designed your NDS tree properly, the subordinate references will be correctly distributed. A good NDS tree and partition design based on the design rules in this book will automatically distribute the subordinate references appropriately.

This means that the top of the NDS tree is based on the WAN infrastructure, with each of the WAN locations being a partition. In this fashion, the partitions are built hierarchically, in the shape of a pyramid. If you have created only a few replicas per partition, then the subordinate reference replicas will already be optimized and there is very little else that you need to change.

NDS as a system is responsible for creating, removing, and maintaining the subordinate reference replicas for each partition. Each NDS administrator should be aware (not beware) of the subordinate reference replica placement and understand its impact on the synchronization process. This means you should try to understand where the subordinate references are being created and, if possible, replicate so that the overall number of subordinate reference replicas is reduced. Reducing the number of subordinate reference replicas will increase the performance of the synchronization process on each server.

One way to reduce subordinate reference replicas is to reduce the number of replicas of a parent partition. This is done by limiting the number of servers that will store a copy of the parent partition. Another way to reduce subordinate reference replicas is to place the child partitions on the same servers as their parents. The second option is not always recommended or feasible.

Example of Subordinate Reference Replicas

A good example for illustrating subordinate reference replicas is to use the [ROOT] partition, which is at the top of the tree. Figure 5 illustrates the ACME tree partitioned with the [ROOT] partition. Directly beneath the [ROOT] partition are 200 partitions for the cities or locations in the tree. This is an extremely bad tree and partition design. However, for this example assume that we have created 200 cities or locations and that each is partitioned under the [ROOT] partition.

Figure 5: Assume for this example that the ACME tree has over 200 cities, each partitioned under the [ROOT] partition.

Consider what would happen each time you place a replica of the [ROOT] partition on another NetWare 4.1 server. The NDS system will be forced to create 200 subordinate references on that server, one for each of the child partitions. This assumes that the server holds only a replica for the [ROOT] partition. From the server perspective, it now has to participate in the replica synchronization process for each of the 200 partitions. This amount of work will probably overload the one server. The server will probably not be able to keep up with all the changes in the network. You can see the problem, especially as you add more replicas of the [ROOT] partition to other servers in the network.
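The arithmetic behind this example is easy to model. The following Python sketch (our own illustration) counts the subordinate references a server must hold: one for each child of a parent partition the server replicates but does not itself hold:

def subordinate_references(server_replicas, children):
    # server_replicas: set of partitions fully replicated on the server.
    # children: dictionary mapping each partition to its child partitions.
    count = 0
    for partition in server_replicas:
        for child in children.get(partition, []):
            if child not in server_replicas:
                count += 1
    return count

# A server holding only the [ROOT] replica, with 200 city partitions
# directly beneath [ROOT], receives 200 subordinate references.
cities = ["CITY%d" % i for i in range(200)]
print(subordinate_references({"[ROOT]"}, {"[ROOT]": cities}))  # 200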

In this case of 200 cities being partitioned off the [ROOT], the issue is not just the number of replicas held by one server, but also the amount of traffic between the replicas. Although very little data is passed between a subordinate reference replica and the other replicas, each replica must be contacted. Subordinate reference replicas, by nature, are typically stored across the WAN links and the impact of synchronization to more servers could be an issue.

Another concern is the effect that the large number of subordinate reference replicas has on partition operations. Each replica of a partition must be contacted before a partition operation can complete successfully. (See the "Managing NDS Replicas" section in Chapter 5 of the book.)

The simplest way to eliminate all of these problems is to design your tree and partitions like a pyramid based on the WAN infrastructure, which naturally distributes the subordinate references appropriately. (The rules for designing the NDS tree and partitions correctly are discussed in previous chapters of the book.)

To fix the problem with the ACME tree depicted in Figure 5, you simply need to change the design of the tree. The quickest design fix is to add another layer of OUs or containers directly under the [ROOT] partition and then create each of the containers as partitions. By doing this you will distribute the subordinate reference replicas across more NetWare 4.1 servers. Figure 6 illustrates how adding the regional layer OUs and partitions in the ACME tree will help distribute the subordinate reference replicas.

Figure 6: Adding a layer of regional OUs and partitions in the ACME tree design will help distribute the subordinate reference replicas.

Subordinate References' Effect on Partitioning Operations

Partitioning operations are another issue that can affect the placement of the subordinate reference replicas. During any partitioning operation, all the replicas in the replica list, including the subordinate reference replicas, must be contacted in order to complete the operation. If, for any reason, a server holding a replica is not available, the partition operation will keep trying to contact that replica until it can be reached, and the operation will not complete until then. Thus, the greater the number of subordinate references, the greater the possibility that they will affect the efficiency of partitioning operations.

You may be wondering why the subordinate reference replicas are contacted during a partition operation. Even though subordinate references do not contain all the partition information, subordinate references do contain the partition root object and its properties, such as the replica list, which could change during the operation.

Replicate to Provide Bindery Service Access

The NetWare 4.1 feature known as bindery services enables the NetWare 4.1 server to respond to the bindery calls made by the bindery-based utilities and applications. Bindery services lets these applications run on NetWare 4.1 and NDS without modifications.

In order for a NetWare 4.1 server to support bindery services, it must hold either a read/write or master replica of the partition where the server is installed or where the server is setting its bindery context. You will need to place read/write replicas on the NetWare 4.1 servers that support bindery services. This requirement may force you to place more NDS replicas than initially planned.

You will need to plan bindery services requirements and replica placement together in order to provide proper access while minimizing the total number of replicas per partition and per server. Remember, a single partition should not have more than 7 to 10 replicas. Bindery services may force you to exceed these numbers and make you split the partition just to reduce the number of replicas.

Bindery services is also required during a NetWare 3 to NetWare 4.1 server upgrade. For example, if you are upgrading a NetWare 3 server, a read/write replica of the partition is placed on the upgraded NetWare 4.1 server. The read/write replica is placed on the server regardless of whether there are already three replicas, as described in the "Always Have Three Replicas for Fault Tolerance" section earlier in this AppNote. The assumption is that since the server is being migrated from NetWare 3 and its bindery, bindery services will be needed in NetWare 4.1.

Server Bindery Context

The bindery context set at the server is officially known as the server bindery context. The server bindery context is the name of the container object(s) where bindery services is set for the server. The server bindery context is also referred to as the bindery path. Figure 7 illustrates how the server NYC-SRV1 needs to hold a read/write replica of the NYC partition in order for the server bindery context at OU=NYC to be available.

Figure 7: Setting the bindery services for partition NYC requires that a read/write or master replica be stored on the server NYC-SRV1. The NetWare 4.1 server is setting the server bindery context to OU=NYC.

In Figure 8, all the objects in the container OU=HR.OU=NYC.O=ACME that are NetWare 3 bindery object types can be viewed and accessed as if they were in a bindery. The NDS objects that are available through bindery services include the user, group, print queue, print server, and profile objects. These are available because they are the only object types that were defined in the NetWare 3 bindery. The newer NDS objects, such as directory map, organizational role, computer, and alias, will not show up as bindery objects in your Directory.

Figure 8: Server NYC-SRV4 has a bindery context set to OU=HR.OU=NYC.O=ACME.

You can set the bindery context using a SET command at the console. In order to set the bindery context for the server NYC-SRV1 in the ACME tree, enter the following at the server console prompt:

SET SERVER BINDERY CONTEXT = "OU=NYC.O=ACME"

Bindery-based clients and servers can access the objects subordinate to the containers where the bindery context is set. Because NetWare 4.1 now allows you to set multiple bindery contexts for the server, the handling of an invalid context error has changed: NetWare 4.1 cannot fail the entire SET command just because one of the containers specified is invalid, since the others may be valid.

In order to see or verify the effective bindery path for a server, you should enter the following command at the server console prompt:

CONFIG

When you check the effective bindery path by using the CONFIG command, you will see the distinguished names of each NDS container object listed on separate lines. Each container listed here is a valid bindery path. Objects created through bindery services are created as subordinate to the first effective (or valid) bindery path.

In previous versions of NetWare 4, you were limited to setting the bindery context at only one container object. With NetWare 4.1, you can now set up to 16 container objects as the bindery context.
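For example, following the command syntax shown earlier and assuming the semicolon-separated form used for multiple contexts, a server could set bindery contexts for two containers from the ACME examples as follows:

SET SERVER BINDERY CONTEXT = "OU=NYC.O=ACME;OU=SFO.O=ACME"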

Replicate to Improve Name Resolution

The mechanism NDS uses to find an object's location is referred to as name resolution. If the object information is not stored locally, the server must walk the directory tree to find the server that has the correct information. Every replica maintains a set of pointers to all the other replicas of the same partition. Using these pointers, NDS can locate the partitions that are above and below in the directory tree. With this information, NDS can follow the pointers to locate the servers holding the requested data.
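As a highly simplified illustration of this tree-walking idea, the Python sketch below models partitions as a dictionary and walks downward from [ROOT] until it finds a partition holding the requested name. Real NDS resolution uses replica pointers and subordinate references rather than these stand-in structures:

partitions = {
    "[ROOT]": {"servers": ["NYC-SRV1"], "children": ["NYC", "SFO"]},
    "NYC":    {"servers": ["NYC-SRV1", "NYC-SRV2"], "children": []},
    "SFO":    {"servers": ["SFO-SRV1"], "children": []},
}

def resolve(target, start="[ROOT]"):
    # Walk down the partition hierarchy, following child pointers,
    # until the partition holding the target is found.
    if start == target:
        return partitions[start]["servers"]
    for child in partitions[start]["children"]:
        servers = resolve(target, child)
        if servers:
            return servers
    return None

print(resolve("SFO"))  # ['SFO-SRV1']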

To speed up locating the appropriate server, replicas can be placed closer to the users to help them find the requested information. For example, if you are trying to locate information from one side of the tree to the other, the partition whose replicas will naturally help you is the [ROOT] partition. In this case, we recommend that you replicate the [ROOT] partition to the major hub sites in your company. This is recommended only if the [ROOT] partition is set up as a small partition. Small means the [ROOT] partition should include just the [ROOT] object and the O=ACME object. See Figure 9 for an illustration of a [ROOT] partition that is created as a small partition for replication across a WAN link.

Figure 9: The [ROOT] partition can be replicated across the WAN links if it is created as a small partition containing just the [ROOT] object and O=ACME object.

Figure 10 illustrates how the [ROOT] partition can be replicated to a few strategic locations in the WAN infrastructure to facilitate improved name resolution. In this case, we have placed a copy of [ROOT] in New York City (NYC) on the NYC-SRV1 server, in Atlanta (ATL) on the ATL-SRV1 server, and in San Francisco (SFO) on the SFO-SRV1 server.

Figure 10: NDS replicas for the [ROOT] partition are stored on different servers throughout the network to facilitate improved name resolution.

Again, the [ROOT] partition should be kept small to keep NDS synchronization traffic to a minimum. A small [ROOT] partition (containing very few objects) is fairly static, so replicating [ROOT] across the WAN links is acceptable. However, do not go overboard in replicating this partition: three or four replicas of the [ROOT] partition are sufficient.

We do not recommend that you replicate any other partition across the WAN infrastructure for the purpose of name resolution. Because [ROOT] is at the top of your tree and name resolution that traverses the tree from one side to the other must go through the [ROOT] partition, it only makes sense to replicate [ROOT] at key hub locations in your tree.


Tip: NDS replicas do not provide fault tolerance for the file system stored on the NetWare 4.1 servers. Replicas only increase the availability of the NDS object and property information stored in the NDS partition.

Partition and Replica Matrix

The best method for keeping track of where partitions and replicas are stored in the system is to use a Partition and Replica Matrix. This matrix will help you design and implement the partitions and replicas more efficiently and will also help you track the replica type and placement for each partition. The matrix helps you document the creation of partitions and the placement of each replica. If you need to perform any partition operation, you have a quick and easy tool to refer to.
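A hypothetical matrix for part of the ACME tree might look like the following, with M for master, R/W for read/write, and SR for subordinate reference (the placements shown are illustrative only):

Partition   NYC-SRV1   NYC-SRV2   NYC-SRV3   SFO-SRV1   SFO-SRV2
[ROOT]      M          R/W        -          R/W        -
NYC         M          R/W        R/W        SR         -
SFO         SR         SR         -          M          R/W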

* Originally published in Novell AppNotes

