
Pass It On: Synchronizing NDS Replica Information


Jeffrey F. Hughes

Blair W. Thomas

01 Mar 1997


Novell Directory Services (NDS) is a network-wide database for IntranetWare and NetWare 4 that stores information about network resources--such as users, groups, printers, servers, and volumes--and organizes these resources into a hierarchical directory structure. The NDS database can be divided into logical sections, or partitions, and these partitions can then be replicated to multiple IntranetWare and NetWare 4 servers for increased fault tolerance, improved network performance, and easier access to network resources.

Because NDS can be logically partitioned and replicated across multiple servers, these servers must ensure that every replica of a particular partition maintains the same information. To keep each replica up-to-date, IntranetWare and NetWare 4 servers use the replica synchronization process.

A LOOSELY CONSISTENT DATABASE

When you make a change to NDS, you actually modify only one replica of a particular partition. The replica synchronization process automatically sends this change to the other replicas of this partition, ensuring that each replica receives all of the changes made to the partition. (See Figure 1.) The replica synchronization process maintains the integrity of the NDS database across all of a partition's replicas.

Figure 1: All replicas of a single partition, such as the [Root] partition, work together to exchange updated NDS information.

The NDS database is loosely consistent: Because changes can be made to various replicas of a partition, not all of these replicas may contain the same information at exactly the same time. In other words, when one of a partition's replicas is modified, the other replicas do not receive the change immediately because the replica synchronization process takes time to complete. The change is propagated to the partition's other replicas over time.

Only the information that has been changed for each object or property is sent to every replica of the partition in which the object resides. Sending only changes between replicas reduces the total amount of information that must be sent, thereby keeping network traffic for the replica synchronization process to a minimum.
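This delta approach can be sketched in Python. The data layout and function name below are purely illustrative--they are not actual NDS structures--but they show the idea of sending only properties modified since the last synchronization cycle:

```python
# Hypothetical sketch of delta synchronization: each property carries a
# last-modified time, and only properties changed since the last sync
# cycle are sent. The data layout is illustrative, not an NDS format.

def changed_properties(properties, last_sync_time):
    """Return only the properties modified after the last sync cycle."""
    return {name: value
            for name, (value, modified) in properties.items()
            if modified > last_sync_time}

user_properties = {
    "Surname":         ("Hughes",           50),
    "Network Address": ("00C0.1234.ABCD",  250),
    "Last Login Time": ("1997-03-01 08:00", 250),
}

# Only the two properties touched after time 200 need to be sent.
delta = changed_properties(user_properties, last_sync_time=200)
```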

KICKING OFF THE REPLICA SYNCHRONIZATION PROCESS

The replica synchronization process is event driven, which means that any NDS event can activate this process. (An event occurs when you create or modify an object or property.) The replica synchronization process is scheduled to run on a server after an object or property that resides in a replica on the server is changed. The amount of time that elapses between an event occurring and the replica synchronization process beginning depends on the event itself. Each event has one of two flags, which determines whether or not the event is a high-priority, or high convergence, event:

  • Fast synchronization (high convergence)

  • Slow synchronization (low convergence)

Fast Synchronization

Fast synchronization occurs whenever a user updates an object. This type of synchronization takes place 10 seconds after the change is made. This slight delay enables subsequent events, if any, to be processed during the same synchronization cycle as the event that triggered this cycle.

Slow Synchronization

Slow synchronization occurs whenever a user logs in to the NDS tree. This type of synchronization takes place 30 minutes after the login event. Slow synchronization updates the following four properties of the User object, which are automatically changed when the user is authenticated to the NDS tree:

  • Network address property

  • Last login time property

  • Current login time property

  • Revision count property

The network address property is modified to reflect the current physical address of the workstation from which the user is logged in. The last login time and current login time properties are also modified to reflect their new status. Likewise, the revision count property, which records all changes made to an object, is modified to reflect the login event.
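The two-tier scheduling described above can be modeled in a few lines of Python. This is a simplified sketch that assumes one pending synchronization cycle per server; the class and names are illustrative, not NDS internals:

```python
FAST_DELAY = 10        # seconds: high-convergence events (object updates)
SLOW_DELAY = 30 * 60   # seconds: low-convergence events (logins)

class SyncScheduler:
    """Illustrative model: one pending synchronization cycle per server.
    An event can only pull the cycle earlier, so events arriving during
    the delay are processed in the same cycle."""

    def __init__(self):
        self.next_sync = None   # absolute time of the pending cycle

    def on_event(self, now, high_convergence):
        due = now + (FAST_DELAY if high_convergence else SLOW_DELAY)
        if self.next_sync is None or due < self.next_sync:
            self.next_sync = due

s = SyncScheduler()
s.on_event(0, high_convergence=True)   # cycle scheduled for t = 10
s.on_event(5, high_convergence=True)   # batched into the same cycle
```

Note how the second event does not reschedule anything: it simply rides the cycle already pending at t = 10, which is the batching behavior the 10-second delay exists to enable.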

THE REPLICA SYNCHRONIZATION PROCESS AT WORK

After an event occurs on a particular replica, the replica synchronization process reads the replica pointer table, or replica ring, which is located on each server on which a replica resides. The replica ring lists all of the servers that hold a replica of the partition. The replica synchronization process contacts each server listed in the replica ring one at a time, and these servers then send all changes that have been made since the last synchronization cycle to the other servers listed in the replica ring.

After a server successfully sends its changes to another server listed in the replica ring, the first server proceeds to the next server in the replica ring until each server has been updated. If the replica synchronization process fails and is unable to update one or more replicas during the synchronization cycle, the update is rescheduled for a later synchronization cycle.
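A minimal sketch of this walk through the replica ring, including the limit of three contact attempts per server described later in this article, might look like the following (the function and server names are hypothetical):

```python
MAX_RETRIES = 3   # the predefined maximum number of retries per server

def synchronize_partition(replica_ring, send_updates):
    """Contact each server in the ring in turn. If any server cannot be
    updated after MAX_RETRIES attempts, the whole partition is
    rescheduled for a later synchronization cycle."""
    for server in replica_ring:
        for _attempt in range(MAX_RETRIES):
            if send_updates(server):
                break                # this server is up to date; move on
        else:
            return "rescheduled"     # never succeeded: try again later
    return "complete"

ring = ["FS1", "FS2", "FS3"]
# Simulate FS2 being unreachable: the cycle ends up rescheduled.
result = synchronize_partition(ring, send_updates=lambda s: s != "FS2")
```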

Timestamps

To avoid performing updates in the wrong order, the replica synchronization process tracks the order in which events occur. Each NDS object is automatically assigned two timestamps: One timestamp indicates when the object was created, and the other timestamp indicates when the object was last modified. Each property of a particular object, on the other hand, is assigned only one timestamp, which indicates when the property was last modified.

These timestamps ensure that the replica synchronization process performs updates in the correct order for every object and property in a partition. When an event occurs, NDS issues a new timestamp and associates this timestamp with the event. The replica synchronization process refers to the timestamps associated with each object and property to determine the order in which events occurred. For example, if the same property of the same object were modified by two network administrators at approximately the same time, the replica synchronization process would update only the event with the most recent timestamp.

The timestamps for each object and property in a particular partition have unique values because these values are assigned by the replica in which the object or property exists. A timestamp consists of the following three values, which are shown in Figure 2:

Figure 2: A timestamp for each object and property consists of a seconds value, a replica number value, and an event ID value.

  • Seconds. This 4-byte value records Universal Time Coordinated (UTC) in whole seconds since 12:00 a.m. on January 1, 1970. (UTC is equivalent to Greenwich Mean Time.) These seconds indicate the actual time at which the event occurred.

  • Replica Number. This 2-byte value records the number of the replica in which the event occurred and the timestamp was issued. (The master replica assigns a number to other replicas of the same partition, ensuring that each timestamp in the partition is unique.)

  • Event ID. This 2-byte value records the order in which events occurred during a particular whole second. Because computers are so fast, many events can occur in a single second. The event ID, which is reset every second, differentiates these events from one another. The first event that occurs in a single second is assigned an event ID of 0, the second event in the same second is assigned an event ID of 1, and so on up to 65,535 (the maximum a 2-byte value can hold).
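The field sizes above come from the article; the helpers below are illustrative. Representing a timestamp as a (seconds, replica number, event ID) tuple lets Python's ordinary tuple comparison order events correctly, and the replica number keeps timestamps unique even when two replicas issue one in the same second:

```python
from struct import pack

def make_timestamp(seconds, replica_number, event_id):
    # 4-byte seconds, 2-byte replica number, 2-byte event ID
    return (seconds, replica_number, event_id)

def pack_timestamp(ts):
    """8-byte big-endian wire form: 4 + 2 + 2 bytes (sizes per the
    article; the actual NDS encoding may differ)."""
    return pack(">IHH", *ts)

# Two administrators modify the same property in the same second on
# different replicas. The replica number keeps the timestamps unique,
# and the later (greater) timestamp wins.
a = make_timestamp(857_000_000, 1, 0)   # replica 1, first event that second
b = make_timestamp(857_000_000, 2, 0)   # replica 2, first event that second
winner = max(a, b)
```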

To create accurate timestamps, servers must maintain accurate time, and this time must be synchronized between all of the servers on your network. (For information about how to perform time synchronization, see "NetWare 4 Time Sync: Synchronize Your Network Servers," NetWare Connection, Oct. 1996, pp. 36-39.) Every server on your network must create accurate timestamps to ensure that the replica synchronization process performs updates in the correct order.

Synchronized Up To Vector

Each server that holds a replica of a particular partition maintains a property called the synchronized up to vector. This property consists of a list of timestamps: one timestamp for each server in the partition's replica ring, and one timestamp for each event.

The synchronized up to vector is critical to the replica synchronization process, which refers to the list of timestamps to determine what information must be synchronized between servers that hold a replica of the same partition. Because the synchronized up to vector for a particular server contains a timestamp for the latest updates the server has received, other servers send this server only those updates that have not yet been performed.

Before a server begins the replica synchronization process, the server takes a snapshot of its own synchronized up to vector and stores this snapshot in memory until all other servers that hold a replica of the same partition have been contacted and updated successfully. The server can then perform the appropriate updates again if the replica synchronization process fails.
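The vector comparison can be sketched by modeling each synchronized up to vector as a dictionary that maps a replica number to the latest (seconds, replica number, event ID) timestamp seen from that replica. This is an illustrative model, not the on-disk NDS format:

```python
def updates_to_send(source_vector, target_vector):
    """Replica numbers whose events the source has seen but the target
    has not; only those updates need to cross the wire."""
    epoch = (0, 0, 0)   # "never heard from this replica"
    return [replica for replica, ts in source_vector.items()
            if ts > target_vector.get(replica, epoch)]

source = {1: (857_000_500, 1, 3), 2: (857_000_100, 2, 0)}
target = {1: (857_000_400, 1, 1), 2: (857_000_100, 2, 0)}
pending = updates_to_send(source, target)   # source is ahead for replica 1
```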

You can use either the NDS Manager utility or the DSREPAIR utility to view the timestamps in each server's synchronized up to vector. (The NDS Manager utility is included with IntranetWare and NetWare 4.11.) You can also use these utilities to view other replica synchronization information, such as the time at which the last updates were performed for a particular partition. (See Figure 3.)

Figure 3: The NDS Manager utility displays replica synchronization information about each partition, such as the time at which the [Root] partition was last updated.

MILESTONES OF THE REPLICA SYNCHRONIZATION PROCESS

When an event occurs in a replica on a particular server, this server is responsible for updating other replicas of the same partition. Any server that initiates the replica synchronization process is called the source server, and any server that receives updates from a source server is called the target server. Figure 4 shows how a source server sends updates to each target server, and Figure 5 shows the steps that are performed each time a source server initiates the replica synchronization process with a target server.

Figure 4: A source server must send its modified objects and properties to each server that is listed in the source server's replica ring.

Figure 5: The source server and the target server must perform several steps to complete the replica synchronization process.

  1. An event occurs on the source server, which then schedules the replica synchronization process.

    • If the event is flagged as fast synchronization, the process is scheduled to be performed 10 seconds from the time the event occurred.

    • If the event is flagged as slow synchronization, the process is scheduled to be performed 30 minutes from the time the event occurred.

  2. The source server reads its own synchronized up to vector and replica ring to determine which servers on the network hold a replica of the partition that is being synchronized.

  3. The source server connects and authenticates to a target server.

    • If the source server cannot establish a connection with the target server, the source server checks its own replica ring and proceeds to the next server listed in this ring.

    • If the source server establishes a connection with the target server, the source server requests the target server's synchronized up to vector.

  4. The target server sends its own synchronized up to vector to the source server.

  5. The source server determines whether or not it needs to send updates to the target server by comparing the timestamps in its own synchronized up to vector to the timestamps in the target server's synchronized up to vector.

    • If the source server's timestamps are not more recent than the target server's timestamps, the source server terminates its connection with the target server and proceeds to the next target server that holds a replica of the partition.

    • If the source server's timestamps are more recent than the target server's timestamps, the replica synchronization process proceeds.

  6. The source server sends updated object and property information to the target server.

  7. The target server verifies that the information sent by the source server should be updated. The target server compares its own timestamp for each object and property with the timestamp sent by the source server. The target server then updates the appropriate objects and properties. The source server repeats this process until all updates have been sent to the target server, which then sends a reply indicating whether or not the updates were received successfully.

  8. The source server sends the target server an "End synchronization" message, which includes the source server's synchronized up to vector.

  9. The target server merges the timestamps from its own synchronized up to vector with the timestamps from the source server's synchronized up to vector, using the higher timestamp values of the two vectors.

  10. The source server checks to see if all of the servers listed in its own replica ring have been updated.

    • If a particular server has not been updated, the source server tries to contact the server up to three times, which is the predefined maximum number of retries. If the source server reaches the maximum number of retries without successfully contacting this server, the source server reschedules the replica synchronization process for the entire partition.

    • If every server has been updated, the source server writes its own synchronized up to vector (as it appeared at the beginning of the process) to the partition's synchronized up to vector.

    • The source server creates a partition status attribute, which indicates the results of the replica synchronization process.

  11. The replica synchronization process is complete.
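Step 9's merge amounts to a per-replica maximum over the two vectors. Using the same kind of illustrative model--a dictionary mapping replica numbers to (seconds, replica number, event ID) tuples--the merge can be sketched as:

```python
def merge_vectors(own, received):
    """Keep, for each replica, the higher of the two timestamps so the
    merged vector reflects everything both servers have seen."""
    merged = dict(own)
    for replica, ts in received.items():
        if ts > merged.get(replica, (0, 0, 0)):
            merged[replica] = ts
    return merged

target_vec = {1: (857_000_400, 1, 1), 2: (857_000_200, 2, 5)}
source_vec = {1: (857_000_500, 1, 3), 2: (857_000_100, 2, 0)}
merged = merge_vectors(target_vec, source_vec)
```

After the merge, the target's vector is ahead of or equal to both inputs for every replica, which is why the next synchronization cycle sends only genuinely new updates.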

ACTIVATING THE REPLICA SYNCHRONIZATION PROCESS

As mentioned earlier, the replica synchronization process is automatically activated whenever an NDS event occurs. However, this process is also activated under the following circumstances:

  • When NDS transmits a replica synchronization trigger, or heartbeat

  • When you manually schedule the replica synchronization process

Replica Synchronization Heartbeat

By default, NDS transmits a replica synchronization heartbeat every 30 minutes. This heartbeat prompts each IntranetWare and NetWare 4 server that holds a replica to begin the replica synchronization process. In this way, NDS checks to see if each server is still connected to the network.

You can adjust the heartbeat time interval by changing the NDS Synchronization Interval SET parameter. As mentioned earlier, the default value of this parameter is 30 minutes, but you can define a new setting using the following SET command:

SET NDS SYNCHRONIZATION INTERVAL = [time in minutes]

You can also use the following SET DSTRACE command to adjust the heartbeattime interval:

SET DSTRACE = !H [time in minutes]

Manual Scheduling

Using the SET DSTRACE command, you can force the replica synchronization process to begin at any time. To check whether a replica has been changed on a particular server, enter the following command at the file server console:

SET DSTRACE = *S

If a replica on the server has been changed, this command will force the replica synchronization process to begin. However, if no replica has been changed, this process will not begin. To force the process to begin regardless of whether a replica on the server has been changed, enter the following command at the file server console:

SET DSTRACE = *H

REPLICA SYNCHRONIZATION TRAFFIC

To minimize the amount of traffic generated by the replica synchronization process, you must consider this process as you create replicas on the network. The following factors determine how much traffic the process generates:

  • Total number of objects per partition

  • Total number of replicas per partition

  • Total number of replicas per server

  • Distribution of subordinate reference replicas

  • Server and network performance

Total Number of Objects per Partition

We recommend that a single partition contain no more than 1,500 objects so the replica synchronization process can efficiently update all of the partition's replicas. If a partition grows larger than 1,500 objects, you should split the partition to reduce the total number of objects it contains.

Total Number of Replicas per Partition

By default, the IntranetWare or NetWare 4 installation program creates three replicas per partition to ensure fault tolerance. However, you may need to create additional replicas to support bindery services on other servers. We recommend that a single partition contain no more than five full replicas, which include master, read-write, and read-only replicas. (Subordinate reference replicas are not full replicas.) If you need to create more than five full replicas, you should split the partition to reduce the total number of replicas of this partition.

Total Number of Replicas per Server

The maximum number of replicas you should create on a particular server varies depending on the server's purpose. For example, we recommend that a typical network server hold no more than 7 to 10 full replicas, whereas a dedicated replica server can hold up to 100 replicas. Of course, only companies with large enterprise networks will want to dedicate an entire server to storing replicas.

Distribution of Subordinate Reference Replicas

As mentioned earlier, each server that holds a replica participates in the replica synchronization process. Because subordinate reference replicas are listed in the replica ring, they are contacted by source servers to receive updates.

NDS automatically places a subordinate reference replica on each server that holds a replica of a parent partition but does not hold a replica of the parent's child partition. In this way, NDS links the parent partition to its child partition.

A subordinate reference replica contains only the topmost container object in the partition, which is called the partition root object. This partition root object stores all of the information that links the parent partition to its child partition, including a list of the partition's replicas.

The replica synchronization process updates the partition root object whenever this object or one of its properties is modified. The synchronized up to property is typically the only property of this object that is changed. However, this property is updated frequently.

Although subordinate reference replicas contain little data that must be synchronized, they do participate in the replica synchronization process. As a result, a large number of subordinate reference replicas can increase the amount of network traffic generated by the replica synchronization process.

You should find out how many subordinate reference replicas are stored on your network and reduce the number of these replicas if possible. To reduce the number of subordinate reference replicas, you can reduce the number of replicas of the parent partition. You can also place child partitions on the same server as the parent partition.

Server and Network Performance

You should place every replica on a high-performance server. Because the replica synchronization process is only as efficient as its weakest link, you should not place even one replica on a low-performance server.

The speed of the replica synchronization process also depends on the speed of your network's LAN and WAN links. The faster these links are, the more efficient the replica synchronization process will be.

CONCLUSION

By understanding how the replica synchronization process works, you can properly partition and replicate the NDS database to help the servers on your network perform this process as efficiently as possible. The replica synchronization process can then keep all of the replicas in your NDS tree up-to-date without bogging down the network with unnecessary traffic.

Jeffrey F. Hughes and Blair W. Thomas are part of Novell Consulting Services. They are coauthors of Novell's Four Principles of NDS Design, Novell's Guide to NetWare 4.1 Networks, and Novell's Guide to IntranetWare Networks.

* Originally published in Novell Connection Magazine

