
ZENDS Design for Large Sites: Implementing a Replicated ZEN Container

Articles and Tips: article

TIMOTHY FORDE
Consultant
Novell Consulting, Asia Pacific Region

01 Nov 1998


Presents ways to exploit the flexibility in NDS design to deliver better application services to the desktop with ZENworks.

Introduction

Administering a large network consumes enormous resources, particularly in regard to providing a robust application delivery solution to the desktop. Z.E.N.works provides more opportunity than ever to leverage Novell Directory Services (NDS) to deliver network-centric services to the desktop via a user's "digital persona". For all the functionality Z.E.N.works delivers, however, it has not made good NDS design and configuration irrelevant. In fact, it has revived NDS design as a hot topic in the context of desktop management and application delivery, especially with Z.E.N.works' Novell Application Launcher (NAL) component.

Since Z.E.N.works and NDS are inextricably linked, I have invented the acronym ZENDS design for the title of this AppNote. This article is relevant to the functionality of Z.E.N.works 1.x, but it should be understood that optimisation strategies change over time as the products themselves evolve.

The ideas presented here revolve around exploiting the flexibility in NDS design (traditionally an infrastructure discipline) to deliver better application services to the desktop. It should be understood from the outset that this is only recommended where the integrity of NDS can be assured. The optimisation of technologies that leverage NDS, such as NAL and other Z.E.N.works components, remains subordinate to the ongoing health of NDS itself. Going beyond the basic guidelines requires an understanding of NDS design fundamentals. If the health of NDS begins to suffer as a result of a new replication strategy, for example, then the strategy must be reversed to the point where the integrity of NDS is consistently maintained. Consult the references listed at the end of this AppNote for the NDS "vital signs" that indicate good health (or otherwise).

Since this AppNote was written in Australia, it retains the author's British spelling.

Concepts and Issues

In order to assess the merits of using replicated resource containers, we must canvass the following issues, which are crucial to developing the right design solution. Most involve understanding how and where objects and files are accessed from the network.

Trade-offs between Administration and Performance

The difference between design scenarios often amounts to trade-offs between administration and performance at the desktop. Performance affects the users in an immediate sense, while the cost of administration has a more insidious effect: it diverts IT resources to unproductive work such as maintenance instead of rolling out new products or innovations which will ultimately enhance the user's productivity.

Two Phases of Accessing a NAL Object

There are two distinct phases in accessing applications from the NAL window, and they need to be considered separately when performance issues are examined.

Populating the NAL Window. This is the time taken to query NDS about every application the user is associated with by container (including inheritance), group, or personal association. It is affected by the search parameters set in the Launcher Configuration in NWAdmin for the user and container objects. As each associated object is located, its attributes are read from NDS. If any of these objects is not held in a replica that is close at hand, locating it may require crossing a WAN link to find a server holding a replica that contains the object.

Launch / Distribute the Application. This involves accessing the files the object points to, both from an execution perspective (where the program runs off the server) and a distribution perspective. When NAL distributes an application, the application template and the source files are read or copied from a file server. This can be a specific file server referenced by a UNC path, or a drive mapping which can be directed to the nearest server.
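To make the separation of these two phases concrete, here is a minimal Python sketch that models them. The class, function, and path names are invented for illustration and do not correspond to any real NAL or NDS API.

    # Illustrative model only: names and structures are hypothetical,
    # not real NAL or NDS APIs.
    from dataclasses import dataclass

    @dataclass
    class AppObject:
        name: str
        replica_is_local: bool   # is the object held in a nearby replica?
        source: str              # UNC path or drive-mapped path to the files

    def populate_nal_window(associated_apps):
        """Phase one: read each associated Application object from NDS.
        Objects not held in a local replica may require a WAN lookup."""
        window = []
        for app in associated_apps:
            cost = "local replica read" if app.replica_is_local else "WAN tree walk"
            window.append((app.name, cost))
        return window

    def launch_application(app):
        """Phase two: copy or run the files the object points to.
        A drive mapping lets the same object resolve to a nearby server."""
        return f"Distributing/launching {app.name} from {app.source}"

    apps = [
        AppObject("GroupWise", replica_is_local=True,  source=r"S:\APPS\GROUPWISE"),
        AppObject("Netscape",  replica_is_local=False, source=r"\\HQ-FS1\APPS\NETSCAPE"),
    ]

    for entry in populate_nal_window(apps):
        print(entry)
    print(launch_application(apps[0]))

The point of the sketch is simply that a slow NAL window and a slow application launch have different causes (replica placement versus file placement) and are tuned by different means.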

Location Independence Strategy

An integral part of a flexible desktop delivery system is the ability to be clever about where resources are accessed. More specifically, an appropriate location independence regime can be implemented to ensure a logical drive pointer is mapped to a local resource, so that files are more efficiently sourced from a local server where appropriate. This is detailed in the October 1998 AppNote entitled "A Z.E.N.works-Friendly Location Independence Strategy for NetWare Networks".
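As a rough sketch of the underlying idea (not code from the AppNote referenced above), the following Python model shows a logical pointer that resolves to whichever server is local to the user's site; the site and server names are assumptions made for the example.

    # Hypothetical illustration of location independence: the same logical
    # pointer resolves to a different physical server per site.
    SITE_SERVERS = {          # invented site-to-server table
        "SYDNEY":    r"\\SYD-FS1\APPS",
        "MELBOURNE": r"\\MEL-FS1\APPS",
        "BRISBANE":  r"\\BNE-FS1\APPS",
    }

    def resolve_apps_volume(user_site, default=r"\\HQ-FS1\APPS"):
        """Return the local source for application files, falling back to a
        central server if the site has no local copy."""
        return SITE_SERVERS.get(user_site, default)

    # An Application object can then reference the logical pointer (a drive
    # mapping) rather than a hard-coded UNC path, so files are sourced
    # locally where possible.
    print(resolve_apps_volume("SYDNEY"))     # \\SYD-FS1\APPS
    print(resolve_apps_volume("DARWIN"))     # falls back to \\HQ-FS1\APPS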

Z.E.N.works Performance Killers at the Desktop

There are many apparently benign practices that cause performance degradation, both during login and as network resources are used. Before considering any sophisticated designs to maximise performance in a large environment, it is worth canvassing the more basic traps that can rob an otherwise good network of apparent speed. These include the following:

Global Groups. These can be loosely defined as groups that span WAN boundaries. The use of global groups to include users from containers that represent different physical locations may seem at first glance to be an ideal solution for granting file system rights or application access to a large number of users in the enterprise. This is common in NetWare 3 environments that migrated to NetWare 4.x and NDS but did not make the transition to container-based assignments. In fact, in all but very small environments (or those with 100% replication), the use of global groups can be a real performance killer.

Container assignments are more efficient because they leverage an existing object and provide rights inheritance and security equivalence to the occupants. Global groups also burden NDS with a large number of external references. Furthermore, file system rights can now be assigned to the Application object itself, negating a common reason for using groups. Z.E.N.works can be configured to ignore associations by group membership. Groups should be used where a container assignment does not achieve the desired granularity, for example when you want to target a subset of users who don't warrant their own sub-container.

NAL Tree Walking. One of the features of NAL is the ability to configure it to inherit applications from a predetermined number of container levels. This is a great feature for assigning a single application to many users. But like many great features, it can actually degrade performance when used incorrectly, especially if NAL has to tree walk across WAN boundaries to locate an Application object held in a replica on a distant server.

There are three mechanisms to control NAL tree walking in your environment (a simple model of how the first two settings bound the search follows the list):

  1. Set a suitable container as the "Top of NAL Inheritance Tree" to ensure NAL will not tree walk past this point. The container may be the user's immediate container or the container that represents the root object of the partition, thus preventing the need to connect to the parent partition to resolve an association. This is set in the Launcher Configuration of the selected container in NWAdmin.

  2. Set the Launcher Configuration to inherit from only the number of levels that exist between the User object and the "Top of NAL Inheritance Tree".

  3. Set the Launcher Configuration to ignore group associations if groups are not used to associate users with applications.
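The following is a simple, hypothetical model of how the first two settings bound the walk: starting at the user's container, NAL reads containers upward until it reaches the configured top container or the configured number of levels. It is not real NAL code, and the container names are invented.

    # Hypothetical model of bounded NAL container inheritance.
    def containers_searched(user_container, top_of_inheritance, max_levels):
        """Walk up from the user's container, stopping at the configured
        'Top of NAL Inheritance Tree' or after max_levels containers."""
        searched = []
        current = user_container
        while current and len(searched) < max_levels:
            searched.append(current)
            if current == top_of_inheritance:
                break
            # drop the leftmost name to move to the parent container
            current = ".".join(current.split(".")[1:])
        return searched

    # Example: a user in SYD.SITES.ACME with the partition root SITES.ACME
    # set as the top of the inheritance tree.
    print(containers_searched("SYD.SITES.ACME", "SITES.ACME", max_levels=2))
    # ['SYD.SITES.ACME', 'SITES.ACME'] - the walk never reaches ACME or [Root]

Either setting on its own is enough to stop the walk; configuring both simply makes the boundary explicit from two directions.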

Workstation Manager Tree Walking. A new feature of Z.E.N.works is the Policy Package object, which allows user-, workstation-, and container-based policies to be applied to the desktop. Like global group membership and NAL object association, policy searches can be a curse if inappropriately configured. By default, the search behaviour of Workstation Manager includes going all the way to the [Root] unless a container policy package is encountered that includes a search policy. A search policy can be effectively used to ensure policy searches do not proceed past some logical point. Options for this include:

  • Don't search past the user's container

  • Don't search past a selected container

  • Don't search past the end of the partition (wherever that is)

The last option is a useful catch-all if no firm strategy of policy location is in place. It prevents the search from going up to the parent partition, which may reside on a remote server.

Remember that once a search policy is set, any policy associated outside that search domain will not be effective. Check the Effective Policies tab in NWAdmin for a user to confirm the desired policy outcome.

Note: If a cloning tool is used to deploy workstation images (such as Symantec's Ghost), ensure the master workstation is not registered or imported as an object in any NDS tree, production or otherwise, when the image is made. Deploying workstations from such an image can cause unexpected login behaviour. Unregister and clean the master workstation before the image is taken.

Typical Design Scenarios

Before discussing the possibilities for replicated object containers, it is useful to survey more straightforward design scenarios. For the purposes of this analysis, we will refer only to Application objects; however, much the same logic applies to other objects used in Z.E.N.works, such as Policy Package objects.

Plan A: "Place Them High"

A tempting solution for making objects available to a large number of users with a low administration load is to place the objects very high in the tree. Access from an NDS perspective can be accomplished through inheritance or global groups. This is generally a bad idea and should only be considered if the site is small and NDS is not partitioned, or if each site has a full set of replicas.

The point is that when programs on the user's workstation attempt to access NDS objects that are not held in a local replica, a tree walk occurs to complete the process. In the case of NAL reading an Application object or Workstation Manager accessing a policy, this requires checking the nearest replica and, if unsuccessful, crossing the WAN to locate a server that does hold a replica containing the object. It's even worse if the association to the object is by a global group whose own associations and rights assignments have to be located and read across the WAN. It is obviously impossible to replicate the entire NDS database to a large number of sites (hence partitioning), and there are limits to the number of replicas that can be supported, particularly where those replicas contain dynamic objects that generate NDS transactions.

Furthermore, if a location independence solution is not implemented and the users share a common set of application icons in a multi-site environment, all files are read from the same spot. This will also produce poor performance and generate excessive WAN traffic with possible "flow on" effects to other systems that depend on network bandwidth.

Even with a workable location independence regime in place to map drives to local servers, these restrictions tend to limit the scalability of this approach for the very customers who would otherwise be well-suited to this kind of regime.

Plan B: "Place Them Low"

A popular and robust solution is to maintain a set of Application objects in the leaf (bottom level) containers where the User objects are located. This is particularly so when the container represents a physical site. Since partitioning is done around container objects, all object attributes are close at hand, probably on the user's home server. Furthermore, each NAL object references files on the local server, thus avoiding excessive WAN traffic and generally improving performance.

This regime also allows you to configure the fault tolerance, load balancing, and Application Site List features built into NAL to provide redundancy. Furthermore, the new application copy feature can be used to create these replicated Application objects in each container.

Replicated ZEN Container

For very large sites, the design scenarios outlined above may not be suitable and may lead to concerns about performance and administration. For example, an environment with 100 applications and 300 locations equals 30,000 objects to create and manage! In addition to the space these consume in NDS partitions (and their replicas), this could potentially consume prohibitive amounts of administration time. Where the numbers of applications and physical sites are high, an alternate configuration may be considered.

The replicated resource container strategy involves the following:

  • A container of NDS objects, partitioned and replicated to multiple sites. This addresses phase one of the NAL process, populating the NAL window (as detailed above).

  • A location independence strategy, coupled with a replicated set of application files, to ensure a local set of files is available to users wherever they physically connect to the network. This addresses phase two of the NAL process (delivering the application files when the application is executed).

Candidates for Replication

The following classes of objects could be considered for a replication container:

  • Application objects to launch and distribute applications such as GroupWise, Netscape, and the office automation applications that form part of the basic desktop offering.

  • Policy objects intended to have a uniform use across multiple sites. For example, a Container policy to limit searches so workstation login isn't mired in time-consuming searches all the way to the [Root]. Also, standard access user policies to minimise administration.

  • Groups, Organisational Roles, and Profile objects whose sphere of influence takes in multiple physical sites. These can be used to house location independence logic and multi-user access to local file systems.

Note: The use of Group objects in this context should generally be restricted to containing members who are in the locations to be serviced by a replicated container. If they contain members of other containers who will not share in the replica ring, the negative aspects of global groups apply.

How It Works from a Desktop Perspective

When NAL executes on the user's workstation, it builds a list of associated applications as per the inheritance behaviour configured at the container (or parent container) in the Launcher Configuration tab in NWAdmin. Since a local replica exists for the ZEN container, resolution is fast and so the apparent speed of the NAL window is enhanced.

The actual execution speed is similarly enhanced by the location independence regime, as application files are drawn from a local server that holds the replicated application files.

How It Works from a Partitioning and Replication Perspective

It is important to understand that the principles and metrics of NDS design regarding partitioning and replication still apply. These include guidelines such as 2-5 replicas per partition for Quick Design and up to 10 for Advanced Design scenarios. There are particular attributes that allow a resource container replicated to many sites to work in excess of these limits:

  • The number of objects is relatively small. Even for 100 application and policy objects this is still a relatively small partition.

  • The size of the objects is not large. The bulk of the space is consumed by the application template and source files stored (and replicated) in the file system and not NDS.

  • The objects are static. This is very important and is a major reason why partitions containing objects with dynamic attributes, such as User objects, should stick to the published replica limits. Dynamic objects cause NDS traffic as the replicas are synchronised.

It is only these factors that allow a greater number of replicas to be placed on remote servers than would normally be considered prudent. If any of these parameters were to change dramatically, an individual object should be excluded from this configuration, or possibly the whole idea should be abandoned for that site.

Note: While the number of objects and their changeability are important factors, the most important determinant here is the number of replicas in a replica ring. If problems are encountered, reduce the number of replicas in the ring as a first step.
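Purely as an illustrative sketch (with assumed thresholds loosely based on the Quick and Advanced Design figures above, not any Novell tool), a rough candidate check might look like this:

    # Illustrative check of whether a container is a reasonable candidate for
    # wide replication. Thresholds are assumptions based on the guidance above.
    def replication_candidate(object_count, objects_are_static, replica_count,
                              advanced_design=True):
        """Return (ok, reasons). A small, static partition may tolerate more
        replicas than the usual 2-5 (Quick) or up to 10 (Advanced) guidance,
        but the replica count is still the first thing to reduce if
        synchronisation problems appear."""
        reasons = []
        normal_limit = 10 if advanced_design else 5
        if not objects_are_static:
            reasons.append("dynamic objects: stick to the published replica limits")
        if object_count > 200:            # assumed ceiling for a 'small' partition
            reasons.append("partition is no longer small")
        if replica_count > normal_limit and reasons:
            reasons.append("reduce the number of replicas in the ring first")
        return (not reasons, reasons)

    print(replication_candidate(100, True, 30))    # (True, [])
    print(replication_candidate(100, False, 30))   # flags dynamic objects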

Where to Place the ZEN Container

The placement of the resource container is dictated by the rules governing NDS partitioning and replication. There is an obvious temptation from an administration perspective to place it very high in the tree. However, doing so raises NDS design concerns over controlling the number of Subordinate References, limiting the number of child partitions per parent, and the number of replicas per partition that can be reliably supported (see Figure 1).

Figure 1: The ZEN container replicated to three remote sites.

Associating the Applications

The applications in the resource container should be associated with the containers representing locations that are participating in the replication. Group assignments should only be considered when this does not provide the desired degree of granularity. Container assignments are an efficient method from an administration perspective.

Granting File System Rights

A time-saving new feature is the ability to assign file system rights to the Application object and therefore indirectly to any user associated with the application. While this works well in many situations, in Z.E.N.works 1.x it has the effect of granting users those rights simply by association. When many disparate locations are involved, every user would have file system rights to the file servers in other locations, and the existence of these far-flung rights may have performance and security implications. The administrative advantage of the feature is that no Group objects are needed to achieve the same thing, and having every user in every location hold rights to every volume in every other location works well where good mobile user functionality is required.

Alternatively, the objects in the ZEN container can be associated directly with the recipient containers without including the required file system rights as part of the Application object. File system rights to each location's file server can then be granted to the location container, unless it is actually desired that every user in every location have rights to every volume in every other location.

Hybrid Solution

It is sometimes thought that if a design scenario fits for 70% of the NDS tree, it must be continued for the whole tree no matter how inappropriate it is for the remaining 30%. Even worse is the prospect of a good strategy which suits 70% of the tree being scuttled by a difficult 30% that is not suited to the same treatment. In such cases, why not use a hybrid strategy?

Low Level Objects and Replication Mixed

It is not uncommon for a production solution to use a replicated ZEN container to deliver Application objects to most of the NDS tree, while lower level objects are employed in those parts of the tree where this does not make sense. The reasons for this may include:

  • Communications to a few errant sites continue to be a problem, impacting the ability of the replica synchronisation cycle to complete within a reasonable period. Those sites are left off the replicated container list and have their own objects.

  • The replica ring of the ZEN partition is just too large and, without laying the blame at any particular site, the number of servers in the ring just has to be reduced. Again some sites have their own objects.

  • The fault tolerance and load balancing features are weighted more highly than the performance and administration advantages of a replicated container in that part of the tree.

Multiple Replicated Containers

Furthermore, the sheer size of an environment may necessitate having multiple replicated resource containers. Consider an example to service 80 sites, as illustrated in Figure 2. This tree uses four logical concentration points to ensure that the NDS tree is not too flat and wide in the wrong places. There are four replicated containers positioned to service each cluster of 20 sites. That is four times the number of objects to manage, but still an improvement on 80 times the number!

Figure 2: Diagram of multiple replicated ZEN containers.

The bottom line here is striking a balance that delivers improved performance at the desktop while reducing the overall administration overhead. This may include increasing work in certain areas (replication) in order to reduce work in other areas (object maintenance). As long as the replica synchronisation cycle completes reliably within the specified time, there should be no ill effects from using this approach to delivering performance at the desktop.
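To put rough numbers on the administration trade-off, the following back-of-the-envelope comparison uses only the illustrative figures from this AppNote (100 applications, 300 locations in the earlier example, and the 80-site, four-cluster example here):

    # Back-of-the-envelope comparison of Application object counts, using the
    # illustrative figures from this AppNote.
    applications = 100

    def objects_per_site(sites):          # Plan B: objects in every leaf container
        return applications * sites

    def single_zen_container():           # one replicated resource container
        return applications

    def clustered_containers(clusters):   # hybrid: one container per cluster
        return applications * clusters

    print(objects_per_site(300))          # 30,000 objects to create and manage
    print(single_zen_container())         # 100 objects
    print(clustered_containers(4))        # 400 objects for the 80-site example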

Summary

Replicated resource containers do not make sense in every situation. They are just another configuration option to consider, with their own set of benefits and caveats. If the work involved in implementing such a regime is not significantly less than maintaining Application objects in every location, then this solution may not be appropriate.

It is likely that a hybrid solution will suit many large sites. However, it is important that the implications of each method outlined above be understood, not just the replicated container scenario featured here. It is also crucial that, in considering it, the trade-offs with NDS design, partitioning, and replication always play out in favour of preserving NDS health.

The following references cover topics related to this AppNote:

  • "A Z.E.N.works Friendly Location Independence Strategy for NetWare Networks" Novell AppNotes, October 1998

  • "Maintaining a Healthy NDS Tree: Part 1" Novell AppNotes, August 1997

  • "Maintaining a Healthy NDS Tree: Part 2"Novell AppNotes, October 1997

  • "Using Novell Application Launcher 2.0 and snAppShot for Application Delivery" Novell AppNotes, August 1997

  • Novell's Four Principles of NDS Design by Jeffrey F. Hughes and Blair W. Thomas (Novell Press, 1996)

  • NDS Design for Z.E.N.works and NDS for NT (see docs\zendsgn.htm on the Z.E.N.works CD) or Designing NDS for Z.E.N.works at: http://www.novell.com/coolsolutions/zenworks/basics.html

* Originally published in Novell AppNotes

