
Traffic Problems?: Making Way for Important Network Packets


Linda Kennard

01 Jul 1999


In the immortal words of John Swigert during the Apollo 13 flight to the moon, "Houston, we have a problem." Granted, the problem that we, or rather, you and other network administrators have doesn't threaten your lives--but it does threaten your networks. The problem is you're running a variety of applications with disparate needs and sensitivities on multiservice networks that have limited bandwidth. In other words, you're not leaking critical oxygen into endless space; you're flooding finite space with critical applications.

Delay-sensitive, mission-critical traffic is struggling for space on the same pipe that is being devoured by bandwidth-intensive, nonessential network traffic. Consequently, when a few users fire up their RealAudio players to listen to clips from the Apollo 13 movie soundtrack that they have downloaded from movietunes.com, all network traffic suffers. Although the users who are grooving to the tunes don't care about the network's waning performance, other users do. When database access and other business applications are drained of power, the users who are actually working complain.

Who takes the heat for this problem? You do. What are you going to do about the problem? You could simply add more bandwidth--a practice that vendors and industry analysts call over-provisioning. Although over-provisioning a LAN is arguably more cost-effective than managing the bandwidth you have, over-provisioning a WAN is not always a practical solution to network congestion. Adding bandwidth to a WAN is too costly to be widely accepted as practical. (For more information, see "More Versus Managed Bandwidth: A Poor Excuse for a Debate.")

Even if bandwidth were free, it isn't limitless. You have to continually add bandwidth to compensate for the law of diminishing space, which is this: The more space you have, the more space you use--whether that space is a closet, a house, or network bandwidth. You can call this phenomenon whatever you like, but the point is that no matter how much bandwidth you have, "[you] keep coming up with new and creative ways to use" even more bandwidth, Gordon Smith, marketing vice president at Ukiah Software Inc., points out. "The golden future where bandwidth is limitless and free," Smith adds, "is still a good long way off."

THE RIGHT (TRAFFIC-MANAGEMENT) STUFF

In the absence of this mythical era of limitless, free bandwidth, what can you do to improve the throughput and reliability of WAN traffic? You can deploy a router that touts quality of service (QoS) capabilities, or you can deploy a relatively new type of product called a traffic shaper. (See "Routers and Switches With Quality of Service Capabilities" and "Traffic Shapers.")

Although you undoubtedly know what routers are, you may not have heard about traffic shapers. Traffic shapers can be hardware-based products (with management software) or software-based products that serve primarily to control congestion by applying prioritization rules and allocating prespecified percentages of bandwidth to different types of traffic. To control congestion on a WAN, how do you decide which device to deploy--a traffic shaper or a router with QoS capabilities? Which device is better? And what does "better" mean?

SO MANY CHOICES, SO LITTLE TIME

Assume for a moment that "better" refers to a device's ability to ensure a consistent level of service for various types of traffic from one end of a WAN to the other. In this case, neither device is better than the other because neither device can meet this criterion. End-to-end QoS to every office or person with whom you communicate--including those with whom you communicate over the Internet--is not possible today. Barring that possibility, the best you can hope to do is to control traffic at common points of congestion, such as at the LAN/WAN border.

Is it better to use a traffic shaper or a router with QoS capabilities to control traffic at the LAN/WAN border? The strongest argument for routers is that you have to use a router anyway, so why not use one with traffic-prioritization capabilities? Unfortunately, you probably already have routers, and those routers most likely don't have traffic-prioritization capabilities.

Rather than waste the investment your company has already made by scrapping the routers you have and buying others, you may instead consider preserving that investment by buying a traffic shaper. Investment preservation represents the strongest argument for deploying a traffic shaper on a LAN/WAN border.

Beyond that, whether it is better to deploy a router or a traffic shaper is a stupid question or, at best, the wrong question. A better question centers around the methods these devices use to control traffic. That is, all of these devices aim to control the rate of traffic, particularly TCP traffic. To control traffic, all devices use one of two methods: queuing or TCP window sizing. The better question is this: To control congestion, is it better to use queuing, as provided by routers and a few traffic shapers, or TCP window sizing, as provided by most traffic shapers? As with all questions posed in this industry, no one can give you a straight answer. However, you can learn more about each method for controlling congestion and make an informed choice based on the needs of your company.

UNFAIR AND PARTIAL QUEUING

Queuing algorithms that traffic-prioritization devices use to provide different levels of service to different traffic types are unfair and partial--as they should be. A fair method for forwarding traffic would be one that views all traffic as equal. But all traffic is not equal. What you need is a method that recognizes--and responds appropriately to--different traffic types, giving priority to the most critical traffic.

The standard first-in-first-out (FIFO) queue that even the most basic routers use (not to mention banks and drive-through windows) treats all traffic equally. Treating traffic equally is a QoS nightmare. In FIFO queuing, a device has one queue per port and simply forwards packets when they reach the front of the line. As a result, Structured Query Language (SQL) packets can get stuck behind long lines of Network News Transfer Protocol (NNTP) packets, and when the FIFO queue's buffer fills up, SQL packets may be dropped.
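To make the head-of-line blocking concrete, consider the following minimal simulation, written in Python purely for illustration (the buffer size and traffic mix are invented). Once bulk NNTP packets fill the single FIFO buffer, the SQL packets that arrive behind them are simply dropped.

    from collections import deque

    BUFFER_SIZE = 8  # hypothetical per-port buffer, measured in packets

    fifo = deque()
    dropped = []

    def enqueue(packet):
        """Tail-drop FIFO: if the buffer is full, the arriving packet is lost."""
        if len(fifo) >= BUFFER_SIZE:
            dropped.append(packet)
        else:
            fifo.append(packet)

    # A burst of bulk news traffic arrives just ahead of a few database packets.
    for i in range(10):
        enqueue(("NNTP", i))
    for i in range(3):
        enqueue(("SQL", i))

    # The router services packets strictly in arrival order.
    serviced = [fifo.popleft() for _ in range(len(fifo))]

    print("serviced:", serviced)  # eight NNTP packets, no SQL at all
    print("dropped: ", dropped)   # two NNTP packets and all three SQL packets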

Fortunately, the specialized queuing algorithms that routers, switches, and traffic shapers use to prioritize traffic exercise obvious biases for traffic you designate as high priority. The following are the most common queuing algorithms.

  • Priority Queuing

  • Weighted Fair Queuing (WFQ)

  • Class-Based Queuing (CBQ)

Get Your Priorities Straight

The Priority Queuing algorithm creates multiple queues per port and assigns each queue a relative priority value. Devices that use Priority Queuing decide which queue to place traffic in by checking preconfigured traffic-prioritization rules. (For more information about traffic-prioritization rules, see "Discrimination--In a Good Way.") Priority Queuing then transmits traffic in high-priority queues before transmitting packets in lower-priority queues.

Priority Queuing is obviously better at ensuring different levels of service for different types of traffic than FIFO queuing is. However, Priority Queuing gives the queue being serviced all available bandwidth. The potential result is that when used alone, Priority Queuing can starve lower-priority traffic of bandwidth by always servicing a high-priority queue despite traffic piling up in a low-priority queue.
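The starvation problem is easy to demonstrate. The following Python sketch (illustrative only, with invented traffic classes) always services the highest-priority nonempty queue, so as long as high-priority packets keep arriving, the packets waiting in the low-priority queue never move.

    from collections import deque

    # Two queues per port: 0 = high priority, 1 = low priority.
    queues = {0: deque(), 1: deque()}

    # Low-priority traffic is already waiting.
    for i in range(5):
        queues[1].append(("NNTP", i))

    def service_one():
        """Strict priority: always dequeue from the highest-priority nonempty queue."""
        for prio in sorted(queues):
            if queues[prio]:
                return queues[prio].popleft()
        return None

    sent = []
    for tick in range(10):
        queues[0].append(("SQL", tick))  # high-priority traffic keeps arriving...
        sent.append(service_one())       # ...so the low-priority queue is never serviced

    print(sent)            # ten SQL packets
    print(len(queues[1]))  # still 5: the NNTP packets are starved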

Not surprisingly, all of the vendors that use Priority Queuing compensate for this inherent Achilles' heel by using other methods, such as TCP window sizing or WFQ. Of the products mentioned in this article, routers and switches from 3Com Corp., Cabletron Systems Inc., Lucent Technologies, and Nortel Networks Corp. support Priority Queuing, as do traffic shapers from Netscreen Technology Inc., Packeteer Inc., and Ukiah Software Inc. (See "Routers and Switches With Quality of Service Capabilities" and "Traffic Shapers.")

Queuing With Some Weight to It

Like Priority Queuing, WFQ creates multiple queues for different traffic classes and assigns each queue a relative priority value. Devices that support WFQ (as with devices that support Priority Queuing) decide which queue to place traffic in by checking preconfigured traffic-prioritization rules. But the similarities between Priority Queuing and WFQ end there.

With WFQ, you assign a weight value to each queue in proportion to its level of priority. You weight queues to ensure that higher priority queues get a larger percentage of available bandwidth than lower priority queues get. The exact amount of bandwidth each queue actually gets depends on the number of queues sharing that bandwidth at a given moment.

For example, suppose that you have eight queues. You assign three queues a weight value that ensures that when the queues are busy, they get the largest amount of bandwidth. You then assign the remaining five queues lower but equal values, ensuring they always get some bandwidth, but a smaller amount than high-priority queues get.

When one of the high-priority queues and two of the lower-priority queues are busy, the high-priority queue may get 50 percent of the available bandwidth, and the other two queues may get 25 percent each. When all three high-priority queues are busy at the same time (and no other queues are active), each high-priority queue gets only a little more than 30 percent of the available bandwidth. In other words, although WFQ ensures that busy traffic classes always get some bandwidth, the specific rate each class gets is variable--not guaranteed.
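Arithmetically, each busy queue's share is simply its weight divided by the sum of the weights of all queues that are busy at that moment. The Python sketch below reproduces the eight-queue example; the weight values (2 for each high-priority queue, 1 for each of the others) are assumed for illustration, since any weights in the same proportion behave the same way.

    # Assumed weights: three high-priority queues (weight 2), five others (weight 1).
    weights = {"high1": 2, "high2": 2, "high3": 2,
               "low1": 1, "low2": 1, "low3": 1, "low4": 1, "low5": 1}

    def wfq_shares(busy):
        """Each busy queue gets weight / (sum of busy weights) of the available bandwidth."""
        total = sum(weights[q] for q in busy)
        return {q: weights[q] / total for q in busy}

    # One high-priority queue and two lower-priority queues are busy:
    print(wfq_shares(["high1", "low1", "low2"]))
    # -> high1 gets 0.50; low1 and low2 get 0.25 each

    # All three high-priority queues are busy and nothing else is:
    print(wfq_shares(["high1", "high2", "high3"]))
    # -> each gets about 0.33, "a little more than 30 percent"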

Of the products mentioned in this article, routers from Lucent Technologies, Cisco Systems Inc., and Nortel Networks support WFQ, as do routers and switches from Cabletron Systems. (See "Routers and Switches With Quality of Service Capabilities.") NetGuard Inc.'s traffic shaper also supports WFQ. Check Point Software Technologies Inc.'s traffic shaper uses WFQ with other proprietary capabilities to guarantee bandwidth rates and limits. (See "Traffic Shapers.")


Queuing With Class

CBQ is an open packet-scheduling algorithm that is arguably the most sophisticated of the queuing algorithms vendors commonly use today. In contrast to Priority Queuing and WFQ, CBQ enables you to allocate a guaranteed bandwidth rate to each traffic class.

Like Priority Queuing and WFQ, CBQ creates multiple queues for different traffic classes and decides which queue to place traffic in by checking preconfigured traffic-prioritization rules. Unlike Priority Queuing and WFQ, however, CBQ enables you to assign guaranteed data rates to traffic classes. For example, if you allocate 56 kbit/s to a high-priority traffic class, CBQ ensures that the class always gets 56 kbit/s.

With CBQ you can also define parameters that enable devices to distribute additional bandwidth to traffic classes as bandwidth is needed. In short, with CBQ you can ensure that traffic classes always get their guaranteed bandwidth rates. You can also ensure that traffic classes can burst above those guaranteed rates when necessary, depending on how you have configured the CBQ device.
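One way to picture class-based allocation is the sketch below: every busy class receives its guaranteed rate, and whatever capacity remains is redistributed among the busy classes that are allowed to borrow. This is a simplified illustration of the idea rather than the actual CBQ scheduler, and the link speed, class names, and rates are invented.

    LINK_KBPS = 256  # hypothetical access-link capacity

    # Invented classes: guaranteed rate in kbit/s and whether the class may borrow.
    classes = {
        "sql":  {"guarantee": 56, "borrow": True},
        "web":  {"guarantee": 64, "borrow": True},
        "nntp": {"guarantee": 16, "borrow": False},
    }

    def cbq_allocation(busy):
        """Give each busy class its guarantee, then share the leftover capacity
        among busy classes that may borrow, in proportion to their guarantees."""
        alloc = {c: classes[c]["guarantee"] for c in busy}
        spare = LINK_KBPS - sum(alloc.values())
        borrowers = [c for c in busy if classes[c]["borrow"]]
        if spare > 0 and borrowers:
            total = sum(classes[c]["guarantee"] for c in borrowers)
            for c in borrowers:
                alloc[c] += spare * classes[c]["guarantee"] / total
        return alloc

    print(cbq_allocation(["sql", "nntp"]))
    # sql keeps its 56 kbit/s floor and bursts into the unused capacity;
    # nntp is held to its 16 kbit/s guarantee because it is not allowed to borrow.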

Of the products mentioned in this article, traffic shapers from IPHighway and Sun Microsystems Inc. support CBQ, as do routers from Xedia Corp. (See "Routers and Switches With Quality of Service Capabilities" and "Traffic Shapers.")

SHAPE UP!

Although some traffic shapers use CBQ to control congestion, most traffic shapers, including those from Allot Communications Inc., Elron Software Inc., NetGuard, Packeteer, and Ukiah Software, use a method called TCP window sizing. To understand TCP window sizing, you must understand a few things about TCP: Devices communicating via TCP send acknowledgment (ACK) packets to indicate that messages have been received. These ACK packets also contain the TCP Receiver Window Size, a value that indicates to the sending station how much data can be sent during the next packet transfer.

The receiving server (or an intercepting device) can enlarge or reduce the window size at any point during the communication exchange. A sending server waits for an ACK packet from the receiving server before sending the next series of packets. When that ACK packet arrives, the sending server sends as many packets as possible to fill up the available window. Because TCP aggressively fills up the window, TCP often creates sudden bursts of large chunks of traffic. As you know, bursty, chunky traffic is not a good thing. Devices that use queuing to manage bandwidth respond to TCP traffic as they respond to all traffic: They place the traffic in a queue. The TCP traffic then awaits its turn like every other traffic flow. When bursts of TCP traffic hit queuing buffers that are already full, packets get dropped, and this packet-dropping signals TCP sending devices to slow down.

Figure 1: The DiffServ standard changes the name of the Type of Service (TOS) field to the DS Byte field.

Waiting until TCP traffic overflows queues to signal sending servers to slow down is a potential problem. If all TCP streams simultaneously slow down and then simultaneously ramp up again, the resulting cycle wreaks havoc on network performance: Queuing devices drop packets, leading servers to slow down. Later, servers ramp up, leading queuing devices to again drop packets.

Devices that use TCP window sizing intercept TCP ACK packets and adjust the window size when necessary to keep TCP traffic flowing. Rather than allowing TCP to function as it would normally, devices that use TCP window sizing intervene and reset the TCP window size to tell the sending server to slow down--before it sends a sudden burst of traffic. TCP window sizing thus combats the traffic bursts that are the defining characteristic of TCP. (For more information about TCP window sizing, visit http://www.packeteer.com/technology/tcp.htm.)
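In its simplest form, the intervention is arithmetic: the device rewrites the advertised window in intercepted ACKs so that the sender can keep no more than roughly (target rate x round-trip time) of data in flight. The Python sketch below shows only that calculation and the clamping step; commercial implementations such as Packeteer's TCP rate control do considerably more (for example, deciding when to intervene and pacing ACKs), and all of the numbers here are invented.

    def clamped_window(advertised_window: int,
                       target_rate_bps: float,
                       rtt_seconds: float,
                       mss: int = 1460) -> int:
        """Return the window (in bytes) to write into an intercepted ACK so the
        sender's throughput stays near target_rate_bps. The window is never
        enlarged beyond what the receiver actually advertised."""
        # Bandwidth-delay product: the bytes the sender may keep in flight.
        bdp = int(target_rate_bps / 8 * rtt_seconds)
        # Keep at least one full segment so the connection never stalls completely.
        return max(mss, min(advertised_window, bdp))

    # Hypothetical numbers: the receiver advertises 64 KB, and we want to hold
    # this flow to about 128 kbit/s on a 100-millisecond path.
    print(clamped_window(advertised_window=65535,
                         target_rate_bps=128_000,
                         rtt_seconds=0.100))
    # -> 1600 bytes: the sender is throttled long before any queue overflows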

Of course, vendors offer different types of TCP window sizing. In fact, Packeteer, a company that produces bandwidth-management devices, called into question our use of the phrase TCP window sizing rather than TCP rate control in reference to the method its PacketShaper products use to control traffic. Using the phrase TCP window sizing, Jeff Barker, senior product manager at Packeteer, suggests, "is a way to trivialize the TCP rate control method that Packeteer invented." TCP rate control, Barker adds, "involves more than simply adjusting the TCP window size."

Chris Belthoff, Elron Software product manager, agrees. Elron's CommandView Bandwidth Optimizer also uses a custom version of TCP rate control that Elron calls Dynamic Traffic Control. As Belthoff explains, products that use TCP rate control, rather than what Belthoff and Barker both refer to as simple window sizing, address issues such as when to apply rate control and by how much and for how long to adjust the TCP window size. "Applying TCP rate control inappropriately," Belthoff adds, presumably implying the trouble TCP window sizing alone can cause, "can actually increase retransmits."

ALL THINGS CONSIDERED

Vendors of products that use queuing or TCP window sizing (or rate control, as the case may be) to control traffic could probably spend days debating the advantages and disadvantages of one method versus the other. (For more information about the pros and cons of both queuing and TCP window sizing, see "All Things Are Relative.") The bottom line is that neither queuing nor TCP window sizing is inherently better or worse--they are simply different. You should learn about the specific methods that individual traffic-prioritization devices use. Then you can purchase the device best suited to your company's goals and network environment.

Of course, you should base your decision on more than just whether that device uses queuing or rate control. For example, you should learn how the device classifies and prioritizes traffic--and at what level the device classifies and prioritizes traffic. When a packet hits a device that provides traffic-shaping capabilities, the first thing that device needs to determine is what type of traffic this packet is flowing with and what level of priority this type of traffic gets. Determining what type of traffic the packet is flowing with can mean determining anything from what user or application sent the packet, to which host sent or will receive the packet.

The specifics make a big difference. "Traffic identification and classification is crucial," Barker explains. "If you can't identify the traffic, you certainly can't control it." (For more information about classification and prioritization methods, see "Discrimination--In a Good Way.")

Most traffic-prioritization devices support classification and prioritization standards, such as 802.1p (Ethernet prioritization), IP Precedence (TOS), and DiffServ. However, using these standards and even adding classification and prioritization based on IP addresses or port numbers may not be enough. According to Barker, you need a device that can "look into application content, allow dynamically negotiable port assignments, and look at URLs." Vendors describe this level of prioritization as highly granular prioritization.
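As a rough illustration of how granular classification can get, the Python sketch below classifies a hypothetical flow record first by its DiffServ marking, then by well-known port, and finally by a URL substring before falling back to a default class. The field names and class names are invented; real products inspect live packets and dynamically negotiated ports rather than a dictionary.

    def classify(flow: dict) -> str:
        """Return a priority class for a hypothetical flow record, checking the
        DiffServ marking first, then the destination port, then the URL."""
        if flow.get("dscp") == 46:       # Expedited Forwarding marking
            return "high"
        port = flow.get("dst_port")
        if port == 1433:                 # SQL Server
            return "high"
        if port == 119:                  # NNTP
            return "low"
        if "movietunes.com" in flow.get("url", ""):
            return "low"
        return "default"

    print(classify({"dst_port": 1433}))                                 # high
    print(classify({"dst_port": 80, "url": "http://movietunes.com"}))   # low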

When you're in the market for a traffic-prioritization device, you should also get answers to the following questions:

  • Can the device manage both inbound and outbound traffic?

  • Can the device analyze traffic to determine what sorts of traffic-prioritization rules might benefit the network most?

  • Can the device monitor traffic after you've created rules to determine whether or not the rules were effective?

  • Does the device integrate with a Virtual Private Network (VPN) solution? (Can the device handle encrypted traffic?)

  • Does the device have an easy-to-use interface?

  • Does the device support Lightweight Directory Access Protocol (LDAP)?


DIRECTORY SERVICES SIMPLIFY THE PRIORITIZATION PROCESS

An increasing number of routers and traffic shapers support LDAP 3. Support for LDAP implies an ability to integrate with LDAP-compliant directories, such as Novell Directory Services (NDS). Directory integration implies an ability for the device to access an enterprise-wide directory to store traffic-prioritization policies that could prioritize traffic based on User and Group objects. Support for LDAP also implies the potential to create a traffic-prioritization rule one time only and store it in an LDAP-compliant directory. All of your company's LDAP-compliant traffic-shaping devices can then access that rule.
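The practical appeal is that a traffic shaper can read its rules from the same directory the rest of the network already uses. The Python sketch below uses the open-source ldap3 library to fetch policy entries; the server name, container, object class, and attribute names are invented for illustration, since actual policy schemas differ from product to product.

    from ldap3 import ALL, Connection, Server

    # Hypothetical directory; real trees and credentials will differ.
    server = Server("ldap.example.com", get_info=ALL)
    conn = Connection(server, "cn=admin,o=acme", "secret", auto_bind=True)

    # Fetch every traffic-prioritization policy stored under a policies container.
    conn.search(search_base="ou=policies,o=acme",
                search_filter="(objectClass=trafficPolicy)",
                attributes=["cn", "priorityLevel", "guaranteedKbps"])

    for entry in conn.entries:
        # Any LDAP-compliant device can read the same rule and enforce it locally.
        print(entry.cn, entry.priorityLevel, entry.guaranteedKbps)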

Of the six router vendors mentioned in this article, five have made their products LDAP-compliant. (See "Routers and Switches With Quality of Service Capabilities.") In addition, four router vendors either have integrated or will be integrating their products with NDS. The following routers and switches are integrated with NDS:

  • Cabletron Systems' SmartSwitch 2000/8000/8600 routers and SmartSwitch 2000/6000/9000 switches

  • Cisco Systems' 2500/3600/4000/4500/4700/7200/7500 routers and 5000/6000 switches

  • Lucent Technologies' Cajun P550 switch

  • Nortel Networks' Accelar 1000/8000 switches and BayRS routers

These vendors integrate, or will integrate, their routers or switches with NDS through their policy-management software. This integration enables you to create traffic-prioritization rules based on a User or Group object.

Behind the scenes, routers and switches understand only IP addresses--they don't understand NDS User or Group object names. Policy-management software for a particular router therefore reads a user's current IP address from the NDS User object and sends that IP address to the router. The addresses stored in the directory must be kept up to date by a Dynamic Host Configuration Protocol (DHCP) service, such as Novell's Domain Name System (DNS)/DHCP service.
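A minimal sketch of that translation step, with invented names and addresses: the policy software looks up the address currently recorded on the User object (which DHCP keeps current) and hands the router an ordinary IP-based rule.

    # Hypothetical snapshot of what the directory knows about each user's address.
    directory = {
        "cn=jsmith,ou=finance,o=acme": {"networkAddress": "10.1.4.27"},
        "cn=agarcia,ou=eng,o=acme":    {"networkAddress": "10.1.7.112"},
    }

    def rule_for_user(user_dn: str, priority: str) -> dict:
        """Translate a user-based policy into the IP-based rule a router understands."""
        ip = directory[user_dn]["networkAddress"]
        return {"match_src_ip": ip, "priority": priority}

    # The administrator thinks in terms of users; the router receives addresses.
    print(rule_for_user("cn=jsmith,ou=finance,o=acme", "high"))
    # {'match_src_ip': '10.1.4.27', 'priority': 'high'}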

Prioritizing traffic based on users' names is more useful--and more convenient--than prioritizing traffic by IP address. By prioritizing traffic based on a User object name, you ensure that the user gets the same priority level regardless of where that user logs in--whether from the accounting department, the help desk, the lab, or his or her own desk. The traffic-prioritization rules you create for users will follow those users wherever they go.

Router vendors are not alone in their pursuit to directory-enable their products. Of the nine traffic shapers mentioned in this article, six support LDAP 3. (See "Traffic Shapers.") In addition, the five traffic shapers listed below are integrated with NDS.

  • Allot Communications' Systems Release 2.0

  • Check Point's FloodGate-1 1.5

  • NetGuard's GuidePost Bandwidth Manager for NT

  • Packeteer's PacketShaper 1000/2000/4000

  • Ukiah Software's NetRoad TrafficWARE

Furthermore, one other vendor intends to integrate its traffic shaper with NDS. According to Belthoff, Elron Software plans to integrate future versions of CommandView Bandwidth Optimizer with LDAP 3-compliant directories. NDS, Belthoff states, is "at the top of Elron's list."

A second vendor suggests that, theoretically at least, its traffic shaper already can integrate with NDS. Sun Microsystems' Solaris Bandwidth Manager 1.5 supports LDAP 3 but specifically integrates only with Sun Directory Services 3.1. However, according to Joel Feraud, product manager at Sun Microsystems, the "policy-schema used by Solaris Bandwidth Manager to abstract its configuration . . . can be stored in any LDAP 3-compliant directory server supporting CRAM-MD5 authentication of login," including NDS.

THE NEW FRONTIER

Not everyone believes LDAP support represents a significant benefit. For example, when Dave Logan, senior consultant at Acuitive Inc., is reminded of the potential positive effects of deploying LDAP-compliant directories, he replies, "That's a nice story, but LDAP hasn't been widely deployed."

Logan's claim is highly questionable. After all, 80 percent of all Fortune 500 companies have deployed NDS--a figure that clearly suggests widespread deployment among what are arguably trend-setting customers.

Regardless of the exact number of companies that have deployed enterprise-wide directories, vendors clearly believe that LDAP will play an important role in the networking future. For example, Novell and Lucent Technologies recently teamed up to create open directory-enabled networking standards, as have Microsoft and Cisco. All four companies participate in the Desktop Management Task Force's (DMTF) Directory-Enabled Networks (DEN) initiative, as do 3Com, Ukiah Software, and others. (DMTF is a standards organization that oversees the development of industry-standard and interoperable management tools and utilities. DEN strives to define a schema for integrating network equipment into a directory service.)

Routers, switches, and traffic shapers and their integration with enterprise-wide directory services such as NDS are forging the trek into another related frontier: policy-based networking. Vendors are currently creating policy management software that uses LDAP to communicate with directories, where traffic-prioritization rules are stored. For example, Cabletron Systems' SPECTRUM Enterprise Manager, slated to be released in the fourth quarter of 1999, will enable you to create QoS policies for routers and switches from more than 200 companies, including Cisco, 3Com, Lucent Technologies, and Nortel Networks.

Ukiah Software recently announced the upcoming release of its distributed policy-management system called NetRoad Active Policy System (APS). NetRoad APS will enable you to create, distribute, and manage QoS policies that are enforced throughout the network by routers, switches, and traffic shapers--and whatever combination of those devices you use. In its initial release, NetRoad APS will support Cisco routers and 3Com switches and, predictably, Ukiah Software's own TrafficWARE, which Ukiah Software is porting to NetWare 5. However, this is only a partial list. Ukiah Software plans to support other vendors as well, ultimately striving to provide a multivendor, IP services management solution.

Among other things, policy-management systems enable you to create a traffic-prioritization policy just once. After you create that policy, a policy-based network enforces that policy from one end of a network to the other.

For example, Ukiah Software's NetRoad APS will enable you to create one policy and then push that policy out to affected devices, including routers, switches, and traffic shapers. If you create a policy for Cisco routers indicating that they should give high-priority service to SAP application traffic to and from members of the Finance Group object, as specified within NDS, APS will then push that policy out to all of the Cisco routers on your company's network or, Smith claims, to all of the Cisco routers on your service provider's network.

About one year ago, author Salvatore Salamone suggested that "the entire industry's idea of QoS needs to shift from traffic prioritization on one portion of a network to policy-based management on the entire end-to-end network." ("A Serious Look at Quality of Service," InternetWeek, March 9, 1998.) Whether vendors heeded Salamone's words or whether those words are a prophetic suggestion of the year-2000 odyssey toward policy-based networking, the entire industry is apparently on the verge of shifting its collective idea about traffic prioritization in just the manner Salamone prescribed.

Linda Kennard works for Niche Associates, which is located in Sandy, Utah.

* Originally published in Novell Connection Magazine

