
Understanding the Big ATM Picture


Kristin King

01 Feb 1997


Few technologies have received as much publicity as Asynchronous Transfer Mode (ATM) has over the past few years: In fact, I'm willing to bet that most computer magazines published more than one article about ATM last year. However, many of the articles I read were often more confusing than enlightening--haphazardly tossing out acronyms such as SVC, PVC, AAL, UNI, PNNI, LANE, MPOA, and IPNNI, without explaining how these acronyms fit into the big ATM picture.

In defense of the writers of these articles, I have found that clearly explaining the big ATM picture is no easy task: In addition to being too large a topic to explain in one article (which is why this issue includes two articles about ATM), the big ATM picture is extremely complicated. (See the related article, "Real-World ATM.") Instead of being a static, unified technology, ATM consists of changing standards and products that do not always fit these standards.

ATM is a flexible technology: It can enable a network to carry many types of traffic, such as audio, video, and data, while providing enough bandwidth for each type of traffic and guaranteeing timely delivery of time-sensitive traffic. Also, ATM can work either as a high-speed LAN technology or as a high-speed backbone connecting traditional LANs.

In addition, ATM standards organizations have already developed many interconnectivity standards and are rapidly developing more. Using these standards, vendors are building switches that interoperate with other vendors' switches as well as with traditional LAN technology.

Although the rapid development of standards adds to the complexity of the big ATM picture, these standards also increase the speed at which ATM can become a practical and usable technology. And as ATM becomes more practical and usable, it will become more common. Understanding today's big ATM picture, therefore, may be key to understanding the future direction of networking.

This article provides an in-depth introduction to ATM: It explains basic ATM concepts, examines the ATM model, and describes two standards based on that model.

UNDERLYING ATM CONCEPTS

The most basic concepts underlying ATM networks can be summarized in three statements:

  • ATM networks are cell-relay networks.

  • ATM networks are connection-oriented.

  • ATM networks are switched.

ATM Networks Are Cell-Relay Networks

The premise of a cell-relay network is simple: The network transmits data in small, fixed-size packets called cells. As you know, an Ethernet network transmits data in large, variable-length packets called frames.

Cells have two advantages over frames:

  • First, because frames vary in length, an incoming frame must be buffered (stored in memory) to ensure the entire frame is intact before it is transmitted. Because cells are the same length, they require less buffering.

  • Second, because cells are the same length, they are predictable: Cell headers are always located in the same place. As a result, a switch automatically knows where the cell header is and can process cells more quickly.

On a cell-relay network, cells must be small enough to reduce latency but large enough to minimize overhead. Latency is the time that elapses between the moment at which a device seeks access to the transmission medium (the cabling) and the moment at which the device receives that access. A network carrying traffic that is sensitive to time delays (such as audio and video traffic) must have minimal latency.

On an ATM network, each ATM-attached device (such as a workstation, server, router, or bridge) has immediate, exclusive access to the switch: Because each device has access to its own switch port, devices can send cells to the switch simultaneously. Latency becomes an issue only when multiple streams of traffic reach the switch at the same time. To reduce latency at the switch, the cell size must be small enough so the time it takes to transmit a cell has little effect on the cells waiting to be transmitted.

Although decreasing cell size reduces latency, there is a trade-off: The smaller the cell, the larger the percentage of the cell that is dedicated to overhead (the transmitting and routing information contained in the cell header). If the cell is too small, a large percentage of the cell is dedicated to overhead, and a small percentage of the cell is dedicated to the actual data being transmitted. In this case, bandwidth is wasted, and transmitting cells takes a long time even if latency is low.

When the American National Standards Institute (ANSI) and what is now the International Telecommunication Union (ITU) designed ATM, they had a difficult time achieving a balance between low latency and low overhead. These organizations had to weigh the interests of both the telephony industry and the data communications industry. The telephony industry wanted a small cell size because voice is usually sent in small samples, and reducing latency would ensure that these samples arrived on time. The data communications industry, on the other hand, wanted a larger cell size because data files are often large and more sensitive to the penalties of overhead than to latency. The two factions eventually compromised on a 53-byte cell size, which includes 48 bytes of data and a 5-byte header.
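To put this trade-off in concrete terms, the following rough calculation compares header overhead and per-cell transmission time for a few cell sizes. It is only an illustrative sketch; the 155 Mbit/s line rate is simply an assumed example.

    # Rough arithmetic illustrating the cell-size trade-off.
    # The 53-byte cell (48 data bytes + 5 header bytes) is the ATM standard;
    # the other sizes are hypothetical alternatives shown for comparison.
    LINE_RATE_BPS = 155_000_000   # assumed 155 Mbit/s link
    HEADER_BYTES = 5

    for payload in (16, 48, 128):
        cell = payload + HEADER_BYTES
        overhead_pct = HEADER_BYTES / cell * 100
        transmit_us = cell * 8 / LINE_RATE_BPS * 1e6
        print(f"{cell:>4}-byte cell: {overhead_pct:4.1f}% overhead, "
              f"{transmit_us:5.2f} microseconds to transmit")

A 53-byte cell spends roughly 9 percent of its length on the header yet takes only a few microseconds to transmit at this rate, which is the balance the two industries settled on.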

ATM Networks Are Connection-Oriented

ATM networks are connection-oriented: To transmit packets from a source point to a destination point, the source must first establish a connection with the destination. Establishing a connection before transmitting packets is similar to making a telephone call: You must dial a number, the destination telephone must ring, and someone must pick up the telephone receiver before you can begin speaking.

Other transmission paradigms, such as Ethernet and Token Ring, are connectionless. Connectionless technologies simply put packets on the wire with the proper addressing information, and the hubs, switches, or routers find the destination and deliver the packets.

Connection-oriented networks have one disadvantage: Devices cannot simply transmit packets; they must first take the time to establish a connection.

However, connection-oriented networks have several advantages: Because switches can reserve bandwidth for specific connections, connection-oriented networks can guarantee that the connection will have a certain amount of bandwidth. Connectionless networks, which simply transmit packets as they receive them, cannot guarantee bandwidth.

Connection-oriented networks can also guarantee a certain Quality of Service, which is the level of service the network can provide. Quality of Service includes factors such as the amount of cell loss allowed and the variation allowed in the spacing between cells. As a result, connection-oriented networks can send different types of traffic--audio, video, and data--over the same switches.

In addition, connection-oriented networks can better manage network traffic and prevent congestion because switches can refuse connections they cannot support.

ATM Networks Are Switched

On an ATM network, all devices, such as workstations, servers, routers, and bridges, are attached directly to a switch. When an ATM-attached device requests a connection with a destination device, the switches to which the two devices are attached set up the connection. While setting up the connection, the switches determine the best route to take--a function traditionally performed by routers.

After the connection is established, the switches function as bridges, simply forwarding packets. These switches differ from bridges in one important respect, however: Whereas bridges broadcast packets to all reachable destinations, the switches forward cells only to the next hop along the predetermined route.

Ethernet switching can be configured so that all Ethernet workstations are attached directly to a switch. In this configuration, Ethernet switching resembles ATM switching: Each device has immediate, exclusive access to a switch port; the switch is not a shared medium.

However, ATM switching differs from Ethernet switching in a few important respects: Because each ATM device has immediate, exclusive access to a switch port, devices connected to an ATM switch do not need to use complex arbitration schemes to determine which device has access to the switch. In contrast, workstations connected to an Ethernet switch must engage in arbitration schemes despite their immediate, exclusive access to a switch port: Ethernet network interface boards are designed to use the arbitration protocol to determine whether the workstation can transmit on the medium.

ATM switching also differs from Ethernet switching in that ATM switches are connection-oriented, whereas Ethernet switches are connectionless.

In addition, ATM switches are usually nonblocking, which means they minimize congestion by transmitting the cells as soon as they are received. A nonblocking switch must have an extremely fast switching fabric (the switching mechanism within the switch) and enough output capacity to be able to immediately forward all incoming cells. In theory, if a switch has 10 incoming ports of 10 Mbit/s, the switch must also have an outgoing port of 100 Mbit/s. In practice, the outgoing port can have a slightly smaller capacity and still be able to immediately forward all incoming cells.
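As a back-of-the-envelope illustration of this sizing rule, the small sketch below compares an assumed output capacity against both the strict worst case and a more typical load; all of the numbers are invented.

    # Hypothetical capacity check for a nonblocking switch.
    input_ports_mbps = [10] * 10          # ten 10 Mbit/s incoming ports
    output_capacity_mbps = 90             # assumed aggregate output capacity
    assumed_peak_utilization = 0.85       # assumed fraction of input busy at once

    worst_case = sum(input_ports_mbps)
    expected_peak = worst_case * assumed_peak_utilization
    print("strictly nonblocking:", output_capacity_mbps >= worst_case)            # False
    print("adequate under assumed load:", output_capacity_mbps >= expected_peak)  # True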

ATM ARCHITECTURE

Transmission technologies such as Ethernet and Token Ring fit within the seven-layer Open Systems Interconnection (OSI) model. ATM, however, has its own model, created by several standards organizations.

ATM was first developed by ANSI and ITU as a transport mechanism for Broadband Integrated Services Digital Network (B-ISDN), a public network. (B-ISDN is a WAN used to connect many companies' LANs.) The ATM Forum, a consortium of ATM vendors, has since appropriated and extended the B-ISDN standards as standards for both public and private networks. (For more information about these standards organizations, see "ATM Standards Organizations.")

The ATM model as defined by both ANSI and ITU and by the ATM Forum consists of three layers:

  • The physical layer

  • The ATM layer

  • The ATM adaptation layer

These three layers roughly correspond in function to the physical, data-link, and network layers of the OSI model. (See Figure 1.) Currently, the ATM model does not define any additional layers and thus has no layers that correspond to higher OSI model layers. Although higher ATM layers have not been defined, the highest layer in the ATM model might communicate directly with the physical, data-link, network, or transport layers of the OSI model, or it might communicate directly with an ATM-aware application.

Figure 1: Unlike other transmission protocols, ATM uses its own model, rather than the OSI model.

THE PHYSICAL LAYER

In both the ATM and OSI models, standards for the physical layer specify how to send bits over the transmission medium. More specifically, ATM standards for the physical layer specify how to take bits from the transmission medium, convert these bits into cells, and send the cells to the ATM layer.

ATM standards for the physical layer also specify the cabling that can be used with ATM and the speeds at which ATM can be run over each type of cabling. The ATM Forum originally defined DS3 (45 Mbit/s) rates and faster. However, implementations of 45 Mbit/s ATM are used mostly by WAN service providers. Other companies more commonly use 25 Mbit/s or 155 Mbit/s ATM.

Although the ATM Forum did not originally adopt 25 Mbit/s ATM, individual vendors have embraced 25 Mbit/s ATM because it can be manufactured and installed less expensively than other speeds of ATM: Only 25 Mbit/s ATM can run over Category 3 unshielded twisted pair (UTP) cabling as well as over higher grades of UTP and fiber. Because 25 Mbit/s ATM is inexpensive, it is intended for desktop ATM. (For more information about 25 Mbit/s ATM, see "A More Affordable ATM Solution: 25 Mbit/s ATM.")

On the other hand, 155 Mbit/s ATM runs over Category 5 UTP cabling, Type 1 shielded twisted pair (STP) cabling, fiber, and wireless laser infrared. In addition, 622 Mbit/s ATM is available for LANs, although this speed of ATM is not yet widely implemented. 622 Mbit/s ATM runs only over fiber. And for wireless networks, Olivetti Research Labs is building a prototype 10 Mbit/s Radio ATM network.

THE ATM LAYER AND VIRTUAL CIRCUITS

In the OSI model, standards for the data-link layer specify how devices can share the transmission medium and ensure a reliable physical connection. Standards for the ATM layer specify how ATM signaling, traffic control, and connection setup are performed. The ATM layer's signaling and traffic control functions are similar to the OSI model's data-link layer functions, but the connection setup functions are more like the routing functions that are defined by standards for the OSI model's network layer.

Standards for the ATM layer define how to take a cell generated by the ATM adaptation layer, add a 5-byte header, and send the cell to the physical layer. These standards also define how to set up a connection with the Quality of Service that an ATM-attached device, or end station, requests.
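To make the 5-byte header more concrete, the sketch below packs the header fields of a cell in the standard UNI format: a 4-bit generic flow control field, an 8-bit VPI, a 16-bit VCI, a 3-bit payload type, a 1-bit cell loss priority, and an 8-bit header error control byte. The code is only an illustration of the layout, not production switch software, and the example VPI and VCI values are invented.

    def hec(header4: bytes) -> int:
        """CRC-8 (x^8 + x^2 + x + 1) over the first four header octets,
        combined with the 0x55 pattern used by ATM's header error control."""
        crc = 0
        for byte in header4:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc ^ 0x55

    def pack_uni_header(gfc: int, vpi: int, vci: int, pti: int, clp: int) -> bytes:
        """Pack the five header octets of a UNI-format cell (illustrative sketch)."""
        b0 = (gfc & 0xF) << 4 | (vpi >> 4) & 0xF
        b1 = (vpi & 0xF) << 4 | (vci >> 12) & 0xF
        b2 = (vci >> 4) & 0xFF
        b3 = (vci & 0xF) << 4 | (pti & 0x7) << 1 | (clp & 0x1)
        head = bytes([b0, b1, b2, b3])
        return head + bytes([hec(head)])

    # Example: VPI 1, VCI 100, ordinary user data cell, low cell loss priority.
    print(pack_uni_header(gfc=0, vpi=1, vci=100, pti=0, clp=0).hex())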

Connection setup standards for the ATM layer define virtual circuits and virtual paths. An ATM virtual circuit is the connection between two ATM end stations for the duration of the connection. The virtual circuit is bidirectional, which means that once a connection is established, each end station can send to or receive from the other end station.

After the connection is established, the switches between the end stations receive translation tables, which specify where to forward cells based on the following information:

  • The port from which the cells enter

  • Special values in the cell headers, which are called virtual circuit identifiers (VCIs) and virtual path identifiers (VPIs)

The translation tables also specify which VCIs and VPIs the switch should include in the cell headers before the switch sends the cells.
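The sketch below shows one way such a translation table could be represented; the port and VPI/VCI values are invented, and real switches perform this lookup in hardware.

    # Keyed by (input port, VPI, VCI); the value gives the output port and the
    # VPI/VCI to rewrite into the cell header before forwarding.
    translation_table = {
        (1, 0, 100): (4, 2, 37),
        (2, 1, 55):  (3, 0, 90),
    }

    def forward(in_port: int, vpi: int, vci: int):
        try:
            out_port, new_vpi, new_vci = translation_table[(in_port, vpi, vci)]
        except KeyError:
            return None   # no connection has been set up for this cell; discard it
        return out_port, new_vpi, new_vci

    print(forward(1, 0, 100))   # -> (4, 2, 37)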

Three types of virtual circuits are used:

  • Permanent virtual circuits (PVCs)

  • Switched virtual circuits (SVCs)

  • Smart permanent virtual circuits (SPVCs)

PVC

A PVC is a permanent connection between two end stations that is established manually when the network is configured. A user tells the ATM service provider or network administrator which end stations must be connected, and the ATM service provider or network administrator establishes a PVC between these end stations.

The PVC consists of the end stations, the transmission medium, and all of the switches between the end stations. After a PVC has been established, a certain amount of bandwidth is reserved for the PVC, and the two end stations do not need to set up or clear connections.

SVC

An SVC is established whenever an end station attempts to send data. First, the end station requests a connection with another end station. Then the ATM network distributes the translation tables and notifies the sending station about which VCIs and VPIs must be included in the cell headers. An SVC is established between two end stations on an as-needed basis and expires after an arbitrary amount of time.

An SVC must be established dynamically rather than manually. With an SVC, signaling standards for the ATM layer define how an end station establishes and maintains a connection and how the end station clears the connection. These standards also specify how an end station uses Quality of Service parameters from the ATM adaptation layer to establish the connection.

In addition, the signaling standards define a way to control traffic and congestion: A connection is established only if the network can support that connection. The process of determining whether or not to establish a connection is called connection admission control (CAC).
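The sketch below captures the basic idea of connection admission control using a single link and invented numbers; it is not the algorithm any particular switch uses.

    class Link:
        """Toy model of one switch link for connection admission control."""
        def __init__(self, capacity_mbps: float):
            self.capacity = capacity_mbps
            self.committed = 0.0

        def admit(self, requested_mbps: float) -> bool:
            """Accept the connection only if the link can still support it."""
            if self.committed + requested_mbps > self.capacity:
                return False          # refuse: the connection cannot be supported
            self.committed += requested_mbps
            return True

    link = Link(capacity_mbps=155)
    print(link.admit(100))   # True
    print(link.admit(80))    # False: 100 + 80 exceeds the 155 Mbit/s capacity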

SPVC

An SPVC is a hybrid of a PVC and an SVC. Like a PVC, an SPVC is established manually when the network is configured. However, the ATM service provider or network administrator sets up only the end stations. For each transmission, the network determines which switches the cells will pass through.

Most early ATM equipment supported only PVCs. Support for SVCs and SPVCs is only now becoming available.

Advantages and Disadvantages

PVCs have two advantages over SVCs: Because a network using SVCs must spend time establishing a connection, pre-established PVCs can provide better performance. PVCs also give you more control over the network since an ATM service provider or a network administrator can select the path that the cells will take.

SVCs have several advantages over PVCs, however: Because SVCs can be set up and cleared more easily than PVCs, networks using SVCs can mimic connectionless networks. This feature is useful if you are running an application that cannot work on a connection-oriented network.

In addition, SVCs use bandwidth only when necessary, whereas PVCs must reserve bandwidth all the time, in case it is needed. SVCs also require less management because they are established automatically rather than manually.

Finally, SVCs provide fault tolerance: If one switch along the connection path fails, other switches select a different path.

In some ways, SPVCs represent the best of both worlds. As with PVC end stations, SPVC end stations are set up in advance. As a result, the end stations do not need to spend time establishing a connection each time an end station needs to transmit cells. Like SVCs, however, SPVCs provide fault tolerance.

Of course, SPVCs have drawbacks as well: Like PVCs, SPVCs must be established manually, and you must reserve bandwidth for SPVCs even when you are not using them.

Virtual Paths

Connection setup standards for the ATM layer also define virtual paths. Whereas a virtual circuit is a connection that is established between two end stations for the duration of a connection, a virtual path is a path between two switches that exists all the time, regardless of whether a connection is being made. In other words, a virtual path is a remembered path that all traffic from a single switch can take to reach another switch.

When a user requests a virtual circuit, the switches determine which virtual path to use in order to reach the end stations. Traffic for more than one virtual circuit may travel the same virtual path at the same time. For example, a virtual path with 120 Mbit/s of bandwidth can be divided into four simultaneous connections of 30 Mbit/s each.
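Continuing that example, the following sketch keeps track of how one virtual path's bandwidth is divided among individual virtual circuits; the VPI and VCI values are invented.

    # One virtual path between two switches, identified here by VPI 5, with
    # 120 Mbit/s of bandwidth divided among individual virtual circuits.
    virtual_path = {"vpi": 5, "capacity_mbps": 120, "circuits": {}}

    def add_circuit(path: dict, vci: int, mbps: float) -> bool:
        used = sum(path["circuits"].values())
        if used + mbps > path["capacity_mbps"]:
            return False
        path["circuits"][vci] = mbps
        return True

    for vci in (32, 33, 34, 35):
        add_circuit(virtual_path, vci, 30)    # four simultaneous 30 Mbit/s circuits

    print(add_circuit(virtual_path, 36, 30))  # False: the path is fully allocated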

THE ATM ADAPTATION LAYER AND QUALITIES OF SERVICE

In the OSI model, standards for the network layer specify how to route and manage packets. In the ATM model, standards for the ATM adaptation layer perform three similar functions:

  • They specify how packets are formatted.

  • They provide information to the ATM layer that enables this layer to set up connections with different Qualities of Service.

  • They control congestion.

Packet Formatting

The ATM adaptation layer consists of four protocols (referred to as AAL protocols) that format packets. These four protocols take cells from the ATM layer, reassemble them into data that the protocols operating at higher layers can use, and send this data on to a higher layer. When the AAL protocols receive data from the higher layer, they break the data into cells and pass these cells to the ATM layer.
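The sketch below shows segmentation and reassembly in miniature: data from a higher layer is split into 48-byte cell payloads and later rejoined. For simplicity it pads with zeros and omits the length and error-check fields that a real AAL protocol such as AAL 5 adds.

    CELL_PAYLOAD = 48

    def segment(data: bytes) -> list[bytes]:
        """Split higher-layer data into 48-byte cell payloads, padding the last
        payload with zeros (the trailer fields of a real AAL are omitted here)."""
        padded = data + b"\x00" * (-len(data) % CELL_PAYLOAD)
        return [padded[i:i + CELL_PAYLOAD] for i in range(0, len(padded), CELL_PAYLOAD)]

    def reassemble(cells: list[bytes], original_length: int) -> bytes:
        """Concatenate the payloads and strip the padding again."""
        return b"".join(cells)[:original_length]

    frame = b"an example higher-layer packet of arbitrary length"
    cells = segment(frame)
    assert reassemble(cells, len(frame)) == frame
    print(f"{len(frame)} bytes carried in {len(cells)} cells")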

The following AAL protocols are defined in the B-ISDN standards:

  • AAL 1

  • AAL 2

  • AAL 3/4

  • AAL 5

However, the ATM Forum has developed only three of these AAL protocols: AAL 1, AAL 3/4, and AAL 5.

Each AAL protocol packages data into cells in a different way. All of the protocols, except for AAL 5, add a certain amount of overhead to the 48 bytes of data in an ATM cell. This overhead includes special handling instructions for each cell. These instructions are used to provide different service categories.

Qualities of Service

The ATM adaptation layer also defines four service categories:

  • Constant bit rate (CBR)

  • Variable bit rate (VBR)

  • Unspecified bit rate (UBR)

  • Available bit rate (ABR)

These service categories are used to provide different Qualities of Service for different types of traffic.

CBR is used for time-sensitive traffic, such as audio and video, which is sent at a constant rate and requires low latency. CBR guarantees the highest quality but uses bandwidth inefficiently. To protect CBR traffic from other transmissions, CBR always reserves bandwidth for the connection, even if there are currently no transmissions on the channel. Reserving bandwidth in this way is particularly a problem over a WAN link when the subscriber is paying for each megabit of bandwidth, whether the virtual circuit is in use or not.

Two types of VBR exist and are used for different types of traffic: Real-time VBR (RT-VBR), which requires fixed timing between cells, supports time-sensitive traffic such as compressed voice and video. Non-real-time VBR (NRT-VBR), which does not require fixed timing between cells, supports delay-tolerant traffic such as frame relay.

Because VBR does not reserve bandwidth, it uses bandwidth more efficiently than CBR does. Unlike CBR, however, VBR cannot guarantee quality.

UBR is used for data traffic such as TCP/IP traffic, which can tolerate delays. Like VBR, UBR does not reserve additional bandwidth for a virtual circuit. As a result, the same virtual circuit can be reused for multiple transmissions, thus using bandwidth more efficiently. Because UBR does not guarantee quality, however, UBR traffic on highly congested networks has a high rate of cell loss and retransmission.

Like UBR, ABR is used for data traffic that can tolerate delays. Also like UBR, ABR enables virtual circuits to be reused. However, whereas UBR does not reserve any bandwidth or prevent cell loss, ABR negotiates a range of acceptable bandwidth for a connection and an acceptable cell loss ratio.

CBR, VBR, UBR, and ABR all include different traffic parameters, such as the average and peak bit rates at which an end station can transmit. These service categories also include Quality of Service parameters such as the following:

  • Cell loss ratio, which is the percentage of high-priority cells that can be lost during a transmission.

  • Cell transfer delay, which is the amount of time (or the average amount of time) a cell takes to reach its destination.

  • Cell delay variation (CDV, also known as jitter), which refers to variations in the way groups of cells are spaced between end stations. High CDV causes breaks in audio and video signals.

When requesting a connection, an end station requests one of the four service categories. The ATM network then establishes the connection using the corresponding traffic and Quality of Service parameters. For example, if the end station requested a CBR video connection, the ATM network would reserve the necessary bandwidth and use the traffic and Quality of Service parameters to ensure an acceptable bit rate, cell loss, delay, and variation.
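As a small illustration of one of these parameters, the sketch below estimates cell delay variation from a set of invented cell arrival times; perfectly spaced cells would show zero variation.

    # Hypothetical arrival times (in milliseconds) of consecutive cells on one
    # connection.
    arrival_ms = [0.0, 2.8, 5.9, 9.1, 11.7, 15.2]

    gaps = [b - a for a, b in zip(arrival_ms, arrival_ms[1:])]
    mean_gap = sum(gaps) / len(gaps)
    variation = max(gaps) - min(gaps)   # one simple measure of cell delay variation

    print(f"mean spacing {mean_gap:.2f} ms, variation {variation:.2f} ms")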

The ATM network also uses the Quality of Service parameters to perform traffic policing, thereby ensuring that the network does not become congested. The network monitors existing connections, ensuring that they do not exceed the maximum bandwidth rate they have been granted. If a connection begins to exceed this rate, the network discards cells.

The network also determines which cells to discard if the network becomes congested: The network checks the Quality of Service parameters for each connection and discards cells from connections that allow a high cell loss ratio. Finally, the network refuses to accept connections if the network cannot support them.
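The sketch below shows the general idea behind this kind of traffic policing using a simple leaky-bucket check with invented numbers; it stands in for, and greatly simplifies, the policing rules an ATM network actually applies.

    class Policer:
        """Simplified leaky-bucket policer, a stand-in for ATM traffic policing."""
        def __init__(self, rate_cells_per_s: float, burst_cells: float):
            self.rate = rate_cells_per_s
            self.burst = burst_cells
            self.bucket = 0.0
            self.last = 0.0

        def conforming(self, arrival_s: float) -> bool:
            # The bucket drains at the contracted rate and fills by one for each
            # conforming cell; a cell that would overflow it is non-conforming.
            self.bucket = max(0.0, self.bucket - (arrival_s - self.last) * self.rate)
            self.last = arrival_s
            if self.bucket + 1 > self.burst:
                return False
            self.bucket += 1
            return True

    p = Policer(rate_cells_per_s=1000, burst_cells=5)
    arrivals = [i * 0.0002 for i in range(20)]    # cells arriving at 5,000 cells/s
    kept = sum(p.conforming(t) for t in arrivals)
    print(f"{kept} of {len(arrivals)} cells conform; the rest would be discarded")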

ATM's ability to provide different Qualities of Service to applications is generally seen as one of its strengths. Users can reserve only the bandwidth they need, while preserving the quality of audio and video traffic and ensuring that the network does not become congested. However, for ATM's Qualities of Service to be truly useful, applications must be able to take advantage of these Qualities of Service.

ATM vendors and standards organizations are devising ways to enable applications to use Qualities of Service. For example, several ATM vendors are working to extend the Internet Engineering Task Force's (IETF's) Resource Reservation Protocol (RSVP) to enable applications to request Qualities of Service. In addition, to enable non-ATM-aware applications to take advantage of Qualities of Service, FORE Systems and several other vendors are developing Legacy Application Quality of Service software. This software will be implemented in LAN access devices and ATM network interface boards, enabling the devices and boards to establish connections with different Qualities of Service, depending on the type of application, the source and destination address, and other characteristics.

ATM MODEL STANDARDS

The ATM Forum has developed many standards based on the ATM model, including the following:

  • User-to-Network Interface (UNI). UNI defines the interface between an end station and a switch.

  • Private Network-to-Network Interface (PNNI). PNNI defines the interface between switches.

These standards define how workstations and switches interoperate in an all-ATM network.

UNI

The ATM Forum's UNI standards define the way in which an ATM-attached device communicates with a switch. Figure 3 shows how a packet moves from a workstation to a switch. First, the user sends data such as audio, video, or LAN traffic. The type of data determines which one of the four AAL protocols receives the data and breaks it into cells. These cells are then transmitted to the ATM layer, which adds routing information. Next, the cells are transmitted to the physical layer, which breaks them into bits and sends these bits over the transmission medium to the switch.

Figure 3: An ATM workstation communicating with a switch

The ATM Forum has defined two versions of UNI--UNI 3.0 and UNI 3.1. These versions are nearly identical except that the ATM Forum based UNI 3.1 on the latest version of the ITU signaling specifications, unfortunately making UNI 3.1 signaling incompatible with UNI 3.0 signaling. Fortunately, most switches support both UNI 3.0 and UNI 3.1.

The ATM Forum is currently working on defining UNI 4.0, which will include signaling changes, support for ABR, and other enhancements. UNI 4.0 will be compatible with UNI 3.1.

PNNI

The ATM Forum's PNNI specification includes standards that enable two switches from different vendors to work together. Figure 3 shows how a cell passes through an ATM switch. The switch receives the cell at the physical layer as a physical signal, passes this signal to the ATM layer, and converts this signal into a cell. The switch then examines the cell, determines where to send it, converts the cell back into a physical signal, and passes this signal to the next switch or to the end station.

PNNI is a link-state routing protocol, similar to the NetWare Link Services Protocol (NLSP) used in IPX networks or the Open Shortest Path First (OSPF) routing protocol used in IP networks. With link-state routing, switches distribute information about network topology and about the Qualities of Service the ATM network can support. As a result, every switch on the network understands the topology of the entire network. Because every switch understands the network topology, every switch can calculate a path through the network, taking into account traffic conditions such as congestion.
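The sketch below gives the flavor of such a calculation: an ordinary shortest-path search over an invented topology in which links that cannot support the requested bandwidth are pruned first. It illustrates link-state route computation in general, not PNNI's actual algorithm.

    import heapq

    # Invented topology: switch -> {neighbor: (delay, available bandwidth in Mbit/s)}
    topology = {
        "A": {"B": (1, 155), "C": (5, 622)},
        "B": {"A": (1, 155), "C": (1, 25), "D": (5, 155)},
        "C": {"A": (5, 622), "B": (1, 25), "D": (2, 155)},
        "D": {"B": (5, 155), "C": (2, 155)},
    }

    def best_path(src, dst, required_mbps):
        """Dijkstra over the links that can still support the requested bandwidth."""
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbor, (delay, bandwidth) in topology[node].items():
                if bandwidth >= required_mbps and neighbor not in seen:
                    heapq.heappush(queue, (cost + delay, neighbor, path + [neighbor]))
        return None

    print(best_path("A", "D", required_mbps=100))  # -> (6, ['A', 'B', 'D'])
    print(best_path("A", "D", required_mbps=20))   # -> (4, ['A', 'B', 'C', 'D'])

With a 100 Mbit/s request, the 25 Mbit/s link between B and C is pruned and a longer route is chosen; with a smaller request, the shorter route through that link becomes available.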

Because PNNI defines a way for switches to distribute information hierarchically, not every switch needs to know the entire network topology in order to forward cells. Instead, the ATM service provider or network administrator can divide the network into different conceptual layers. Each switch needs to know only the topology of the layer to which it belongs. Thus, you can construct extremely large networks without overwhelming the switches.

With PNNI, the network can be divided into many layers or only one layer. According to Andy Reid, product manager for ForeThought Software at FORE Systems, an ATM network using only one layer can support approximately 200 switches.

At the lowest layer of the network topology, the switches are divided into clusters, called peer groups. All of the switches within a peer group exchange routing information with one another. If a switch is a border node (a member of more than one peer group), the switch exchanges routing information with all of the peer groups to which it belongs. In this way, both peer groups know how to forward cells to destinations that either peer group can reach. Switches within a peer group use PNNI to elect a peer group leader.

At the next layer of the network topology, several peer group leaders form a peer group of their own. They then use PNNI to elect a peer group leader. These leaders can form a peer group in the next layer, and so on until the highest layer. At the highest layer, the entire network is represented by one peer group.

Switches at the lowest layer of the network topology use information from higher layers to compute routes. As a result, switches do not need to know the entire network topology.

PNNI standards also specify how signaling is performed. PNNI signaling standards specify how to set up ATM virtual circuits with the appropriate Qualities of Service and how to maintain and clear these circuits. In addition, the PNNI signaling standards define ways to ensure that the network does not become congested by allowing only connections that the network can support and by policing existing connections to ensure they do not use more bandwidth than they have been given.

CONCLUSION

Understanding basic ATM concepts, the ATM model, and ATM standards will help you determine whether this technology is a viable high-speed solution for your network. (If you are interested in a quick reference of common ATM terms, download our ATM glossary.) However, this information is just the first step. Before you actually purchase an ATM product to add to your existing network, you should know how ATM vendors are making their products interoperate with other products. For more information about current ATM products, read the related article, "Real-World ATM."

Kristin King works for Niche Associates, an agency that specializes in technical writing and editing.

ATM Standards Organizations

Many organizations have had a hand in shaping ATM standards. The following organizations are the most influential:

ANSI, CCITT, AND ITU

The American National Standards Institute (ANSI) and the International Telegraph and Telephone Consultative Committee (CCITT) developed ATM as a set of recommendations for the Broadband Integrated Services Digital Network (B-ISDN). The CCITT is now the International Telecommunication Union (ITU).

B-ISDN is sometimes confused with ATM; however, B-ISDN is not ATM. B-ISDN is a high-speed network that uses ATM as its transport mechanism. B-ISDN defines the User-to-Network Interface (UNI) and the Network-to-Network Interface (NNI) for ATM. In addition, B-ISDN defines the following three levels, which the B-ISDN standards documents refer to as planes:

  • User Plane. This plane defines UNI. The user plane includes all three ATM layers: the physical layer, the ATM layer, and the ATM adaptation layer.

  • Control Plane. This plane defines NNI and also includes all three ATM layers.

  • Management Plane. This plane defines network management.

ATM FORUM

The ATM Forum, a consortium of ATM vendors, appropriated and extended the B-ISDN standards to create industry standards that enable ATM products to interoperate with products on traditional LANs.

Some of the most important ATM standards include the following:

  • UNI

  • Private NNI (PNNI)

  • Integrated PNNI (IPNNI)

  • LAN Emulation (LANE)

  • MultiProtocol Over ATM (MPOA)

In addition, the ATM Forum has defined the Interim Local Management Interface, which defines ATM network management. (LANE and MPOA are explained in "Real-World ATM.")

IETF

The Internet Engineering Task Force (IETF) developed Classical IP Over ATM, which allows vendors to create products that send IP packets over an ATM network within a single IP subnetwork. The IETF is now developing the Next Hop Resolution Protocol (NHRP), which will route IP packets between IP subnetworks.

A More Affordable ATM Solution: 25 Mbit/s ATM

Most ATM switches provide 155 Mbit/s performance. For companies that now have a 10 Mbit/s network, 155 Mbit/s to the desktop may be too large a leap. The cost of 155 Mbit/s is often hard to justify, especially since 155 Mbit/s ATM cannot run on Category 3 unshielded twisted pair (UTP) cabling and may require a new cabling system.

As a result, the ATM Forum adopted a 25 Mbit/s ATM specification that runs over Category 3 UTP cabling. Because 25 Mbit/s ATM is fast enough for audio and video applications and can be implemented at less cost than 155 Mbit/s ATM, 25 Mbit/s ATM is an ideal desktop solution for companies that cannot afford or do not yet need 155 Mbit/s ATM. Several vendors are already selling 25 Mbit/s network interface boards and switches, including IBM, FORE Systems, and Madge Networks. These 25 Mbit/s devices incorporate LAN emulation (LANE), which means that end stations can communicate over ATM switches just as if they were communicating over Ethernet or Token Ring switches. Some 25 Mbit/s switches also support IP switching, a technology that enables end stations to send IP packets over ATM switches.

In addition, 25 Mbit/s ATM works with videoconferencing applications. For example, First Virtual Support sells 25 Mbit/s ATM network interface boards that support existing desktop videoconferencing products from AT&T Global Information Solutions and PictureTel Corp.

Although 25 Mbit/s ATM is still relatively expensive, many companies may find that the ability to support audio and video justifies the cost.

* Originally published in Novell Connection Magazine

