
LAN-based Imaging Tests Revisited


RICH LEE
Senior Research Engineer
Novell Systems Research

BRENNA JORDAN
Research Assistant
Novell Systems Research

01 Dec 1995


For several years, Novell Research has been involved in a series of imaging tests. From our initial focus on DOS-based imaging, we have turned to testing imaging performance in the MS Windows 3.1 environment. Integration of the latest imaging technologies calls for thorough understanding of the effects and implications of the evolving DOS and Windows imaging platforms. This Application Note highlights some of the key differences between image retrievals in a DOS-based and a Windows-based imaging system.

PREVIOUS APPNOTES IN THIS SERIES

Nov 93  "Multi-Segment LAN Imaging: Departmental Configuration Guidelines"
Jul 93  "Multi-Segment LAN Imaging Implementations: Four-Segment Ethernet"
May 93  "Imaging Test Results: Retrieval Rates on Single- and Multiple-Segment LANs"
Feb 93  "Imaging Configurations Performance Test Results"
Jan 93  "Imaging Configurations and Process Testing"
Oct 92  "The Past, Present, and Future Bottlenecks of Imaging"
Jul 92  "The Hardware of Imaging Technology"
May 92  "Issues and Implications for LAN-Based Imaging Systems"

Introduction

In today's fast-paced business environment, people are continually striving to be more efficient and more effective at what they do. The area of document imaging is no exception. No matter what type of imaging system is deployed, PC users demand that image retrievals be quick and hassle-free. As technology advances, expectations for "bigger, better, and faster" image retrievals escalate proportionally.

Among PC users, individual image retrieval rates are a key delineating factor. However, systems integrators, developers, and purchasers cannot discount such issues as CPU utilization, disk channel throughput, and error rates. For several years, Novell Research has been involved in a series of imaging test benches. Our initial focus was on DOS-based imaging; results from these tests have been reported in previous AppNotes (see the list on the title page of this AppNote). Recent efforts have turned to image performance testing in the MS Windows 3.1 environment. Understanding the effects and implications of the evolving DOS and Windows imaging platforms is a critical factor in successfully integrating the latest imaging technologies.

This AppNote points out some of the key differences between image retrievals in a DOS-based and a Windows 3.1-based imaging system from our in-house testing on a NetWare 3.12 network. Specific performance comparisons are made in the areas of CPU utilization, disk channel throughput, and workstation image retrieval rates. Other areas for consideration when comparing the two operating system platforms encompass the actual physical setup of the test benches. Test bench parameters such as the number of clients, latency, and caching are critical for thorough analysis of our data.

It is not our intention to provide specific performance guidelines for individual client bases. However, these comparative results represent a first step in the process of developing LAN implementation guidelines for Windows 3.x- and Windows 95-based imaging clients. The current round of image testing also gives clients a basis for weighing performance against output requirements in both the DOS and Windows environments.

The Test Benches

To compare the DOS and Windows environments, we established two separate test benches. The multiple-segment, DOS-based imaging test bench (described fully in previous AppNotes) is referred to as MUTEST. The Windows-based imaging test bench is called IMGTEST. This section describes the setup and parameters for each test bench.

Although each test bench adheres to a different paradigm, we used the same server for both sets of tests. The testing was done on a Compaq SystemPro file server with 40 MB of RAM and an 11 GB Micropolis disk array, running NetWare 3.12. A single NE2000 network interface card in the server provided the connection to a single Ethernet LAN segment. Image files used in the IMGTEST and MUTEST test benches resided on the server's disk array until cached in server memory by NetWare according to its least recently used (LRU) algorithm.
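To make that caching behavior concrete, the following is a minimal sketch, in Python, of a least-recently-used file cache with a fixed byte budget. It is illustrative only; NetWare 3.12 manages its file cache internally, and none of these names correspond to actual NetWare interfaces.

from collections import OrderedDict

class LRUFileCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()   # filename -> cached file contents

    def get(self, filename):
        # Return cached data, loading from disk (and evicting the least
        # recently used entries) on a cache miss.
        if filename in self.entries:
            self.entries.move_to_end(filename)        # mark as recently used
            return self.entries[filename]
        with open(filename, "rb") as f:               # miss: go to the disk array
            data = f.read()
        while self.used + len(data) > self.capacity and self.entries:
            _, evicted = self.entries.popitem(last=False)   # drop least recent
            self.used -= len(evicted)
        self.entries[filename] = data
        self.used += len(data)
        return data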

MUTEST (DOS-based Test Bench)

The DOS-based image testing was performed on the MUTEST test bench, consisting of eight 386/33 workstations connected to a SynOptics 10Base2 concentrator. Four unique images were used in this test bench, with a total of 10 image seeks per workstation. The images retrieved in this suite followed a file-to-image relationship whereby all of the images were saved in one file, cached by NetWare in server memory, and retrieved as byte offsets from within the file.
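As a hedged illustration of this file-to-image relationship, the Python sketch below indexes each image by a byte offset and length within one container file and retrieves it with a seek and a read. The offsets, lengths, and helper name are hypothetical placeholders, not the actual MUTEST values or tools.

# Hypothetical index of images stored back-to-back in one container file.
IMAGE_INDEX = {
    "image1": (0,      81920),    # (byte offset, length in bytes) -- placeholder values
    "image2": (81920,  76800),
    "image3": (158720, 90112),
    "image4": (248832, 70656),
}

def retrieve_image(container_path, name):
    offset, length = IMAGE_INDEX[name]
    with open(container_path, "rb") as f:
        f.seek(offset)               # jump to this image's position in the file
        return f.read(length)        # read exactly one image's worth of bytes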

Our early DOS-based tests showed high error rates due to improper file locking and request timing in the test bench. The clients' images-per-second rate heavily taxed the wire, with a small number of workstations issuing an excessive number of simultaneous image calls. To determine the effective parameters governing image retrievals, it was critical to reduce the number of across-the-wire "collisions." In later DOS-based testing, we imposed a five-second latency period between image retrievals to simulate user lag-time.
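As a rough sketch of that revised retrieval pattern, the Python loop below performs a workstation's image seeks with a five-second pause between retrievals to simulate user lag-time. The actual MUTEST scripts were not written in Python; the retrieve callable stands in for whatever mechanism fetches an image.

import time

SEEKS_PER_WORKSTATION = 10
LATENCY_SECONDS = 5                  # simulated user lag-time between retrievals

def run_mutest_client(retrieve, image_names):
    # Perform the per-workstation image seeks with a fixed delay between them.
    # `retrieve` is any callable that fetches one image by name.
    for seek in range(SEEKS_PER_WORKSTATION):
        name = image_names[seek % len(image_names)]   # cycle through the images
        retrieve(name)                                # fetch one image across the wire
        time.sleep(LATENCY_SECONDS)                   # spread requests to reduce collisions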

To increase the number of workstations in our MUTEST suite, we split the network into multiple segments. To eliminate disk I/O effects and increase high-end throughput, we ran a preliminary pass before taking the actual data samples. In theory, this gave us a way to optimize against disk contention.

IMGTEST (Windows-based Test Bench)

The IMGTEST test bench, built around an object-oriented front-end test tool, ran in the Windows 3.1 operating environment. Physically, the test bench consisted of six 386/33 workstations connected to an Ethernet 10BaseT hub. The image files used for IMGTEST were comparable to those used in MUTEST: medium-density scanned images (8.5- by 11-inch pages consisting of large border space and some graphics), scanned at 150 dpi. The image retrievals were routed through Windows using an ImPower Viewer object. To further examine the effects of file roll-over in the NetWare LRU cache, we integrated ten unique images (for a total of 200 image seeks per workstation) into our test bench.

In the Windows-based series of tests, no latency period was necessary between image retrievals. The viewers and drivers used slowed retrieval enough to act as an internal latency period within the Windows test bench itself, significantly reducing the chances of across-the-wire data collisions. Therefore, IMGTEST's script files did not require a prescribed delay period.

The start timing for each test in Windows could be controlled directly. We used the server's internal clock as the reference point for execution of the IMGTEST suite. Each client was set to start retrieving images from the server at a certain "trigger" time. During the testing period, each client gathered statistics regarding individual image retrieval rates. Further data was acquired through the use of a LANalyzer and STAT.NLM. Each of these data collection mechanisms was set to a specific referencing clock.
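The sketch below shows, in hedged Python form, the kind of trigger-and-time logic described above: each client idles until a shared start time and then records the elapsed time of every retrieval. The names are hypothetical and stand in for the actual IMGTEST scripting, which drove an ImPower Viewer object under Windows 3.1.

import time

def run_timed_client(trigger_time, retrieve, image_names, seeks=200):
    # Wait until the shared trigger time (an epoch timestamp derived from the
    # common reference clock), then log the elapsed time of each retrieval.
    while time.time() < trigger_time:
        time.sleep(0.1)                               # idle until the start time

    timings = []
    for seek in range(seeks):
        name = image_names[seek % len(image_names)]
        start = time.time()
        retrieve(name)                                # `retrieve` fetches one image by name
        timings.append((name, time.time() - start))   # per-image retrieval time
    return timings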

As in the DOS tests, the IMGTEST test bench was executed once before the test run to cache the images. However, the file-based paradigm invoked in the IMGTEST bench does not directly support image caching. Therefore, image retrievals often occur directly from the server's disk array before being output to each client's display.

Key Test Results

Our intentions in the earlier imaging test runs were to define and clarify the relationship of LAN components versus current imaging technology. Our intentions now go one step further: to compare image retrieval statistics derived from DOS (MUTEST) with those from the Windows (IMGTEST) environment.

The following sections present some key results obtained from the DOS and Windows test benches with respect to CPU utilization, disk throughput, and image retrieval time. In comparing and contrasting these results, we can make some initial inferences about the differences between the two environments.

CPU Utilization

A comparison of server CPU utilization presents a classic display of the differences between the DOS- and Windows-based imaging test suites. As shown in Figure 1, the CPU utilization in the DOS tests appears to be relatively consistent, with minimal "spikes" in the data.

Note that the utilization bursts remain consistent, an indication that the server's CPU is only periodically tasked at high levels. The "bursty" nature of the CPU utilization data could be attributed to any number of factors, including periodic broadcasts and responses from each client. Overall, for eight users on one segment, the CPU utilization stayed at or below 15 percent for the majority of the test. This relatively low utilization demonstrates that the server's CPU is not being heavily taxed in the DOS environment.

Figure 1: In the DOS tests, server CPU utilization remained relatively constant with minimal spikes.

Figure 2 illustrates the server CPU utilization of the IMGTEST test bench.

Figure 2: In the Windows tests, server CPU utilization exhibited generally higher levels with more frequent spikes.

Here, the CPU utilization spends a considerable amount of time at or above the 10 percent mark, with the overall CPU utilization rate never exceeding 40 percent. This is the opposite of what we observed in the DOS-based image retrieval testing. Again, this demonstrates a marked difference between the Windows-based test suite and its DOS-based predecessor.

Note that, as in the DOS-based tests, the CPU utilization peaks at regular intervals. However, a comparison of the CPU utilization graphs clearly shows that the peaks are much closer together under the Windows environment. This could be ascribed to the implementation differences between single-image file retrievals in the Windows-based tests and the multiple-images-per-file paradigm in the DOS-based tests. Again, although the images were cached into memory at the beginning of the test, the nature of the IMGTEST test bench is to periodically recache image files from the server's disk array.

Disk Throughput

The graph of disk channel throughput in Figure 3 points out how caching images affects the server's performance in the DOS-based tests.

Figure 3: The results of DOS-based throughput tests show a significant peak in disk channel activity midway through the test.

As seen in Figure 3, the server disk channel throughput remains relatively low at approximately 0-30,000 bytes per second. Around 200-250 seconds into the test, the disk channel throughput soars into the 200,000-300,000 bytes per second range, and then proceeds to drop back into the low 0-30,000 bytes per second range. This is evidence that when the server runs out of RAM cache in the MUTEST suite, it resumes disk activity.

The server disk channel throughput differs significantly between the DOS-based environment and the Windows environment (see Figure 4).

Figure 4: In the Windows-based tests, throughput results show constant disk channel activity.

As noted in Figure 4, the disk channel throughput in IMGTEST never exceeds 50,000 bytes per second. (In MUTEST it exceeded 50,000 bytes per second midway through the test.) The results drawn from image retrievals on the Windows platform indicate that disk channel throughput demonstrates the most activity at 15,000 bytes per second and below.

After direct analysis of the data in Figures 3 and 4, we can draw some conclusions as to the effects of disk caching in the two operating environments. Remember that in each test bench, we executed a preliminary test run to cache the images. Evidence from the MUTEST test bench shows that running out of cache for the images increases disk activity midway through the test. On the other hand, the IMGTEST results reflect an inability to keep even 10 images cached under the demands of the test bench. This significantly increases the amount of time the server's CPU spends attending to disk I/O functions.

The midstream bursts seen in the DOS test bench results are replaced by continuous low-level activity overlaid with high utilization peaks in the Windows environment. Although the caching system itself is external to each test suite, we cannot discount the effects of the image and file paradigms that drive each test bench.

Retrieval Time

Figure 5 shows a distinct anomaly peculiar to our Windows environment testing: some clients were consistently faster, while others were apparently hindered by external factors that affected their image retrieval rates.

Figure 5: In the Windows environment, retrieval time test results show that some clients were faster than others.

We cannot make a firm generalization about the Windows environment from this set of tests alone, but the anomaly raises enough concern to merit further testing.

Interestingly, when the client image retrieval times are averaged together, the result is a chart like that shown in Figure 6. This chart indicates that, within the range of file sizes tested, the time a client takes to retrieve an image decreases as the size of the image file increases. These results may seem counterintuitive, but keep in mind that they simply show a particular typification for file-based imaging within a defined size range. Further testing will be done with larger file sizes, as equipment allows.

Figure 6: Averaging client retrieval times together shows that the larger the file, the less time it takes to retrieve it.


Note: The graph in Figure 6 does not take into account an incidental observation that certain Windows clients (typically the first clients to log in to the NetWare server) are favored with respect to service in the IMGTEST test bench. This observation also merits further investigation.
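The averaging behind a chart like Figure 6 is a simple reduction; the Python sketch below pools per-client samples and computes the mean retrieval time for each image file size. The data layout is hypothetical and does not reflect the actual IMGTEST log format.

from collections import defaultdict

def average_by_file_size(samples):
    # `samples` is a list of (file_size_bytes, retrieval_seconds) tuples
    # pooled from all clients; returns {file_size: mean retrieval time}.
    totals = defaultdict(lambda: [0.0, 0])
    for size, seconds in samples:
        totals[size][0] += seconds
        totals[size][1] += 1
    return {size: total / count for size, (total, count) in totals.items()}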

Conclusion

In this Application Note, we have taken data from earlier DOS-based test runs and contrasted it with results from Windows file-based implementation models. There is evidence that the DOS-based image testing from previous years yielded higher performance rates and higher capacity. However, there is a distinct need to pursue Windows-based image retrieval performance and configuration statistics when dealing with large, color-enhanced images.

In future AppNotes, we will present additional results from imaging tests in a variety of environments. Our ultimate goal is to produce detailed configuration and implementation guidelines that system integrators, developers, and customers can benefit from in image-enabling their business activities.

* Originally published in Novell AppNotes

