How You Can Help Improve the Quality of Novell's Software
Articles and Tips: article
MCNE, MCNI, CDE
Novell Technical Support
11 Jan 2002
This AppNote explains the system/software testing process at Novell and outlines various ways in which you, as a customer, can help us improve the quality of our software.
Since this AppNote was written in the U.K., it retains the author's original British spelling and usage.
software testing, beta testing, technical support
network administrators, support technicians
familiarity with networking software
The process of software development and testing affects each and every one of us, both as product vendors and as customers. Without an understanding of what goes on during the software testing process, a communications gap exists between Novell and you, which affects our product quality.
In a sense, the software testing cycle never ends. Products are developed, tested internally, tested externally, deployed, fixed, and then tested again. As customers, you measure the actual quality of this testing when you deploy our products. Our products are just as much yours as they are ours.
This AppNote explains how software/system testing is performed at Novell. The purpose is to encourage you to get involved in the testing process. Some of you may have participated in beta testing our software before, but many are not aware of how helpful it can be when customers not only install and configure our products, but rigorously test them to see if they can find any problems. What you do with the information you gather about these problems also has a big impact on the problem's resolution.
Understanding the Software Testing Process at Novell
To begin, let's take a look at the software testing process as it occurs at Novell, Inc.
The Inherent Challenges of Software Testing
The NetWare operating system (OS) is a huge collection of code. Testing every component and feature in NetWare would take about one year to complete. Of course, during that time a number of software defects or "bugs" would be found and hopefully fixed. As changes are made to the code being tested, new tests would be required to check the fixes.
To make matters even more complicated, think about all the different hardware and drivers that are routinely used in conjunction with NetWare, not to mention LAN vs. WAN environments, workload and other performance factors. Then there are all the server-based products that run on and interact with the OS. All of these must be taken into consideration when testing the OS.
Another aspect to consider is software that accesses the NetWare OS from a client-side interface, which raises numerous testing issues around the various client/desktop operating systems that NetWare supports. Last but not least, everything has to be tested for backwards compatibility to ensure it all works with the multitude of older versions of the software that are still being used by customers.
Before any of these products can be tested, they must first be installed and properly configured. This introduces many more variables, resulting in quite a complex system testing matrix. The bottom line is that testing a gigantic software system (such as NetWare) is a task no company can face without customer assistance.
Customer Attitude Toward Software Defects
No one likes to find bugs in their software, whether it be computer games or mission-critical systems. The sad truth is that you've probably encountered fewer bugs in the games you have played than in the products you rely on to run your business. Customers get frustrated when a software bug prevents them from doing their work or going home on time. Their first thought is that the vendor is responsible: "any vendor will do; let's blame them all at the same time if possible."
Why Are There Bugs in Our Products?
Most games don't have critical bugs, if any at all. Is this because game developers are better programmers? No, they are human and are bound to make mistakes. Are games just easier to code? No, there are some really complex, leading edge games out there.
So why do business-critical products seem to have more bugs? Here are a few advantages that game developers enjoy over developers of business software:
Fewer product categories to develop and support
Reduced product integration (no gaming between vendors)
Limited client-side functionality (only one gaming interface)
No backward compatibility required
Simpler testing environments
Fewer configuration parameters
In the book PostgreSQL: Introduction and Concepts (ISBN 0-201-70331-9, Addison-Wesley, 2000), author Bruce Momjian makes the following statements regarding database development:
"A database server is not like a word processor or game, where you can easily restart it if a problem arises. Instead databases are multiuser, and lock user data inside the database, so they must be as reliable as possible. Development of source code of this scale and complexity is not for the novice."
"It was amazing to see that many bugs were fixed with just one line of C code."
"Because Postgres had evolved in an academic environment, it had not been exposed to the full spectrum of real-world queries."
These points about database servers apply to network operating systems as well. Because they must support multiple users and reliably store valuable business data, they are not easy to program. Many problems can be introduced by a single bug. And software testing in a lab environment is likely to suffer from the same problems as software developed in a strictly academic setting.
So what does Novell do to overcome these challenges and others related to system testing? The following section gives you a quick rundown of the different levels of testing that Novell products go through before they are released.
Levels of Testing at Novell
At Novell, software developers (SDs) are tasked to code a particular feature, not usually the entire product. To be reasonably sure each piece of code will integrate with that of other developers working on the same product, they perform component-level testing. Each SD checks the code they have written themselves, then they perform basic tests on the full product code.
The product code is then handed off to the quality assurance (QA) testers. (By the way, these folks are also responsible for reproducing the defects logged by customers. If they can't reproduce a problem, they move on to the next one. We'll talk more about this later on.)
Next, the product makes its way to the System Test (ST) team. These engineers use a test case database, which documents test scenarios they need to perform on each product. They also test the products for backward compatibility.
The products are installed, configured, and tested by the Corporate Interoperability Test (CIT) team. Their job is to determine whether the various products developed by Novell can be used together. They ask tough questions such as: Are the products using the correct versions of the code, and do settings contradict each other?
There are many other levels of testing: Platform System Test (PST), Performance System Test, CPR System Test (CPRST), Desktop System Test (DTST), and Collaboration/Apps System Test (CAST), to name just a few.
Although the previous testers would have completed preliminary performance testing, such testing occurs on a massive scale in Novell's SuperLab. This is a busy place where machines are reserved 6 months in advance and things happen on a large scale. SuperLab features approximately 1000 machines which can be converted between clients and servers, turned on/off, or rebooted remotely, all at the same time.
Once the product has been put through its paces in SuperLab's simulated real-world environment, it is ready to be released for beta testing by our fearless customers. Even after all the testing Novell has performed, the product is not yet ready for production use. Because of the variety of implementations and vast number of variables that can affect networking products, it is only when many, many customers test the software in real-world environments that new bugs are discovered.
Finally, after several rounds of beta releases and bug fixes, the product is deemed ready to be sold to customers. To the dismay of all, more bugs are exposed at this point. So what went wrong?
Well, it's not so much that something went wrong as we have bumped up against the stark realities of the software business. True, the more tests we perform on our software, the more likely we are to detect bugs. But although detecting more bugs helps make better products, it doesn't guarantee bug-free products. And no matter what else we do, problems involving complex product integration take exponentially longer to resolve.
Reporting Software Defects
Taking the time to document the bug is the last thing on a person's mind at the time of an incident. As a customer, you just want the problem to go away, with the least effort on your part. Unless the solution is already provided on a support site, a support incident must be opened with the vendor. Typically, you will receive one of these responses to your reporting of the problem:
We already have a released/field test patch.
It's currently being fixed.
It will be fixed in the next major version.
It won't ever be fixed.
It's not a software failure.
Chances are you're not the first person to experience a particular problem. But you could be the first person to officially log a defect for it. Keep in mind that other bugs are out there, all requiring attention. All these defects must be fixed. To get the response you expect, we need your help and cooperation. In reality, we need a business case to prioritise the problem.
How Problems Are Prioritised
Novell are notified about bugs/defects, software failures, or software problems in a variety of ways:
Directly through the beta program, when customers perform beta testing
Via customer defect submissions from Novell's Support site on the Web. If you encounter a problem, report it at the following URL:
Via incidents opened for a problem via the Web, with customer services, or through a premium/dedicated engineer. You can open an electronic support incident at http://support.novell.com.
Remember, if you don't report problems you encounter with our software, you are making a contribution to product quality. However, it is not a positive one.
Once Novell have been notified about a defect, the correct team must be identified to deal with the problem. The assignment is made by the manager for the team dealing with the product category of the defect, usually the same day or the next working day after the defect is logged. Meetings are held daily to reassign any defects which may have been placed in an incorrect product category or where the defect details make it unclear which product the defect is for.
Once the defect has been assigned to a developer team, it is given a priority for fixing. Novell makes every effort to ensure each defect gets the priority it deserves. However, the only information managers and engineers have available to determine priorities are the details provided in the defect description. Here is where you can have some control over how your defect is prioritised.
Tips for Getting Your Defect Properly Prioritised
Here are some guidelines to follow so that your defect report can be given the proper priority.
Use the latest software version you can find. Use field test or beta software in a testing environment. Be sure that you actually use the products in the same way you will in your production environment, as much as possible.
Try to reproduce the problem in the simplest environment possible. A complex testing configuration takes longer to set up and can introduce problems of its own. If the problem only occurs in a particular complex environment, verify that before you report it.
Don't rely on random testing to reproduce a problem. Performing many tests that update and change the same test environment one after another may well detect bugs, but can make a given problem impossible to reproduce. Random testing produces random results.
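If you do use an element of randomness in your testing, you can still keep every run reproducible by recording the seed. Here is a minimal sketch in Python (the function and action names are purely illustrative, not part of any Novell tool):

```python
import random

def run_randomised_steps(seed, steps=5):
    """Run a sequence of pseudo-random test actions.

    Recording the seed means the exact same sequence can be replayed
    later, so a failure found by randomised testing is no longer
    impossible to reproduce.
    """
    rng = random.Random(seed)  # independent generator; global state untouched
    actions = ["create", "rename", "delete", "copy"]
    return [rng.choice(actions) for _ in range(steps)]

# Two runs with the same seed replay the identical action sequence.
first = run_randomised_steps(seed=1234)
second = run_randomised_steps(seed=1234)
assert first == second
```

Log the seed alongside the defect report and the "random" failure becomes a scripted, repeatable one.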
Reproduce the problem in a clean test system. After discovering a bug, create a brand new test system and see if you can reproduce the problem. Remember that Novell testers will have to reproduce your problem before they can attempt to fix it. If the problem fails to appear during Novell's testing, we've wasted time and effort or delayed a fix to your problem. Together we can stamp out logged defects that close with "Could not reproduce."
Provide sufficient information when summarizing the problem. Provide the following information about the problem:
Software versions used
Test configuration, number of servers, relevant settings
The exact steps to reproduce the problem (the fewer the better; try different combinations or scenarios)
The reason why this problem is important to you or to the overall customer perception of the product. This will help Novell set your defect to the correct priority. Every defect will be looked into, so if you want to hurry things along, be sure to provide a good reason why Novell should deal with your problem before the other problems in the queue.
Information about any solutions you may have found for fixing or avoiding the issue.
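The checklist above can even be enforced mechanically before a report is submitted. Here is a hypothetical sketch in Python; the field names are illustrative and do not reflect any actual Novell defect-submission format:

```python
REQUIRED_FIELDS = [
    "software_versions",   # exact versions of every product involved
    "configuration",       # servers, relevant settings
    "steps_to_reproduce",  # the fewer steps the better
    "business_impact",     # why this problem matters to you
]

def validate_defect_report(report):
    """Return the list of required fields missing or empty in a report."""
    return [f for f in REQUIRED_FIELDS if not report.get(f, "").strip()]

report = {
    "software_versions": "Server OS 6.0, Client 4.83",
    "configuration": "2 servers, default settings",
    "steps_to_reproduce": "",   # forgotten, so the report is incomplete
    "business_impact": "Blocks the nightly backup run",
}
print(validate_defect_report(report))  # ['steps_to_reproduce']
```

A report that passes a check like this gives the triaging engineer everything needed to assign a sensible priority on the first pass.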
I hope the information provided thus far has made you better informed and motivated, so that together we can improve the efficacy of the software testing process. Now let's look at ways for you to become more directly involved.
Software Testing and You
The overall quality of product testing will in large part determine whether you have a good experience or a bad experience with that product. We would very much like every customer's experience to be a very good one. And it can be, with your help.
Becoming a Beta Tester
No matter how good a company gets at in-house software testing, you, the customer, are the critical success factor in that process. The thing that makes the biggest difference in product quality is for you to test the really important features early on during beta testing. We may log somewhere in the neighborhood of 10,000 defects over the life of a product, but only 100 of these may be critical. We'd much rather find and fix those 100 issues at the beginning of the product's life cycle. That's the challenge.
As a registered beta tester, you will be able to perform product queries to report the defects you have logged for that product. To take this more proactive approach in improving Novell products, visit the Novell beta testing Web site at: http://beta.novell.com.
Some of you perform testing as part of your job. Your company understands that they can save money by being proactive. Others perform testing because it is something they enjoy or because Novell is their livelihood. You may have to perform testing in addition to your regular job. Whatever category you fall into, let me take this opportunity to extend a big "thank you" to all of our software testers. Thank you-the quality of our products is much improved because you took the time to test them.
Besides our undying gratitude, there are other perks in store for those who want to beta test Novell products. Besides the warm fuzzy feeling you'll get for telling Novell what you think of their software, you'll also learn a lot about the products. Causing a product to break requires knowledge and skill that not many engineers have. You have to do things with the product that make it react unexpectedly.
If you are a beta tester, you'll receive a small "thank you" gift. If you're a really hot beta tester, you'll receive a bigger "thank you very much" gift. I'd love for there to be cars and houses on offer. Believe me, this has already been suggested, so please don't call in with any more suggestions; the lines are now closed. But while we're waiting for a bigger budget to be approved, you should try to collect more "thank you very much" gifts than you know what to do with.
To assess your skill level at software testing, estimate how many problems you think a particular product might have. Then put the product through its paces and try to find defects. You may find a lot fewer than you expected. This is not because there aren't many bugs, but because it's just not easy to find a defect in testing. Sometimes the real world brings out the worst in software.
Using the Novell SuperLab
There are several things to consider when contemplating the use of Novell's SuperLab facility for large-scale performance testing. First, testing in the SuperLab is only required when size matters. Second, the scope of testing cannot include a test of every feature and function. You need to carefully think through the test cases that will provide you with the most valuable results.
Here are some factors to consider when devising test cases:
General product performance testing. Conduct a general performance test on the product, including scale and stress testing. If the product seems slower, even if the difference is not readily measurable, investigate possible reasons.
Specific testing of product features. Conduct more specific tests to measure the performance of key features. Here are some tips to consider:
Compare the previous release to the current release. Include up to two previous releases. (Support packs and maintenance updates count as releases.)
Define which specific tests should be run. Be sure to manage the total number of tests carefully, as it can easily get out of control.
Define the performance metrics to be measured (connections per second, response time, throughput, and so on).
Define what constitutes acceptable results.
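As a sketch of those last two steps, here is one way to measure a metric and compare it against a predefined acceptable result. This is illustrative Python, with a trivial stand-in for the real workload (a client login, file copy, and so on), not any Novell benchmarking tool:

```python
import time

def measure_throughput(operation, duration=0.2):
    """Run `operation` repeatedly for roughly `duration` seconds
    and return the number of completed operations per second."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        operation()
        count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed

def sample_operation():
    # Stand-in for real work such as a client login or file copy.
    sum(range(100))

ACCEPTABLE_MINIMUM = 1000  # illustrative acceptance threshold, ops/sec
ops_per_sec = measure_throughput(sample_operation)
print("acceptable:", ops_per_sec >= ACCEPTABLE_MINIMUM)
```

The important point is defining the metric and the acceptance threshold before the test runs, so a comparison between releases is a pass/fail answer rather than a judgement call.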
Benchmarking Tools. You should use the appropriate benchmarking tools, especially when benchmarking standards for the product being tested are required.
Cross-Platform Automation. Write cross-platform automated tests, if at all possible. This allows you to test different platforms and products.
Define and record configurations for the specific tests:
Software: tuning parameters, simplify the testing environment, software platforms, different client platforms
Hardware: CPU speed, MP box, RAM, Storage, LAN
Number of users: different users, number of connections
Data: size, type, and so on
Input: Different clients, type of connections, protocols
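Recording a configuration in a machine-readable file makes it easy to report the exact environment alongside the results, and to rebuild it later. A minimal sketch in Python, with hypothetical keys that simply mirror the checklist above:

```python
import json

# A hypothetical test-run record; keys and values are illustrative.
configuration = {
    "software": {"tuning": "default", "platform": "server OS", "client": "Win32"},
    "hardware": {"cpu_mhz": 866, "ram_mb": 512, "storage": "SCSI", "lan": "100Mbps"},
    "users": {"count": 50, "connections": 100},
    "data": {"size_mb": 200, "type": "mixed files"},
}

def save_configuration(config, path):
    """Write the configuration to disk so the exact test environment
    can be archived with the results."""
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

def load_configuration(path):
    with open(path) as f:
        return json.load(f)

save_configuration(configuration, "test_run_001.json")
assert load_configuration("test_run_001.json") == configuration
```

A saved record like this also makes it obvious when two runs being compared were not, in fact, run on the same configuration.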
Keep in mind that complex environments can generate varied results which seldom highlight or point to the cause of the problem. Try to keep your testing environment as simple as possible.
Software Testing Tools Used in the SuperLab
A lab the size of the SuperLab would be difficult to use without the help of automation tools. SuperLab engineers use internally-developed tools such as the following to set up hundreds of machines at a time, and to control these machines once they are configured:
SuperLab Automation Test Harness. A console that allows for remote control of clients running MS Win32 operating systems.
SuperLab Automation Agent. A listening agent that facilitates control of the Windows machine it is running on.
Macro utility. A utility which allows distributed macros to be run on client machines.
SuperLab Statistics NLM. A server module that gathers statistics from multiple servers and reports these to the test harness.
SuperLab Windows Configuration Tool. A tool for customizing a Windows session environment.
SuperLab Imaging. An archive-based imaging tool for uploading, storing, and downloading images.
At present, these tools are not available outside of the SuperLab. Novell's engineers will assist you in using these tools to make the best use of your time in the SuperLab.
Novell testers encourage the use of third-party tools and utilities such as LoadRunner (an integrated client, server, and Web load testing tool). The additional references listed at the end of this AppNote provide links to some very useful tools, as well as to forums and mail groups.
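The core idea behind such load testing tools is simple: launch many simulated clients concurrently and record each one's response time. The sketch below illustrates that idea in plain Python with a sleep standing in for a network round trip; it is not LoadRunner's API or any Novell tool:

```python
import threading
import time

def timed_request(latencies, lock):
    """Simulate one client request and record its response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for a real network round trip
    with lock:
        latencies.append(time.perf_counter() - start)

def run_load_test(clients=20):
    """Run `clients` simulated requests concurrently and
    return the mean response time in seconds."""
    latencies, lock = [], threading.Lock()
    threads = [threading.Thread(target=timed_request, args=(latencies, lock))
               for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(latencies) / len(latencies)

print(f"mean latency over 20 clients: {run_load_test():.3f}s")
```

Real tools add ramp-up schedules, scripted user scenarios, and richer percentile reporting, but the measurement loop is essentially this.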
To schedule lab facilities or obtain more information, contact the SuperLab at email@example.com. Or visit the SuperLab Web site at: http://developer.novell.com/devres/slab.
Hopefully the information provided in this AppNote has shown how you can get involved in improving the quality and quantity of software testing at Novell, which will result in better products all around. A better quality product will increase uptime, reduce the number of patches released, increase revenue for both customers and Novell, and allow you to go home on time more often. It also means that less time is spent troubleshooting issues, giving you more time to pursue new products and technology.
Here are some additional resources to help you broaden your knowledge and expertise in software testing.
http://www.stickyminds.com. Decades of collective experience in software and communication have culminated in StickyMinds.com-the place for software managers, testers, and QA folks to gather (and gather information). This site, brought to you by the same people who produce STQE magazine (http://www.stqemagazine.com), contains a mother lode of know-how that will really stick with you.
http://www.badsoftware.com/qindex.htm. "Bad Software: What to Do When Software Fails" is a gathering point for information about software consumer protection. It's a must-visit site that provides very interesting reading for high-tech customers everywhere.
http://www.aptest.com/resources.html. ApTest is a great source for software testing tools and services. If you're interested in having a product or Web site tested, or if you need custom test technology developed, this is the site for you.
http://www.softwaretestinginstitute.com. The Software Testing Institute understands how critical the customer's role is in delivering quality software. It gives you privileged access to quality industry publications, research, and online services that give you the expertise you need to work more efficiently and productively, and with greater satisfaction. There is a great list of books on the subject of system/software testing at this site.
Here are some other interesting and useful sites that you can explore on your own:
* Originally published in Novell AppNotes
The origin of this information may be internal or external to Novell. While Novell makes all reasonable efforts to verify this information, Novell does not make explicit or implied claims to its validity.