Diele Consulting Blog





The Software Quality Challenge

2/3/2023

The following is taken from a paper, The Software Quality Challenge, written by Watts Humphrey. Humphrey is credited with creating the Capability Maturity Model (CMM) while leading the Software Process Program at Carnegie Mellon's Software Engineering Institute.
Software is an amazing technology. Once you test it and fix all of the problems found, that software will always work under the conditions for which it was tested. It will not wear out, rust, rot, or get tired. The reason there are not more software disasters is that testers have been able to exercise these systems in just about all of the ways they are typically used. So, to solve the software quality problem, all we must do is keep testing these systems in all of the ways they will be used. What, then, is the problem? The problem is complexity. The more complex these systems become, the more ways there are to use and misuse them, and the harder it is to test all of those conditions in advance.
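Humphrey's complexity argument is easy to put numbers on. The short Python sketch below is ours, not Humphrey's, and every count in it is an invented assumption; it simply shows how the number of distinct usage conditions in even a modest system outruns any realistic test budget.

```python
# Illustrative only: every count below is an invented assumption.
config_flags = 20        # independent on/off configuration options
locales = 30             # supported languages/regions
input_fields = 5         # form fields, each with ~10 meaningful input classes
classes_per_field = 10

# Each combination of flags, locale, and input classes is a distinct condition.
total_conditions = (2 ** config_flags) * locales * (classes_per_field ** input_fields)

tests_per_day = 10_000   # a generous automated-test budget
years_to_cover = total_conditions / tests_per_day / 365

print(f"distinct conditions:       {total_conditions:,}")
print(f"years to test all of them: {years_to_cover:,.0f}")
```

With these (modest) assumptions the answer comes out in the hundreds of thousands of years, which is the whole point: exhaustive testing stopped being an option long before systems reached their current scale.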
The most disquieting fact is that testing can find only a fraction of the defects in a program. That is, the more defects a program contains at test entry, the more it is likely to still contain at test completion. The reason is the point made above about extensive testing: if defects are sprinkled randomly throughout a large and complex software system, some will lie in the most rarely used parts of the system, and others in parts that are exercised only under failure conditions.
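A crude way to see why "more defects in means more defects out" is to model testing as a filter that removes only a fixed fraction of whatever it is given. The removal-efficiency figure below is an assumption for illustration, not a measured value:

```python
# Toy model: testing removes a fixed fraction of the defects it is given,
# so defects surviving into production scale with defects at test entry.
def latent_defects(defects_at_test_entry: int, removal_efficiency: float) -> float:
    """Defects expected to survive testing."""
    return defects_at_test_entry * (1.0 - removal_efficiency)

# Assumed 85% removal efficiency; vary the defect load at test entry.
for entry in (500, 2_000, 10_000):
    print(f"{entry:>6} in -> {latent_defects(entry, 0.85):,.0f} shipped")
```

However good the filter, a twentyfold increase in defects entering test means a twentyfold increase in defects shipped.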
At this point, several conclusions can be drawn. First, today's large-scale systems typically contain many defects. Second, these defects do not generally cause problems as long as the systems are used in ways that have been tested. Third, because of the growing complexity of modern systems, it is impossible to test all of the ways in which they could be used. Fourth, when systems are stressed in unusual ways, their software is most likely to encounter undiscovered defects. Fifth, under these stressful conditions, the systems are least likely to operate correctly or reliably.
In their book The Economics of Software Quality, Capers Jones and Olivier Bonsignour explain: Testing primarily concentrates on coding defects. But for large systems and large applications, coding defects are not the main source of trouble; they are also the easiest to eliminate. For large systems, requirements defects, architectural defects, and design defects are the main sources of quality problems. Of course, large databases and large websites have serious quality problems, too.
Modern business applications have become so complex that they have been decomposed into several different subsystems, or tiers, built on different software platforms using different programming languages. Not surprisingly, a National Research Council study on “dependable software” concluded that testing is no longer sufficient to ensure an application will be reliable, efficient, secure, and maintainable (Jackson, 2009). To reduce the business risk of these multi-tier applications, it is essential to supplement testing with static analysis for measuring and controlling application quality and dependability.
The majority of defects that cause system outages, performance degradation, security breaches, and exorbitant maintenance costs are no longer isolated in a single file or piece of code (Hamill, 2009). The most catastrophic problems occur in interactions among the various tiers of an application. Even more challenging, these defects are not failures to satisfy the functional requirements provided by the customer but rather are nonfunctional defects in the engineering of the application’s architecture or source code (Spinellis, 2007). Test cases are usually designed to detect functional defects. To find the defects that cause the most severe damage during operations, one needs to analyze the structural quality of an application—the integrity of its internal structural and engineering characteristics.
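Static analysis, in this context, means inspecting the source itself rather than executing it. As a toy illustration of the idea (not a sketch of any particular commercial tool), the snippet below uses Python's standard ast module to flag one classic structural defect, a bare except: handler that silently swallows every error. A functional test pass can easily miss this, because the code behaves correctly on every input the tests happen to supply.

```python
import ast

# Hypothetical snippet containing a structural defect.
SOURCE = """
def load_config(path):
    try:
        return open(path).read()
    except:                 # swallows every error, even KeyboardInterrupt
        return ""
"""

# Walk the syntax tree and flag bare `except:` handlers without ever
# running the code -- the essence of static analysis.
tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare 'except:' hides failures")
```

Real analyzers apply hundreds of such structural checks, including ones that span files and tiers, which is exactly where the most damaging defects described above tend to live.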
So, how is software quality handled at other leading companies? A good contemporary example is Google. In the book How Google Tests Software, the authors explain their approach as follows:
 Quality is not equal to test. Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.
At Google, this is exactly our goal: to merge development and testing so that you cannot do one without the other. Build a little and then test it. Build some more and test some more. The key here is who is doing the testing. Because the number of actual dedicated testers at Google is so disproportionately low, the only possible answer has to be the developer. Who better to do all that testing than the people doing the actual coding? Who better to find the bug than the person who wrote it? Who is more incentivized to avoid writing the bug in the first place? The reason Google can get by with so few dedicated testers is because developers own quality. If a product breaks in the field, the first point of escalation is the developer who created the problem, not the tester who didn’t catch it. This means that quality is more an act of prevention than it is detection.
Quality is a development issue, not a testing issue. To the extent that we are able to embed testing practice inside development, we have created a process that is hyper-incremental where mistakes can be rolled back if any one increment turns out to be too buggy. We’ve not only prevented a lot of customer issues, we have greatly reduced the number of dedicated testers necessary to ensure the absence of recall-class bugs. At Google, testing is aimed at determining how well this prevention method works.
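In practice, "build a little and then test it" simply means the test lives next to the code and both are written by the same developer in the same sitting. A minimal sketch of the habit, with names invented for the example:

```python
import unittest

def parse_price(text: str) -> float:
    """Convert a user-entered price like '$1,299.00' to a float."""
    return float(text.strip().lstrip("$").replace(",", ""))

class TestParsePrice(unittest.TestCase):
    # Written alongside the function, by the same developer, before moving on.
    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_currency_formatting(self):
        self.assertEqual(parse_price(" $1,299.00 "), 1299.0)

if __name__ == "__main__":
    unittest.main()
```

The point is not the tooling but the ownership: the person who wrote parse_price is the person who proves it works, on every change.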
The key takeaway from the above excerpts is that “Test & Fix”, in and of itself, is not an effective quality strategy for software (this holds true for hardware as well). Defect prevention is an important activity in any software project. In most software organizations, the project team focuses on defect detection (test) and rework (fix and retest), so defect prevention often becomes a neglected component. It is therefore advisable to take measures that prevent defects from being introduced in the product at the earliest stages of a project. The cost of such measures is minimal, while the savings they produce are significantly higher than the cost of fixing defects at a later stage.
Investing in the analysis of defects at early stages reduces the time, cost, and resources required. The goal is to understand how defects get injected and then to develop methods and processes that prevent them. Once this knowledge is put into practice, quality improves and overall productivity rises along with it.
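The economics are easy to sketch. A commonly cited rule of thumb (and only a rule of thumb) is that a defect costs roughly an order of magnitude more to fix for each later phase it survives into. The figures below are illustrative assumptions, not measured data:

```python
# Illustrative arithmetic, not measured data: relative fix costs based on
# the common rule of thumb that cost grows sharply by phase.
relative_fix_cost = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 50,
    "production": 150,
}

defects = 100  # hypothetical defect count

print("cost to fix 100 defects, by phase found:")
for phase, cost in relative_fix_cost.items():
    print(f"  {phase:<12} {defects * cost:>6} cost units")
```

Whatever the exact multipliers, the shape of the curve is the argument: a small prevention investment at the requirements stage dominates the test-and-fix bill at the end.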