DIELE CONSULTING
  • Home
  • Services
  • About
  • DC Blog
  • Media
  • Contact
  • Sustainable Quality
Diele Consulting Blog




Posting our latest thoughts and ideas . . .

Cost of a Bad Culture

2/4/2023

0 Comments

 
Unfortunately, culture issues are prevalent in businesses today.  In fact, research has shown that toxic cultures have cost businesses in excess of $200 billion over the last five years.  A business culture represents the beliefs, values, rules of behavior, attitudes, and norms that reflect a company’s modes of operation.  It is the day-to-day environment that everyone works within.  Every company has a culture—but not all cultures are conducive to helping a company achieve its goals.  It is not unusual for companies to have a “stated culture” and a different “real culture.”  The “stated culture” may sound good on paper or on a website, but it may not be true in practice.
 
Studies have found that 58 percent of workers who quit their jobs cite poor workplace culture as their reason.  If you suspect there could be an issue, here is why it is important to deal with it promptly:
  • Incivility in the workplace isn't just bad for morale — it hurts a company's bottom line. Researchers estimate that disrespectful behavior costs companies $14,000 per employee due to lost productivity.
  • A toxic culture can reduce productivity by as much as 40%.
  • Low-level engagement within companies results in a 33 percent decrease in operating income and an 11 percent decrease in earnings growth.
  • Disengagement is particularly expensive when the company is going through a tumultuous time, such as a change initiative—whether continuous improvement or a reorganization (or a pandemic). Employees who hate their jobs are not going to fully participate during a transition.
  • This is especially true if much of your workforce is ‘just getting by.’  These are the employees who do the bare minimum until 5:00 p.m. and ultimately don’t care how well the company performs (so long as they continue to receive a paycheck).
  • Trying to establish any type of continuous improvement effort in an uncivil culture is very difficult. 
 
Conversely, look at the business impact of a positive culture:
  • A strong, effective culture, as shown by the research of Kotter and Heskett, can lead to 20-30% more productivity than the competition.
  • A positive culture, consciously developed with positive leadership, might boost performance by at least 20% - if not more. (Estimated based on literature and client work).
  • If you simply remove the incivility and become an effective culture, your results could grow by 20%.  If you develop further into a positive culture, results could grow by 20-40%.
  • A strong positive culture can enhance employee engagement by 30%, resulting in up to a 19% increase in operating income, and a 28% increase in earnings growth.
  • On average, organizations that purposefully craft and develop their culture experience a 14% turnover rate, while organizations that ignore their culture experience a 48% turnover rate.
  • Continuous improvement has a much better chance to succeed when built on a culture whose people embrace change and take pride in what they do.

Quality Saves Money

2/4/2023

0 Comments

 
What is the financial impact of quality to an organization? Is it money well spent?  How should a business be spending money regarding quality?
 
What if, with a relatively minimal investment, your company could save between 20 and 30 cents of every dollar it earns, or increase revenues or market share by similar proportions? This is not a rhetorical question; rather, it is what is attainable when a business improves the quality of the products or services it delivers.  By eliminating the costs associated with poor quality and investing in good quality, companies save money and improve their bottom line.
 
How much could you save if you did everything “right the first time”?  If you knew that everything was done right, you would not need to spend time (and money) on inspecting, re-inspecting, design changes, bug fixes, testing, re-testing, reworking, scrap, or troubleshooting problems.  You also would not need dedicated staff to deal with billing issues, irate customers, complaints about poor service, or product issues, nor would you incur all the costs associated with returned products or service calls.
 
It is not unusual for companies that do not measure their Cost of Poor Quality (COPQ) to be spending as much as 30% of sales on poor-quality activities.  In a five-day work week, that means one and one-half days per week are spent re-doing work that was not done right the first time.  How much could you save if you cut that down to one day per week?  Four hours per week?  One hour per week?
 
Cost of Quality (CoQ) is a financial model of the costs incurred to operate and maintain a level of quality within a business.  The CoQ model considers all the activities that any typical company would perform toward the intent of providing good products or services to their customers.  There are three major categories of cost in the CoQ model:  Prevention, Appraisal, and Failure.  The failure costs are further broken down into internal and external failure costs.

The first two categories of cost are associated with putting systems and processes in place to reduce the likelihood of a failure or customer issue.  
 
Prevention is the category for those costs associated with preventing a quality problem from occurring in the first place. Typical costs that are included in this category are training, procedure writing, and process or equipment automation. 
 
Appraisal is the next category; it is where assurance costs are captured.  This includes any activity that inspects or verifies the quality of a product or service.  Typical costs included here are calibration, instrumentation, audits, inspection, and test.
 
Internal Failure is the first of two categories associated with poor quality.  Internal failure costs are those associated with recognizing that a poor-quality characteristic exists before the product or service is delivered to a customer.  The common costs in this category include scrap, rework, failure analysis, redesign, reinspection, and retesting.
 
External Failure is the worst of all possible situations. External failure costs are attributed to the failure of a product or service at the customer’s point of delivery or use.  External failures are damaging for two key reasons.  First, a delivered product or service is fully burdened, including labor, transportation, and storage costs. Second, the company’s reputation is impacted.  Once a customer has a bad experience, the damage to reputation may hinder or eliminate future sales.  The cost of lost opportunity can dwarf all other costs.
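The four categories can be sketched as a simple roll-up. The line items and dollar figures below are hypothetical, chosen only to illustrate how the model totals its categories and expresses COPQ as a share of sales:

```python
# Minimal sketch of a Cost of Quality (CoQ) roll-up.
# All line items and dollar figures are invented for illustration.

COQ_LINE_ITEMS = {
    "prevention":       {"training": 40_000, "procedure_writing": 15_000},
    "appraisal":        {"calibration": 20_000, "inspection_and_test": 60_000},
    "internal_failure": {"scrap": 90_000, "rework": 120_000},
    "external_failure": {"returns": 150_000, "warranty_service": 80_000},
}

def coq_summary(line_items: dict, annual_sales: float) -> dict:
    """Total each category and express Cost of Poor Quality as % of sales."""
    totals = {cat: sum(items.values()) for cat, items in line_items.items()}
    # COPQ is the sum of the two failure categories only.
    copq = totals["internal_failure"] + totals["external_failure"]
    return {
        "totals": totals,
        "copq": copq,
        "copq_pct_of_sales": 100 * copq / annual_sales,
    }

summary = coq_summary(COQ_LINE_ITEMS, annual_sales=2_000_000)
print(summary["copq"])               # 440000
print(summary["copq_pct_of_sales"])  # 22.0
```

In this invented example, failure costs alone consume 22% of sales—squarely in the range the COPQ research above describes for companies that do not measure it.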
 
A real-world example may help.  A company had recently introduced a new product.  They initially rolled it out to some large customers that were eager to give it a try.  Within a few weeks of shipping the new product to these key customers, the complaints started rolling in.  Significant time was spent investigating the issue, designing and testing a fix, shipping updated versions of the product, and processing returns of the previous version.  However, the new version did not perform much better.  Customers lost confidence and cancelled substantial planned orders.  After documenting the quality costs for this one specific issue, we found:
    Man-hours spent:              $336,179
    Rework & logistics costs:     $198,968
    Lost opportunity costs:    $10,500,000
    Cost of Poor Quality:      $11,035,147
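Summing the case-study figures confirms the total, and shows how thoroughly lost opportunity—the external-failure cost—dominates the other categories:

```python
# The case-study figures above, summed as a quick check.
man_hours = 336_179
rework_logistics = 198_968
lost_opportunity = 10_500_000

copq = man_hours + rework_logistics + lost_opportunity
print(copq)  # 11035147

# Lost opportunity is roughly 95% of the total COPQ.
print(lost_opportunity * 100 // copq)  # 95
```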

The goal with CoQ is that over time, you shift your investment into prevention methods (preventative activities) and reduce the costs associated with failures (reactive activities) – which are typically much higher.  Too many companies end up focusing on correction instead of prevention.  Ultimately, doing things right the first time is always faster and cheaper than doing things over.

The Software Quality Challenge

2/3/2023

0 Comments

 
The following is taken from a paper, The Software Quality Challenge, written by Watts Humphrey.  Humphrey is credited with the creation of the Capability Maturity Model (CMM) during his years at the Software Engineering Institute at Carnegie Mellon.
Software is an amazing technology.  Once you test it and fix all of the problems found, that software will always work under the conditions for which it was tested. It will not wear out, rust, rot, or get tired. The reason there are not more software disasters is that testers have been able to exercise these systems in just about all of the ways they are typically used. So, to solve the software quality problem, all we must do is keep testing these systems in all of the ways they will be used. So what is the problem?  The problem is complexity.  The more complex these systems become and the more ways users can use them, the harder it is to test all of these conditions in advance.
The most disquieting fact is that testing can only find a fraction of the defects in a program.  That is, the more defects a program contains at test entry, the more it is likely to have at test completion.  The reason for this is the point made previously about extensive testing.  Clearly, if defects are randomly sprinkled throughout a large and complex software system, some of them will be in the most rarely used parts of the system and others will be in those parts that are only exercised under failure conditions.
At this point, several conclusions can be drawn.  First, today’s large scale systems typically have many defects.  Second, these defects do not generally cause problems as long as they are used in ways they have been tested.  Third, because of the growing complexity of modern systems, it is impossible to test all of the ways in which such systems could be used.  Fourth, when systems are stressed in unusual ways, their software is most likely to encounter undiscovered defects.  Fifth, under these stressful conditions, these systems are least likely to operate correctly or reliably.
In their book, The Economics of Software Quality, Capers Jones & Olivier Bonsignour explain - Testing primarily concentrates on coding defects. But for large systems and large applications, coding defects are not the main source of trouble. Coding defects are also the easiest to eliminate. For large systems, requirements defects, architectural defects, and design defects are the main sources of quality problems. Of course, large databases and large websites have serious quality problems, too.
Modern business applications have become so complex that they have been decomposed into several different subsystems, or tiers, built on different software platforms using different programming languages. Not surprisingly, a National Research Council study on “dependable software” concluded that testing is no longer sufficient to ensure an application will be reliable, efficient, secure, and maintainable (Jackson, 2009). To reduce the business risk of these multi-tier applications, it is essential to supplement testing with static analysis for measuring and controlling application quality and dependability.
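Static analysis inspects source code for structural defects without executing it. As a toy illustration (not any particular commercial tool), Python's standard-library ast module can walk a parsed syntax tree and flag a structural weakness—here, a bare except clause—that a functional test case would be unlikely to target; the sample source being scanned is invented:

```python
# Toy illustration of static analysis: inspect source without running it.
import ast

SAMPLE = """
def load_config(path):
    try:
        return open(path).read()
    except:
        return None
"""

def find_bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare 'except:' clauses in the source."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

print(find_bare_excepts(SAMPLE))  # [5]
```

Real static analyzers apply hundreds of such structural checks across an entire codebase, which is what makes them a practical supplement to testing at scale.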
The majority of defects that cause system outages, performance degradation, security breaches, and exorbitant maintenance costs are no longer isolated in a single file or piece of code (Hamill, 2009). The most catastrophic problems occur in interactions among the various tiers of an application. Even more challenging, these defects are not failures to satisfy the functional requirements provided by the customer but rather are nonfunctional defects in the engineering of the application’s architecture or source code (Spinellis, 2007). Test cases are usually designed to detect functional defects. To find the defects that cause the most severe damage during operations, one needs to analyze the structural quality of an application—the integrity of its internal structural and engineering characteristics.
So, how is software quality handled at other leading companies?  A good contemporary example would be Google.  In their book, How Google Tests Software, they explain their approach as follows:
 Quality is not equal to test. Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.
At Google, this is exactly our goal: to merge development and testing so that you cannot do one without the other. Build a little and then test it. Build some more and test some more. The key here is who is doing the testing. Because the number of actual dedicated testers at Google is so disproportionately low, the only possible answer has to be the developer. Who better to do all that testing than the people doing the actual coding? Who better to find the bug than the person who wrote it? Who is more incentivized to avoid writing the bug in the first place? The reason Google can get by with so few dedicated testers is because developers own quality. If a product breaks in the field, the first point of escalation is the developer who created the problem, not the tester who didn’t catch it. This means that quality is more an act of prevention than it is detection.
Quality is a development issue, not a testing issue. To the extent that we are able to embed testing practice inside development, we have created a process that is hyper-incremental where mistakes can be rolled back if any one increment turns out to be too buggy. We’ve not only prevented a lot of customer issues, we have greatly reduced the number of dedicated testers necessary to ensure the absence of recall-class bugs. At Google, testing is aimed at determining how well this prevention method works.
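The "build a little, test a little" pattern described above can be sketched in a few lines: the developer who writes the function writes its test in the same sitting and runs it before moving to the next increment. The function and test below are illustrative inventions, not from any real codebase:

```python
# Sketch of developer-owned, incremental testing:
# write a small unit, then immediately test it.

def parse_version(text: str) -> tuple[int, int, int]:
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    major, minor, patch = text.strip().split(".")
    return int(major), int(minor), int(patch)

def test_parse_version():
    assert parse_version("2.10.3") == (2, 10, 3)
    assert parse_version(" 1.0.0 ") == (1, 0, 0)  # tolerates whitespace

test_parse_version()  # run immediately; a failure stops this increment
print("ok")
```

The point is not the code itself but the cadence: because each increment is verified before the next begins, a defect is caught by the person best positioned to prevent it, and a buggy increment can simply be rolled back.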
The key takeaway from the above excerpts is that “test & fix,” in and of itself, is not an effective quality strategy for software (this holds true for hardware as well).  Defect prevention is an important activity in any software project.  In most software organizations, the project team focuses on defect detection (test) and rework (fix & retest); defect prevention thus often becomes a neglected component. It is therefore advisable to take measures that prevent defects from being introduced into the product at the earliest stages of a project. While the cost of such measures is minimal, the benefits derived from the overall cost savings are significantly higher than the cost of fixing defects at a later stage.
Investment in the analysis of defects at early stages reduces the time, cost, and resources required. The goal is to gather knowledge of how defects get injected, and then to develop methods and processes that prevent those defects. Once this knowledge is put into practice, quality improves and overall productivity is enhanced.