Quality

Overview

Quality refers to the ability of a product or service to consistently meet or exceed customer expectations. Quality means getting what you pay for.

In the 1970s and 1980s, US business organizations tended to focus on cost and productivity rather than quality. It wasn’t that quality was unimportant; it just wasn’t very important. This allowed Japanese business organizations to capture a significant share of the US market.

Quality Management Evolution

Prior to the Industrial Revolution, skilled craftsmen performed all stages of production. Pride of workmanship and reputation often provided the motivation to see that a job was done right.

A division of labor accompanied the Industrial Revolution; each worker was then responsible for only a small portion of each product. Pride of workmanship became less meaningful because workers could no longer readily identify with the final product. The responsibility for quality control shifted to the foreman. Inspection was either nonexistent or haphazard, although in some instances 100% inspection was used.

Frederick Winslow Taylor, the “Father of Scientific Management,” gave new emphasis to quality by including product inspection and gauging in his list of fundamental areas of manufacturing management.

G. S. Radford improved Taylor’s methods. Two of his most significant contributions were

  1. the notion of involving quality considerations early in the product design stage, and
  2. making the connection between high quality, increased productivity, and lower costs.

In 1924, W. Shewhart of Bell Telephone Laboratories introduced statistical control charts that could be used to monitor production.

Around 1930, H. F. Dodge and H. G. Romig, also of Bell Labs, introduced tables for acceptance sampling.

World War II caused a dramatic increase in emphasis on quality control.

By the end of the 1940s, professional quality organizations were emerging throughout the country, e.g., the American Society for Quality Control (ASQC, now known as ASQ).

During the 1950s, the quality movement evolved into quality assurance. Quality guru W. Edwards Deming introduced statistical quality control methods to Japanese manufacturers.

At about the same time, another quality guru, Joseph Juran, began his “cost of quality” approach, which stressed the desirability of lowering quality-related costs through prevention.

In the mid-1950s, Armand Feigenbaum proposed total quality control, which enlarged the realm of quality efforts from its primary focus in manufacturing to also include product design and incoming raw materials. One important feature of his work was greater involvement of upper management in quality.

During the 1960s, the concept of “zero defects” gained favor. Championed by quality guru Philip Crosby, this approach focused on employee motivation and awareness, and the expectation of perfection from employees. It evolved from the success of the Martin Company in producing a “perfect” missile for the US Army.

In the 1970s, quality assurance methods gained increasing emphasis in services including government operations, health care, banking, and the travel industry.

The evolution of quality took a dramatic shift from quality assurance to a strategic approach to quality in the late 1970s. The strategic approach, advocated by Harvard professor David Garvin and others, is proactive, focusing on preventing mistakes from occurring altogether. Quality and profits are more closely linked. This approach also places greater emphasis on customer satisfaction, and involves all levels of management as well as workers in a continuing effort to increase quality.

Quality Basics

The Dimensions of Quality

The dimensions of quality include:

  1. Performance: main characteristics of the product or service.
  2. Aesthetics: appearance, feel, smell, taste.
  3. Special features: extra characteristics.
  4. Conformance: how well a product or service corresponds to the customer’s expectation.
  5. Safety: risk of injury or harm.
  6. Reliability: consistency of performance.
  7. Durability: the useful life of the product or service.
  8. Perceived quality: indirect evaluation of quality.
  9. Service after sale: handling of complaints or checking on customer satisfaction.

When referring to a product, a customer sometimes judges the first four dimensions by its fitness for use.

The Determinants of Quality

The degree to which a product or a service successfully satisfies its intended purpose has the following four primary determinants.

  1. Design. Quality of design refers to the intention of designers to include or exclude certain features in a product or service. Marketing may organize focus groups of customers to express their views on a product or service. Designers must ascertain that designs are manufacturable; that is, that production or service has the equipment, capacity, and skills necessary to produce or provide a particular design. The best workmanship in the world may not be enough to achieve the desired quality; similarly, a superior design usually cannot offset poor workmanship.
  2. How well it conforms to the design. Quality of conformance refers to the degree to which goods and services conform to (i.e., achieve) the intent of the designers. This is affected by factors such as the capability of the equipment used; the skills, training, and motivation of workers; the extent to which the design lends itself to production; the monitoring process to assess performance; and the taking of corrective action when necessary.
  3. Ease of use and user instructions. They increase the chances, but do not guarantee, that a product will be used for its intended purposes and in such a way that it will continue to function properly and safely. Instructions need to be clearly visible and easily understood. Some examples include the doctor who fails to specify that a medication should be taken before meals and not with orange juice and the attorney who neglects to inform a client of a deadline for filing a claim.
  4. Service after delivery, including recall and repair of a product, adjustment, replacement, or buyback, or reevaluation of a service.

The Consequences of Poor Quality

  1. Loss of business. While a satisfied customer will tell a few people about his or her experience, a dissatisfied person will tell an average of 19 others. Unfortunately, the company is usually the last to know of dissatisfaction. A more common response is simply to switch to a competing product or service. Typically, formal complaints are received from less than 5 percent of dissatisfied customers.
  2. Liability for damages or injuries caused by faulty design or poor workmanship. An organization’s liability costs can often be substantial. Express written warranties as well as implied warranties generally guarantee the product or service as safe when used as intended. The courts have tended to extend this to foreseeable uses, even if these uses were not intended by the producer.
  3. Productivity. Products and services with poor quality must be reworked or scrapped, which reduces the amount of usable output.
  4. Costs.

The Costs of Quality

Quality Gurus

W. Edwards Deming. Statistics professor, New York University, 1940s. Went to Japan after WWII. The Japanese established the Deming Prize in his honor.

Deming compiled a famous list of 14 points. The key elements are constancy of purpose, continual improvement, and profound knowledge. Profound knowledge involves:

  1. An appreciation for a system. Everyone in an organization must work toward optimizing the system as a whole. Management must eliminate internal competition.
  2. A theory of variation. It is necessary to differentiate between random variation and correctable variation, and to focus on the latter.
  3. A theory of knowledge. Learning cannot occur within an organization without a theory of knowledge.
  4. Psychology. Management’s greatest challenge is in motivating workers to contribute their collective efforts to achieve a common goal.

His message is that the cause of inefficiency and poor quality is the system, not the employees. Management’s responsibility is to correct the system to achieve the desired results. Deming stressed the need to reduce variation in output, which can be accomplished by distinguishing between special causes of variation (i.e., correctable) and common causes of variation (i.e., random).

Joseph M. Juran. Quality Control Handbook, 1951. Juran on Quality, Juran Institute in Wilton. Juran’s approach to quality may be the closest to Deming’s of all the gurus, although his approach differs on the importance of statistical methods and what an organization must do to achieve quality. (See textbook for details of the comparison between Juran and Deming.)

Juran views quality as fitness-for-use. He believes that roughly 80 percent of quality defects are management controllable. He describes quality management in terms of a trilogy consisting of quality planning, quality control, and quality improvement.

A key element of Juran’s philosophy is the commitment of management to continual improvement.

Juran is credited as one of the first to measure cost of quality, and he demonstrated the potential for increased profits that would result if the costs of poor quality could be reduced. Juran proposed 10 steps for quality improvement.

Armand Feigenbaum. Cost of Nonconformance. GE top expert on quality at 24. Total Quality Control, 1961. 40 steps of quality principles.

When improvements were made in a process, other areas of the company also achieved improvements. People could learn from each other’s successes. Open work environment led to cross-functional teamwork. It is the customer who defines quality.

Philip Crosby. Martin Marietta, 1960s. Corporate VP for Quality at ITT, 1970s. Quality Is Free (the costs of poor quality are much greater than traditionally defined), 1979. Quality Without Tears: The Art of Hassle-Free Management, 1984.

  1. Top management must demonstrate its commitment to quality and its willingness to give support to achieve good quality.
  2. Management must be persistent in efforts to achieve good quality.
  3. Management must spell out clearly what it wants in terms of quality and what workers must do to achieve that.
  4. Zero defects. Do it right the first time.

Kaoru Ishikawa. Cause-And-Effect / Fishbone diagram. Quality Circles. First to call attention to internal customers --- the next person in a process. First to make quality control “user friendly” for workers.

Genichi Taguchi. Taguchi loss function --- determining the cost of poor quality. The combined effect of deviations of all parts from their standards can be large, even though each individual deviation may be small. Helped Ford Motor Company reduce its warranty losses by achieving less variation in the output of transmissions.

Quality Awards

The Baldrige Award, National Institute of Standards and Technology (NIST)

Malcolm Baldrige National Quality Improvement Act (1987). Named after the late Malcolm Baldrige, an industrialist and former secretary of commerce. A maximum of two awards are given annually in each of the three categories: large manufacturer, large service organization, and small business (500 or fewer employees). The following table provides a list of the items included in each evaluation area and their maximum number of points. Note that customer satisfaction has the largest number of points.

 

The Deming Prize, Japan. The major focus of judging is on statistical quality control.

Quality Certification

ISO: International Organization for Standardization, 91 countries.

ANSI: American National Standards Institute.

ISO 9000: International standards on quality management and quality assurance. Certification takes 12 to 18 months. 40,000 companies are registered worldwide, three-fourths of which are located in Europe. Registration must be renewed every three years. Five standards are associated with the ISO 9000 series.

ISO 14000: Assesses a company’s performance in terms of environmental responsibility. The standards for certification bear upon three major areas:

Quality Control

Quality control assures that processes are performing in an acceptable manner using statistical techniques.

Phases of quality assurance:

Quality assurance that relies primarily on inspection after production is referred to as acceptance sampling. Quality control efforts that occur during production are referred to as statistical process control.

Inspection

Monitoring in the production process can occur at three points: before production, during production, and after production. Monitoring before and after production involves acceptance sampling procedures; monitoring during the production process is referred to as process control.

The basic questions of inspection are

  1. How much to inspect and how often. Two extremes are low-cost, high-volume items and high-cost, low-volume items. If inspection activities increase, inspection costs increase, but the costs of undetected defectives decrease. The traditional goal was to minimize the sum of these two costs. In other words, it may not pay to attempt to catch every defective.

The frequency of inspection depends largely on the rate at which a process may go out of control or the number of lots being inspected. Many small lots will require more samples (in total) than a few large lots, because it is important to obtain sample data from each lot.

  2. At what points in the process inspection should occur. Typical inspection points include
    1. Raw materials and purchased parts.
    2. Finished products.
    3. Before a costly operation.
    4. Before an irreversible process.
    5. Before a covering process.

In the service sector, inspection points include incoming purchased materials and suppliers, personnel, service interfaces (e.g., the service counter), and outgoing completed work.

  3. Whether to inspect in a centralized or on-site location. The central issue is whether the advantages of specialized lab tests are worth the time and interruption needed to obtain the results.
  4. Whether to inspect attributes or variables.

Statistical Process Control

Quality control is concerned with the quality of conformance of a process. Managers use statistical process control to evaluate the output of a process to determine its acceptability. They take periodic samples from the process and compare them with a predetermined standard. If the sample results are not acceptable, they stop the process and take corrective action. If the sample results are acceptable, they allow the process to continue.

Effective control requires the following steps:

  1. Define in sufficient detail what is to be controlled.
  2. Measure characteristics that can be counted or measured.
  3. Compare to a standard which represents the level of quality being sought.
  4. Evaluate by establishing a definition of out of control to distinguish random from nonrandom variability.
  5. Take corrective action, if necessary, by uncovering the cause of nonrandom variability and correcting it.
  6. Evaluate corrective action. A process must be monitored for a sufficient period of time to verify that the problem has been eliminated.
Variations and Control

All processes that provide a good or a service exhibit a certain amount of “natural” variation in their output. The variations are created by the combined influences of countless minor factors. The variability is often referred to as chance or random variation, although it sometimes is referred to as common variability. For instance, old machines generally exhibit a higher degree of natural variability than new machines.

A second kind of variability in process output is called assignable variation, or special variation. Unlike natural variation, the main sources of assignable variation can usually be identified (assigned to a specific cause) and eliminated. Tool wear, equipment that needs adjustment, defective materials, human factors and problems with measuring devices are typical sources of assignable variation.

When samples of process output are taken, and sample statistics such as the sample mean and range are computed, they exhibit the same kind of variability. The variability of a sample statistic can be described by its sampling distribution, which is a theoretical distribution that describes the random variability of sample statistics. Process distribution describes the variation of every individual output of the process. The goal of sampling is to determine whether nonrandom --- and thus, correctable --- sources of variation are present in the output of a process.

High and low values in samples tend to offset each other, resulting in less variability among sample means than among individual values. Note that the sampling distribution of the mean is approximately normal even if the process distribution is not normal (Central Limit Theorem, CLT).
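As a rough illustration, the following sketch (with made-up data, not from the textbook) draws output from a deliberately non-normal process distribution and compares the spread of individual values with the spread of sample means:

```python
import numpy as np

# Minimal sketch: draw from a skewed (non-normal) process distribution and
# compare the spread of individual values with the spread of sample means.
rng = np.random.default_rng(0)
process_output = rng.exponential(scale=2.0, size=100_000)  # skewed process

n = 25  # observations per sample
sample_means = process_output.reshape(-1, n).mean(axis=1)

print("std of individual values:", process_output.std())  # roughly 2.0
print("std of sample means:     ", sample_means.std())     # roughly 2.0 / sqrt(25) = 0.4
# Per the CLT, a histogram of sample_means is approximately normal even
# though the exponential process distribution is not.
```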

The following figure shows how the normal distribution is used to judge whether a process is performing adequately.

Two statistical tools are used for quality control: control charts and run tests. Often, they are used together.

Control Charts

A control chart is a time-ordered plot of sample statistics. It is used to distinguish between random variability and nonrandom variability. The basis for the control chart is the sampling distribution. Control limits are dividing lines between random deviations from the mean of the distribution and nonrandom deviations from the mean of the distribution. The following figure shows how control limits are based on the sampling distribution.

In the figure, the larger value is the upper control limit (UCL), and the smaller value is the lower control limit (LCL). A sample statistic that falls between these two limits suggests (but does not prove) randomness, while a value outside or on either limit suggests (but does not prove) non-randomness.

It is important to recognize that, because any limit will leave some area in the tails of the distribution, there is a small probability that a value will fall outside the limits even though only random variations are present. This probability is sometimes referred to as the probability of a Type I error, where the “error” is concluding that non-randomness is present when only randomness is present. The error is also referred to as an alpha risk, where alpha (α) is the sum of the probabilities in the two tails.

Using wider limits reduces the probability of a Type I error, because it decreases the area in the tails. However, wider limits make it more difficult to detect nonrandom variations, if they are present. For example, the mean of the process might shift enough to be detected by two-sigma limits, but not enough to be readily apparent using three-sigma limits. That could lead to a second kind of error, known as Type II Error, beta risk, which is concluding that a process is in control, when it is really out of control.

In theory, the costs of making each error should be balanced by their probabilities. However, in practice, two-sigma limits and three-sigma limits are commonly used without specifically referring to the probability of a Type II error.
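For reference, the alpha risk implied by a given set of limits can be read from the standard normal distribution. A short sketch, assuming SciPy is available:

```python
from scipy.stats import norm

# Alpha risk (Type I error probability) = area outside the +/- z sigma limits.
for z in (2.0, 3.0):
    alpha = 2 * (1 - norm.cdf(z))   # two tails of the standard normal
    print(f"{z:.0f}-sigma limits: alpha = {alpha:.4f}")
# roughly 0.0455 for two-sigma limits and 0.0027 for three-sigma limits
```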

The following figure illustrates the components of a control chart.

The following figure illustrates the concept of judging whether each value is within the acceptable (random) range.

There are four commonly used control charts. Two are used for variables (continuous data) and two are used for attributes (discrete data). Attribute data are counted; variable data are measured, usually on a continuous scale.

Control Charts for Variables

Mean Chart (x̄-Chart)

Mean charts monitor the central tendency of a process. A mean chart can be constructed in two ways, depending on whether the population standard deviation (σ) or the sample range is used.

If the standard deviation is known or a reasonable estimate of the standard deviation is available, one can compute control limits of the mean charts using these formulas:

Upper control limit (UCL) $= \bar{\bar{x}} + z\sigma_{\bar{x}}$

Lower control limit (LCL) $= \bar{\bar{x}} - z\sigma_{\bar{x}}$

where

m = number of samples

n = number of observations in a sample

i = sample index, i = 1, 2, …, m

j = observation index, j = 1, 2, …, n

$x_{ij}$ = value of observation j in sample i

z = standard normal deviate

$\sigma$ = process (population) standard deviation

$\bar{x}_i$ = mean of the observations in sample i, $\bar{x}_i = \frac{1}{n}\sum_{j=1}^{n} x_{ij}$

$\bar{\bar{x}}$ = mean of all sample means (grand mean), $\bar{\bar{x}} = \frac{1}{m}\sum_{i=1}^{m} \bar{x}_i$

$\sigma_{\bar{x}}$ = standard deviation of the distribution of sample means, $\sigma_{\bar{x}} = \sigma/\sqrt{n}$

If the sample range is used as a measure of process variability, the appropriate formulas for control limits are

Upper control limit (UCL) $= \bar{\bar{x}} + A_2\bar{R}$

Lower control limit (LCL) $= \bar{\bar{x}} - A_2\bar{R}$

where

$\bar{R}$ = average of sample ranges, and

$A_2$ = a factor that depends on the sample size n and can be found in the following table (Table 10-2 on p. 450 of the textbook).
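A minimal computational sketch of these formulas follows; the measurements are hypothetical, and the A2 value used is the commonly tabulated factor for samples of five (confirm against Table 10-2):

```python
import math

# Sketch: compute x-bar chart control limits from m samples of size n.
samples = [
    [10.2, 9.9, 10.1, 10.0, 10.3],
    [10.1, 10.0, 9.8, 10.2, 10.1],
    [9.9, 10.2, 10.0, 10.1, 10.0],
]  # hypothetical measurements
n = len(samples[0])
sample_means = [sum(s) / n for s in samples]
grand_mean = sum(sample_means) / len(sample_means)   # x-double-bar

# (a) Known process standard deviation sigma, with three-sigma limits (z = 3)
sigma, z = 0.15, 3
sigma_xbar = sigma / math.sqrt(n)
ucl_a = grand_mean + z * sigma_xbar
lcl_a = grand_mean - z * sigma_xbar

# (b) Using the average sample range R-bar and the tabled factor A2
sample_ranges = [max(s) - min(s) for s in samples]
r_bar = sum(sample_ranges) / len(sample_ranges)
A2 = 0.58   # commonly tabulated value for n = 5; check Table 10-2
ucl_b = grand_mean + A2 * r_bar
lcl_b = grand_mean - A2 * r_bar

print((lcl_a, ucl_a), (lcl_b, ucl_b))
```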

Range Control Chart (R-Chart)

Range control charts are used to monitor process dispersion; they are sensitive to changes in process dispersion. Although the underlying sampling distribution is not normal, the concepts for the use of range charts are much the same as those for use of mean charts.

Control limits for range charts are found using the average sample range in conjunction with these formulas:

Upper control limit (UCL) $= D_4\bar{R}$

Lower control limit (LCL) $= D_3\bar{R}$

where $D_3$ and $D_4$ can be found in the table above.
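A corresponding sketch for the range chart, again with hypothetical data and the commonly tabulated D3 and D4 factors for samples of five:

```python
# Sketch: R-chart limits from the average sample range, using tabled factors.
sample_ranges = [0.4, 0.3, 0.5, 0.2, 0.4]   # hypothetical sample ranges
r_bar = sum(sample_ranges) / len(sample_ranges)

D3, D4 = 0.0, 2.11          # commonly tabulated values for n = 5; check the table
ucl_r = D4 * r_bar
lcl_r = D3 * r_bar
print(lcl_r, ucl_r)
```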

Even though decreased variability is desirable, we would want to determine what was causing it. Perhaps an improved method has been used, in which case we would want to identify it.

Mean control charts and range control charts provide different perspectives on a process. Mean charts are sensitive to shifts in the process mean, whereas range charts are sensitive to changes in process dispersion. Because of this difference in perspective, both types of charts might be used to monitor the same process.

For example, in the Figure A above, the mean chart picks up the shift in the process mean, but the dispersion is not changing. In Figure B, a change in process dispersion is less apt to be detected by the mean chart than by the range chart. Thus, use of both charts provides more complete information than either chart alone.

However, due to cost or workflow considerations, a single chart may suffice in some cases, monitoring the specific aspect of the process that tends to cause the most problems.

Once control charts have been set up, they can serve as a basis for deciding when to interrupt a process and search for assignable causes of variation. One can use the following procedure to determine initial control limits.

  1. Obtain 20 to 25 samples. Compute the appropriate sample statistic(s) for each sample, e.g., mean.
  2. Establish preliminary control limits using the formulas and graph them.
  3. Plot the sample statistics on the control chart(s), and note whether any points fall outside the control limits.
  4. If you find no out-of-control signals, you can assume that the process is in control. If not, investigate and correct the assignable causes of variation. Then resume the process and collect another set of observations upon which revised control limits can be based.

Control Charts for Attributes

Control charts for attributes are used when the process characteristic is counted rather than measured. There are two types of attribute control charts, one for the fraction of defective items in a sample (a p-chart) and one for the number of defects per unit (a c-chart).

A p-chart is appropriate when the data consist of two categories of items (e.g., good and defective), both of which can be counted; a c-chart is more appropriate when only occurrences can be counted. For example, one can count the number of crimes committed during the month of August, but one cannot count the number of crimes that did not occur.

p-Chart

A p-chart is used to monitor the proportion of defectives generated by a process. The theoretical basis for a p-chart is the binomial distribution, although for large sample sizes, the normal distribution provides a good approximation to it. Conceptually, a p-chart is constructed and used in much the same way as a mean chart.

Control limits are computed using the following formulas.

Upper control limit (UCL) $= p + z\sigma_p$

Lower control limit (LCL) $= p - z\sigma_p$

where

$\sigma_p = \sqrt{\frac{p(1-p)}{n}}$, and

p is the average fraction defective in the population. Note: Because the formula is an approximation, the computed theoretical LCL is sometimes negative. In these instances, zero is used as the lower limit.

If p is unknown, it can be estimated from samples, i.e.,

$\bar{p} = \frac{1}{m}\sum_{i=1}^{m} \hat{p}_i$,

where $\hat{p}_i$ is the estimated fraction defective in sample i, i = 1, 2, …, m, and m is the total number of samples.
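A short sketch of the p-chart calculation, using hypothetical counts of defectives and three-sigma limits:

```python
import math

# Sketch: p-chart limits estimated from m samples of equal size n.
defectives = [2, 1, 3, 0, 2, 1, 2, 3, 1, 1]   # hypothetical defectives per sample
n = 100                                        # items inspected per sample
m = len(defectives)

p_bar = sum(defectives) / (m * n)              # average fraction defective
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)

z = 3
ucl_p = p_bar + z * sigma_p
lcl_p = max(0.0, p_bar - z * sigma_p)          # a negative limit is replaced by zero
print(p_bar, lcl_p, ucl_p)
```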

c-Chart

When the goal is to control the number of defects per unit, a c-chart is used. The underlying sampling distribution is the Poisson distribution. Use of the Poisson distribution assumes that defects occur over some continuous region and that the probability of more than one defect at any particular spot is negligible. For practical reasons, the normal approximation to the Poisson is used. The control limits are

Upper control limit (UCL) :=

Lower control limit (LCL) :=

where the mean number of defects per unit is c and the standard deviation is .

If the process average is unknown, c can be estimated from

.

When the computed lower control limit is negative, the effective lower limit is zero. The calculation sometimes produces a negative lower limit due to the use of the normal distribution to approximate the Poisson distribution: the normal is symmetrical whereas the Poisson is not symmetrical, when c is close to 0.
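A comparable sketch for the c-chart, with hypothetical defect counts and three-sigma limits:

```python
import math

# Sketch: c-chart limits estimated from observed defects per unit.
defects_per_unit = [3, 2, 4, 1, 2, 3, 5, 2, 1, 3]    # hypothetical counts
c_bar = sum(defects_per_unit) / len(defects_per_unit)

z = 3
ucl_c = c_bar + z * math.sqrt(c_bar)
lcl_c = max(0.0, c_bar - z * math.sqrt(c_bar))       # a negative limit becomes zero
print(c_bar, lcl_c, ucl_c)
```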

Managerial Considerations Concerning Control Charts

Due to the cost and time needed to obtain control charts, managers must make a number of important decisions about the use of the charts:

  1. At what points in the process to use control charts. The decision should focus on the aspects of the process that (1) have tendency to go out of control and (2) are critical to the successful operation of the product or service.
  2. What sample size to take. Sample size is important for two reasons. One is that cost and time are functions of sample size; the greater the sample size, the greater the cost to inspect those items (and the greater the lost production if destructive testing is involved) and the longer the process must be held up while waiting for the results of sampling. The second reason is that smaller samples are more likely to reveal a change in the process than larger samples, because a change is more likely to take place within the large sample, but between small samples. Consequently, a sample statistic such as the sample mean in the large sample could combine both “before-change” and “after-change” observations, whereas in two smaller samples, the first could contain “before” observations and the second “after” observations, making detection of the change more likely.
  3. What type of control chart to use (i.e., variables or attributes). A manager can choose between using a control chart for variables (a mean chart) and a control chart for attributes (a p-chart). If the manager is monitoring the diameter of a drive shaft, either the diameter can be measured and a mean chart used for control, or the shafts can be inspected using a go, no-go gauge --- which simply indicates whether a particular shaft is within specification without giving its exact dimensions --- and a p-chart used. Measuring is more costly and time-consuming per unit than the yes-no inspection using a go, no-go gauge, but measuring supplies more information than merely counting items as good or bad. Hence, a manager must weigh the time and cost of sampling against the information provided.
Run Tests

A run is defined as a sequence of observations with a certain characteristic, followed by one or more observations with a different characteristic. For example, in the series A A A B, there are two runs: a run of three As followed by a run of one B. Underlining each run helps in counting them, i.e., A A A B.

Two useful run tests involve examination of the number of runs up and down and runs above and below the median. In order to count these runs, the data are transformed into a series of Us and Ds (for up and down) and into a series of As and Bs (for above and below the median). (See textbook page 457 for an example.) Notice that, for the runs of up and down, the first value does not receive either a U or a D, because nothing precedes it. If a plot is available, the runs can be easily counted directly from the plot, as illustrated below

To determine whether any pattern is present in control chart data, one must transform the data into both As and Bs and Us and Ds, and then count the number of runs in each case. These numbers must then be compared with the number of runs that would be expected in a completely random series.

For both the median and the up/down run tests, the expected number of runs is a function of the number of observations in the series. The formulas are

$E(r)_{\text{med}} = \frac{N}{2} + 1$

$E(r)_{\text{u/d}} = \frac{2N - 1}{3}$

where N is the number of observations.

The actual number of runs in any given set of observations will vary from the expected number, due to chance and any pattern that might be present. Chance variability is measured by the standard deviation of runs. The formulas are

$\sigma_{\text{med}} = \sqrt{\frac{N - 1}{4}}$

$\sigma_{\text{u/d}} = \sqrt{\frac{16N - 29}{90}}$

Distinguishing chance variability from patterns requires use of the sampling distributions for median runs and up/down runs. Both distributions are approximately normal.

In practice, it is often easiest to compute the number of standard deviations, z, by which an observed number of runs differs from the expected number. This z value would then be compared to the value +/- 2 (for 95.5%) or some other desired value (e.g., +/- 1.96 for 95%, +/- 2.33 for 98%). A test z that exceeds the desired limits indicates patterns are present.

The computation of z takes the form

$z = \frac{\text{observed number of runs} - \text{expected number of runs}}{\text{standard deviation of the number of runs}}$

Consequently, for the median and up/down tests, one can find z using these formulas:

Median: $z_{\text{med}} = \dfrac{r - \left(\frac{N}{2} + 1\right)}{\sqrt{\frac{N-1}{4}}}$

Up and down: $z_{\text{u/d}} = \dfrac{r - \left(\frac{2N-1}{3}\right)}{\sqrt{\frac{16N-29}{90}}}$

where

N = total number of observations, and

r = observed number of runs of either As and Bs or Us and Ds, depending on which test is involved.

It is desirable to apply both run tests to any given set of observations because each test is different in terms of the types of patterns it can detect. Sometimes both tests will pick up a certain pattern, but sometimes only one will detect non-randomness. If either does, the implication is that some sort of non-randomness is present in the data.
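The following sketch pulls the run-test formulas together for a hypothetical data series. The handling of values exactly equal to the median is a simplification (they are skipped), so treat it as illustrative only:

```python
import math

def run_test_z(series):
    """Return z statistics for the median and up/down run tests.
    A minimal sketch; assumes a list of numeric observations."""
    N = len(series)
    ordered = sorted(series)
    median = ordered[N // 2] if N % 2 else (ordered[N // 2 - 1] + ordered[N // 2]) / 2

    # Runs above (A) / below (B) the median; values equal to the median are skipped here.
    ab = ["A" if x > median else "B" for x in series if x != median]
    r_med = 1 + sum(1 for i in range(1, len(ab)) if ab[i] != ab[i - 1])

    # Runs up (U) / down (D); the first value gets no label.
    ud = ["U" if series[i] > series[i - 1] else "D" for i in range(1, N)]
    r_ud = 1 + sum(1 for i in range(1, len(ud)) if ud[i] != ud[i - 1])

    e_med, s_med = N / 2 + 1, math.sqrt((N - 1) / 4)
    e_ud, s_ud = (2 * N - 1) / 3, math.sqrt((16 * N - 29) / 90)
    return (r_med - e_med) / s_med, (r_ud - e_ud) / s_ud

z_med, z_ud = run_test_z([4, 6, 5, 7, 3, 8, 6, 5, 9, 4, 7, 6])
print(z_med, z_ud)   # values beyond roughly +/- 2 suggest a nonrandom pattern
```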

Process Capability

The variability of a process can significantly impact quality. Three commonly used terms refer to the variability of process output. Each term relates to a slightly different aspect of that variability, so it is important to differentiate these terms.

Tolerances or specifications are established by engineering design or customer requirements. They indicate a range of values in which individual units of output must fall in order to be acceptable.

Control limits are statistical limits that reflect the extent to which sample statistics such as means and ranges can vary due to randomness alone.

Process variability reflects the natural or inherent (i.e., random) variability in a process. It is measured in terms of the process standard deviation.

Control limits and process variability are directly related: control limits are based on sampling variability, and sampling variability is a function of process variability. On the other hand, there is no direct link between tolerances and either control limits or process variability. Tolerances are specified in terms of a product or service, not in terms of the process by which the product or service is generated. Hence, in a given instance, the output of a process may or may not conform to specifications, even though the process may be statistically in control.

This is why it is also necessary to take into account the capability of a process. The term process capability refers to the inherent variability of process output relative to the variation allowed by the design specifications.

Capability Analysis

Capability analysis means determining whether the inherent variability of the process output falls within the acceptable range of variability allowed by the design specifications for the process output. If it is within the specifications, the process is said to be “capable.” If it is not, the manager must decide how to correct the situation.

We cannot automatically assume that a process that is in control will provide the desired output. Instead, we must specifically check whether a process is capable of meeting specifications and not simply set up a control chart to monitor it. A process should be in control and within specifications before production begins --- in essence, “Set the toaster correctly at the start. Don’t burn the toast and then scrape it!”

In case C above, a manager might consider a range of possible solutions: (1) redesign the process so that it can achieve the desired output, (2) use an alternate process that can achieve the desired output, (3) retain the current process but attempt to eliminate unacceptable output using 100 percent inspection, and (4) examine the specifications to see whether they are necessary or could be relaxed without adversely affecting customer satisfaction.

Process variability is the key factor in process capability. It is measured in terms of the process standard deviation; process capability is typically taken to be +/- 3 standard deviations from the process mean. To determine whether the process is capable, compare this +/- 3 standard deviation spread to the specifications, which are expressed as an allowable deviation from an ideal value.

For example, suppose the ideal length of time to perform a service is 10 minutes, and an acceptable range of variation around this time is +/- 1 minute. If the process has a standard deviation of 0.5 minutes, it would not be capable, because +/- 3 standard deviations would be +/- 1.5 minutes, exceeding the specification of +/- 1 minute.

To express the capability of a machine or process, some companies use the ratio of the specification width to the process capability. It can be computed using the following formula:

$C_p = \frac{\text{specification width}}{\text{process width}} = \frac{\text{upper specification} - \text{lower specification}}{6\sigma}$
Using the capability ratio, you can see that for a process to be capable, it must have a capability ratio of at least 1.00. Moreover, the greater the capability ratio, the greater the probability that the output of a machine or process will fall within design specifications.

The Motorola Corporation is well known for its use of the term six-sigma, which refers to its goal of achieving a process variability so small that the design specifications represent six standard deviations of the process. That means a process capability ratio equal to 2.00, resulting in an extremely small probability of getting any output not within the design specifications.
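As a quick numeric check of the service example above (ideal time 10 minutes, specification of +/- 1 minute, standard deviation 0.5 minutes), a short sketch of the capability ratio calculation:

```python
# Sketch: capability check for the service example in the text.
sigma = 0.5                   # process standard deviation, minutes
spec_width = 2 * 1.0          # upper spec - lower spec = 11 - 9 = 2 minutes
process_width = 6 * sigma     # +/- 3 standard deviations

cp = spec_width / process_width
print(cp)                     # about 0.67 < 1.00, so the process is not capable
# A "six-sigma" process would have spec_width = 12 * sigma, giving cp = 2.00.
```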

Acceptance Sampling

Acceptance sampling is a form of inspection that is applied to lots or batches of items either before or after a process instead of during the process. The purpose of acceptance sampling is to decide whether a lot satisfies predetermined standards. Rejected lots may be subject to 100 percent inspection, or may be returned to the supplier for credit or replacement.

Acceptance sampling procedures are most useful when one or more of the following conditions exist:

  1. A large number of items must be processed in a short time.
  2. The cost consequences of passing defectives are low.
  3. Destructive testing is required.
  4. Fatigue or boredom caused by inspecting large numbers of items leads to inspection errors.

Acceptance sampling procedures can be applied to both attributes (counts) and variables (measurements).

Sampling Plans

Sampling plans specify the lot size, N; the sample size, n; the number of samples to be taken; and the acceptance / rejection criteria. A variety of sampling plans are provided below.

Single-Sampling Plan. One random sample is drawn from each lot, and every item in the sample is examined and classified as either “good” or “defective”. If the sample contains more than a specified number of defectives, c, the lot is rejected.

Double-Sampling Plan. A double-sampling plan allows for the opportunity to take a second sample if the results of the initial sample are inconclusive. The plan specifies the lot size, the size of the initial sample, accept/reject criteria for the initial sample (usually two values for the numbers of defective items), the size of the second sample, and a single acceptance number. If the second sample is required, the combined results of both the initial and the second samples are compared to the single acceptance number to decide whether to accept or reject the lot.

Multiple-Sampling Plan. A multiple-sampling plan is similar to a double-sampling plan except that more than two samples may be required. A sampling plan will specify each sample size and two limits for each sample. The values increase with the number of samples. If, for any sample, the cumulative number of defectives found exceeds the upper limit specified for that sample, sampling is terminated and the lot is rejected. If the cumulative number of defectives is less than or equal to the lower limit, sampling is terminated and the lot is passed. If the number is between the two limits, another sample is taken. The process continues until the lot is either accepted or rejected.
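The accept/reject logic of a double-sampling plan can be sketched as below; all plan parameters (acceptance and rejection numbers) are hypothetical, not values from any standard table:

```python
def double_sample_decision(d1, d2=None, c1_accept=1, c1_reject=4, c2_accept=5):
    """Minimal sketch of double-sampling logic with hypothetical plan parameters.
    d1: defectives in the first sample; d2: defectives in the second sample (if taken)."""
    if d1 <= c1_accept:
        return "accept"
    if d1 >= c1_reject:
        return "reject"
    # Inconclusive first sample: a second sample is required.
    if d2 is None:
        return "take second sample"
    # Combined results are compared to the single acceptance number.
    return "accept" if d1 + d2 <= c2_accept else "reject"

print(double_sample_decision(1))        # accept on the first sample
print(double_sample_decision(3))        # take second sample
print(double_sample_decision(3, 2))     # accept (3 + 2 <= 5)
print(double_sample_decision(3, 4))     # reject
```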

The cost and time required for inspection often dictate the sampling method used. Two primary considerations are the number of samples needed and the total number of observations required.

Where the cost to obtain a sample is relatively high compared to the cost to analyze the observations, a single-sampling plan is more desirable. Conversely, where item inspection costs are relatively high, such as destructive testing, it may be better to use double or multiple sampling, because the average number of items inspected per lot will be lower. This stems from the fact that a very good or very poor lot quality will often show up initially, and sampling can be terminated.

Operating Characteristic (OC) Curve

Operating characteristic (OC) curve describes the discriminating ability of a sampling plan. The following graph shows a typical curve for a single-sampling plan.

The graph shows that a lot with 3 percent of defectives would have a probability of about 0.90 of being accepted and a probability of 0.10 of being rejected. Note the downward relationship: As lot quality decreases, the probability of acceptance decreases, although the relationship is not linear.

A sampling plan does not provide perfect discrimination between good and bad lots; some low-quality lots will invariably be accepted, and some lots with very good quality will invariably be rejected.

The degree to which a sampling plan discriminates between good and bad lots is a function of the steepness of the graph’s OC curve: the steeper the curve, the more discriminating the sampling plan.

Note the curve for an ideal plan. To achieve it, you would need to inspect 100 percent of each lot. Be aware that the cost and time required often rule out 100 percent inspection, as does destructive testing, leaving acceptance sampling as the only viable alternative.

For these reasons, buyers are generally willing to accept lots that contain small percentages of defective items as “good,” especially if the cost related to a few defectives is low. Often the percentage is in the neighborhood of 1 to 2 percent defective. This figure is known as the acceptable quality level (AQL).

Because of the inability of random sampling to clearly identify lots that contain more than this specified percentage of defectives, consumers recognize that some lots that actually contain more will be accepted. However, there is usually an upper limit on the percentage of defectives that a consumer is willing to tolerate in accepted lots. This is known as the lot tolerance percent defective (LTPD).

Thus, consumers want quality equal to or better than the AQL, and are willing to live with some lots with quality as poor as the LTPD, but they prefer not to accept any lot with a defective percentage that exceeds the LTPD.

The probability that a lot containing defectives exceeding the LTPD will be accepted is known as the consumer’s risk, or beta (β), or the probability of making a Type II error. The probability that a lot containing the acceptable quality level will be rejected is known as the producer’s risk, alpha (α), or the probability of making a Type I error. The following graph illustrates an OC curve with AQL, LTPD, producer’s risk, and consumer’s risk.

Many sampling plans are designed to have a producer’s risk of 5 percent and a consumer’s risk of 10 percent, although other combinations are also used. It is possible by trial and error to design a plan that will provide selected values for alpha and beta given the AQL and LTPD. However, standard references such as the government MIL-STD tables are widely used to obtain sample sizes and acceptance criteria for sampling plans.

To construct an OC curve, suppose you want the curve for a situation in which a sample of n=10 items is drawn from lots containing N=2000 items, and a lot is accepted if no more than c=1 defective is found. Because the sample size is small relative to the lot size, it is reasonable to use the binomial distribution to obtain the probability of acceptance for a given lot size. A portion of the cumulative binomial table found in Appendix Table D is reproduced here to construct the following OC curve.

When n > 20 and p < 0.05, the Poisson distribution is useful in constructing OC curves for proportions. In fact, the Poisson distribution is used to approximate the binomial distribution. The approximation involves treating the mean of the binomial distribution, np, as the mean of the Poisson distribution, $\mu$:

$\mu = np$.

As with the binomial distribution, you select various values of lot quality, p, and then determine the probability of accepting a lot (i.e., finding one or fewer defectives) by referring to the cumulative Poisson table. Values of p in increments of 0.01 are often used in this regard.
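The following sketch (assuming SciPy is available) computes a few points on the OC curve for the n = 10, c = 1 plan described above, using both the exact binomial probability and the Poisson approximation:

```python
from scipy.stats import binom, poisson

# Sketch: points on the OC curve for a single-sampling plan with
# n = 10 items per sample, accepting the lot if c <= 1 defective is found.
n, c = 10, 1
for p in [0.01, 0.03, 0.05, 0.10, 0.20]:
    p_accept_binom = binom.cdf(c, n, p)        # exact binomial probability
    p_accept_pois = poisson.cdf(c, n * p)      # Poisson approximation, mean = np
    print(f"p = {p:.2f}: Pac = {p_accept_binom:.3f} (Poisson approx {p_accept_pois:.3f})")
```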

Average Quality of Inspected Lots

An interesting feature of acceptance sampling is that the level of inspection automatically adjusts to the quality of lots being inspected, assuming rejected lots are subject to 100 percent inspection. The poorer the quality of the lots, the greater the number of lots that will come under close scrutiny. This tends to improve overall quality of lots by weeding out defectives. In this way, the level of inspection is affected by lot quality.

If all lots have some given fraction defective, p, the average outgoing quality (AOQ) of the lots can be computed using the following formula, assuming defectives found in rejected lots are replaced with good items:

$\text{AOQ} = P_{ac} \cdot p \cdot \frac{N - n}{N}$

where

$P_{ac}$ = probability of accepting the lot,

p = fraction defective,

N = lot size, and

n = sample size.

In practice, the last term is often omitted, since it is usually close to 1.0 and therefore has little effect on the resulting values. The formula then becomes

$\text{AOQ} = P_{ac} \cdot p$.
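A short sketch of AOQ values for the same hypothetical n = 10, c = 1 plan, using the simplified formula:

```python
from scipy.stats import binom

# Sketch: AOQ = Pac * p for a range of incoming lot qualities, assuming
# rejected lots are 100 percent inspected and defectives are replaced.
n, c = 10, 1
for p in [0.01, 0.05, 0.10, 0.15, 0.20, 0.30]:
    p_ac = binom.cdf(c, n, p)      # probability of accepting the lot
    print(f"p = {p:.2f}: AOQ = {p_ac * p:.4f}")
# AOQ is low for very good and very bad lots; the maximum in between is the
# worst possible outgoing quality.
```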

By allowing the percentage, p, to vary, a curve such as the following one can be constructed in the same way that an OC curve is constructed. The curve illustrates the point that if lots are either very good or very bad, the average outgoing quality will be high (i.e., the AOQ values, expressed as fraction defective, will be low). The maximum point on the curve becomes apparent in the process of calculating values for the curve.

There are several managerial implications of the graph. First, a manager can determine the worst possible outgoing quality. Second, the manager can determine the amount of inspection that will be needed by obtaining an estimate of the incoming quality. Moreover, the manager can use the information to establish the relationship between inspection cost and the incoming fraction defective, thereby underscoring the benefit of implementing process improvements to reduce the incoming fraction defective rather than trying to weed out bad items through inspection.

Total Quality Management (TQM)

Total quality management (TQM) refers to a quest for quality that involves everyone in an organization. There are two philosophies in this approach. One is a never-ending push to improve, which is referred to as continuous improvement; the other is a goal of customer satisfaction, which involves meeting or exceeding customer expectations.

  1. Find out what customers want by using surveys, focus groups, interviews, or other techniques. Be sure to include internal customers (the next person in the process) as well as external customers (the final customer).
  2. Design a product or service that will meet (or exceed) what customers want. Make it easy to use and easy to produce.
  3. Design a production process that facilitates doing the job right the first time. Strive to make the process “mistake-proof.”
  4. Keep track of results, and use those to guide improvement in the system.
  5. Extend these concepts to suppliers and distribution.
  6. Top management must be committed and involved.

Elements of TQM:

  1. Continual improvement. Seek to improve all factors related to the process of converting inputs into outputs on an ongoing basis.  The old adage “If it ain’t broke, don’t fix it” gets transformed into “Just because it isn’t broke doesn’t mean it can’t be improved.”
  2. Competitive benchmarking. Identify companies or other organizations (which may not be in the same line of business as yours) that are the best at something and studying how they do it to learn how to improve your operation. For example, Xerox used the mail-order company, L. L. Bean, to benchmark order filling.
  3. Employee empowerment. Giving workers the responsibility for improvements and the authority to make changes to accomplish them provides strong motivation for employees. This puts decision making into the hands of those who are closest to the job and who have considerable insight into problems and solutions.

The term quality at the source refers to the philosophy of making each worker responsible for the quality of his or her work with the following benefits:

a. It places direct responsibility for quality on the person(s) who directly affect it.

b. It removes the adversarial relationship that often exists between quality control inspectors and production workers.

c. It motivates workers by giving them control over their work as well as pride in it.

  4. Team approach. Use teams for problem solving and to achieve consensus. Take advantage of group synergy, get people involved, and promote a spirit of cooperation and shared values among employees.
  5. Decisions based on facts rather than opinions. Management gathers and analyzes data as a basis for decision making.
  6. Knowledge of tools. Employees and managers are trained in the use of quality tools.
  7. Supplier quality. Suppliers must be included in quality assurance and quality improvement efforts so that their processes are capable of delivering quality parts and materials in a timely manner. Long-term relationships with suppliers are encouraged.

TQM is about the culture of an organization. To truly reap the benefits of TQM, the culture of an organization must change. The following table illustrates the differences between cultures of a TQM organization and a more traditional organization.

Possible misuse of TQM:

  1. Blind pursuit of TQM, even though other priorities may be more important, e.g., responding quickly to a competitor’s advances.
  2. Programs may not be linked to the strategies of the organization in a meaningful way.
  3. Quality-related decisions may not be tied to market performance. For instance, customer satisfaction may be carried to the extent that its cost far exceeds any direct or indirect benefit of doing so.
  4. Failure to carefully plan a program before embarking on it can lead to false starts, employee confusion, and meaningless results.

Problem Solving

Problem solving is one of the basic procedures of TQM. Basic steps are given in the following table.

An important aspect of problem solving in TQM is eliminating the cause so that the problem does not occur. This is why users of TQM often like to think of problems as “opportunities for improvement.”

Process Improvement

Process improvement is a systematic approach to improving a process. It involves documentation, measurement, and analysis for the purpose of improving the functioning of a process. The following table provides an overview of process improvement.

The plan-do-study-act (PDSA) cycle, also referred to as either the Shewhart cycle or the Deming wheel, is the conceptual basis for continuous improvement activities. The following graph illustrates the cycle.

Basic steps in the cycle are:

Plan. Study and document the current process. Collect data to identify problems. Analyze the data and develop a plan for improvement. Specify measures for evaluating the plan.

Do. Implement the plan, on a small scale if possible. Document any change made during this phase. Collect data systematically for the evaluation.

Study. Evaluate the data collected during the do phase. Check how closely the results match the original goals of the plan phase.

Act. If the results are successful, standardize the new method and communicate the new method to all people associated with the process. Implement training for the new method. If the results are unsuccessful, revise the plan and repeat the process or cease this project.

Tools

Tools aid in data collection and interpretation, and provide the basis for decision making. This section describes eight of these tools. The first seven tools are often referred to as the seven basic quality tools. The following graph provides a quick overview of the seven tools.

Check Sheets. A check sheet is a simple tool frequently used for problem identification. Check sheets provide a format that enables users to record and organize data in a way that facilitates collection and analysis. This format might be one of simple checkmarks. Check sheets are designed on the basis of what the users are attempting to learn by collecting data.

One frequently used form of check sheets deals with the type of defect and the time of day each occurred.

In the graph, problems with missing labels tend to occur early in the day and smeared print tends to occur late in the day, whereas off-center labels are found throughout the day. Identifying types of defects and when they occur can help in pinpointing causes of the defects.

Another form of check sheets deals with where defects on the product are occurring.

In this case, defects seem to be occurring on the tips of the thumb and first finger, in the finger valleys (especially between the thumb and first finger) and in the center of the gloves. Again, this may help determine why the defects occur and lead to a solution.

Flowcharts. A flowchart is a visual representation of a process. As a problem-solving tool, a flowchart can help investigators in identifying possible points in a process where problems occur.

The diamond shapes in the flowchart represent decision points in the process, and the rectangular shapes represent procedures. The arrows show the direction of “flow” of the steps in the process.

To construct a simple flowchart, begin by listing the steps in a process. Then, classify each step as either a procedure or a decision (or check) point. Try not to make the flowchart too detailed, or it may be overwhelming, but be careful not to omit any key steps.

Scatter Diagrams. A scatter diagram can be useful in deciding if there is a correlation between the values of two variables. A correlation may point to a cause of a problem. The following graph shows an example of a scatter diagram.

In the graph, there is a positive (upward sloping) relationship between the humidity and the number of errors per hour. High values of humidity correspond to high numbers of errors, and vice versa.

On the other hand, a negative (downward sloping) relationship would mean that when values of one variable are low, values of the other variable are high, and vice versa.

The higher the correlation between the two variables, the less scatter in the points; the points will tend to line up. Conversely, if there were little or no relationship between the two variables, the points would be widely scattered. Here, the correlation between humidity and errors seems strong, because the points appear to cluster along an imaginary line.

Histograms. A histogram can be useful in getting a sense of the distribution of observed values. Among other things, one can see if the distribution is symmetrical, what the range of values is, and if there are any unusual values.

Note the two peaks in the graph above. This suggests the possibility of two distributions with different centers. Possible causes might be two different workers or two different types of work.

Pareto Analysis. Pareto analysis is a technique for focusing attention on the most important problem areas.

The Pareto concept, named after the 19th-century Italian economist Vilfredo Pareto, is that a relatively few factors generally account for a large percentage of the total cases (e.g., complaints, defects, problems). The idea is to classify the cases according to degree of importance and focus on resolving the most important, leaving the less important for later. Often referred to as the 80-20 rule, the Pareto concept states that approximately 80 percent of the problems come from 20 percent of the items.

The Pareto analysis provides a chart that shows the number of occurrences by category, arranged in order of frequency.

In the graph, the dominance of the problem with off-center labels becomes apparent. Presumably, the manager and employees would focus on trying to resolve this problem. Once they accomplished that, they would address the remaining defects in similar fashion: “smeared print” would be the next major category to be resolved, and so on.

Additional check sheets would be used to collect data to verify that the defects in these categories have been eliminated or greatly reduced. Hence, in later Pareto diagrams, categories such as “off-center” may still appear but would be much less frequent.
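As a small illustration, the following sketch tabulates hypothetical defect counts in Pareto order with cumulative percentages:

```python
# Sketch: tabulating defect counts Pareto-style (hypothetical data).
defects = {"off-center label": 48, "smeared print": 21, "missing label": 9,
           "wrinkled label": 5, "other": 3}

total = sum(defects.values())
cumulative = 0
for category, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:20s} {count:4d}  {100 * cumulative / total:5.1f}% cumulative")
```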

Control Charts. A control chart can be used to monitor a process to see if the process output is random. It can help to detect the presence of correctable causes of variation.

Control charts can also indicate when a problem occurred and give insight into what caused the problem.

Cause-and-Effect Diagrams. A cause-and-effect diagram offers a structured approach to the search for the possible cause(s) of a problem. It is also known as a fishbone diagram because of its shape, or an Ishikawa diagram, after the Japanese professor who developed the approach to aid workers overwhelmed in problem solving by the number of possible sources of problems.

The tool helps to organize problem-solving efforts by identifying categories of factors that might be causing problems. This tool is often used after brainstorming sessions to organize the ideas generated. The following graphs illustrate the components of a cause-and-effect diagram and an example.

 

In the example, each of the factors listed is a potential source of ticket errors. Some are more likely causes than others, depending on the nature of the errors. If the cause is still not obvious at this point, additional investigation into the root cause may be necessary, involving a more in-depth analysis. Often, more detailed information can be obtained by asking who, what, where, when, why, and how questions about factors that appear to be the most likely sources of problems.

Run Charts. A run chart can be used to track the values of a variable over time. This can aid in identifying trends or other patterns that may be occurring. The following graph provides an example of a run chart showing a decreasing trend in accident frequency over time.

Important advantages of run charts are ease of construction and ease of interpretation.

Methods for Generating Ideas

Brainstorming. Brainstorming is a technique in which a group of people share thoughts and ideas on problems in a relaxed atmosphere that encourages unrestrained collective thinking. The goal is to generate free flow of ideas on identifying problems, and finding causes, solutions, and ways to implement solutions. In successful brainstorming, criticism is absent, no single member is allowed to dominate sessions, and all ideas are welcomed.

Quality Circles. A quality circle comprises a number of workers who get together periodically to discuss ways of improving products and processes. Not only are quality circles a valuable source of worker input, they also can motivate workers, if handled properly, by demonstrating management interest in worker ideas.

Quality circles are usually less structured and more informal than teams involved in continuous improvement, but in some organizations, quality circles have evolved into continuous improvement teams.

Perhaps the major distinction between quality circles and teams is the amount of authority granted: quality circles typically have little authority to implement anything but minor changes, whereas continuous improvement teams are sometimes given a great deal of authority. Consequently, continuous improvement teams have the added motivation generated by empowerment.

The team approach works best when it reaches decisions based on consensus. This may involve one or more of the following methods:

  1. List reduction is applied to a list of possible problems or solutions. Its purpose is to clarify items, and in the process, reduce the list of items by posing questions about affordability, feasibility, and likelihood of solving the problem for each item.
  2. A balance sheet approach lists the pros and cons of each item, and focuses discussion on important issues.
  3. Paired comparison is a process by which each item on a list is compared with every other item, two at a time. For each pair, team members select the preferred item. This approach forces a choice between items. It works best when the list of items is small: say, five or fewer.

Interviewing. Internal problems may require interviewing employees; external problems may require interviewing external customers.

Ideas for improvement can come from a number of sources: research and development, customers, competitors, and employees. Customer satisfaction is the ultimate goal of improvement activities, and customers can offer many valuable suggestions about products and service processes. They are less apt to have suggestions for manufacturing processes.

Benchmarking. Benchmarking is the process of measuring an organization’s performance on a key customer requirement against the best in the industry, or against the best in any industry. Its purpose is to establish a standard against which performance is judged, and to identify a model for learning how to improve. A benchmark demonstrates the degree to which customers of other organizations are satisfied. Once a benchmark has been identified, the goal is to meet or exceed that standard through improvements in appropriate processes.

The benchmarking process usually involves these steps:

  1. Identify a critical process that needs improvement (e.g., order entry, distribution, service after sale).
  2. Identify an organization that excels in the process, preferably the best.
  3. Contact the benchmark organization, visit it, and study the benchmark activity.
  4. Analyze the data.
  5. Improve the critical process at your own organization.

Selecting an industry leader provides insight into what competitors are doing; but competitors may be reluctant to share this information. Several organizations are responding to this difficulty by conducting benchmarking studies and providing that information to other organizations without revealing the sources of the data.

Selecting organizations that are world leaders in different industries is another alternative. For example, the Xerox Corporation uses many benchmarks: for employee involvement, Procter & Gamble; for quality process, Florida Power and Light, Toyota, and Fuji Xerox; for high-volume production, Kodak and Canon; for billing and collection, American Express; for research and development, AT&T and Hewlett-Packard; for distribution, L. L. Bean and Hershey Foods; and for daily scheduling, Cummins Engine.

The 5W2H Approach. Asking questions about the current process can lead to insights about why the current process isn’t working as well as it could, as well as potential ways to improve it. One method is the 5W2H approach, which is outlined in the table below.