Quality refers to the ability of a product or service to consistently meet or exceed customer expectations. Quality means getting what you pay for.
In the 1970s and 1980s, US business organizations tended to focus on cost and productivity rather than quality. It wasn’t that quality was unimportant; it just wasn’t very important. This led to a significant share of the US market being captured by Japanese business organizations.
Prior to the Industrial Revolution, skilled craftsmen performed all stages of production. Pride of workmanship and reputation often provided the motivation to see that a job was done right.
A division of labor accompanied the Industrial Revolution; each worker was then responsible for only a small portion of each product. Pride of workmanship became less meaningful because workers could no longer readily identify with the final product. The responsibility for quality control shifted to the foreman. Inspection was either nonexistent or haphazard, although in some instances 100% inspection was used.
Frederick Winslow Taylor, the “Father of Scientific Management,” gave new emphasis to quality by including product inspection and gauging in his list of fundamental areas of manufacturing management.
G. S. Radford improved Taylor’s methods. Two of his most significant contributions were
In 1924, W. Shewhart of Bell Telephone Laboratories introduced statistical control charts that could be used to monitor production.
Around 1930, H. F. Dodge and H. G. Romig, also of Bell Labs, introduced tables for acceptance sampling.
World War II caused a dramatic increase in emphasis on quality control.
By the end of the 1940s, professional quality organizations were emerging throughout the country, e.g., the American Society for Quality Control (ASQC, now known as ASQ).
During the 1950s, the quality movement evolved into quality assurance. Quality guru W. Edwards Deming introduced statistical quality control methods to Japanese manufacturers.
At about the same time, another quality guru, Joseph Juran, began his “cost of quality” approach. The approach stressed the desirability of lowering quality-related costs, largely through prevention.
In the mid-1950s, Armand Feigenbaum proposed total quality control, which enlarged the realm of quality efforts from its primary focus in manufacturing to also include product design and incoming raw materials. One important feature of his work was greater involvement of upper management in quality.
During the 1960s, the concept of “zero defects” gained favor. Championed by quality guru Philip Crosby, this approach focused on employee motivation and awareness, and the expectation of perfection from employees. It evolved from the success of the Martin Company in producing a “perfect” missile for the US Army.
In the 1970s, quality assurance methods gained increasing emphasis in services including government operations, health care, banking, and the travel industry.
The evolution of quality took a dramatic shift from quality assurance to a strategic approach to quality in the late 1970s. The strategic approach, advocated by Harvard professor David Garvin and others, is proactive, focusing on preventing mistakes from occurring in the first place. Quality and profits are more closely linked. This approach also places greater emphasis on customer satisfaction, and involves all levels of management as well as workers in a continuing effort to increase quality.
The dimensions of quality include:
When referring to a product, a customer sometimes judges the first four dimensions by its fitness for use.
The degree to which a product or a service successfully satisfies its intended purpose has the following four primary determinants.
The Consequences of Poor Quality
The Costs of Quality
W. Edwards Deming, Statistics, New York University, 1940s. Went to Japan after WWII. Japanese established the Deming Prize.
Deming compiled a famous list of 14 points. The key elements are constancy of purpose, continual improvement, and profound knowledge. Profound knowledge involves:
His message is that the cause of the inefficiency and poor quality is the system, not the employees. Management’s responsibility is to correct the system to achieve the desired results. Deming stressed the need to reduce variation in output, which can be accomplished by distinguishing between special causes of variation (i.e., correctable) and common causes of variation (i.e., random).
Joseph M. Juran. Quality Control Handbook, 1951. Juran on Quality, Juran Institute in Wilton. Juran’s approach to quality may be the closest to Deming’s of all the gurus, although his approach differs on the importance of statistical methods and what an organization must do to achieve quality. (See textbook for details of the comparison between Juran and Deming.)
Juran views quality as fitness-for-use. He believes that roughly 80 percent of quality defects are management controllable. He describes quality management in terms of a trilogy consisting of
A key element of Juran’s philosophy is the commitment of management to continual improvement.
Juran is credited as one of the first to measure cost of quality, and he demonstrated the potential for increased profits that would result if the costs of poor quality could be reduced. Juran proposed 10 steps for quality improvement.
Armand Feigenbaum. Cost of Nonconformance. GE top expert on quality at 24. Total Quality Control, 1961. 40 steps of quality principles.
When improvements were made in a process, other areas of the company also achieved improvements. People could learn from each other’s successes. Open work environment led to cross-functional teamwork. It is the customer who defines quality.
Philip Crosby. Martin Marietta, 1960s. Corporate VP for Quality at ITT, 1970s. Quality Is Free (the costs of poor quality are much greater than traditionally defined), 1979. Quality Without Tears: The Art of Hassle-Free Management, 1984.
Kaoru Ishikawa. Cause-And-Effect / Fishbone diagram. Quality Circles. First to call attention to internal customers --- the next person in a process. First to make quality control “user friendly” for workers.
Genichi Taguchi. Taguchi loss function --- determining the cost of poor quality. Combined effect of deviations of all parts from their standards can be large, even though each individual deviation could be small. Help Ford Motor Company to reduce its warranty losses by achieving less variation in the output of transmissions.
The Baldrige Award, National Institute of Standards and Technology (NIST)
Malcolm Baldrige National Quality Improvement Act (1987). Named after the late Malcolm Baldrige, an industrialist and former secretary of commerce. A maximum of two awards are given annually in each of the three categories: large manufacturer, large service organization, and small business (500 or fewer employees). The following table provides a list of the items included in each evaluation area and their maximum number of points. Note that customer satisfaction has the largest number of points.
The Deming Prize, Japan. The major focus of judging is on statistical quality control.
ISO: International Organization for Standardization, 91 countries.
ANSI: American National Standards Institute.
ISO 9000: International standards on quality management and quality assurance. Certification takes 12 to 18 months. 40,000 companies are registered worldwide, three-fourths of which are located in Europe. Registration must be renewed every three years. Five standards are associated with the ISO 9000 series.
ISO 14000: Assesses a company’s performance in terms of environmental responsibility. The standards for certification bear upon three major areas:
Quality control uses statistical techniques to ensure that processes are performing in an acceptable manner.
Phases of quality assurance:
Quality assurance that relies primarily on inspection after production is referred to as acceptance sampling. Quality control efforts that occur during production are referred to as statistical process control.
Monitoring in the production process can occur at three points: before production, during production, and after production. Monitoring before and after production involves acceptance sampling procedures; monitoring during the production process is referred to as process control.
Basic questions of the inspection are
The frequency of inspection depends largely on the rate at which a process may go out of control or the number of lots being inspected. Many small lots will require more samples (in total) than a few large lots, because it is important to obtain sample data from each lot.
In the service sector, inspection points include incoming purchased materials and supplies, personnel, service interfaces (e.g., a service counter), and outgoing completed work.
Quality control is concerned with the quality of conformance of a process. Managers use statistical process control to evaluate the output of a process to determine its acceptability. They take periodic samples from the process and compare them with a predetermined standard. If the sample results are not acceptable, they stop the process and take corrective action. If the sample results are acceptable, they allow the process to continue.
Effective control requires the following steps:
All processes that provide a good or a service exhibit a certain amount of “natural” variation in their output. The variations are created by the combined influences of countless minor factors. The variability is often referred to as chance or random variation, although it sometimes is referred to as common variability. For instance, old machines generally exhibit a higher degree of natural variability than new machines.
A second kind of variability in process output is called assignable variation, or special variation. Unlike natural variation, the main sources of assignable variation can usually be identified (assigned to a specific cause) and eliminated. Tool wear, equipment that needs adjustment, defective materials, human factors and problems with measuring devices are typical sources of assignable variation.
When samples of process output are taken, and sample statistics such as the sample mean and range are computed, they exhibit the same kind of variability. The variability of a sample statistic can be described by its sampling distribution, which is a theoretical distribution that describes the random variability of sample statistics. Process distribution describes the variation of every individual output of the process. The goal of sampling is to determine whether nonrandom --- and thus, correctable --- sources of variation are present in the output of a process.
High and low values in samples tend to offset each other, resulting in less variability among sample means than among individual values. Note that the sampling distribution of the mean is approximately normal even if the process distribution is not normal, provided the sample size is sufficiently large (Central Limit Theorem, CLT).
The following figure shows how normal distribution is used to judge whether a process is performing adequately.
Two statistical tools are used for quality control: control charts and run tests. Often, they are used together.
A control chart is a time-ordered plot of sample statistics. It is used to distinguish between random variability and nonrandom variability. The basis for the control chart is the sampling distribution. Control limits are dividing lines between random deviations from the mean of the distribution and nonrandom deviations from the mean of the distribution. The following figure shows how control limits are based on the sampling distribution.
In the figure, the larger value is the upper control limit (UCL), and the smaller value is the lower control limit (LCL). A sample statistic that falls between these two limits suggests (but does not prove) randomness, while a value outside or on either limit suggests (but does not prove) non-randomness.
It is important to recognize that, because any limit will leave some area in the tails of the distribution, there is a small probability that a value will fall outside the limits even though only random variations are present. This probability is sometimes referred to as the probability of Type I error, where the “error” is concluding that non-randomness is present when only randomness is present. The error is also referred to as an alpha risk, where alpha (α) is the sum of the probabilities in the two tails.
Using wider limits reduces the probability of a Type I error, because it decreases the area in the tails. However, wider limits make it more difficult to detect nonrandom variations, if they are present. For example, the mean of the process might shift enough to be detected by two-sigma limits, but not enough to be readily apparent using three-sigma limits. That could lead to a second kind of error, known as Type II Error, beta risk, which is concluding that a process is in control, when it is really out of control.
In theory, the costs of making each error should be balanced by their probabilities. However, in practice, two-sigma limits and three-sigma limits are commonly used without specifically referring to the probability of a Type II error.
The following figure illustrates the components of a control chart.
The following figure illustrates the concept of judging whether each value is within the acceptable (random) range.
There are four commonly used control charts. Two are used for variables (continuous data), and two are used for attributes (discrete data). Attribute data are counted; variable data are measured, usually on a continuous scale.
Mean charts monitor the central tendency of a process. The mean charts can be constructed in two ways depending on whether the population standard deviation or sample range is used.
If the standard deviation is known or a reasonable estimate of the standard deviation is available, one can compute control limits of the mean charts using these formulas:
Upper control limit (UCL) := x̿ + z·σ_x̄
Lower control limit (LCL) := x̿ - z·σ_x̄
where
m = number of samples
n = number of observations in a sample
i = sample index, i = 1, 2, …, m
j = observation index, j = 1, 2, …, n
x_ij = value of observation j in sample i
z = standard normal deviate
σ = process (population) standard deviation
x̄_i = mean of the observations in sample i, x̄_i = (1/n) Σ_j x_ij
x̿ = mean of all sample means, x̿ = (1/m) Σ_i x̄_i
σ_x̄ = standard deviation of the distribution of sample means, σ_x̄ = σ/√n
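As a minimal sketch, the known-sigma limits above can be computed as follows (the function name and the numbers are illustrative, not from the textbook):

```python
import math

def xbar_limits_known_sigma(grand_mean, sigma, n, z=3.0):
    """Mean-chart limits when the process standard deviation is known:
    UCL/LCL = grand mean +/- z * sigma / sqrt(n)."""
    sigma_xbar = sigma / math.sqrt(n)  # standard deviation of sample means
    return grand_mean - z * sigma_xbar, grand_mean + z * sigma_xbar

# Illustrative values: process mean 10.0, sigma 0.5, samples of n = 25,
# three-sigma limits (about 9.7 and 10.3)
lcl, ucl = xbar_limits_known_sigma(10.0, 0.5, 25)
```
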
If the sample range is used as a measure of process variability, the appropriate formulas for control limits are
Upper control limit (UCL) := x̿ + A2·R̄
Lower control limit (LCL) := x̿ - A2·R̄
where
R̄ = average of sample ranges, and
A2 can be found in the following table (Table 10-2 on p. 450 of the textbook).
Range Control Chart (R-Chart)
Range control charts are used to monitor process dispersion; they are sensitive to changes in process dispersion. Although the underlying sampling distribution is not normal, the concepts for the use of range charts are much the same as those for use of mean charts.
Control limits for range charts are found using the average sample range in conjunction with these formulas:
Upper control limit (UCL) := D4·R̄
Lower control limit (LCL) := D3·R̄
where D3 and D4 can be found in the table above.
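Both range-based charts can be sketched together. The factor values below are commonly tabulated two-decimal values for small sample sizes; verify them against Table 10-2 in the textbook before relying on them:

```python
# Control chart factors, n: (A2, D3, D4). These are the commonly tabulated
# rounded values; check them against the textbook table before use.
FACTORS = {
    2: (1.88, 0.00, 3.27),
    3: (1.02, 0.00, 2.57),
    4: (0.73, 0.00, 2.28),
    5: (0.58, 0.00, 2.11),
}

def chart_limits(grand_mean, r_bar, n):
    """Mean-chart limits (grand mean +/- A2 * R-bar) and
    range-chart limits (D3 * R-bar, D4 * R-bar)."""
    a2, d3, d4 = FACTORS[n]
    mean_limits = (grand_mean - a2 * r_bar, grand_mean + a2 * r_bar)
    range_limits = (d3 * r_bar, d4 * r_bar)
    return mean_limits, range_limits

# Illustrative use: grand mean 10.0, average range 0.5, samples of n = 5
(mean_lcl, mean_ucl), (range_lcl, range_ucl) = chart_limits(10.0, 0.5, 5)
```
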
Even though decreased variability is desirable, we would want to determine what was causing it. Perhaps an improved method has been used, in which case we would want to identify it.
Mean control charts and range control charts provide different perspectives on a process. Mean charts are sensitive to shifts in the process mean, whereas range charts are sensitive to changes in process dispersion. Because of this difference in perspective, both types of charts might be used to monitor the same process.
For example, in the Figure A above, the mean chart picks up the shift in the process mean, but the dispersion is not changing. In Figure B, a change in process dispersion is less apt to be detected by the mean chart than by the range chart. Thus, use of both charts provides more complete information than either chart alone.
However, due to cost or workflow considerations, a single chart may suffice in some cases to monitor the specific aspect of a process that tends to cause the most problems.
Once control charts have been set up, they can serve as a basis for deciding when to interrupt a process and search for assignable causes of variation. One can use the following procedure to determine initial control limits.
Control charts for attributes are used when the process characteristic is counted rather than measured. There are two types of attribute control charts, one for the fraction of defective items in a sample (a p-chart) and one for the number of defects per unit (a c-chart).
A p-chart is appropriate when items can be counted in both categories (e.g., defective and nondefective); a c-chart is more appropriate when only occurrences can be counted. For example, one can count the number of crimes committed during the month of August, but one cannot count the number of nonoccurrences.
A p-chart is used to monitor the proportion of defectives generated by a process. The theoretical basis for a p-chart is the binomial distribution, although for large sample sizes the normal distribution provides a good approximation to it. Conceptually, a p-chart is constructed and used in much the same way as a mean chart.
Control limits are computed using the following formulas.
Upper control limit (UCL) := p̄ + z·√(p̄(1 - p̄)/n)
Lower control limit (LCL) := p̄ - z·√(p̄(1 - p̄)/n)
p̄ is the average fraction defective in the population. Note: Because the formula is an approximation, it sometimes happens that the computed theoretical LCL is negative. In these instances, zero is used as the lower limit.
If p̄ is unknown, it can be estimated from samples, i.e.,
p̄ = (1/m) Σ_i p̂_i
where p̂_i is the fraction defective in sample i, i = 1, 2, …, m, and m is the total number of samples.
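A p-chart sketch along these lines (function name and sample data are illustrative):

```python
import math

def p_chart_limits(defectives, n, z=3.0):
    """p-chart limits from m samples of size n; `defectives` lists the
    number of defective items found in each sample."""
    p_bar = sum(defectives) / (len(defectives) * n)  # overall fraction defective
    sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - z * sigma_p)  # a negative limit is truncated to zero
    ucl = p_bar + z * sigma_p
    return p_bar, lcl, ucl

# Illustrative data: 8 samples of n = 100 items each
p_bar, lcl, ucl = p_chart_limits([1, 2, 1, 0, 2, 1, 1, 2], 100)
```
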
When the goal is to control the number of defects per unit, a c-chart is used. The underlying sampling distribution is the Poisson distribution. Use of the Poisson distribution assumes that defects occur over some continuous region and that the probability of more than one defect at any particular spot is negligible. For practical reasons, the normal approximation to the Poisson is used. The control limits are
Upper control limit (UCL) := c̄ + z·√c̄
Lower control limit (LCL) := c̄ - z·√c̄
where the mean number of defects per unit is c̄ and the standard deviation is √c̄.
If the process average is unknown, c̄ can be estimated from the samples, i.e., c̄ = (total number of defects)/(number of units inspected).
When the computed lower control limit is negative, the effective lower limit is zero. The calculation sometimes produces a negative lower limit due to the use of the normal distribution to approximate the Poisson distribution: the normal is symmetrical whereas the Poisson is not symmetrical, when c is close to 0.
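A c-chart can be sketched the same way (names and counts are illustrative):

```python
import math

def c_chart_limits(defect_counts, z=3.0):
    """c-chart limits: c_bar +/- z * sqrt(c_bar); a negative computed
    LCL is set to zero, as the text describes."""
    c_bar = sum(defect_counts) / len(defect_counts)
    lcl = max(0.0, c_bar - z * math.sqrt(c_bar))
    ucl = c_bar + z * math.sqrt(c_bar)
    return c_bar, lcl, ucl

# Illustrative data: defects found in each of 10 inspected units
c_bar, lcl, ucl = c_chart_limits([3, 2, 4, 5, 1, 2, 4, 1, 2, 1])
```
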
Due to the cost and time needed to obtain control charts, managers must make a number of important decisions about the use of the charts:
A run is defined as a sequence of observations with a certain characteristic, followed by one or more observations with a different characteristic. For example, in the series A A A B, there are two runs: a run of three As followed by a run of one B. Underlining each run helps in counting them.
Two useful run tests involve examination of the number of runs up and down and runs above and below the median. In order to count these runs, the data are transformed into a series of Us and Ds (for up and down) and into a series of As and Bs (for above and below the median). (See textbook page 457 for an example.) Notice that, for the runs of up and down, the first value does not receive either a U or a D, because nothing precedes it. If a plot is available, the runs can be easily counted directly from the plot, as illustrated below
To determine whether any pattern is present in control chart data, one must transform the data into both As and Bs and Us and Ds, and then count the number of runs in each case. These numbers must then be compared with the number of runs that would be expected in a completely random series.
For both the median and the up/down run tests, the expected number of runs is a function of the number of observations in the series. The formulas are
E(r)_med = N/2 + 1
E(r)_u/d = (2N - 1)/3
where N is the number of observations.
The actual number of runs in any given set of observations will vary from the expected number, due to chance and any pattern that might be present. Chance variability is measured by the standard deviation of runs. The formulas are
σ_med = √((N - 1)/4)
σ_u/d = √((16N - 29)/90)
Distinguishing chance variability from patterns requires use of the sampling distributions for median runs and up/down runs. Both distributions are approximately normal.
In practice, it is often easiest to compute the number of standard deviations, z, by which an observed number of runs differs from the expected number. This z value would then be compared to the value +/- 2 (for 95.5%) or some other desired value (e.g., +/- 1.96 for 95%, +/- 2.33 for 98%). A test z that exceeds the desired limits indicates patterns are present.
The computation of z takes the form
z = (observed number of runs - expected number of runs) / (standard deviation of runs)
Consequently, for the median and up/down tests, one can find z using these formulas:
Median: z = (r - (N/2 + 1)) / √((N - 1)/4)
Up and down: z = (r - (2N - 1)/3) / √((16N - 29)/90)
where
N = total number of observations, and
r = observed number of runs of either As and Bs or Us and Ds, depending on which test is involved.
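Both run tests can be sketched in code. This is illustrative only; in particular, textbooks vary on how values exactly equal to the median are handled, and here they are simply skipped:

```python
import math
import statistics

def run_test_z(values):
    """Median and up/down run tests. Returns (z_med, z_updown): the number
    of standard deviations by which the observed run counts differ from
    what a completely random series would be expected to produce."""
    n = len(values)
    med = statistics.median(values)

    # Runs above (A) / below (B) the median; ties with the median are skipped.
    ab = ['A' if v > med else 'B' for v in values if v != med]
    runs_med = 1 + sum(1 for i in range(1, len(ab)) if ab[i] != ab[i - 1])
    z_med = (runs_med - (n / 2 + 1)) / math.sqrt((n - 1) / 4)

    # Runs up (U) / down (D); the first value gets no U or D.
    ud = ['U' if values[i] > values[i - 1] else 'D' for i in range(1, n)]
    runs_ud = 1 + sum(1 for i in range(1, len(ud)) if ud[i] != ud[i - 1])
    z_ud = (runs_ud - (2 * n - 1) / 3) / math.sqrt((16 * n - 29) / 90)

    return z_med, z_ud

# A steadily increasing series should register as nonrandom on both tests
z_med, z_ud = run_test_z([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
```
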
It is desirable to apply both run tests to any given set of observations because each test is different in terms of the types of patterns it can detect. Sometimes both tests will pick up a certain pattern, but sometimes only one will detect non-randomness. If either does, the implication is that some sort of non-randomness is present in the data.
The variability of a process can significantly impact quality. Three commonly used terms refer to the variability of process output. Each term relates to a slightly different aspect of that variability, so it is important to differentiate these terms.
Tolerances or specifications are established by engineering design or customer requirements. They indicate a range of values in which individual units of output must fall in order to be acceptable.
Control limits are statistical limits that reflect the extent to which sample statistics such as means and ranges can vary due to randomness alone.
Process variability reflects the natural or inherent (i.e., random) variability in a process. It is measured in terms of the process standard deviation.
Control limits and process variability are directly related: control limits are based on sampling variability, and sampling variability is a function of process variability. On the other hand, there is no direct link between tolerances and either control limits or process variability. Tolerances are specified in terms of a product or service, not in terms of the process by which the product or service is generated. Hence, in a given instance, the output of a process may or may not conform to specifications, even though the process may be statistically in control.
This is why it is also necessary to take into account the capability of a process. The term process capability refers to the inherent variability of process output relative to the variation allowed by the design specifications.
Capability analysis means determining whether the inherent variability of the process output falls within the acceptable range of variability allowed by the design specifications for the process output. If it is within the specifications, the process is said to be “capable.” If it is not, the manager must decide how to correct the situation.
We cannot automatically assume that a process that is in control will provide desired output. Instead, we must specifically check whether the process is capable of meeting specifications, not simply set up a control chart to monitor it. A process should be in control and within specifications before production begins --- in essence, “Set the toaster correctly at the start. Don’t burn the toast and then scrape it!”
In case C above, a manager might consider a range of possible solutions: (1) redesign the process so that it can achieve the desired output, (2) use an alternate process that can achieve the desired output, (3) retain the current process but attempt to eliminate unacceptable output using 100 percent inspection, and (4) examine the specifications to see whether they are necessary or could be relaxed without adversely affecting customer satisfaction.
Process variability is the key factor in process capability. It is measured in terms of the process standard deviation; process capability is typically deemed to be +/- 3 standard deviations from the process mean. To determine whether the process is capable, compare this +/- 3 standard deviation spread to the specifications, which are expressed as an allowed deviation from an ideal value.
For example, suppose the ideal length of time to perform a service is 10 minutes, and an acceptable range of variation around this time is +/- 1 minute. If the process has a standard deviation of 0.5 minutes, it would not be capable, because +/- 3 standard deviations would be +/- 1.5 minutes, exceeding the specification of +/- 1 minute.
To express the capability of a machine or process, some companies use the ratio of the specification width to the process capability. It can be computed using the following formula:
Capability ratio = (upper specification - lower specification) / (6σ)
Using the capability ratio, you can see that for a process to be capable, it must have a capability ratio of at least 1.00. Moreover, the greater the capability ratio, the greater the probability that the output of a machine or process will fall within design specifications.
The Motorola Corporation is well known for its use of the term six-sigma, which refers to its goal of achieving a process variability so small that the design specifications represent six standard deviations of the process. That means a process capability ratio equal to 2.00, resulting in an extremely small probability of getting any output not within the design specifications.
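The capability calculation can be sketched as follows; the function name is illustrative, and the example reuses the service-time numbers from the text (specifications of 10 +/- 1 minute, sigma of 0.5 minutes):

```python
def capability_ratio(lower_spec, upper_spec, sigma):
    """Cp = specification width / process width, taking the process
    width as 6 standard deviations (the usual +/- 3 sigma convention)."""
    return (upper_spec - lower_spec) / (6 * sigma)

# Service example: specs 10 +/- 1 minute, sigma = 0.5 minutes.
# Cp = 2 / 3 < 1.00, so the process is not capable.
cp = capability_ratio(9.0, 11.0, 0.5)

# Six-sigma quality means the spec width spans 12 sigma, giving Cp = 2.0.
cp_six_sigma = capability_ratio(0.0, 12.0, 1.0)
```
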
Acceptance sampling is a form of inspection that is applied to lots or batches of items either before or after a process instead of during the process. The purpose of acceptance sampling is to decide whether a lot satisfies predetermined standards. Rejected lots may be subject to 100 percent inspection, or may be returned to the supplier for credit or replacement.
Acceptance sampling procedures are most useful when one or more of the following conditions exist:
Acceptance sampling procedures can be applied to both attribute (counts) and variable (measurements).
Sampling plans specify the lot size, N; the sample size, n; the number of samples to be taken; and the acceptance / rejection criteria. A variety of sampling plans are provided below.
Single-Sampling Plan. One random sample is drawn from each lot, and every item in the sample is examined and classified as either “good” or “defective”. If any sample contains more than a specified number of defectives, c, that lot is rejected.
Double-Sampling Plan. A double-sampling plan allows for the opportunity to take a second sample if the results of the initial sample are inconclusive. The plan specifies the lot size, the size of the initial sample, accept/reject criteria for the initial sample (usually two values for the numbers of defective items), the size of the second sample, and a single acceptance number. If the second sample is required, the combined results of both the initial and the second samples are compared to the single acceptance number to decide whether to accept or reject the lot.
Multiple-Sampling Plan. A multiple-sampling plan is similar to a double-sampling plan except that more than two samples may be required. A sampling plan will specify each sample size and two limits for each sample. The values increase with the number of samples. If, for any sample, the cumulative number of defectives found exceeds the upper limit specified for that sample, sampling is terminated and the lot is rejected. If the cumulative number of defectives is less than or equal to the lower limit, sampling is terminated and the lot is passed. If the number is between the two limits, another sample is taken. The process continues until the lot is either accepted or rejected.
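The double-sampling decision logic described above can be sketched as a small function. The parameter names are illustrative: c1 is the acceptance number for the first sample, r1 the rejection number for the first sample, and c2 the single acceptance number applied to the combined samples:

```python
def double_sample_decision(d1, d2, c1, r1, c2):
    """Decision for a double-sampling plan. d1 is the number of defectives
    in the first sample; d2 is the count in the second sample, or None if
    no second sample has been taken yet."""
    if d2 is None:
        if d1 <= c1:
            return "accept"
        if d1 >= r1:
            return "reject"
        return "take second sample"
    # Combined results compared to the single acceptance number
    return "accept" if d1 + d2 <= c2 else "reject"
```

For example, with c1 = 1, r1 = 5, and c2 = 6, a first sample with 3 defectives is inconclusive, and the lot is then accepted or rejected based on the combined count.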
The cost and time required for inspection often dictate the sampling method used. Two primary considerations are the number of samples needed and the total number of observations required.
Where the cost to obtain a sample is relatively high compared to the cost to analyze the observations, a single-sampling plan is more desirable. Conversely, where item inspection costs are relatively high, such as destructive testing, it may be better to use double or multiple sampling, because the average number of items inspected per lot will be lower. This stems from the fact that a very good or very poor lot quality will often show up initially, and sampling can be terminated.
An operating characteristic (OC) curve describes the discriminating ability of a sampling plan. The following graph shows a typical curve for a single-sampling plan.
The graph shows that a lot with 3 percent of defectives would have a probability of about 0.90 of being accepted and a probability of 0.10 of being rejected. Note the downward relationship: As lot quality decreases, the probability of acceptance decreases, although the relationship is not linear.
A sampling plan does not provide perfect discrimination between good and bad lots; some low-quality lots will invariably be accepted, and some lots with very good quality will invariably be rejected.
The degree to which a sampling plan discriminates between good and bad lots is a function of the steepness of the graph’s OC curve: the steeper the curve, the more discriminating the sampling plan.
Note the curve for an ideal plan. Achieving it would require inspecting 100 percent of each lot. However, the cost and time needed, as well as destructive testing, often rule out 100 percent inspection, leaving acceptance sampling as the only viable alternative.
For these reasons, buyers are generally willing to accept lots that contain small percentages of defective items as “good,” especially if the cost related to a few defectives is low. Often the percentage is in the neighborhood of 1 percent to 2 percent defective. The figure is known as the acceptance quality level (AQL).
Because of the inability of random sampling to clearly identify lots that contain more than this specified percentage of defectives, consumers recognize that some lots that actually contain more will be accepted. However, there is usually an upper limit on the percentage of defectives that a consumer is willing to tolerate in accepted lots. This is known as the lot tolerance percent defective (LTPD).
Thus, consumers want quality equal to or better than the AQL, and are willing to live with some lots with quality as poor as the LTPD, but they prefer not to accept any lot with a defective percentage that exceeds the LTPD.
The probability that a lot containing defectives exceeding the LTPD will be accepted is known as the consumer’s risk, beta (β), or the probability of making a Type II error. The probability that a lot containing the acceptable quality level will be rejected is known as the producer’s risk, alpha (α), or the probability of making a Type I error. The following graph illustrates an OC curve with the AQL, LTPD, producer’s risk, and consumer’s risk.
Many sampling plans are designed to have a producer’s risk of 5 percent and a consumer’s risk of 10 percent, although other combinations are also used. It is possible by trial and error to design a plan that will provide selected values for alpha and beta given the AQL and LTPD. However, standard references such as the government MIL-STD tables are widely used to obtain sample sizes and acceptance criteria for sampling plans.
To construct an OC curve, suppose you want the curve for a situation in which a sample of n=10 items is drawn from lots containing N=2000 items, and a lot is accepted if no more than c=1 defective is found. Because the sample size is small relative to the lot size, it is reasonable to use the binomial distribution to obtain the probability of acceptance for a given lot size. A portion of the cumulative binomial table found in Appendix Table D is reproduced here to construct the following OC curve.
When n > 20 and p < 0.05, the Poisson distribution is useful in constructing OC curves for proportions. In fact, the Poisson distribution is used to approximate the binomial distribution. The Poisson approximation involves treating the mean of the binomial distribution, np, as the mean of the Poisson, μ; that is, μ = np.
As with the binomial distribution, you select various values of lot quality, p, and then determine the probability of accepting a lot (i.e., finding one or fewer defectives) by referring to the cumulative Poisson table. Values of p in increments of 0.01 are often used in this regard.
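The same table lookup can be sketched numerically. Below is a minimal Python version of the Poisson approximation, valid under the stated assumption that n is large and p is small; the recurrence P(X=k) = P(X=k−1)·μ/k avoids computing factorials directly:

```python
from math import exp

def poisson_accept_prob(n: int, c: int, p: float) -> float:
    """Poisson approximation to the binomial acceptance probability:
    treat mu = n*p as the Poisson mean and sum P(X = k) for k <= c."""
    mu = n * p
    term = exp(-mu)          # P(X = 0)
    total = term
    for k in range(1, c + 1):
        term *= mu / k       # P(X = k) from P(X = k - 1)
        total += term
    return total
```

For example, with n = 100, p = 0.01, and c = 1, the mean is μ = 1 and the approximation gives a probability of acceptance of about 0.736, close to the exact binomial value.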
An interesting feature of acceptance sampling is that the level of inspection automatically adjusts to the quality of lots being inspected, assuming rejected lots are subject to 100 percent inspection. The poorer the quality of the lots, the greater the number of lots that will come under close scrutiny. This tends to improve overall quality of lots by weeding out defectives. In this way, the level of inspection is affected by lot quality.
If all lots have some given fraction defective, p, the average outgoing quality (AOQ) of the lots can be computed using the following formula, assuming defectives found in rejected lots are replaced with good items:

AOQ = Pac × p × (N − n)/N

where

Pac = probability of accepting the lot,
p = fraction defective,
N = lot size, and
n = sample size.

In practice, the last term, (N − n)/N, is often omitted since it is usually close to 1.0 and therefore has little effect on the resulting values. The formula then becomes

AOQ = Pac × p
By allowing the percentage, p, to vary, a curve such as the following one can be constructed in the same way that an OC curve is constructed. The curve illustrates the point that if incoming lots are either very good or very bad, the average outgoing quality will be good (i.e., AOQ values will be low): very good lots contain few defectives to begin with, and very bad lots are likely to be rejected and then subjected to 100 percent inspection. The maximum point on the curve becomes apparent in the process of calculating values for the curve.
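That maximum, the worst possible average outgoing quality, can be located numerically by scanning values of p. A sketch in Python, using the simplified formula AOQ = Pac × p; the plan n=10, c=1 is carried over from the earlier OC curve example, and the 0.01 grid spacing is an assumption for illustration:

```python
from math import comb

def accept_prob(n, c, p):
    """Binomial probability of finding at most c defectives in a sample of n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def aoq(n, c, p):
    """Average outgoing quality (simplified: the (N - n)/N factor is dropped)."""
    return accept_prob(n, c, p) * p

# Scan lot quality p and locate the worst (maximum) average outgoing quality
n, c = 10, 1
grid = [round(0.01 * i, 2) for i in range(1, 51)]
aoql_p = max(grid, key=lambda p: aoq(n, c, p))
print(f"Worst AOQ is about {aoq(n, c, aoql_p):.4f}, reached near p = {aoql_p:.2f}")
```

For this plan the worst case works out to roughly 8.2 percent defective outgoing, reached when incoming lots are about 15 percent defective.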
There are several managerial implications of the graph. First, a manager can determine the worst possible outgoing quality. Second, the manager can determine the amount of inspection that will be needed by obtaining an estimate of the incoming quality. Moreover, the manager can use the information to establish the relationship between inspection cost and the incoming fraction defective, thereby underscoring the benefit of implementing process improvements to reduce the incoming fraction defective rather than trying to weed out bad items through inspection.
Total quality management (TQM) refers to a quest for quality that involves everyone in an organization. There are two philosophies in this approach. One is a never-ending push to improve, which is referred to as continuous improvement; the other is a goal of customer satisfaction, which involves meeting or exceeding customer expectations.
Elements of TQM:
The term quality at the source refers to the philosophy of making each worker responsible for the quality of his or her work with the following benefits:
a. It places direct responsibility for quality on the person(s) who directly affect it.
b. It removes the adversarial relationship that often exists between quality control inspectors and production workers.
c. It motivates workers by giving them control over their work as well as pride in it.
TQM is about the culture of an organization. To truly reap the benefits of TQM, the culture of an organization must change. The following table illustrates the differences between cultures of a TQM organization and a more traditional organization.
Possible misuse of TQM:
Problem solving is one of the basic procedures of TQM. Basic steps are given in the following table.
An important aspect of problem solving in TQM is eliminating the cause so that the problem does not occur. This is why users of TQM often like to think of problems as “opportunities for improvement.”
Process improvement is a systematic approach to improving a process. It involves documentation, measurement, and analysis for the purpose of improving the functioning of a process. The following table provides an overview of process improvement.
The plan-do-study-act (PDSA) cycle, also referred to as either the Shewhart cycle or the Deming wheel, is the conceptual basis for continuous improvement activities. The following graph illustrates the cycle.
Basic steps in the cycle are:
Plan. Study the current process. Document the process. Collect data to identify the problems. Survey data and develop a plan for improvement. Specify measures for evaluating the plan.
Do. Implement the plan, on a small scale if possible. Document any change made during this phase. Collect data systematically for the evaluation.
Study. Evaluate the data collected during the do phase. Check how closely the results match the original goals of the plan phase.
Act. If the results are successful, standardize the new method and communicate the new method to all people associated with the process. Implement training for the new method. If the results are unsuccessful, revise the plan and repeat the process or cease this project.
Tools aid in data collection and interpretation, and provide the basis for decision making. This section describes eight of these tools. The first seven tools are often referred to as the seven basic quality tools. The following graph provides a quick overview of the seven tools.
Check Sheets. A check sheet is a simple tool frequently used for problem identification. Check sheets provide a format that enables users to record and organize data in a way that facilitates collection and analysis. This format might be one of simple checkmarks. Check sheets are designed on the basis of what the users are attempting to learn by collecting data.
One frequently used form of check sheets deals with the type of defect and the time of day each occurred.
In the graph, problems with missing labels tend to occur early in the day and smeared print tends to occur late in the day, whereas off-center labels are found throughout the day. Identifying types of defects and when they occur can help in pinpointing causes of the defects.
Another form of check sheets deals with where defects on the product are occurring.
In this case, defects seem to be occurring on the tips of the thumb and first finger, in the finger valleys (especially between the thumb and first finger) and in the center of the gloves. Again, this may help determine why the defects occur and lead to a solution.
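When defect observations are already captured electronically, the tallying a check sheet performs is easy to mimic in software. A minimal sketch in Python; the defect types and hours below are made-up illustration data:

```python
from collections import Counter

# Hypothetical defect records: (defect type, hour of day observed)
records = [
    ("missing label", 8), ("missing label", 9), ("smeared print", 16),
    ("off-center", 10), ("off-center", 14), ("smeared print", 17),
]

# Tally by defect type (the row totals of a check sheet)
by_type = Counter(defect for defect, _ in records)

# Tally by (type, hour) to see when each kind of defect tends to occur
by_type_and_hour = Counter(records)

for defect, count in by_type.most_common():
    print(f"{defect:15s} {'|' * count}  ({count})")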
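When defect observations are already captured electronically, the tallying a check sheet performs is easy to mimic in software. A minimal sketch in Python; the defect types and hours below are made-up illustration data:

```python
from collections import Counter

# Hypothetical defect records: (defect type, hour of day observed)
records = [
    ("missing label", 8), ("missing label", 9), ("smeared print", 16),
    ("off-center", 10), ("off-center", 14), ("smeared print", 17),
]

# Tally by defect type (the row totals of a check sheet)
by_type = Counter(defect for defect, _ in records)

# Tally by (type, hour) to see when each kind of defect tends to occur
by_type_and_hour = Counter(records)

for defect, count in by_type.most_common():
    print(f"{defect:15s} {'|' * count}  ({count})")
```

Grouping by hour as well as type preserves the timing information that made the missing-label and smeared-print patterns visible in the example above.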
Flowcharts. A flowchart is a visual representation of a process. As a problem-solving tool, a flowchart can help investigators in identifying possible points in a process where problems occur.
The diamond shapes in the flowchart represent decision points in the process, and the rectangular shapes represent procedures. The arrows show the direction of “flow” of the steps in the process.
To construct a simple flowchart, begin by listing the steps in a process. Then, classify each step as either a procedure or a decision (or check) point. Try not to make the flowchart too detailed, or it may be overwhelming, but be careful not to omit any key steps.
Scatter Diagrams. A scatter diagram can be useful in deciding if there is a correlation between the values of two variables. A correlation may point to a cause of a problem. The following graph shows an example of a scatter diagram.
In the graph, there is a positive (upward sloping) relationship between the humidity and the number of errors per hour. High values of humidity correspond to high numbers of errors, and vice versa.
On the other hand, a negative (downward sloping) relationship would mean that when values of one variable are low, values of the other variable are high, and vice versa.
The higher the correlation between the two variables, the less scatter in the points; the points will tend to line up. Conversely, if there were little or no relationship between the two variables, the points would be completely scattered. Here, the correlation between humidity and errors seems strong, because the points appear to cluster along an imaginary line.
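The strength of the relationship seen in a scatter diagram can be quantified with the Pearson correlation coefficient. A minimal sketch in Python; the humidity and error figures are made-up illustration data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation: covariance of x and y divided by the product
    of their standard deviations; always lies between -1 and +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical readings: relative humidity (%) vs. data-entry errors per hour
humidity = [30, 35, 40, 50, 60, 70, 80]
errors   = [2, 3, 3, 5, 6, 8, 9]
print(f"r = {pearson_r(humidity, errors):.3f}")  # near +1: strong positive
```

Values near +1 or −1 correspond to points lining up; values near 0 correspond to complete scatter.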
Histograms. A histogram can be useful in getting a sense of the distribution of observed values. Among other things, one can see if the distribution is symmetrical, what the range of values is, and if there are any unusual values.
Note the two peaks in the graph above. This suggests the possibility of two distributions with different centers. Possible causes might be two different workers or two different types of work.
Pareto Analysis. Pareto analysis is a technique for focusing attention on the most important problem areas.
The Pareto concept, named after the 19th-century Italian economist Vilfredo Pareto, is that a relatively few factors generally account for a large percentage of the total cases (e.g., complaints, defects, problems). The idea is to classify the cases according to degree of importance, and focus on resolving the most important, leaving the less important. Often referred to as the 80-20 rule, the Pareto concept states that approximately 80 percent of the problems come from 20 percent of the items.
The Pareto analysis provides a chart that shows the number of occurrences by category, arranged in order of frequency.
In the graph, the dominance of the problem with off-center labels becomes apparent. Presumably, the manager and employees would focus on trying to resolve this problem. Once they accomplished that, they would address the remaining defects in similar fashion: “smeared print” would be the next major category to be resolved, and so on.
Additional check sheets would be used to collect data to verify that the defects in these categories have been eliminated or greatly reduced. Hence, in later Pareto diagrams, categories such as “off-center” may still appear, but with far fewer occurrences.
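The sorting and cumulative-percentage arithmetic behind a Pareto chart can be sketched as follows; the defect counts are made-up illustration data:

```python
# Hypothetical defect counts gathered from check sheets
counts = {"off-center": 75, "smeared print": 20, "missing label": 10,
          "loose": 8, "other": 5}

total = sum(counts.values())
cumulative = 0
# Arrange categories in descending order of frequency, as in a Pareto chart
for category, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:15s} {count:3d}  {100 * cumulative / total:5.1f}% cumulative")
```

The cumulative column makes the 80-20 pattern visible at a glance: here the single largest category already accounts for well over half of all occurrences.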
Control Charts. A control chart can be used to monitor a process to see if the process output is random. It can help to detect the presence of correctable causes of variation.
Control charts can also indicate when a problem occurred and give insight into what caused the problem.
Cause-and-Effect Diagrams. A cause-and-effect diagram offers a structured approach to the search for the possible cause(s) of a problem. It is also known as a fishbone diagram because of its shape, or an Ishikawa diagram, after the Japanese professor who developed the approach to aid workers overwhelmed in problem solving by the number of possible sources of problems.
The tool helps to organize problem-solving efforts by identifying categories of factors that might be causing problems. This tool is often used after brainstorming sessions to organize the ideas generated. The following graphs illustrate the components of a cause-and-effect diagram and an example.
In the example, each of the factors listed is a potential source of ticket errors. Some are more likely causes than others, depending on the nature of the errors. If the cause is still not obvious at this point, additional investigation into the root cause may be necessary, involving a more in-depth analysis. Often, more detailed information can be obtained by asking who, what, where, when, why, and how questions about factors that appear to be the most likely sources of problems.
Run Charts. A run chart can be used to track the values of a variable over time. This can aid in identifying trends or other patterns that may be occurring. The following graph provides an example of a run chart showing a decreasing trend in accident frequency over time.
Important advantages of run charts are ease of construction and ease of interpretation.
Brainstorming. Brainstorming is a technique in which a group of people share thoughts and ideas on problems in a relaxed atmosphere that encourages unrestrained collective thinking. The goal is to generate a free flow of ideas on identifying problems and finding causes, solutions, and ways to implement solutions. In successful brainstorming, criticism is absent, no single member is allowed to dominate sessions, and all ideas are welcomed.
Quality Circles. A quality circle comprises a number of workers who get together periodically to discuss ways of improving products and processes. Not only are quality circles a valuable source of worker input, they also can motivate workers, if handled properly, by demonstrating management interest in worker ideas.
Quality circles are usually less structured and more informal than teams involved in continuous improvement, but in some organizations, quality circles have evolved into continuous improvement teams.
Perhaps the major distinction between quality circles and continuous improvement teams is the amount of authority to implement anything beyond minor changes; continuous improvement teams are sometimes given a great deal of authority. Consequently, continuous improvement teams have the added motivation generated by empowerment.
The team approach works best when teams reach decisions based on consensus. This may involve one or more of the following methods:
Interviewing. Internal problems may require interviewing employees; external problems may require interviewing external customers.
Ideas for improvement can come from a number of sources: research and development, customers, competitors, and employees. Customer satisfaction is the ultimate goal of improvement activities, and customers can offer many valuable suggestions about products and service processes. They are less apt to have suggestions for manufacturing processes.
Benchmarking. Benchmarking is the process of measuring an organization’s performance on a key customer requirement against the best in the industry, or against the best in any industry. Its purpose is to establish a standard against which performance is judged, and to identify a model for learning how to improve. A benchmark demonstrates the degree to which customers of other organizations are satisfied. Once a benchmark has been identified, the goal is to meet or exceed that standard through improvements in appropriate processes.
The benchmarking process usually involves these steps:
Selecting an industry leader provides insight into what competitors are doing; but competitors may be reluctant to share this information. Several organizations are responding to this difficulty by conducting benchmarking studies and providing that information to other organizations without revealing the sources of the data.
Selecting organizations that are world leaders in different industries is another alternative. For example, the Xerox Corporation uses many benchmarks: for employee involvement, Procter & Gamble; for quality process, Florida Power and Light, Toyota, and Fuji Xerox; for high-volume production, Kodak and Canon; for billing collection, American Express; for research and development, AT&T and Hewlett-Packard; for distribution, L. L. Bean and Hershey Foods; and for daily scheduling, Cummins Engine.
The 5W2H Approach. Asking questions about the current process can lead to insights about why it isn’t working as well as it could, as well as potential ways to improve it. One method, called the 5W2H approach, is summarized in the table below.