Theory of Constraints (Part 2)
As discussed in my previous blog post, a single point in the entire workflow can choke the flow of work. This point, otherwise known as the constraint, should be treated as the most important part of the whole process. In Hindi, it might be called the "nuss", or nerve centre, of the entire process.
On reading my previous post, you may think that it is the responsibility of one person (the constraint) to improve, or that that person is the reason the whole process is bogged down. It is true that a person can be the constraint, but the constraint can just as easily be a machine or a policy.
From my understanding of TOC, policies constitute 90% of all constraints. If a suggestion for improvement is made at the constraint, you might notice that the answer is, "This is not how we work here", or "This is not what the client wants", or "If anything goes wrong, I will not be able to answer to my bosses", or "This situation is different", or "The solution is not realistic". If any of these or similar answers come up, you can rest assured that a policy is restricting the efficiency of the process. These policies may be written or unwritten (such as a process never done before and hence never to be tried - it seems silly when verbalized, but it is very often the reason in our heads).
The next point: if we think that increasing the capacity of the constraint will solve all the problems, we are wrong again. It is not that the capacity will not increase - it will - but other problems will arise. The main issue is inventory and how it moves ahead through the system. Inventory can be the number of cases (in services) or widgets (in manufacturing).
Two phenomena, known as statistical fluctuations and dependent events, cause quite a lot of headache even in a balanced system (one where every process has equal capacity).
To explain what statistical fluctuations are, let us consider the example of estimating the number of bulbs required in a month to replace any bulbs that burn out in the office. Some people may count the total number of lights in the office and then estimate what percentage of bulbs gets spoilt on average in a month. But what if a voltage fluctuation fries half the bulbs in the office? What if 64.45% (a random percentage) of the bulbs had already been changed in the last 2 months, so the chance of a bulb getting spoilt is reduced? Or what if 84.76% (again, a random percentage chosen to illustrate the example) of all the bulbs were last replaced 5 years ago and have a life of around 5 years?
As the above case shows, calculating every variable for every minor item would demand a huge amount of data, and even then would only yield an intelligent "guesstimate" (guess + estimate). This variation in the number of bulbs burning out in any given month is an example of a "statistical fluctuation": the average number of bulbs burnt out per month over the last ten years may be easily calculated, but an accurate answer for the actual number in any particular month is essentially impossible.
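A quick simulation makes this concrete. The numbers below (500 bulbs, a 2% average monthly failure chance) are purely hypothetical, but they show how the monthly count swings around a stable long-run average even when every bulb behaves identically:

```python
import random

random.seed(42)

TOTAL_BULBS = 500            # hypothetical office
MONTHLY_FAILURE_RATE = 0.02  # assumed 2% average chance a bulb fails in a month

# Simulate 12 months: each bulb independently fails with the same probability,
# yet the monthly totals fluctuate noticeably around the long-run average.
monthly_failures = []
for month in range(12):
    failed = sum(1 for _ in range(TOTAL_BULBS)
                 if random.random() < MONTHLY_FAILURE_RATE)
    monthly_failures.append(failed)

print("Average failures per month:", sum(monthly_failures) / 12)
print("Month-by-month counts:", monthly_failures)
```

The average is easy to state in advance; the individual monthly counts are not. That gap between the knowable average and the unknowable month is the statistical fluctuation.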
Dependent events are events which can take place only after the events before them have occurred. Together, these two phenomena lead to huge inventories and loss of control over the process. Let us take the example of a line of 6 people rolling a die. Based on the number thrown, a batch of matchsticks moves from the first person to the last person.
One would assume that the matchsticks passed on at the end of the line would average 3.5 per round, i.e. [(1+2+3+4+5+6)/6]. Hence, at the end of 10 rounds, the number of matchsticks should be around 35. But this is not so. Let us check the scenario below for the first round.
When the first person rolls a 4, 4 matchsticks are passed on to the second person. When the second person rolls a 6, only 4 matchsticks are passed on to the next person, as the second person has no more matchsticks. When the third person rolls a 2, only 2 matchsticks are passed on, with 2 remaining with the third person as "inventory". By the time the last (sixth) person rolls the die, he will have only 1 or 2 matchsticks to pass on to the end.
After a few rounds of this (with truly random numbers from 1 to 6 rolled by each person), two things happen:
- Inventory builds up all over the system, and
- The output comes in waves (there is no uniformity in output)
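The matchstick game described above is easy to simulate. The sketch below assumes an unlimited supply feeding the first person and lets matchsticks flow down the line within a round, just as in the first-round walkthrough:

```python
import random

random.seed(7)

STATIONS = 6   # people in the line
ROUNDS = 10    # die throws per person

inventory = [0] * STATIONS   # matchsticks waiting in front of each station
output_per_round = []        # what the last person delivers each round

for _ in range(ROUNDS):
    for i in range(STATIONS):
        roll = random.randint(1, 6)
        if i == 0:
            moved = roll                     # first station never starves
        else:
            moved = min(roll, inventory[i])  # can pass only what it holds
            inventory[i] -= moved
        if i + 1 < STATIONS:
            inventory[i + 1] += moved        # hand off to the next station
        else:
            output_per_round.append(moved)   # last station's output

print("Output each round:", output_per_round)
print("Total output after", ROUNDS, "rounds:", sum(output_per_round))
print("Expected if averages held:", 3.5 * ROUNDS)
print("Work-in-progress stuck in the line:", sum(inventory))
```

Run it a few times with different seeds and the same pattern appears: the total output tends to fall short of 35, the per-round output comes in waves, and matchsticks pile up as inventory between the stations.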
Allocating a buffer at every point does not help much either, as the system soon slips back into chaos with no control over it.
To prevent such issues, identifying the constraint is critical. We need to pinpoint it and use it to control the entire system. How to do this will be explained in future blog posts.