I love John Maxwell’s definition of a system: “A system is a process for predictably achieving a goal based on specific, orderly, repeatable principles and practices.” I would make one small change and say that a system “is a process or set of processes,” which better conveys the possible complexity of a system. At a high level, a process involves inputs, resources, interrelated procedures, decisions, and outputs. Depending on the process, inputs can be physical objects such as raw materials and components, or more abstract things such as data and information. Ideally, inputs are well defined and clear. Inputs initiate some action to be taken. These actions involve tasks, practices, or procedures that require resources such as people, time, and machines, where decisions are made that analyze the inputs and transform them into something new. The goal of all this effort is to produce an outcome, and that outcome or output may in turn become an input to a new process. Imagine a factory that brings in raw materials and parts from all over the world, where people and machines assemble a vehicle; that vehicle eventually becomes an input into the sales process. An outcome could also be a decision (e.g., “yes, you qualify for the loan” or “no, you were denied the benefit”). In a perfect system, the outputs would always come out as designed, perfect in every way. But, as we all know, that doesn’t happen.
An aspect that is often forgotten is that all systems have error. There is variation in the inputs, in the resources, and in how procedures are carried out, all of which contributes to error in the output when compared to some perfect standard. I know of no system that is devoid of all variation and therefore perfect in every respect. So, if there is variation in the output, how do we know it’s still “good”? The output is usually compared to a standard and can be tested by checking or measuring certain characteristics and features. Because some difference is expected, an acceptable tolerance is assigned. However, even the test and measurement methods have error, so sometimes an object that is actually good is judged bad, and vice versa. An output that is a decision, such as yes/no, is judged relative to the truly correct decision, and the same kinds of mistakes can happen (it was a “no” when it should have been a “yes”) because of the overwhelming amount of information and data that must be analyzed. The information, the data, and the method of analysis are all subject to error of some kind.
When it comes to decisions, there can be Type I or Type II errors. In general terms, a Type I error (false positive) occurs when something is judged to be not good, rejected, deemed too different, or changed in some significant way compared to a standard or truth when, in fact, it hasn’t. A Type II error (false negative) occurs when there has been a failure to detect something truly wrong or different. Depending on the circumstances, steps can be taken to reduce the probability of either error, but for a given test and a fixed amount of information, decreasing the probability of one type increases the probability of the other. I will try to illustrate this trade-off with three examples where Type I and Type II errors occur:
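The trade-off can be seen with a tiny sketch. Purely as a hypothetical illustration, suppose “good” and “bad” items produce scores drawn from two overlapping normal distributions, and anything scoring above a cutoff is rejected; the distributions and cutoff values here are invented only to make the effect visible:

```python
from statistics import NormalDist

# Hypothetical score distributions: good items score around 0,
# bad items around 2, both with the same spread, so they overlap.
good = NormalDist(mu=0.0, sigma=1.0)
bad = NormalDist(mu=2.0, sigma=1.0)

def error_rates(cutoff):
    """Reject anything scoring above the cutoff."""
    type_i = 1.0 - good.cdf(cutoff)   # good item rejected (false positive)
    type_ii = bad.cdf(cutoff)         # bad item passed (false negative)
    return type_i, type_ii

for cutoff in (0.5, 1.0, 1.5):
    t1, t2 = error_rates(cutoff)
    print(f"cutoff={cutoff:.1f}  Type I={t1:.3f}  Type II={t2:.3f}")
```

Raising the cutoff lowers the Type I rate and raises the Type II rate; lowering it does the opposite. Because the two distributions overlap, no cutoff drives both error rates to zero.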
Pharmaceutical Drug Testing

When a pharmaceutical company is researching a new drug for some condition, a lot of testing is required to ensure its efficacy. If a false positive (Type I error) occurs, the research suggests that the drug has the desired effect when it truly doesn’t. If the drug has known harsh side effects, and possibly some that are unknown, this is an undesirable outcome: a significant amount of time and money will be spent marketing and preparing the drug for release, and users will experience the negative side effects with no real benefit to their condition. It is best to reduce this type of error. But when the probability of a Type I error is reduced, the probability of a Type II error is increased. For this scenario, it is better to fail to recognize a desired effect that really exists than to incorrectly conclude that the drug works when it doesn’t. The cost of negatively affecting people’s lives with no benefit for their condition is greater than the missed opportunity of a drug that works.
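In hypothesis-testing terms, the drug scenario is the classic α/β trade-off. The sketch below uses entirely hypothetical numbers (a one-sided z-test, effect size 0.3, 50 patients) to show that tightening the allowed Type I rate α for a fixed trial raises the Type II rate β:

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist()  # standard normal

def type_ii_rate(alpha, effect=0.3, sigma=1.0, n=50):
    """Type II rate of a one-sided z-test when the drug truly works.

    All parameters are hypothetical: `effect` is the true benefit,
    `sigma` the patient-to-patient spread, `n` the trial size.
    """
    z_crit = z.inv_cdf(1.0 - alpha)    # threshold for declaring an effect
    shift = effect * sqrt(n) / sigma   # how far the true mean sits from H0
    return z.cdf(z_crit - shift)       # probability of missing a real effect

for alpha in (0.10, 0.05, 0.01):
    print(f"alpha={alpha:.2f}  Type II rate={type_ii_rate(alpha):.3f}")
```

Note that the only way to lower both error rates at once is to change the system itself, for example by enrolling more patients (larger n), rather than just moving the threshold.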
Manufacturing

In a manufacturing environment, it is important to identify parts that vary too far from the standard and are therefore deemed defective. Oftentimes, testing occurs with the intention of rejecting bad parts and passing acceptable ones; ideally, this would ensure that no defective parts make it to the customer. If a false positive (Type I error) occurs, a good part is rejected as though it had an unacceptable level of variation and potential defects when it was actually non-defective. When a false negative (Type II error) occurs, a part is passed as though it had an acceptable level of variation or no defects when, in fact, the part was unacceptable. A Type II error is the undesirable outcome here because a defective part could be passed along to the customer, which would be much more costly. When the probability of a Type II error is reduced, the probability of a Type I error is increased. Again, it is better to absorb the cost of some falsely rejected parts than to pass defective parts on to the customer.
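A quick simulation makes this trade-off concrete. The dimensions, tolerance, and gauge error below are all invented for illustration: each part has a true dimension, the gauge adds its own measurement error, and inspection can be done at the spec limit or at a tighter “guard band” cutoff inside it:

```python
import random

random.seed(1)      # reproducible sketch
SPEC = 10.2         # hypothetical upper tolerance: parts above this are truly bad
N = 200_000

# Pre-generate true part dimensions and independent gauge errors.
parts = [random.gauss(10.0, 0.1) for _ in range(N)]
gauge = [random.gauss(0.0, 0.05) for _ in range(N)]

def inspect(cutoff):
    """Pass a part if its *measured* dimension is at or below the cutoff."""
    type_i = type_ii = 0
    for true_dim, err in zip(parts, gauge):
        passed = (true_dim + err) <= cutoff
        if true_dim <= SPEC and not passed:
            type_i += 1      # good part rejected
        elif true_dim > SPEC and passed:
            type_ii += 1     # bad part shipped to the customer
    return type_i, type_ii

for cutoff in (SPEC, SPEC - 0.05):  # second cutoff is a tighter guard band
    t1, t2 = inspect(cutoff)
    print(f"cutoff={cutoff:.2f}  good rejected={t1}  bad passed={t2}")
```

Moving the cutoff inside the spec limit ships fewer defective parts (fewer Type II errors) at the cost of scrapping more good ones (more Type I errors), which is exactly the bias argued for above.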
Criminal Justice System
Sometimes, a truly innocent person is arrested and found guilty. This is a Type I error. When a truly guilty person is found “not guilty,” it is a Type II error. (Notice that innocence is not proven; rather, there isn’t enough evidence to find the person guilty.) The American system is purposely designed to avoid Type I errors. Finding an innocent person guilty not only sends the wrong person to prison; the guilty person also remains free. While a Type II error also sets a guilty person free and is not desirable, at least an innocent person doesn’t go to jail.
In all cases, we want the system to be perfect. It rubs us the wrong way that we must accept these mistakes. And it’s really annoying that we have to think about which mistake is better and then design the system to be biased in that direction. The best we can hope for is to minimize the variation and improve the process so that decisions are more accurate. But, that’s not as easy as it sounds. There are numerous factors or inputs (information, data, measurements, etc.) that must be considered, appropriately weighted, and analyzed by people and software using processes that hopefully provide an output to aid in a decision. There are going to be mistakes. Accept it.
Think about people in our society who receive financial assistance or free health care. There is a system in place to decide who gets it and who doesn’t. Some people who truly don’t qualify get it. Other people who truly qualify and need it won’t get it. Remember: when you reduce the probability of one type of error, you increase the probability of the other. Where do you stand? Yes, it is better to have a more accurate system. But until that happens, you must make a choice. Given that there is variation in the system and inherent error in the decisions, would you (a) rather see fewer unqualified people get the assistance, while also reducing the number of qualified people who get it, or (b) accept that some unqualified people will get the assistance, while making sure the maximum number of qualified people receive it?
This question applies to any system that is subject to these types of errors. I’m not here to answer the question for you. However, I do want you to think about the consequences when you desire the error to be biased in one direction or the other.