I've been reading the TapRooT Blog lately, as it covers topics related to root cause analysis quite frequently. It's well-written, and while I don't agree with everything I see there, it does make me think. Anyway, the latest article asks, and tries to answer, a question on a topic that has engendered many spirited "discussions" on the Root Cause Live forum -- "Are categories a hindrance to good root cause analysis?" (link)
This is a good question. The author, Mark Paradies (developer of TapRooT), says in the article that:
As humans, we develop mental models of how things should work (a red light means stop or a computer mouse should operate in a certain way). Thus without language and categories, communication, thought, and life become very difficult.
By the way, in my experience most investigators who use a "non-categorical" system and brag about their thinking not being constrained by categories actually have a SMALLER number of categories that they "practically" choose from when investigating and finding root causes than investigators who use TapRooT® (a system that some may call categorical).
And being an "experienced" investigator doesn't necessarily mean your number of categories is bigger. Some people just become more convinced that their favorite category (or categories) are really the root cause (or root causes) of all the world's problems.
There are valid points here. It is possible to become so enamoured of your favourite types of causes that you start to see them everywhere. Furthermore, this type of tunnel vision can blind you to other considerations that might invalidate your findings. However, there are other issues to consider. Here's the text of a comment I left at the TapRooT Blog site:
I believe the real issue is not the use of categories, per se. In my opinion, the real issue is the potential for blind use of a system, where the logic of a pre-defined cause tree becomes a substitute for investigation. I'm sure you train people not to do this with TapRooT. Nevertheless, the fact remains that people do this to save time. I have seen it.
Another important issue is the potential for important causes to lie outside the solution space contained within the pre-defined tree. Most of the systems I have seen do not have a category for "conflicting administrative program requirements," for example. Most systems would lump that under "program definition LTA", totally missing that the problem lies at the intersection of different programs.
Having said all that, I do recognize the value of pre-defined cause trees (and other category-based systems) in ensuring adequate depth/breadth of analysis, after a "manual" investigation has been completed. I would advocate the use of such a system as an organization performance model, for instance... but not as a "map to root causes."
I have written about the use of models in root cause analysis before (link). I still stand by what I said previously:
In the end, we are left with models upon models upon models... each with their own rules and assumptions, strengths and weaknesses. As stated previously, models are useful because they help us abstract away unimportant data so we can increase our focus on useful information. This is the strength of using models; unfortunately, it is also the main weakness. If models are used without knowledge of their assumptions and limitations, we could end up discounting potentially important facts and misdirecting our investigations.
So, I say "go out and use a pre-defined cause tree"... but first, use it as a tool to ensure breadth and depth of analysis after you've completed a "manual" investigation. Second, make sure you understand the bases of the models embedded in the tool, especially their scope and range of applicability. Otherwise, you'll be flying with your eyes on a potentially inaccurate map instead of on what you're actually doing. As aircraft pilots say, "... whatever else you're doing, fly the plane."
by Bill Wilson