Effective visual controls are, among other things, self-explaining. What does that mean? It means that someone with no inside knowledge of a process should be able to quickly understand the “system” without human assistance. This understanding should extend to the purpose of the system, the operating rules and the owner. From that, the casual observer should be able to easily discern a normal versus abnormal condition. The non-casual observer should be able to do the same and then start thinking about identifying root causes and implementing countermeasures.
Many people think I’m crazy when I suggest that they create a “visual of the visual.” Sounds redundant, right? Sounds like muda. However, how many times have you been in a plant, office, lab, clinic, etc. and wondered what the heck that thing is, how it works, and/or whether it is working? That “thing” could be a heijunka box, a kanban batch board, a TPM autonomous maintenance board, a document aging bin system…fill in the blank.
A simple test at the gemba can often reveal just how un-self-explaining systems can be. Simply ask employees to explain the system. Often, they can’t. So much for engaged workers; so much for sustainability. If the system is not self-explaining, then it certainly can’t be, like all good visual systems, worker-managed.
One more point. The very task of creating a visual of the visual requires the creator(s) to think through the operating rules (essentially the standard work) and how best to articulate them so that others understand. The same goes for defining the purpose and for selecting and identifying the process owner.
Does this make sense?