Archive for category: Lean Math™

Holiday Lean Math

As some of you may recall, I launched a new blog called Lean Math back in February with a couple of my buddies. In my humble opinion, the ever-growing content is pretty useful stuff for lean practitioners.

In any event, I just wanted to share some basic holiday lean math…




I am painfully aware that while the equation is simple, successful and sustained (mathematical) execution has eluded humans for a long, long, long time.

Here’s to the sincere hope that you and yours may have a happy and blessed holiday season.


Related post: New Blog Launch – Lean Math!

New Blog Launch – Lean Math!

In the fall of 2009, I launched Gemba Tales™ in anticipation of the Kaizen Event Fieldbook. Truthfully, it was something that I was told authors do – “You need to have a blog to promote your book.”

Well, sort of.

Blogs, in my opinion, should emanate less from a marketing imperative and more from a sense of sharing and community. That’s a whole lot more fulfilling.

So, with like mind, I would like to announce a new entrant into the lean blogosphere: Lean Math™.

I know what you’re thinking, “Lean Math?!” Now, that’s a subject that evokes passion in the heart of every lean practitioner…right?

But the truth is that effective lean transformations require some level of math, whether it’s the often deceptively simple takt time calculation, kanban sizing, process capability calculation, or anything in between. It’s hard to get away from math.
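As a quick illustration of that “deceptively simple” takt time calculation, here is a minimal sketch. The formula (available work time divided by customer demand) is standard; the shift figures below are illustrative assumptions, not numbers from this post.

```python
def takt_time(available_time_min: float, demand_units: float) -> float:
    """Takt time (minutes per unit) = available work time / customer demand."""
    return available_time_min / demand_units

# Assumed example: 450 available minutes per shift (480 minus breaks),
# customer demand of 225 units per shift
print(takt_time(450, 225))  # 2.0 minutes per unit
```

The “deceptive” part is rarely the division itself; it is deciding what counts as available time and what counts as true demand, as discussed elsewhere in this archive.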

There is no such thing as math-free lean and certainly not math-free six sigma!

Lean Math™ is not intended to be some purely academic study and it does not pretend to be part of the heart and soul of lean principles. (Can you say niche?) Rather, it’s a tool and a construct for thinking. Here we want to integrate lean math theories and examples with experimentation and application.

Some background. Within the next year, the Society of Manufacturing Engineers will be publishing a book, tentatively entitled Lean Math. I started this thing a LONG time ago, just ask SME! And I’m not going it alone this time; Michael O’Connor, Ph.D. (a.k.a. Dr. Mike) is co-authoring this work. We’re also getting a ton (!) of help from Larry Loucka, friend, colleague, and fellow blogger at Lean Sigma Supply Chain.

No surprise, we’re the three folks who are launching the Lean Math Blog. The formal launch date is February 14th – because we LOVE math! Ok, love may be a bit strong. We really LIKE math.

Here are some of our first blog posts:

  • Time
  • Cycle Time
  • Square Root Law
  • Available Time

I even made an introductory video for the new site. First video ever. And it’s about math…!?! Scary.

The categories or topics that we’ll ultimately address with future posts include the following. Go here if you want to see the detail.

  • Systems
  • Time
  • “Ilities”
  • Work
  • Inventory
  • Metrics
  • Basic Math
  • Measurement

Yes, there’s a lot of ground to cover. That’s why the book draft is so stinking big!

Please check out the site and subscribe to RSS or email to catch future posts. If you’re so inclined, make a comment and start a conversation and/or share the posts with other folks through social media (we’ve got the buttons). Also, please like us on Facebook (Lean Math Blog) and follow us on Twitter (@LeanMath) and on our LinkedIn company page (Lean Math Blog).

Admittedly, we’re just getting started, but we will continue to add new content in a variety of categories. Through our own application of PDCA, we’ll endeavor to improve the site and increase the value to our readers.

Ultimately, we hope that you will join our fledgling Lean Math™ community and that it lives up to our blog tag line, “Figuring to improve.”

Related posts: Does Your Cycle Time Have a Weight Problem?, Musings About FIFO Lane Sizing “Math”, Guest Post: “Magical Thinking”


Calculating Average Daily Demand, Not a No-Brainer

Lean is largely about satisfying customer requirements. That’s nearly impossible if the lean practitioner does not understand demand. In fact, misunderstand average daily demand (ADD) and the impact can be significant – inaccurate takt times, improper demand segmentation, poorly sized kanban, incorrect reorder points, etc.

Calculating average daily demand can be deceptively complex. There are a handful of things to consider.

  • SKU and part number versus product family. Kanban is applied at the SKU and part number level, so ADD must be calculated at that level as well. When calculating takt time, ADD is often, but not always, determined at the product family level or at least the group of products or services that are produced or delivered within a given line, cell or team.
  • True demand. Do not blindly accept what was sold, produced, processed, purchased, or issued as true historical demand. Often this demand is: 1) capped by internal constraints, whether capacity or execution related, leaving unmet demand (that may or may not be fulfilled by competitors or may become backordered), or 2) artificially inflated due to overproduction, purchasing of excess stock, etc. If the barriers to constrained demand will be addressed in the near future, then include both historical met and unmet demand. In the area of overproduction or over-purchasing, identify the real demand and use it.
  • Historical versus forecasted demand. If forecasted demand differs from historical demand and the lean practitioner has faith in the forecast accuracy, then the forecast should be used to determine ADD (with historical demand most likely used to determine demand variation). Otherwise, use historical demand.
  • Abnormal historical demand. Historical demand, whether considered for the purpose of determining ADD and/or demand variation, may very well contain abnormal data. If it is significant and there is a reasonable probability that something of that nature and magnitude will not recur in the future (e.g., a one-time order or marketing promotion), then it may be prudent to exclude that data from the analysis.
  • Demand horizon. Demand is rarely constant over extended periods of time. Narrowing the demand horizon will increase the risk of missing seasonality, cyclicality and/or other significant variation. This is important for the calculation of both ADD and demand variation. The historical horizon should often span 12 to 36 months, with the forecasted future horizon 3 to 18-plus months. Statistically speaking, the practitioner needs roughly 25 data points to make valid calculations.
  • Demand time buckets. Clearly, the size of demand time buckets does not impact the purely mathematical calculation of ADD. However, the use of daily or weekly demand time buckets, as opposed to monthly or quarterly, does provide the necessary insight to visually identify abnormal demand, inflection points for seasonal demand changes, etc. Furthermore, smaller buckets are required for calculating statistically valid demand variation (really, the coefficient of variation (CV)).
  • Number of operating days. “Average daily” presumes a denominator in days. The number of days must correspond to the number of operating days for the resource that is satisfying the demand. For kanban we have to remember that the resource is the “owner” of the supermarket.
  • Operating days without activity. Demand analysis will sometimes reveal SKUs or parts that have days (or even weeks) without any demand. This, by its nature, typically is indicative of relatively high demand variation. Depending upon the situation, the lean practitioner, when sizing kanban, may consciously want to include the zeros within the calculations or not (or not use kanban at all). For example, excluding zeros will drive a higher ADD and a lower CV versus including zeros and calculating a lower ADD and a higher CV. The excluded-zero approach will more likely ensure that the kanban can meet the spiky demand, but at a price: more inventory.
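The include-zeros versus exclude-zeros trade-off above can be made concrete with a small sketch. The ten-day demand history below is an illustrative assumption, not data from the post; ADD is a simple mean and CV is the sample standard deviation divided by the mean.

```python
import statistics

def demand_stats(daily_demand: list[float], include_zeros: bool = True):
    """Return (ADD, CV) for a daily demand history.

    ADD = mean of the demand data; CV = sample stdev / ADD.
    """
    data = daily_demand if include_zeros else [d for d in daily_demand if d > 0]
    add = statistics.mean(data)
    cv = statistics.stdev(data) / add
    return add, cv

# Assumed 10-day history with two zero-demand operating days
history = [12, 0, 15, 10, 0, 20, 14, 11, 18, 10]

add_with, cv_with = demand_stats(history, include_zeros=True)
add_without, cv_without = demand_stats(history, include_zeros=False)

# Excluding zeros drives a higher ADD and a lower CV, as described above
print(f"with zeros:    ADD={add_with:.2f}, CV={cv_with:.2f}")
print(f"without zeros: ADD={add_without:.2f}, CV={cv_without:.2f}")
```

Running this shows ADD of 11.0 with zeros versus 13.75 without, with a correspondingly lower CV when zeros are excluded.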

Any thoughts or war stories?

Related posts: Does Your Cycle Time Have a Weight Problem?, Musings About FIFO Lane Sizing “Math”


Does Your Cycle Time Have a Weight Problem?

Understanding a process’ cycle time is extremely important, especially in the context of takt time. In a mixed model environment, cycle time can be a bit less straightforward. That’s where weighted averages may make sense.

Weighted average cycle time, also known as “average weighted cycle time,” provides a representative average cycle time. Varied models or services in a given cell, line or work area often have varied work contents due to different steps, duration of steps, sequence of steps, etc. Accordingly, the cycle times vary.

Weighted average cycle times can be calculated for operator cycle times, machine cycle times and effective machine cycle times. Often weighted average cycle times are presumed to be operator related, but this is not always the case.

As we endeavor to maintain a cycle time that is less than or, at most, equal to takt time, mixed models and their varying work content will likely have cycle times for some products or services that are below takt time, while others exceed takt time. The weighted average cycle time serves as an average proxy for cycle time and is often the same as the planned cycle time.

Clearly, change in product or service mix will change the weighted average cycle time. As the demand mix shifts to one with a greater proportion of cycle time(s) that exceed the average, then the weighted average cycle time will approach and may exceed takt time. The lean practitioner must be aware of these dynamics and should proactively address the situation through reducing work content, optimizing balance between operators, adding additional operator(s) or lines, strategically applying/sizing FIFO lanes, etc.

The weighted average cycle time formula: weighted average CT = Σ (model cycle time × model demand) ÷ total demand.
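Here is a minimal sketch of the demand-weighted average cycle time calculation. The three-model mix, cycle times, and demand quantities are hypothetical numbers for illustration only.

```python
def weighted_average_cycle_time(models: dict[str, tuple[float, float]]) -> float:
    """Weighted average CT = sum(CT_i * demand_i) / sum(demand_i).

    `models` maps model name -> (cycle time, demand quantity).
    """
    total_work = sum(ct * qty for ct, qty in models.values())
    total_demand = sum(qty for _, qty in models.values())
    return total_work / total_demand

# Hypothetical mix: cycle time in seconds, daily demand in units
mix = {"A": (50.0, 100), "B": (65.0, 60), "C": (40.0, 40)}
print(weighted_average_cycle_time(mix))  # 52.5 seconds
```

Note how model B exceeds the 52.5-second average while A and C fall at or below it; if the mix shifts toward B, the weighted average climbs toward (and possibly past) takt time, which is exactly the dynamic the post warns about.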

Related post: Musings About FIFO Lane Sizing “Math”


Labor Density – When Dense is Good

Labor density is not a measurement that is thrown around very often, at least explicitly. Conceptually however, it must be resident somewhere in the lean thinker’s headset. Hey, it was important enough for Taiichi Ohno to discuss!

Labor density is a measure of value-add intensity relative to total worker motion. The measurement provides insight into the extent that a worker’s motion transforms the materials or information (or in the instance of health care – helps the health or comfort of the patient) into something that is valued by the customer. Ideally, the labor density should be 100%.

The waste of motion, both physical (searching, twisting, bending, etc.) and virtual (searching within a database, moving from computer screen to screen), consumes time and resources, but does not add value. While total work content is not necessarily limited to only motion, labor density can help highlight wasted motion, whether it is an act of omission (motion that substitutes for real value-added work, like “apparent” work instead of properly securing the required three fasteners) or plain old waste of motion.

The math:

  • Labor density = value-added work / total worker motion
  • Example: if value-added work is 32 seconds and total worker motion is 40 seconds per cycle, then labor density = 32 ÷ 40 = 80%
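The example above can be sketched in a couple of lines. The 32-second and 40-second figures come from the post; interpreting the time units as seconds per cycle is my reading.

```python
def labor_density(value_added_work_sec: float, total_motion_sec: float) -> float:
    """Labor density = value-added work time / total worker motion time, per cycle."""
    return value_added_work_sec / total_motion_sec

# The post's example: 32 seconds of value-added work within 40 seconds of motion
print(f"{labor_density(32, 40):.0%}")  # 80%
```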

Admittedly, the labor density measurement is not very sexy at all, but it should challenge us to more rigorously observe, identify and eliminate waste!

Related post: Musings About FIFO Lane Sizing “Math”


Lean Metric: Waste Elimination Effectiveness

It happened about 15 years ago, but I remember it very clearly. My sensei, never one to mince words, shared his thoughts on the performance of the four teams. He grabbed a flipchart and scratched out a formula – one that I now call “waste elimination effectiveness.”

The W.E.E. = identified waste × acknowledged waste × eliminated waste. It’s cumulative, like rolled throughput yield (e.g., 80% × 60% × 65% ≈ 31%). A low percentage in any one factor is NOT good; low percentages in multiple factors spell disaster.
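The rolled-throughput-yield behavior is easy to demonstrate. This sketch just multiplies the three factors from the example above; the function name is mine, not the sensei’s.

```python
def waste_elimination_effectiveness(identified: float, acknowledged: float,
                                    eliminated: float) -> float:
    """W.E.E. compounds like rolled throughput yield: the product of all three factors."""
    return identified * acknowledged * eliminated

# The post's example: 80% identified x 60% acknowledged x 65% eliminated
wee = waste_elimination_effectiveness(0.80, 0.60, 0.65)
print(f"{wee:.0%}")  # 31%
```

Because the factors multiply, even respectable individual scores compound into a sobering overall number, which is the sensei’s point.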

Some teams fared a lot better than others in the sensei’s semi-quantitative assessment. I don’t remember the scores. Not really important. What is important are the underlying principles and perspective. Here are some of my humble W.E.E. reflections.

The great Hiroyuki Hirano calls the practice of identifying waste “wastology.” Pretty cool term. In my estimation, it’s about 85% technical skill and 15% behavioral. In other words, with study, hard work, the right tools/techniques, and a lot of practice, you can learn how to identify waste. In order to drive the W.E.E.’s waste identification number up, you also have to apply sufficient rigor and stamina.

Now, you can teach a person to identify waste, but you can’t MAKE them acknowledge it (kind of like that horse and water thing). The willingness to acknowledge waste is primarily behavioral. I put this at a 10% technical and 90% behavioral “skill mix.” A retributive culture and/or a lack of humility will minimize acknowledgment. Of course, lazy folk know that if they don’t acknowledge the waste, then they won’t be obligated to try to eliminate it (“Waste? What waste?”).

…And even if people acknowledge the waste, you can’t MAKE them eliminate it.  Some just don’t have the killer instinct. I see elimination as a 50%/50% split between technical and behavioral. A lack of bias for action or aggressiveness will limit waste elimination. Similarly, from a technical perspective, if the kaizener does not apply adequate countermeasures, and apply them against the real root cause(s), they’re just spinning their wheels.

So, generating a high waste elimination effectiveness level is not easy…but, pretty much anything worth accomplishing isn’t easy.

Related posts: Kaizen Principle: Bias for Action, Time Observations – 10 Common Mistakes, The Truth Will Set You Free!


Musings About FIFO Lane Sizing “Math”

First in, first out (FIFO) lanes are the core of sequential pull. When properly sized, constructed and managed they ensure process and conveyance sequence, provide a buffer to facilitate flow during upstream changeovers, chronic failures, etc., and guard against overproduction. FIFO lanes, among other things, must reflect a maximum level of inventory – number of parts or pieces or total work content (minutes, hours, or days). Without enforced maximum levels the upstream process may produce more or faster than the downstream process can routinely consume.

So, how do you size your FIFO lane? There are different levels of math that can be thrown at it. Often folks apply some pretty rudimentary thinking, especially initially if they’re in the midst of value stream analysis. Generically, the equation is:

FIFO lane max = desired lead time/takt time (TT)

Of course, then you have to get into the definition of desired lead time. In a perfect world it would be zero, but very few value streams are perfect. In fact, the reason we typically use a FIFO lane is that we cannot connect the upstream and downstream process via continuous flow (or supermarket pull, for that matter). So, there obviously are barriers to continuous flow (and pull) – like those pesky changeovers, cycle time mismatches between upstream and downstream, process instability, shelf-life considerations, cure times, shared processes, etc. We must always try to eliminate the barriers, but in the meantime, we often need to live with sequential pull.

…Anyway, back to desired lead time. Below are a handful of possible equations that can be applied. Admittedly, they are not failsafe, but they do prompt some necessary thinking. Like kanban sizing math (often much more complicated), these are principle-based and should be tested out and adjusted as necessary first through table-top simulations and again after real-life piloting and forever, really. You can definitely get carried away calculating factors of safety, applying standard deviation driven coefficients to address variation and the like. I’ll leave that for another time. For now, here are a handful of equations that may be helpful.

If we’re talking cure time, for example:

  • FIFO lane max = (cure time/TT) X factor of safety (i.e., to address cure time variation and/or upstream stability issues)

If the issue is shelf life, it can be:

  • FIFO lane max = (shelf life/TT) – factor of safety (it makes sense to have margin here)

If the upstream operation has significant set-up time and thus there is a risk that it may “starve” the downstream, then the calculation may look something like:

  • FIFO lane max = (Upstream internal set-up time/TT) X factor of safety

The same type of thinking can be applied if the upstream process is shared (i.e., supplying other value streams). Here we may need insight into the “every part every interval” and translate it into an every line every interval (ELEI…just made that one up) thing. The equation may then be:

  • FIFO lane max = (ELEI/downstream TT) X factor of safety

If the upstream operation has substantial and chronic failures (i.e., unplanned downtime), and frankly this issue is probably implicit within most factors of safety referenced above, then you may want to consider something like:

  • FIFO lane max = (average upstream unplanned downtime event/TT) X factor of safety (to address unplanned downtime duration variation and/or time between unplanned downtime events)

Within a mixed model value stream, sometimes the cycle time (CT) of the downstream process is greater than the upstream CT for some models. (Of course, the average weighted CT of the downstream process is less than or equal to the average weighted CT of the upstream process.) In that situation, the math may look something like:

  • FIFO lane max = ((longest downstream CT – TT) X batch volume for longest CT item)/TT
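The equations above share a common shape: a buffer time divided by takt time, adjusted by a factor of safety. This sketch collects a few of them into functions; all of the numeric inputs in the usage example are hypothetical, and the rounding choices (up for sizing buffers, down for shelf life) are my assumptions, consistent with the “margin” direction noted in each bullet.

```python
import math

def fifo_max(buffer_time: float, takt_time: float, safety_factor: float = 1.0) -> int:
    """Generic form: (buffer time / TT) x factor of safety, rounded up to whole pieces.

    buffer_time may be cure time, upstream internal set-up time, ELEI, or the
    average unplanned-downtime event duration (same time units as takt time).
    """
    return math.ceil((buffer_time / takt_time) * safety_factor)

def fifo_max_shelf_life(shelf_life: float, takt_time: float, safety_margin: int) -> int:
    """Shelf-life variant subtracts a margin (pieces) rather than multiplying."""
    return math.floor(shelf_life / takt_time) - safety_margin

def fifo_max_mixed_model(longest_downstream_ct: float, takt_time: float,
                         batch_volume: float) -> int:
    """Mixed-model variant: ((longest downstream CT - TT) x batch volume) / TT."""
    return math.ceil((longest_downstream_ct - takt_time) * batch_volume / takt_time)

# Hypothetical inputs, all in minutes where time-based:
print(fifo_max(45, 2, 1.5))             # 45-min set-up, 2-min takt, 1.5 safety -> 34
print(fifo_max_shelf_life(480, 2, 20))  # 480-min shelf life, 20-piece margin  -> 220
print(fifo_max_mixed_model(2.5, 2, 40)) # 2.5-min longest CT, 40-piece batch   -> 10
```

As the post stresses, these are principle-based starting points: test them in table-top simulation, adjust after piloting, and keep adjusting.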

I am sure there is other (and better) math out there. Please share your expertise here!

Of course, lean practitioners aren’t only concerned about the maximum levels. When we exceed maximum levels, we definitely have an abnormal condition that requires real-time response. But what about when the FIFO lane has dwindled – when do we signal an abnormal condition? Obviously, when the FIFO lane is empty; but that’s a bit late. This is where we can, for example, use the factor of safety (divided by TT) to help calculate the “red zone.” And there are other conventions that can be used. For another time…
