Posts Tagged takt

Calculating Average Daily Demand, Not a No-Brainer

Lean is largely about satisfying customer requirements. That’s nearly impossible if the lean practitioner does not understand demand. In fact, misunderstand average daily demand (ADD) and the impact can be significant: inaccurate takt times, improper demand segmentation, poorly sized kanban, incorrect reorder points, etc.

Calculating average daily demand can be deceptively complex. There are a handful of things to consider.

  • SKU and part number versus product family. Kanban is applied at the SKU and part number level, so ADD must be calculated at that level as well. When calculating takt time, ADD is often, but not always, determined at the product family level or at least the group of products or services that are produced or delivered within a given line, cell or team.
  • True demand. Do not blindly accept what was sold, produced, processed, purchased, or issued as true historical demand. Often this demand is: 1) capped by internal constraints, whether capacity or execution related, leaving unmet demand (that may or may not be fulfilled by competitors or may become backordered), or 2) artificially inflated due to overproduction, purchasing of excess stock, etc. If the barriers to constrained demand will be addressed in the near future, then include both historical met and unmet demand. In the area of overproduction or over-purchasing, identify the real demand and use it.
  • Historical versus forecasted demand. If forecasted demand differs from historical demand and the lean practitioner has faith in the forecast accuracy, then the forecast should be used to determine ADD (with historical demand most likely used to determine demand variation). Otherwise, use historical demand.
  • Abnormal historical demand. Historical demand, whether considered for the purpose of determining ADD and/or demand variation, may very well contain abnormal data. If the abnormality is significant and there is a reasonable probability that something of that nature and magnitude will not recur (e.g., a one-time order or marketing promotion), then it may be prudent to exclude that data from the analysis.
  • Demand horizon. Demand is rarely constant over extended periods of time. Narrowing the demand horizon will increase the risk of missing seasonality, cyclicality and/or other significant variation. This is important for the calculation of both ADD and demand variation. The historical horizon often should be as much as 12 to 36 months, with the forecasted future horizon 3 to 18 plus months. Statistically speaking, the practitioner needs approximately 25 data points to make valid calculations.
  • Demand time buckets. Clearly, the size of demand time buckets does not impact the purely mathematical calculation of ADD. However, the use of daily or weekly demand time buckets, as opposed to monthly or quarterly, does provide the necessary insight to visually identify abnormal demand, inflection points for seasonal demand changes, etc. Furthermore, smaller buckets are required for calculating statistically valid demand variation (really, the coefficient of variation (CV)).
  • Number of operating days. “Average daily” presumes a denominator in days. The number of days must correspond to the number of operating days for the resource that is satisfying the demand. For kanban we have to remember that the resource is the “owner” of the supermarket.
  • Operating days without activity. Demand analysis will sometimes reveal SKUs or parts that have days (or even weeks) without any demand. This, by its nature, typically is indicative of relatively high demand variation. Depending upon the situation, the lean practitioner, when sizing kanban, may consciously choose to include the zeros within the calculations or not (or not use kanban at all). For example, excluding zeros will drive a higher ADD and a lower CV versus including zeros, which yields a lower ADD and a higher CV. The excluded-zero approach will more likely ensure that the kanban can meet the spiky demand, but at a price: more inventory.
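The include-versus-exclude-zeros trade-off above can be made concrete with a few lines of code. This is only a sketch; the daily demand figures are hypothetical.

```python
# Compare ADD and CV with zero-demand days included versus excluded.
# The daily demand figures below are hypothetical.
import statistics

daily_demand = [12, 0, 9, 15, 0, 11, 14, 0, 10, 13]  # one bucket per operating day

def add_and_cv(demand):
    """Return (average daily demand, coefficient of variation)."""
    add = statistics.mean(demand)
    cv = statistics.stdev(demand) / add  # CV = standard deviation / mean
    return add, cv

add_incl, cv_incl = add_and_cv(daily_demand)                        # zeros included
add_excl, cv_excl = add_and_cv([d for d in daily_demand if d > 0])  # zeros excluded

print(f"Including zeros: ADD = {add_incl:.1f}, CV = {cv_incl:.2f}")
print(f"Excluding zeros: ADD = {add_excl:.1f}, CV = {cv_excl:.2f}")
```

With these figures, excluding the zeros drives ADD from 8.4 up to 12.0 while CV drops from roughly 0.72 to 0.18, which is exactly the trade-off described above.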

Any thoughts or war stories?

Related posts: Does Your Cycle Time Have a Weight Problem?, Musings About FIFO Lane Sizing “Math”


Does Your Cycle Time Have a Weight Problem?

Understanding a process’ cycle time is extremely important, especially in the context of takt time. In a mixed model environment, cycle time can be a bit less straightforward. That’s where weighted averages may make sense.

Weighted average cycle time, also known as “average weighted cycle time,” provides a representative average cycle time. Varied models or services in a given cell, line or work area often have varied work contents due to different steps, duration of steps, sequence of steps, etc. Accordingly, the cycle times vary.

Weighted average cycle times can be calculated for operator cycle times, machine cycle times and effective machine cycle times. Often weighted average cycle times are presumed to be operator related, but this is not always the case.

As we endeavor to maintain a cycle time that is less than or, at most, equal to takt time, mixed models and their varying work content will likely have cycle times for some products or services that are below takt time, while others exceed it. The weighted average cycle time serves as a proxy for cycle time and is often the same as the planned cycle time.

Clearly, a change in product or service mix will change the weighted average cycle time. As the demand mix shifts toward models whose cycle times exceed the average, the weighted average cycle time will approach and may exceed takt time. The lean practitioner must be aware of these dynamics and should proactively address the situation through reducing work content, optimizing balance between operators, adding additional operator(s) or lines, strategically applying/sizing FIFO lanes, etc.

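The weighted average cycle time is simply the volume-weighted mean: the sum of each model’s cycle time multiplied by its volume, divided by total volume. A minimal sketch, with hypothetical model names, cycle times and volumes:

```python
# Weighted average cycle time = sum(CT_i x volume_i) / sum(volume_i).
# Model names, cycle times and volumes below are hypothetical.
cycle_times = {"Model A": 50, "Model B": 65, "Model C": 80}    # seconds per unit
daily_volume = {"Model A": 200, "Model B": 120, "Model C": 80}  # units per day

total_work = sum(cycle_times[m] * daily_volume[m] for m in cycle_times)
total_units = sum(daily_volume.values())
weighted_avg_ct = total_work / total_units

print(f"Weighted average cycle time: {weighted_avg_ct:.1f} seconds")
```

Here the mix averages out to 60.5 seconds, even though Model C’s 80-second cycle time may well exceed takt on its own.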

Related post: Musings About FIFO Lane Sizing “Math”


Guest Post: Beyond Toast Kaizen – Lean Breakfast Concepts, Circa 1937

I was in Boston this weekend with my wife and we were told the best place for breakfast was Paramount’s. As we waited in line to order food, I noticed their sign told us to “Please Order and Pay before being seated”. They claimed not saving a table “ensures all customers will have a table when needed” and, although “it may seem hard to believe, it’s been working well since 1937”. Like much in lean, this seemed counterintuitive. I decided to do a few time observations while we waited in line. Fortunately, my wife puts up with my curiosity.

Customers came out of the breakfast line and cashier every 90 seconds. So, customers needed a table every 90 seconds (Takt Time). I watched several tables that were filled before we sat down, and the time to eat was about 18 minutes. (This is not the type of place where you bring the paper and the server keeps refilling your coffee.)

If customers were sitting down at a table every 90 seconds and it takes 18 minutes to eat, the restaurant would need 12 tables to balance the seating capacity with customer requirements (Cycle Time/Takt Time = 1,080 seconds/90 seconds). The restaurant has 14 tables. So, the overall system Cycle Time (think “drop off rate”) was less than Takt Time. I convinced myself, and my wife, why their seating policy worked.
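The table math above fits in a couple of lines:

```python
# Tables needed = cycle time / takt time, using the figures observed at the restaurant.
takt_time_s = 90       # a customer clears the cashier (and needs a table) every 90 seconds
eat_time_s = 18 * 60   # ~18 minutes at the table; the "cycle time" of one table

tables_needed = eat_time_s / takt_time_s
print(tables_needed)  # 12.0 -> the restaurant's 14 tables leave a small buffer
```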

I am confident that Paramount’s system works and that now…and in the future, we will not have to save a table. One should always be available (assuming no substantial change in Takt Time). I wonder if when they started in 1937 they fully understood why it worked. Oh well, perhaps all that really matters is that their breakfast is outstanding and customers keep returning.

John Rizzo authored this blog post. He is a fellow Lean Six Sigma implementation consultant and friend of Mark Hamel. John also enjoys a good breakfast!


Musings About FIFO Lane Sizing “Math”

First in, first out (FIFO) lanes are the core of sequential pull. When properly sized, constructed and managed, they ensure process and conveyance sequence, provide a buffer to facilitate flow during upstream changeovers, chronic failures, etc., and guard against overproduction. FIFO lanes, among other things, must reflect a maximum level of inventory – number of parts or pieces or total work content (minutes, hours, or days). Without enforced maximum levels, the upstream process may produce more or faster than the downstream process can routinely consume.

So, how do you size your FIFO lane? There are different levels of math that can be thrown at it. Often folks apply some pretty rudimentary thinking, especially initially if they’re in the midst of value stream analysis. Generically, the equation is:

FIFO lane max = desired lead time/takt time (TT)

Of course, then you have to get into the definition of desired lead time. In a perfect world it would be zero, but very few value streams are perfect. In fact, the reason we typically use a FIFO lane is that we cannot connect the upstream and downstream process via continuous flow (or supermarket pull, for that matter). So, there obviously are barriers to continuous flow (and pull) – like those pesky changeovers, cycle time mismatches between upstream and downstream, process instability, shelf-life considerations, cure times, shared processes, etc. We must always try to eliminate the barriers, but in the meantime, we often need to live with sequential pull.

…Anyway, back to desired lead time. Below are a handful of possible equations that can be applied. Admittedly, they are not failsafe, but they do prompt some necessary thinking. Like kanban sizing math (often much more complicated), these are principle-based and should be tested out and adjusted as necessary first through table-top simulations and again after real-life piloting and forever, really. You can definitely get carried away calculating factors of safety, applying standard deviation driven coefficients to address variation and the like. I’ll leave that for another time. For now, here are a handful of equations that may be helpful.

If we’re talking cure time, for example:

  • FIFO lane max = (cure time/TT) X factor of safety (i.e., to address cure time variation and/or upstream stability issues)

If the issue is shelf life, it can be:

  • FIFO lane max = (shelf life/TT) – factor of safety (it makes sense to have margin here)

If the upstream operation has significant set-up time and thus there is a risk that it may “starve” the downstream, then the calculation may look something like:

  • FIFO lane max = (Upstream internal set-up time/TT) X factor of safety

The same type of thinking can be applied if the upstream process is shared (i.e., supplying other value streams). Here we may need insight into the “every part every interval” and translate it into an every line every interval (ELEI…just made that one up) thing. The equation may then be:

  • FIFO lane max = (ELEI/downstream TT) X factor of safety

If the upstream operation has substantial and chronic failures (i.e., unplanned downtime), and frankly this issue is probably implicit within most factors of safety referenced above, then you may want to consider something like:

  • FIFO lane max = (average upstream unplanned downtime event/TT) X factor of safety (to address unplanned downtime duration variation and/or time between unplanned downtime events)

Within a mixed model value stream, sometimes the cycle time (CT) of the downstream process is greater than the upstream CT for some models. (Of course, the average weighted CT of the downstream process is less than or equal to the average weighted CT of the upstream process.) In that situation, the math may look something like:

  • FIFO lane max = ((longest downstream CT – TT) X batch volume for longest CT item)/TT
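A few of the equations above can be sketched in code. This is only illustrative; the takt time, buffer times, factors of safety and rounding conventions are all hypothetical.

```python
# Sketch of FIFO lane sizing for three of the cases above; all inputs are hypothetical.
import math

takt = 60  # seconds per piece

def fifo_max(buffer_time_s, factor_of_safety=1.0):
    """Generic form: (buffer time / takt) x factor of safety, rounded up to whole pieces."""
    return math.ceil(buffer_time_s / takt * factor_of_safety)

# Cure time case: multiply by a factor of safety to cover cure-time variation.
cure_lane = fifo_max(buffer_time_s=45 * 60, factor_of_safety=1.25)   # 45 min cure

# Upstream set-up case: cover the internal set-up time so downstream isn't starved.
setup_lane = fifo_max(buffer_time_s=20 * 60, factor_of_safety=1.25)  # 20 min set-up

# Shelf-life case: the factor of safety is SUBTRACTED to leave margin before expiry.
shelf_life_lane = math.floor(4 * 3600 / takt) - 20  # 4 h shelf life, 20-piece margin

print(cure_lane, setup_lane, shelf_life_lane)
```

Note the sign flip in the shelf-life case: everywhere else the factor of safety makes the lane bigger, but against a shelf-life ceiling it makes the lane smaller.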

I am sure there is other (and better) math out there. Please share your expertise here!

Of course, lean practitioners aren’t only concerned about the maximum levels. When we exceed maximum levels, we definitely have an abnormal condition that requires real time response. But what about when the FIFO lane has dwindled, when do we signal an abnormal condition? Obviously, when the FIFO lane is empty; but that’s a bit late. This is where we can, for example, use the factor of safety (divided by TT) to help calculate the “red zone.” And there are other conventions that can be used. For another time…


Plan Vs. Actual – The Swiss Army Knife of Charts

Imagine that you were only allowed one chart (or board) at the gemba. What would you pick? What is the Swiss Army knife (I’m more of a Leatherman Multitool fan myself) of charts that gives you insight into process adherence and process performance?

For me, it’s the plan vs. actual chart, also known as the production analysis board (or chart), day-by-the-hour chart, etc. It is typically a paper chart (my preference) or dry erase board that is positioned at the pacemaker process. It’s refreshingly low-tech and reflects, at a minimum:

  • the line, cell or team name,
  • output requirements (number of picks, assemblies, invoices, etc.) for the day or shift,
  • the related takt time,
  • the planned hourly (or smaller time increment) and cumulative outputs for the day or shift,
  • the actual hourly and cumulative outputs (or, in some practices, the cumulative deficit or surplus), and
  • fields to record the problem or reason for any hourly plan vs. actual deltas, as well as a sign-off by lean leader(s) as proof of review.

So, why is the plan vs. actual so powerful? Here are five reasons.

  1. Communicates customer requirements. The chart reflects the demand by product type, quantity, timing and sequence. It reflects a takt image.
  2. Forces the matching of cycle time to takt time. Standard work should dictate the requisite staffing (and related cycle time, work sequence and standard WIP) to satisfy the customer requirements.
  3. Engages the employee and drives problem-solving. Like any visual control worth its salt, the plan vs. actual is worker-managed in a relatively real-time way. It highlights abnormal conditions (hourly and/or cumulative shortfalls or overproduction) and drives self-correction or at least notification/escalation and containment. The plan vs. actual also spurs PDCA in that the worker is required to identify the root cause of the abnormal condition and ultimately points the worker, team and leadership to effective countermeasures.
  4. Focuses lean leaders within the context of leader standard work. A good plan vs. actual will have fields for team leader/supervisor sign-offs on the hour and managers twice daily. This is essentially proof of the execution of leader standard work in which the leader should ensure that the plan vs. actual is maintained real-time, is complete (i.e., no unexplained abnormalities), and that countermeasures are being employed in order to effectively satisfy customer requirements.
  5. Focuses associates and lean leaders within the context of the daily accountability process. The prior day’s plan vs. actual and trended performance (including pitch logs) should be reviewed within daily tiered meetings. These meetings help drive the identification of improvement opportunities and countermeasures at the individual, team and value stream level.
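The planned column of such a chart falls directly out of takt time. A minimal sketch, with hypothetical shift parameters:

```python
# Derive the planned hourly and cumulative outputs for a plan vs. actual chart.
# Shift length and daily requirement below are hypothetical.
shift_minutes = 450        # 7.5 hours of available time
daily_requirement = 300    # units required for the shift

takt_minutes = shift_minutes / daily_requirement  # 1.5 minutes per unit
hourly_plan = 60 / takt_minutes                   # planned output per hour

# Planned cumulative output, hour by hour (first 7 full hours of the shift).
cumulative = [round(hourly_plan * h) for h in range(1, 8)]
print(f"Takt: {takt_minutes} min; plan/hour: {hourly_plan:.0f}; cumulative: {cumulative}")
```

Actual hourly output is then recorded next to these planned figures, and any delta triggers the problem/reason field and the escalation described above.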

So, what’s your Swiss Army Knife chart and why?

Related posts: Leader Standard Work Should Be…Work!, Leader Standard Work – You can pay me now, or you can pay me later
