The Principles of Product Development Flow

SUMMARY: 

Top 3 things I learned:

  1. If you run everything “efficiently” at full capacity, then you have no bandwidth for innovation or flexibility.  This is something I learned from Theresa King at Verizon.  She always refused work even when there was capacity left over so that she could respond if needed.  At first I did not understand this concept since it seems counterintuitive, but (up to a point) the less we did, the more we accomplished.
  2. Optimal learning happens when you fail half the time.  This comes from R.L. Hartley’s formula (see the sketch below).
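FORMULA (a sketch, assuming the reference is to information content in the Hartley/Shannon sense): with failure probability p, the expected information gained per trial is the binary entropy

  H(p) = -p·log2(p) - (1-p)·log2(1-p)

which peaks at p = 0.5 (1 bit per trial), so experiments that fail about half the time teach us the most.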

NOTES: Chapter 1: The Principles of Flow

IDEA: The Problem

  • When our change fails to produce benefits, we revert to old ways
  • Risk aversion will drive innovation out of our development process

CONCEPT: Problems with the Current Orthodoxy

  • Failure to correctly quantify economics: avoid voodoo economics and understand your costs
  • Blindness to Queues: Not visible like boxes in a warehouse
  • Worship of efficiency: need spare bandwidth
  • Hostility to Variability: kills innovation
  • Worship of conformance: live in an uncertain world
  • Institutionalization of large batch sizes: illusion of efficiencies
  • Underutilization of Cadence: lowers transaction cost
  • Managing timelines instead of queues: to emphasize flow, focus on queues rather than timelines
  • Absence of WIP constraints: best way to manage queues
  • Inflexibility: high capacity leaves you inflexible
  • Noneconomic flow control: an economic frame resolves scheduling dilemmas
  • Centralized Control: values efficiency over response time

CONCEPT: Major themes in the book (these are chapters)

  • Economics: key is to think the problem through
  • Queues: cause problems
  • Variability: reduce variability and the cost of variability
  • Batch Size: reduce batch size, single most effective way to reduce queues
  • WIP Constraints: reduce cycle time
  • Cadence, Synchronization, and Flow Control: Regular intervals, start/stop times, and sequence
  • Fast Feedback: make better choices
  • Decentralized Control: decentralize for faster response

NOTES: Chapter 2: The Economic View

CONCEPT: The Economic Principles

  1. Quantified Overall Economics.  Have a good economic framework so that tradeoffs are easy to make
  2. Interconnected Variables.  You can’t change just one thing.  Here are the common inter-related variables
    1. Cycle Time
    2. Product Cost
    3. Development Expense
    4. Product Value
    5. Risk
  3. Quantified Cost of Delay (COD).  Most important to quantify
  4. Value-added
  5. Inactivity principle: watch the work product not the worker.  Inventory is the biggest waste
  6. U-Curve principle: important trade-offs have a U curve
  7. Imperfection: imperfect answers improve decision making
  8. Small decisions: influence the many small decisions; we tend to focus only on the big ones
  9. Continuous economic trade-offs
  10. First perishability principle: many economic choices are more valuable when made quickly
  11. Subdivision principle: inside every bad choice lies a good choice
  12. Early harvesting: harvest the early cheap opportunities
  13. First decision rule: use decision rules to decentralize economic control
  14. First market: decision makers must feel both cost and benefit
  15. Optimum decision timing: every decision has optimum economic timing
  16. Marginal economics: always compare marginal cost and marginal value
  17. Sunk cost: do not consider money already spent
  18. Buying information: the value of information is its expected economic value
  19. Insurance principle: don’t pay for more insurance than the expected loss
  20. Newsboy principle: a high probability of failure does not equal bad economics when payoff asymmetries are large
  21. Show me the money: to influence financial decisions, speak the language of money

CHART: Economic Batch Size: We do not need to find the perfect optimum to capture most of the value because it is a U-curve

  • X axis: Batch Size
  • Y axis: Cost
  • Line: 45 diagonal: Holding Cost
  • Line: Inverse Log: Transaction Cost
  • U-Curve: Total Cost
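A minimal numeric sketch of this U-curve; the cost coefficients are invented purely for illustration:

```python
# Economic batch size: holding cost grows with batch size, while the fixed
# transaction cost per batch is spread over more items as batches grow.
HOLDING_COST_PER_ITEM = 2.0          # hypothetical
TRANSACTION_COST_PER_BATCH = 200.0   # hypothetical

def total_cost(batch_size):
    holding = HOLDING_COST_PER_ITEM * batch_size
    transaction = TRANSACTION_COST_PER_BATCH / batch_size
    return holding + transaction

for b in (2, 5, 10, 20, 50):
    print(f"batch size {b:2d} -> total cost {total_cost(b):6.1f}")
# The minimum is at batch size 10 (cost 40); batch sizes of 5 or 20 cost 50,
# only about 25% more, which is why a roughly right batch size is good enough.
```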

CHART: Profit vs. Marginal Profit: Profit is maximized where total value exceeds total cost by the largest amount.  This is easier to see where the marginal value equals (crosses) the marginal cost

  • X axis: Feature Performance
  • Y axis: Dollars
  • Line: Log: Total Value
  • Line: Inverse Log: Total Cost
  • Line: Inverse Log: Marginal Value
  • Line: 45 diag: marginal cost

CHART: Sequencing Risk Reduction.  Sequence first those activities that remove the most risk for the least expense

  • X axis: Expense of Risk Reduction
  • Y axis: Value of Risk Reduction
  • Line: 75 degree: Sequence Early
  • Line: 30 degree: Sequence Late

CHART: Economics of Parallel Paths.  The incremental benefit of adding parallel paths progressively decreases

  • X axis: Number of parallel paths
  • Y axis: Dollars
  • Line: U Curve: Total Cost
  • Line: Inverse Log: Cost of Failure
  • Line: 45 degree Development Cost

NOTES: Chapter 3: Managing Queues

NOTATION: M/M/1/∞ where

  • M = Arrival Process: work arrives
  • M = Service Process: time to accomplish work
  • 1 = Number of parallel servers
  • ∞ = Upper limit of queue size (unbounded here)

CONCEPT: Queue Principles

  1. Invisible Inventory: product development inventory is physically and financially invisible
  2. Queuing Waste: queues are responsible for most of the economic waste
  3. Queuing Capacity Utilization: Capacity utilization increases queues exponentially
  4. High-Queue States: Most of the damage done by a queue is caused by high-queue states
  5. Queue Variability: Variability increases queues linearly
  6. Variability Amplification: Operating at high levels of capacity utilization increases variability
  7. Queueing Structure: Serve pooled demand with reliable high-capacity servers
  8. Linked Queues: Adjacent queues see arrival or service variability depending on loading
  9. Queue Size Optimization: Optimum queue size is an economic trade-off
  10. Queueing Discipline: Queue cost is affected by the sequence in which we handle the jobs in the queue
  11. Use Cumulative Flow diagrams to monitor queues
  12. Little’s formula: Wait Time = Queue Size/Processing Rate
  13. Queue Size Principle: Don’t control capacity utilization, control queue size
  14. Second Queue Size Control principle: Don’t control cycle time, control queue size
  15. Diffusion Principle: Over time, queues will randomly spin seriously out of control and will remain in this state for long periods
  16. Intervention: We cannot rely on randomness to correct a random queue

TERM: WIP: Work In Process

TERM: DIP: Design in Process

CONCEPT: Queues create:

  • Longer cycle time
  • Increased Risk
  • More Variability
  • More Overhead
  • Lower Quality
  • Less Motivation

CHART: Queue size vs Capacity Utilization: Queue size increases rapidly with capacity utilization
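FORMULA (the standard M/M/1 result this chart presumably plots): with capacity utilization ρ = arrival rate / service rate, the expected number of items waiting is

  Lq = ρ² / (1 - ρ)

so raising utilization from 80% to 95% grows the average queue from about 3 items to about 18.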

CHART: Queue Size with Different Coefficients of Variation: Reducing variability has much less effect on queue size than lowering capacity utilization

CHART: Queues Amplify Variability: As capacity utilization increases, the process becomes increasingly unstable

CONCEPT: Theoretical Optimum Capacity: When the cost of capacity rises, the optimal amount of capacity decreases

CONCEPT: Little’s Formula: Wait Time = Queue Size / Processing Rate
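A trivial worked example of Little’s formula; the queue size and processing rate below are hypothetical:

```python
# Little's formula: average wait time = average queue size / average processing rate.
queue_size = 30        # items currently waiting (hypothetical)
processing_rate = 5    # items completed per week (hypothetical)

wait_time = queue_size / processing_rate
print(f"Average wait: {wait_time} weeks")   # -> 6.0 weeks
# Queue size can be observed today, while cycle time is only known after the fact,
# which is one reason to control queue size rather than cycle time.
```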


NOTES: Chapter 4: Exploiting Variability

CONCEPT: Principles of Variability

  1. Variability can increase value
  2. Asymmetric payoffs enable variability to create economic value
  3. Variability should neither be minimized nor maximized
  4. Optimum failure rate is 50%
  5. Overall variation decreases when uncorrelated random tasks are combined
  6. Forecasting becomes exponentially better in short time frames
  7. Many small experiments produce less variation than one big one: betting 4 quarters on 1 flip vs. 1 quarter on each of 4 flips (see the simulation after this list)
  8. Repetition reduces variation
  9. Reuse reduces variability
  10. We can reduce variability by applying a counterbalancing effort (sailboat)
  11. Buffers trade money for variability reduction
  12. Reducing consequences is the best way to reduce the cost of variability (broken thread in a weave)
  13. Operate in the linear range of system performance (sailboat tipping)
  14. Substitute cheap variability for expensive variability
  15. Better to improve iteration speed than defect rate
  16. Move variability to the process stage where the cost is lowest (airplanes slowing down en route vs. circling the airport)
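A quick simulation of the coin-flip example in principle 7, assuming fair, even-money bets (purely illustrative):

```python
# Compare betting 4 quarters on one coin flip vs. 1 quarter on each of 4 flips.
# Both have the same expected value (zero); the pooled bets have lower variance.
import random

def one_big_bet():
    return 4 if random.random() < 0.5 else -4     # win or lose 4 quarters at once

def four_small_bets():
    return sum(1 if random.random() < 0.5 else -1 for _ in range(4))

def mean_and_variance(trials, bet):
    outcomes = [bet() for _ in range(trials)]
    mean = sum(outcomes) / trials
    variance = sum((x - mean) ** 2 for x in outcomes) / trials
    return mean, variance

random.seed(0)
print("one big bet    :", mean_and_variance(100_000, one_big_bet))
print("four small bets:", mean_and_variance(100_000, four_small_bets))
# Expected variances: 16 for the single large bet vs. 4 for the four small bets.
```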

TERM: jidoka: stopping a process from making waste


NOTES: Chapter 5: Reducing Batch Size

QUOTE: Don’t test the water with both feet – Charles de Gaulle

CONCEPT: Batch Size Principles

  1. Reducing batch size reduces cycle time: smaller batches mean smaller queues, and by Little’s formula smaller queues mean shorter cycle times
  2. Reducing batch size reduces variability in the flow
  3. Reducing batch size accelerates feedback
  4. Reducing batch size reduces risk — which is why IP uses small packets
  5. Reducing batch size reduces overhead — if I do an activity once I am poor at it, after 10 times I get better, and after 1,000 times I look for ways to make it dramatically better
  6. Large batches reduce efficiency — might be more efficient for one engineer, but comes at the expense of destroying important feedback loops and lowering overall efficiency
  7. Large batches inherently lower motivation and urgency
  8. Large batches cause exponential cost and schedule growth
  9. Large batches lead to even larger batches — golden project syndrome
  10. Least common denominator: the entire batch is limited by its worst element
  11. Economic batch size is a U-curve optimization
  12. Reducing transaction cost per batch lowers overall cost
  13. Batch Size diseconomies: Batch size reduction saves much more than you think
  14. Batch size packing: Small batches allow finer tuning of capacity utilization
  15. Fluidity: loose coupling between product sub-systems enables small batches–mock objects
  16. The most important batch is the transport batch
  17. Proximity enables small batch sizes — hallway conversations vs. VTCs
  18. Short run lengths reduce queues.
  19. Good infrastructure enables small batches — test automation
  20. Sequence first that which adds value most cheaply
  21. Reduce batch size before you attack bottlenecks — boy scout march example
  22. Adjust batch size dynamically to respond to changing economics

CHART: Slippage Effects: Slippage in projects rises exponentially with duration

FORMULA: Optimum batch size is a function of holding cost and transaction cost
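FORMULA (sketch; this is the classic economic order quantity form that the trade-off comes from, using the usual textbook symbols rather than the book’s): the optimum batch size is

  B* = sqrt(2 × D × S / H)

where D is the demand rate, S the fixed transaction cost per batch, and H the holding cost per item per unit time.  Because the total-cost curve is a U, being off from B* by a factor of 2 raises total cost by only about 25%.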

TERM: heijunka: production leveling (mixing work types) done in production planning


NOTES: Chapter 6: Applying WIP Constraints

CONCEPT: WIP Control

  1. Constrain WIP to control cycle time and flow
  2. WIP constraints force rate-matching
  3. Use global constraints for predictable and permanent bottlenecks – Theory of Constraints by Eli Goldratt in The Goal
  4. If possible constrain local WIP pools – Feedback from kanban is quicker than that of a TOC system
  5. Use WIP ranges to decouple the batch sizes of adjacent processes
  6. Block all demand when WIP reaches its upper limit — like the telephone busy signal (see the sketch after this list)
  7. When WIP is high, purge low-value projects
  8. Control WIP by shedding requirements
  9. Quickly apply extra resources to an emerging queue
  10. Use part-time resources for high variability tasks
  11. Pull high-powered experts to emerging bottlenecks — keep big guns idle until needed
  12. Develop people who are deep in one area and broad in many
  13. Cross-train resources at adjacent processes
  14. Use upstream mix changes to regulate queue size
  15. Watch the outliers
  16. Create a preplanned escalation process for outliers — increase priority for aged items
  17. Increase throttling as you approach the queue limit
  18. Differentiate quality of service by workstream
  19. Adjust WIP constraints as capacity changes
  20. Prevent uncontrolled expansion of work
  21. Constrain WIP in the section of the system where the queue is the most expensive
  22. Small WIP reductions accumulate
  23. Make WIP continuously visible
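A minimal sketch of the WIP-limit idea in principles 1 and 6; the class name, jobs, and limit below are hypothetical:

```python
# A toy WIP-constrained work queue: new work is rejected ("busy signal")
# once work-in-process reaches the configured upper limit.
class WipLimitedQueue:
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.in_process = []

    def try_start(self, item):
        """Admit the item only if WIP is below the limit; otherwise block it."""
        if len(self.in_process) >= self.wip_limit:
            return False          # the "telephone busy signal"
        self.in_process.append(item)
        return True

    def finish(self, item):
        self.in_process.remove(item)

queue = WipLimitedQueue(wip_limit=3)
for job in ["A", "B", "C", "D"]:
    print(job, "started" if queue.try_start(job) else "blocked")
# A, B, and C start; D is blocked until something finishes and frees capacity.
```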

NOTES: Chapter 7: Controlling Flow Under Uncertainty

QUOTE: Anyone can be captain in a calm sea

CHART: Highway throughput is a parabolic function of speed.  It becomes very low at both low and high speeds.  Note: when the density is too high, flow is inherently unstable since the feedback cycle is regenerative.  In contrast, operating to the right of the throughput peak promotes stability because of a negative feedback effect.

CONCEPT: Controlling the flow under uncertainty

  1. When loading becomes too high, we will see a sudden and catastrophic drop in output
  2. Control occupancy to sustain high throughput in systems prone to congestion.  Lights on I-66 controlling cars in.
  3. Use forecasts of expected flow time to make congestion visible. Disney says you have a 30 min wait from here, not a queue size.
  4. Use pricing to reduce demand during congested periods.  Off-season hotels.
  5. Use a regular cadence to limit the accumulation of variance.  Make sure buses start on time.
  6. Provide sufficient capacity margin to enable cadence.  We can only resynchronize to a regular cadence if we have sufficient capacity margin
  7. Use cadence to make waiting times predictable.  Good ideas have to wait a predictable amount of time.
  8. Use a regular cadence to enable small batch sizes.
  9. Schedule frequent meetings using a predictable cadence.
  10. To enable synchronization, provide sufficient capacity margin.  An international flight must allow enough margin for delayed connecting flights.
  11. Exploit economies of scale by synchronizing work from multiple projects.
  12. Use synchronized events to facilitate cross functional trade-offs.  Grab all reviewers and put them in the same room.
  13. To reduce queues, synchronize the batch size and timing of adjacent processes.
  14. Make nested cadences harmonic multiples.  Nest reporting in monthly, quarterly, and yearly cycles.
  15. When delay costs are equal, do the shortest job first.
  16. When job durations are equal, do the highest cost of delay first.
  17. When both job duration and cost of delay are not homogeneous, use WSJF
  18. Priorities are inherently local
  19. When task duration is unknown, time-share capacity
  20. Only preempt when switching costs are low
  21. Use sequence to match jobs to appropriate resources, e.g., jobs requiring a specialty.
  22. Select and tailor the sequence of subprocesses to the task at hand. Only visit nodes that add economic value.
  23. Route work based on the current most economic route.
  24. Develop and maintain alternate routes around points of congestion.  Trickle.
  25. Use flexible resources to absorb variation.  T-Shaped people.
  26. The later we bind demand to resources, the smoother the flow.
  27. Make tasks and resources reciprocally visible at adjacent processes.
  28. For fast responses, preplan and invest in flexibility.
  29. Correctly managed, centralized resources can reduce queues.  Concentrate your fire on the nearest star destroyer
  30. Reduce variability before a bottleneck.  Condition the flow just before the bottleneck

TERM: Cadence.  Use of a regular, predictable rhythm within a process

TERM: Synchronization.  Synchronization causes multiple events to happen at the same time

CONCEPT: FIFO queues work well for similar task durations and similar costs of delay

CONCEPT: WSJF: Weighted Shortest Job First.  WSJF = COD/Duration
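A small worked example of WSJF with invented jobs, showing why sequencing by COD/duration beats first-in-first-out on total delay cost:

```python
# Weighted Shortest Job First: sequence jobs by cost of delay divided by duration.
# Jobs are hypothetical: (name, cost of delay per week, duration in weeks).
jobs = [("A", 1, 10), ("B", 10, 1), ("C", 5, 2)]

def total_delay_cost(sequence):
    """Each job accrues its cost of delay for every week until it completes."""
    elapsed, cost = 0, 0
    for _, cod, duration in sequence:
        elapsed += duration
        cost += cod * elapsed
    return cost

wsjf = sorted(jobs, key=lambda job: job[1] / job[2], reverse=True)
print("WSJF order:", [j[0] for j in wsjf], "delay cost:", total_delay_cost(wsjf))
print("FIFO order:", [j[0] for j in jobs], "delay cost:", total_delay_cost(jobs))
# WSJF (B, C, A) costs 38 with these numbers; FIFO (A, B, C) costs 185.
```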


NOTES: Chapter 8: Using Fast Feedback

QUOTE: A little rudder early is better than a lot of rudder late

CONCEPT: Manufacturing payoff-function vs. Product Development Payoff-function

  • Manufacturing has an inverted U-shaped performance/payoff curve, which means that large variances create large losses.
  • Product Development: fast feedback reduces the loss from bad outcomes and enables exploitation of good outcomes, altering this curve (worked example below).
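EXAMPLE (hypothetical numbers): suppose an idea costs $100k to develop fully, succeeds 25% of the time, and pays $500k when it works.  Without feedback the expected value is 0.25 × $500k - $100k = $25k.  With a $10k early test that kills failures before full development, it is 0.25 × ($500k - $100k) - $10k = $90k, which is how fast feedback truncates losses while preserving the upside.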

CONCEPT: Fast Feedback

  1. Focus control on project and process parameters with the highest economic influence – common sense
  2. Control parameters that are both influential and efficient – again common sense
  3. Select control variables that predict future system behavior — early interventions
  4. Set tripwires at points of equal economic impact — don’t ignore the small parameters, because if they go haywire they can still have a big impact
  5. Know when to pursue a dynamic goal — our original goal is based on noisy assumptions
  6. Exploit unplanned economic opportunities — e.g., the missed opportunity of not adding an MP3 jack to the car
  7. Fast feedback enables smaller queues — use buffers, but don’t overdo them
  8. Use fast feedback to make learning faster and more efficient – sailing two nearly identical ships to see the difference
  9. What gets measured may not get done — just because you have a metric doesn’t mean it will help you
  10. We don’t need long planning horizons when we have a short turning radius — don’t be the B-1
  11. Small batches yield fast feedback
  12. To detect a smaller signal reduce the noise
  13. Control the economic logic behind a decision, not the entire decision — e.g., reducing 1 lb of weight is worth $300 of increased unit cost
  14. Whenever possible make feedback local — the local WIP limits of kanban vs. the global WIP limit of TOC
  15. Have a clear, predetermined, economically justified relief valve
  16. Embed fast control loops inside slow loops
  17. Keep deviations within the control range — if the testing queue gets too big, ask for more testers
  18. To minimize queues provide advance notice of heavy arrival rates
  19. Colocation improves almost all aspects of communication
  20. Fast feedback gives a sense of control
  21. Large queues make it hard to create urgency
  22. The human element tends to amplify large excursions – homeostasis
  23. To align behaviors, reward people for the work of others
  24. Time counts more than money

CONCEPT: Metrics for flow-based product development

  • Queues
    • Design In-process Inventory
    • Queue Size
    • Trends in Queue Size
    • Cost of Queues
    • Aging of items in Queue
  • Batch Sizes
    • Batch size
    • Trends in batch size
    • Transaction cost per batch
    • Trends in Transaction cost
  • Cadence
    • Process using cadence
    • Trends in Cadence
  • Capacity Utilization
    • Capacity utilization rate
  • Feedback
    • Feedback Speed
    • Decision Cycle time
    • Aging of Problems
  • Flexibility
    • Breadth of skill sets
    • Number of Multipurpose resources
    • Number of processes with alternative routes
  • Flow
    • Efficiency of flow
    • DIP Turns

NOTES: Chapter 9: Achieving Decentralized Control

SUMMARY: This chapter attempts to summarize Jez Humble’s book.  It starts with a list of terms and definitions.  The next sub-part is on continuous delivery architecture, then post-deployment validation, and finally CD.

CONCEPT: Configuration is under revision control to ensure repeatability.

CONCEPT: Core pieces of CD:

  • Continuous Integration
  • Scripted environments
  • Scripted deployments
  • Evolutionary database design
  • Test automation
  • Deployment pipeline
  • Orchestrator

TERM: Continuous Delivery: Automation (as much as possible) of configuration, deployment, and testing

TERM: Continuous Integration: A process that monitors the source code management system and triggers a build.

TERM: Scripted environment: Scripts are created to configure everything from the O/S to the container.

TERM: Scripted Deployment: Process of scripting the deployments across all environments.

TERM: Evolutionary database design: Managing database changes to ensure that schema changes won’t ever break your application.

TERM: Deployment pipeline: Defines how new code is integrated into the system and deployed.

TERM: Orchestrator: Tool that coordinates all of the automation.

CONCEPT: One common script that uses variables for everything.
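A minimal sketch of the “one common script” idea; every variable name, default, and step below is hypothetical, not from the book:

```python
# One deployment script used for every environment; only the variables change.
import os

# Environment-specific values come from configuration (here, environment
# variables), never from hand-edited copies of the script.
config = {
    "environment": os.environ.get("DEPLOY_ENV", "dev"),
    "app_version": os.environ.get("APP_VERSION", "0.0.1"),
    "db_host":     os.environ.get("DB_HOST", "localhost"),
    "replicas":    int(os.environ.get("REPLICAS", "1")),
}

def deploy(cfg):
    # In a real pipeline these steps would call the orchestrator and deployment
    # tooling; here they just print the repeatable sequence.
    print(f"Deploying version {cfg['app_version']} to {cfg['environment']}")
    print(f"  migrating database at {cfg['db_host']}")
    print(f"  rolling out {cfg['replicas']} replica(s)")

deploy(config)
```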



NOTES: Chapter 10: Designing the Deployment Pipeline

SUMMARY: Some very complex ideas are presented in just a few pages.  Much more information here would have been useful and the lack of it reduces the value of the book.

CONCEPT: The basic principle that should drive the design is quickly narrowing the offending code down to the fewest possible developers.

CONCEPT: Testing layers.  First run unit tests and static code analysis, then service or API tests, and lastly User Interface tests.
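A minimal sketch of that layering, with placeholder test functions (all hypothetical), showing the fail-fast ordering:

```python
# Run the cheap, fast layers first and stop at the first failure, so most
# feedback arrives in minutes rather than after a full UI test run.
def run_unit_tests_and_static_analysis():
    return True   # placeholder

def run_service_api_tests():
    return True   # placeholder

def run_ui_tests():
    return True   # placeholder

layers = [
    ("unit tests + static analysis", run_unit_tests_and_static_analysis),
    ("service/API tests", run_service_api_tests),
    ("UI tests", run_ui_tests),
]

for name, run in layers:
    if not run():
        print(f"Stopping pipeline: {name} failed")
        break
    print(f"{name} passed")
```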

CONCEPT: Testing stages.

  1. Build and test as much of the system as possible
  2. Break down the problem by using service virtualization to isolate different parts of the system
  3. Pick a subset of testing for fast feedback before promoting to later stages for more extensive testing
