SUMMARY:
Top 3 things I learned:
- If you run everything “efficiently” at full capacity, you have no bandwidth for innovation or flexibility. I learned this from Theresa King at Verizon: she refused new work even when there was capacity left over so that she could respond when needed. At first the concept seemed counterintuitive, but (to a point) the less we did, the more we accomplished.
- Optimal learning happens when you fail half the time. This comes from R.L. Hartley’s formula.
NOTES: Chapter 1: The Principles of Flow
IDEA: The Problem
- When our changes fail to produce benefits, we revert to old ways
- Risk aversion drives innovation out of our development process
CONCEPT: Problems with the Current Orthodoxy
- Failure to correctly quantify economics: avoid voodoo economics and understand your costs
- Blindness to Queues: Not visible like boxes in a warehouse
- Worship of efficiency: need spare bandwidth
- Hostility to Variability: kills innovation
- Worship of conformance: live in an uncertain world
- Institutionalization of large batch sizes: illusion of efficiencies
- Underutilization of Cadence: cadence lowers transaction costs, yet it is underused
- Managing timelines instead of queues: to emphasize flow, focus on queues rather than timelines
- Absence of WIP constraints: best way to manage queues
- Inflexibility: high capacity leaves you inflexible
- Noneconomic flow control: an economic frame resolves scheduling dilemmas
- Centralized Control: values efficiency over response time
CONCEPT: Major themes in the book (these are chapters)
- Economics: key is to think the problem through
- Queues: cause problems
- Variability: reduce variability and the cost of variability
- Batch Size: reduce batch size, single most effective way to reduce queues
- WIP Constraints: reduce cycle time
- Cadence, Synchronization, and Flow Control: Regular intervals, start/stop times, and sequence
- Fast Feedback: make better choices
- Decentralized Control: decentralize control for faster response
NOTES: Chapter 2: The Economic View
CONCEPT: The Economic Principles
- Quantified Overall Economics. Have a good economic framework so that tradeoffs are easy to make
- Interconnected Variables. You can’t change just one thing. The common interrelated variables:
- Cycle Time
- Product Cost
- Development Expense
- Product Value
- Risk
- Quantified Cost of Delay (COD). Most important to quantify
- Value-added: define value added by changes in the economic value of the work product
- Inactivity principle: watch the work product not the worker. Inventory is the biggest waste
- U-Curve principle: important trade-offs have a U curve
- Imperfection: imperfect answers improve decision making
- Small decisions: we focus on the big decisions, but should also influence the many small ones
- Continuous economic trade-offs
- First perishability principle: many economic choices are more valuable when made quickly
- Subdivision principle: inside every bad choice lies a good choice
- Early harvesting: harvest the early cheap opportunities
- First decision rule: use decision rules to decentralize economic control
- First market: decision makers must feel both cost and benefit
- Optimum decision timing: every decision has optimum economic timing
- Marginal economics: always compare marginal cost and marginal value
- Sunk cost: do not consider money already spent
- Buying information: the value of information is its expected economic value
- Insurance principle: don’t pay for more insurance than the expected loss
- Newsboy principle: High probability of failure does not equal bad economics. Large payoff asymmetries
- Show me the money: to influence financial decisions, speak the language of money
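The Newsboy principle above is easy to check with a back-of-the-envelope expected-value calculation. The numbers below are illustrative assumptions, not from the book:

```python
# A hypothetical bet with a 90% chance of failure can still have positive
# expected value when the payoff is highly asymmetric (Newsboy principle).
def expected_value(p_success, payoff, cost):
    """Expected economic value of a risky bet."""
    return p_success * payoff - cost

# 10% chance of a $10M payoff for a $200K investment.
ev = expected_value(p_success=0.10, payoff=10_000_000, cost=200_000)
print(ev)  # 800000.0 -> positive EV despite a 90% failure rate
```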
CHART: Economic Batch Size: We do not need to find the perfect optimum to capture most of the value because it is a U-curve
- X axis: Batch Size
- Y axis: Cost
- Line: 45 diagonal: Holding Cost
- Line: Inverse Log: Transaction Cost
- U-Curve: Total Cost
CHART: Profit vs. Marginal Profit: Profit is maximized when total value is furthest away from total cost. Easier to see when the marginal value equals (crosses) the marginal cost
- X axis: Feature Performance
- Y axis: Dollars
- Line: Log: Total Value
- Line: Inverse Log: Total Cost
- Line: Inverse Log: Marginal Value
- Line: 45 diagonal: Marginal Cost
CHART: Sequencing Risk Reduction. Sequence first those activities that remove the most risk for the least expense
- X axis: Expense of Risk Reduction
- Y axis: Value of Risk Reduction
- Line: 75 degree: Sequence Early
- Line: 30 degree: Sequence Late
CHART: Economics of Parallel Paths. incremental benefit of adding parallel paths progressively decreases
- X axis: Number of parallel paths
- Y axis: Dollars
- Line: U Curve: Total Cost
- Line: Inverse Log: Cost of Failure
- Line: 45 degree Development Cost
NOTES: Chapter 3: Managing Queues
NOTATION: M/M/1/** where
- M = Arrival Process: work arrives
- M = Service Process: time to accomplish work
- 1 = Number of parallel servers
- ** = Upper limit of queue size
CONCEPT: Queue Principles
- Invisible Inventory: product development inventory is physically and financially invisible
- Queuing Waste: queues are responsible for most of the economic waste
- Queuing Capacity Utilization: Capacity utilization increases queues exponentially
- High-Queue States: Most of the damage done by a queue is caused by high-queue states
- Queue Variability: Variability increases queues linearly
- Variability Amplification: Operating at high levels of capacity utilization increases variability
- Queueing Structure: Serve pooled demand with reliable high-capacity servers
- Linked Queues: Adjacent queues see arrival or service variability depending on loading
- Queue Size Optimization: Optimum queue size is an economic trade-off
- Queueing Discipline: Queue cost is affected by the sequence in which we handle the jobs in the queue
- Use Cumulative Flow diagrams to monitor queues
- Little’s formula: Wait Time = Queue Size/Processing Rate
- Queue Size Principle: Don’t control capacity utilization, control queue size
- Queue Size Control: Don’t control cycle time, control queue size
- Diffusion Principle: Over time, queues will randomly spin seriously out of control and will remain in this state for long periods
- Intervention: We cannot rely on randomness to correct a random queue
TERM: WIP: Work In Process
TERM: DIP: Design in Process
CONCEPT: Queues create:
- Longer cycle time
- Increased Risk
- More Variability
- More Overhead
- Lower Quality
- Less Motivation
CHART: Queue size vs Capacity Utilization: Queue size increases rapidly with capacity utilization
CHART: Queue Size with Different Coefficients of Variation: Reducing variability has much less effect on queue size than lowering capacity utilization
CHART: Queues Amplify Variability: as capacity utilization increases, our processes become increasingly unstable
CONCEPT: Theoretical Optimum Capacity: When the cost of capacity rises then optimal capacity decreases
CONCEPT: Little’s Formula: Wait Time = Queue Size / Processing Rate
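Little's formula and the queue-growth chart above can be sketched numerically. The ρ/(1−ρ) expression is the standard steady-state result for an M/M/1 queue (matching the notation introduced in this chapter), not something specific to the book:

```python
# M/M/1 steady state: with utilization rho = arrival_rate / service_rate,
# the expected number of items in the system is rho / (1 - rho), so queues
# grow explosively as utilization approaches 100%.
def expected_in_system(utilization):
    assert 0 <= utilization < 1, "only stable queues (rho < 1)"
    return utilization / (1 - utilization)

def wait_time(queue_size, processing_rate):
    """Little's formula: Wait Time = Queue Size / Processing Rate."""
    return queue_size / processing_rate

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{rho:.0%} utilization -> {expected_in_system(rho):.1f} items in system")
# Going from 50% to 99% utilization grows the queue from 1 item to 99.
```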
NOTES: Chapter 4: Exploiting Variability
CONCEPT: Principles of Variability
- Variability can increase value
- Asymmetric payoffs enable variability to create economic value
- Variability should neither be minimized nor maximized
- Optimum failure rate is 50%
- Overall variation decreases when uncorrelated random tasks are combined
- Forecasting becomes exponentially better in short time frames
- Many small experiments produce less variation than one big one: betting 4 quarters on 1 flip vs. 1 quarter on each of 4 flips
- Repetition reduces variation
- Reuse reduces variability
- We can reduce variability by applying a counterbalancing effort (sailboat)
- Buffers trade money for variability reduction
- Reducing consequences is the best way to reduce the cost of variability (broken thread in a weave)
- Operate in the linear range of system performance (sailboat tipping)
- Substitute cheap variability for expensive variability
- Better to improve iteration speed than defect rate
- Move variability to the process stage where the cost is the lowest (airplanes slow down vs circle)
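The "optimum failure rate is 50%" principle above comes from information theory: a binary trial yields the most information when its outcome is least predictable. A minimal sketch using the standard binary entropy function:

```python
import math

# Binary entropy: the information generated by an experiment with success
# probability p, in bits. It peaks at p = 0.5, which is why a 50% failure
# rate maximizes learning per trial.
def binary_entropy(p):
    """Bits of information from a binary event with success probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p={p:.1f}: {binary_entropy(p):.3f} bits")
# Peaks at p = 0.5 with exactly 1.0 bit per trial.
```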
TERM: jidoka: stopping a process from making waste
NOTES: Chapter 5: Reducing Batch Size
QUOTE: Don’t test the water with both feet – Charles de Gaulle
CONCEPT: Batch Size Principles
- Reducing batch size reduces cycle time: by Little’s formula, smaller queues mean shorter wait times
- Reducing batch size reduces variability in the flow
- Reducing batch size accelerates feedback
- Reducing batch size reduces risk — which is why IP uses small packets
- Reducing batch size reduces overhead — if I do an activity once I am poor at it, 10 times and I get better, 1000 times and I look for ways to make it really better
- Large batches reduce efficiency — might be more efficient for one engineer, but comes at the expense of destroying important feedback loops and lowering overall efficiency
- Large batches inherently lower motivation and urgency
- Large batches cause exponential cost and schedule growth
- Large batches lead to even larger batches — golden project syndrome
- Least common denominator: the entire batch is limited by its worst element
- Economic batch size is a U-curve optimization
- Reducing transaction cost per batch lowers overall cost
- Batch Size diseconomies: Batch size reduction saves much more than you think
- Batch size packing: Small batches allow finer tuning of capacity utilization
- Fluidity: loose coupling between product sub-systems enables small batches–mock objects
- The most important batch is the transport batch
- Proximity enables small batch sizes — hallway conversations vs. VTCs
- Short run lengths reduce queues.
- Good infrastructure enables small batches — test automation
- Sequence first that which adds value most cheaply
- Reduce batch size before you attack bottlenecks — boy scout march example
- Adjust batch size dynamically to respond to changing economies
CHART: Slippage Effects: Slippage in projects rises exponentially with duration
FORMULA: Optimum batch size is a function of holding cost and transaction cost
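The FORMULA note above can be made concrete by analogy with the classic economic order quantity (EOQ) result, which balances the two curves in the "Economic Batch Size" chart. The demand and cost numbers here are illustrative assumptions:

```python
import math

# Total cost is the sum of transaction cost (falls with batch size) and
# holding cost (rises with it) -- the U-curve from the chart above.
def total_cost(batch_size, demand, transaction_cost, holding_cost):
    transactions = (demand / batch_size) * transaction_cost  # inverse curve
    holding = (batch_size / 2) * holding_cost                # diagonal line
    return transactions + holding

# Classic EOQ: the batch size that minimizes total cost.
def optimum_batch(demand, transaction_cost, holding_cost):
    return math.sqrt(2 * demand * transaction_cost / holding_cost)

q_star = optimum_batch(demand=1000, transaction_cost=50, holding_cost=4)
print(round(q_star))  # 158
# Because the U-curve has a flat bottom, missing the optimum costs little:
print(total_cost(q_star, 1000, 50, 4))        # minimum total cost
print(total_cost(q_star * 1.5, 1000, 50, 4))  # 50% oversized, only ~8% worse
```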
TERM: heijunka: mixed leveling done in production planning
NOTES: Chapter 6: Applying WIP Constraints
CONCEPT: WIP Control
- Constrain WIP to control cycle time and flow
- WIP constraints force rate-matching
- Use global constraints for predictable and permanent bottlenecks – Theory of Constraints by Eli Goldratt in The Goal
- If possible constrain local WIP pools – Feedback from kanban is quicker than that of a TOC system
- Use WIP ranges to decouple the batch sizes of adjacent processes
- Block all demand when WIP reaches its upper limit — like the telephone busy signal
- When WIP is high, purge low-value projects
- Control WIP by shedding requirements
- Quickly apply extra resources to an emerging queue
- Use part-time resources for high variability tasks
- Pull high-powered experts to emerging bottlenecks — keep big guns idle until needed
- Develop people who are deep in one area and broad in many
- Cross-train resources at adjacent processes
- Use upstream mix changes to regulate queue size
- Watch the outliers
- Create a preplanned escalation process for outliers — increase priority for aged items
- Increase throttling as you approach the queue limit
- Differentiate quality of service by workstream
- Adjust WIP constraints as capacity changes
- Prevent uncontrolled expansion of work
- Constrain WIP in the section of the system where the queue is the most expensive
- Small WIP reductions accumulate
- Make WIP continuously visible
NOTES: Chapter 7: Controlling Flow Under Uncertainty
QUOTE: Anyone can be captain in a calm sea
CHART: Highway throughput is a parabolic function of speed. It becomes very low at both low and high speeds. Note: when the density is too high, flow is inherently unstable since the feedback cycle is regenerative. In contrast operating to the right of the throughput peak promotes stability because of a negative feedback effect.
CONCEPT: Controlling the flow under uncertainty
- When loading becomes too high, we will see a sudden and catastrophic drop in output
- Control occupancy to sustain high throughput in systems prone to congestion. Metering lights on I-66 controlling how many cars enter.
- Use forecasts of expected flow time to make congestion visible. Disney says you have a 30 min wait from here, not a queue size.
- Use pricing to reduce demand during congested periods. Off-season hotels.
- Use a regular cadence to limit the accumulation of variance. Make sure buses start on time.
- Provide sufficient capacity margin to enable cadence. We can only resynchronize to a regular cadence if we have sufficient capacity margin
- Use cadence to make waiting times predictable. Good ideas have to wait a predictable amount of time.
- Use a regular cadence to enable small batch sizes.
- Schedule frequent meetings using a predictable cadence.
- To enable synchronization, provide sufficient capacity margin. An international flight must allow enough margin for connecting flights.
- Exploit economies of scale by synchronizing work from multiple projects.
- Use synchronized events to facilitate cross functional trade-offs. Grab all reviewers and put them in the same room.
- To reduce queues, synchronize the batch size and timing of adjacent processes.
- Make nested cadences harmonic multiples. Nest reporting in the monthly, quarterly, and yearly
- When delay costs are equal, do the shortest job first.
- When job durations are equal, do the highest cost of delay first.
- When both job duration and cost of delay are not homogeneous, use WSJF
- Priorities are inherently local
- When task duration is unknown, time-share capacity
- Only preempt when switching costs are low
- Use sequence to match jobs to appropriate resources. Use with specialty
- Select and tailor the sequence of subprocesses to the task at hand. Only visit nodes that add economic value.
- Route work based on the current most economic route.
- Develop and maintain alternate routes around points of congestion. Trickle.
- Use flexible resources to absorb variation. T-Shaped people.
- The later we bind demand to resources, the smoother the flow.
- Make tasks and resources reciprocally visible at adjacent processes.
- For fast responses, preplan and invest in flexibility.
- Correctly managed, centralized resources can reduce queues. Concentrate your fire on the nearest star destroyer
- Reduce variability before a bottleneck. Condition the flow just before the bottleneck
TERM: Cadence. Use of a regular, predictable rhythm within a process
TERM: Synchronization. Synchronization causes multiple events to happen at the same time
CONCEPT: FIFO queues work well for similar task durations and similar costs of delay
CONCEPT: WSJF: Weighted Shortest Job First. WSJF = COD/Duration
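The WSJF rule defined above (and the two special cases preceding it) reduces to sorting by the ratio of cost of delay to duration. Job names and numbers here are illustrative:

```python
# Weighted Shortest Job First: schedule the highest COD/Duration ratio first.
# This also covers the special cases above: with equal delay costs it becomes
# shortest-job-first; with equal durations, highest-cost-of-delay-first.
def wsjf_order(jobs):
    """jobs: list of (name, cost_of_delay, duration); returns WSJF sequence."""
    return sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)

jobs = [
    ("A", 10, 5),   # WSJF = 2.0
    ("B", 3, 1),    # WSJF = 3.0
    ("C", 12, 10),  # WSJF = 1.2
]
print([name for name, _, _ in wsjf_order(jobs)])  # ['B', 'A', 'C']
```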
NOTES: Chapter 8: Using Fast Feedback
QUOTE: A little rudder early is better than a lot of rudder late
CONCEPT: Manufacturing payoff-function vs. Product Development Payoff-function
- Manufacturing has an inverted U-shape curve for performance/payoff which means that large variances create large losses. In product development fast feedback is capable of altering this curve.
- Product Development: Fast Feedback reduces loss from bad outcomes and enables exploitation of good outcomes
CONCEPT: Fast Feedback
- Focus control on project and process parameters with the highest economic influence – common sense
- Control parameters that are both influential and efficient – again common sense
- Select control variables that predict future system behavior — early interventions
- Set tripwires at points of equal economic impact — don’t ignore the small parameters, because if they go haywire they can have a big impact
- Know when to pursue a dynamic goal — our original goal is based on noisy assumptions
- Exploit unplanned economic opportunities — the missed opportunity of an MP3 jack in the car
- Fast feedback enables smaller queues — use buffers, but don’t overdo them
- Use fast feedback to make learning faster and more efficient – sailing two nearly identical ships to see the difference
- What gets measured may not get done — just because you have a metric doesn’t mean it will help you
- We don’t need long planning horizons when we have a short turning radius — don’t be the B-1
- Small batches yield fast feedback
- To detect a smaller signal reduce the noise
- Control the economic logic behind a decision, not the entire decision — e.g., reducing 1 lb of weight is worth $300 of increased unit cost
- Whenever possible make feedback local — local WIP of kanban vs. global WIP of TOC
- Have a clear, predetermined, economically justified relief valve
- Embed fast control loops inside slow loops
- Keep deviations within the control range — if the testing queue gets too big, ask for more testers
- To minimize queues provide advance notice of heavy arrival rates
- Colocation improves almost all aspects of communication
- Fast feedback gives a sense of control
- Large queues make it hard to create urgency
- The human element tends to amplify large excursions — homeostasis
- To align behaviors, reward people for the work of others
- Time counts more than money
CONCEPT: Metrics for flow-based product development
- Queues
- Design In-process Inventory
- Queue Size
- Trends in Queue Size
- Cost of Queues
- Aging of items in Queue
- Batch Sizes
- Batch size
- Trends in batch size
- Transaction cost per batch
- Trends in Transaction cost
- Cadence
- Process using cadence
- Trends in Cadence
- Capacity Utilization
- Capacity utilization rate
- Feedback
- Feedback Speed
- Decision Cycle time
- Aging of Problems
- Flexibility
- Breadth of skill sets
- Number of Multipurpose resources
- Number of processes with alternative routes
- Flow
- Efficiency of flow
- DIP Turns
NOTES: Chapter 9: Achieving Decentralized Control
SUMMARY: This chapter attempts to summarize Jez Humble’s book. It starts with a list of terms and definitions. The next sub-part covers continuous delivery architecture, then post-deployment validation, and finally CD.
CONCEPT: Configuration is under revision control to ensure repeatability.
CONCEPT: Core pieces of CD:
- Continuous Integration
- Scripted environments
- Scripted deployments
- Evolutionary database design
- Test automation
- Deployment pipeline
- Orchestrator
TERM: Continuous Delivery: Automation (as much as possible) of configuration, deployment, and testing
TERM: Continuous Integration: A process that monitors the source code management system and triggers a build.
TERM: Scripted environment: Scripts are created to configure everything from the O/S to the container.
TERM: Scripted Deployment: Process of scripting the deployments across all environments.
TERM: Evolutionary database design: Managing database changes to ensure that schema changes won’t ever break your application.
TERM: Deployment pipeline: Defines how new code is integrated into the system and deployed.
TERM: Orchestrator: Tool that coordinates all of the automation.
CONCEPT: One common script that uses variables for everything.
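A minimal sketch of the "one common script that uses variables for everything" idea: the same deployment logic runs in every environment, and only configuration values differ. All environment names, hosts, and steps here are hypothetical:

```python
# One shared code path; per-environment differences live only in data.
ENVIRONMENTS = {
    "dev":  {"host": "dev.example.com",  "replicas": 1, "db_url": "db-dev"},
    "prod": {"host": "prod.example.com", "replicas": 4, "db_url": "db-prod"},
}

def deploy(env_name):
    """Return the deployment steps for one environment (a dry-run sketch).

    In a real pipeline each step would invoke scripted-environment and
    scripted-deployment tooling; here we only show the single code path.
    """
    cfg = ENVIRONMENTS[env_name]
    return [
        f"provision {cfg['replicas']} replica(s) on {cfg['host']}",
        f"migrate schema at {cfg['db_url']}",
        f"release build to {cfg['host']}",
    ]

for step in deploy("dev"):
    print(step)
```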
NOTES: Chapter 10: Designing the Deployment Pipeline
SUMMARY: Some very complex ideas are presented in just a few pages. Much more information here would have been useful and the lack of it reduces the value of the book.
CONCEPT: Quickly locating the offending code down to the fewest number of developers possible is the basic principle that should drive the design.
CONCEPT: Testing layers. First run unit tests and static code analysis; second, run service or API tests; lastly, test the user interface.
CONCEPT: Testing stages.
- Build and test as much of the system as possible
- Break down the problem by using service virtualization to isolate different parts of the system
- Pick a subset of testing for fast feedback before promoting to later stages for more extensive testing
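The staged approach above can be sketched as a gate function: each stage must pass before work is promoted to the next, slower, more extensive stage. Stage names are illustrative:

```python
# Run ordered pipeline stages, stopping at the first failure so the
# offending change is localized to the fastest, cheapest stage possible.
def run_pipeline(stages):
    """stages: ordered list of (name, test_fn); returns (passed, blocked_by)."""
    passed = []
    for name, test_fn in stages:
        if not test_fn():
            return passed, name  # promotion blocked by this stage
        passed.append(name)
    return passed, None  # all stages green

stages = [
    ("unit tests + static analysis", lambda: True),
    ("service/API tests",            lambda: True),
    ("UI tests",                     lambda: False),
]
print(run_pipeline(stages))
# -> (['unit tests + static analysis', 'service/API tests'], 'UI tests')
```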