Model for Improvement and Hill Climbing
Improvement and hill climbing really do go together.
The Model for Improvement, codified by colleagues at API in the late 1980s as a general-purpose guide to improvement, is actually an algorithm to reach a goal. It combines three questions and a test cycle, as shown in the graphic from API.
One reason the Model for Improvement works as well as it does is that the Model tells you how to search for a goal. This claim is not just hand-waving: the Model for Improvement maps to a variety of learning or goal-seeking algorithms defined by computer scientists.
One of the simplest goal-seeking algorithms is the hill-climbing algorithm.
The hill-climbing search algorithm gives you a way to get to the top of a hill, starting from an arbitrary location. You have an aim (reach the top of the hill), a way to measure progress (is a new location higher than the previous location?), and the ability to change (move from your current location to a new location). So you have the answers to the first three questions of the Model for Improvement: aim, measurement, and change.
There is also a repeated cycle in the hill-climbing algorithm: you evaluate options for a step up the hill, measure the height of each option relative to the current location, and take the option that is highest (and higher than the current location). Repeat the cycle until no further progress is detected. This repeated cycle, along with the three answers, essentially implements the Model for Improvement.
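This cycle is easy to express in code. Here is a minimal sketch (not from the original post): simple hill climbing on a one-dimensional landscape, where the function f, the step size, and the single-peaked example are all illustrative assumptions.

```python
def hill_climb(f, x, step=0.1, max_iters=1000):
    """Simple hill climbing: repeatedly step to the highest
    neighboring location until no neighbor is higher."""
    for _ in range(max_iters):
        # Options for a step "up the hill" from the current location.
        candidates = [x - step, x + step]
        best = max(candidates, key=f)
        # Measure each option against the current location.
        if f(best) <= f(x):
            return x          # no further progress detected: stop
        x = best              # take the highest option and repeat
    return x

# Illustrative single-peaked hill with its top at x = 3.
peak = hill_climb(lambda x: -(x - 3.0) ** 2, x=0.0)
```

Each pass through the loop is one test cycle: the candidate steps are the planned changes, the comparison of heights is the measurement, and moving to the best candidate is adopting the change.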
In the technical note at the end of this post, I show how the Model for Improvement maps to another hill-climbing method, steepest ascent, which requires a few more assumptions than simple hill climbing.
Sperm seeking an egg appear to deploy hill climbing to reach their final destination, as described in this article in Science; goal-seeking behavior is fundamental to life.
Simple hill climbing works if there is a single peak: no local peaks, no plateaus, no ridges that can get you stuck at a position well below the absolute peak that is somewhere in the neighborhood. Single-peak systems are common but not universal.
For example, a local peak means you've gotten system performance to a place where small changes only make things worse, but you're missing out on substantially better performance from a different change that is not close to where you're testing. To get to the higher peak may require going downhill: accepting a short-term decline in performance to get into the neighborhood of the higher peak.
The way computer scientists modify simple hill climbing is to restart the climb from different, random locations. If a different starting position yields a better solution (higher on the hill), then the first hilltop found was only a local maximum. Starting from different locations corresponds to different initial conditions in a work system, so knowledge of starting conditions should inform our understanding of the impact of changes. On the other hand, if different starting locations always yield the same ending point, your degree of belief that you have reached the highest point anywhere around should be strong.
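Random restarts are a small addition to the basic algorithm. The sketch below is illustrative (the two-peaked landscape and all parameter values are my own assumptions): it restarts a simple hill climber from random locations and keeps the best peak found.

```python
import math
import random

def hill_climb(f, x, step=0.1, max_iters=1000):
    # Simple hill climbing, condensed: step to the higher neighbor
    # until neither neighbor improves on the current location.
    for _ in range(max_iters):
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):
            return x
        x = best
    return x

def random_restart(f, n_starts=20, lo=-10.0, hi=10.0, seed=0):
    """Run hill climbing from several random starting locations
    and return the highest peak found."""
    rng = random.Random(seed)
    peaks = [hill_climb(f, rng.uniform(lo, hi)) for _ in range(n_starts)]
    return max(peaks, key=f)

# Two-peaked landscape: a local peak near x = -2 (height 0.5)
# and the true peak near x = 2 (height 1.0).
f = lambda x: math.exp(-(x - 2.0) ** 2) + 0.5 * math.exp(-(x + 2.0) ** 2)
best = random_restart(f)
```

A single run started on the left side of this landscape stalls at the lower peak near x = -2; restarting from enough random locations finds the higher peak near x = 2.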
Technical Note: Model for Improvement and the Method of Steepest Ascent
| Model for Improvement item | Math version of mountain climbing ("steepest ascent") | Notes |
| --- | --- | --- |
| What is our aim? | Maximize a differentiable function F(x). | F(x) is the height of the function F at the point x. |
| How do we know a change is an improvement? | Given a current position x_i and a new position x_{i+1}, we have an improvement if F(x_{i+1}) > F(x_i) and F(x_{i+1}) - F(x_i) > ε. | We compare the height at the point x_{i+1} to the height at the point x_i. So long as the heights differ by at least a little bit, indicated by the value ε, we are still increasing the height by moving from x_i to x_{i+1}. |
| What change can we make? | Choose x_{i+1} = x_i + λ_i del(F)_i. | del(F)_i is the multivariate "slope" of the function F, evaluated at the point x_i. The equation says that we want to move to a new point x_{i+1} in the direction of increasing slope, by an amount given by the scale factor λ_i, when we start from the point x_i. |
| Cycle 1 Plan | Choose λ_0 = 1 and some guess x_0. Predict that if we set x_1 = x_0 + λ_0 del(F)_0, then F(x_1) will be greater than F(x_0) and F(x_1) - F(x_0) > ε. | |
| Cycle 1 Do | Calculate F(x_0), del(F)_0, F(x_1), and F(x_1) - F(x_0). | |
| Cycle 1 Study | Compare F(x_1) - F(x_0) to ε and compare F(x_1) to F(x_0). Do the predictions hold? That is, is F(x_1) > F(x_0) and F(x_1) - F(x_0) > ε? | |
| Cycle 1 Act | If the predictions hold, adopt x_1 and continue with Cycle 2 Plan; if not, stop: x_0 approximates a local maximum. | Properties of F (e.g. concavity) may indicate that the local maximum is a global maximum. |
| Cycle 2 Plan | Same as Cycle 1 Plan, but replace x_0 with x_1, x_1 with x_2, and del(F)_0 with del(F)_1. | |
| Cycle 2 Do | Same as Cycle 1 Do, but replace x_0 with x_1, x_1 with x_2, and del(F)_0 with del(F)_1. | |
| Cycle 2 Study | Same as Cycle 1 Study, but replace x_0 with x_1 and x_1 with x_2. | |
| Cycle 2 Act | Same as Cycle 1 Act, but replace x_0 with x_1 and x_1 with x_2, and replace Cycle 2 Plan with Cycle 3 Plan, incrementing indices appropriately. | |
| . . . | | |
Keep going until the algorithm terminates; you will reach at least a local maximum. As with the simple hill-climbing algorithm, you can start steepest ascent from various locations and see if the algorithm finds any other peak.
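For readers who want to run the table above, here is a minimal Python sketch of steepest ascent organized as Plan-Do-Study-Act cycles. The function F, its slope, and the λ and ε values are illustrative assumptions, not part of the original note.

```python
def steepest_ascent(f, grad, x0, lam=1.0, eps=1e-6, max_cycles=10_000):
    """Method of steepest ascent as repeated PDSA cycles."""
    x = x0
    for _ in range(max_cycles):
        x_new = x + lam * grad(x)   # Plan: propose the change x + λ del(F)
        gain = f(x_new) - f(x)      # Do: calculate both heights
        if gain <= eps:             # Study: did the predicted gain hold?
            return x                # Act: stop, x is (near) a local maximum
        x = x_new                   # Act: adopt the new point, next cycle
    return x

# Illustrative F(x) = -(x - 5)^2, whose slope is F'(x) = -2(x - 5).
# λ is chosen small enough that the steps do not overshoot this peak.
top = steepest_ascent(f=lambda x: -(x - 5.0) ** 2,
                      grad=lambda x: -2.0 * (x - 5.0),
                      x0=0.0, lam=0.25)
```

Each loop iteration is one cycle from the table: the proposed point is the Plan, the function evaluations are the Do, the comparison against ε is the Study, and moving (or stopping) is the Act.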