Evolution explained in a different way



Recently, I have been doing a lot of research into the valuation of companies and the financial regimes that could be built on top of maps. What I want to share today is a few concepts and a different perspective on Evolution, and I hope it will be useful.

In mapping, the term "uncertainty" has a rather ambiguous definition. It is easy to grasp intuitively, but building a precise measurement around it is not easy. Yes, Simon has a method for this based on weak signals, but the method itself cannot be made publicly available, as it relies on determining the frequency of keywords and could easily be gamed.

On the other hand, in statistics and finance, uncertainty has quite a precise meaning: it is the shape of the curve describing the probability of possible outcomes. I’m terrible at explaining this, so just look at Figure 1 below:

Figure 1: Low uncertainty vs high uncertainty. Note that the height of the curve represents probability, not value. The higher the value, the further the curve is shifted to the right.

For a low uncertainty curve, the outcomes are grouped very close to the mean value; in statistics, this shape is called the Normal Distribution. The flatter the curve gets, the higher the uncertainty. The highest degree of uncertainty, where you assume any outcome is equally probable, is represented by a flat line (a uniform distribution).
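
As a sketch of this idea, here is a small, purely illustrative simulation (all numbers are made up): three distributions share the same expected outcome, and the spread of the outcomes is the uncertainty:

```python
import random
import statistics

random.seed(42)
N = 100_000

# Three "uncertainty" shapes around the same expected outcome (mean = 10):
low  = [random.gauss(10, 0.5) for _ in range(N)]   # narrow bell: low uncertainty
high = [random.gauss(10, 3.0) for _ in range(N)]   # flatter bell: higher uncertainty
flat = [random.uniform(0, 20) for _ in range(N)]   # flat line: maximal uncertainty

for name, sample in [("low", low), ("high", high), ("uniform", flat)]:
    print(f"{name:8s} mean={statistics.fmean(sample):5.2f} "
          f"spread={statistics.pstdev(sample):5.2f}")
```

All three report a mean near 10; only the spread differs, which is exactly what the curves in Figure 1 show.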

The reason I find this important is that these curves (uncertainty) are very well researched in mathematics, and there are guidelines for determining the shape of the curve and the assumptions that need to be made. While I have not completed all the research yet (my statistics skills have got a bit rusty), this post shows the direction I think is worth exploring.

In the Commodity/Utility phase, we can safely assume the initial investment is represented by a Normal Distribution and sits at a relatively low level. Since many people have used the component we are trying to adopt, there should be a lot of historical data from which to construct a Normal Distribution, and since most adoptions are similar to each other, we can say with a high degree of certainty that learning, e.g., serverless will take 4±1 weeks in 99% of cases. That’s confidence, isn’t it?

It is important to mention that such conclusions can be drawn only if there are around 30 previous adoptions (!!!), and it is worth checking that there is no significant trend lowering the initial investment (the last 30 measurements are more or less similar to the previous 30), because if there is, different tools need to be applied.
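
The 4±1-weeks style of claim can be sketched with synthetic data (every number below is invented for illustration): first check the two batches of 30 for a trend, then read the 99% interval off a Normal model:

```python
import random
import statistics

random.seed(7)
# Hypothetical history: weeks needed by 60 previous teams to adopt a
# commodity component (e.g. learning serverless). Illustration only.
adoption_weeks = [random.gauss(4.0, 0.4) for _ in range(60)]

recent, earlier = adoption_weeks[-30:], adoption_weeks[:30]

# 1. Sanity check: no strong trend between the two batches of 30.
drift = abs(statistics.fmean(recent) - statistics.fmean(earlier))
assert drift < 0.5, "trend detected - a plain Normal model is not appropriate"

# 2. 99% interval under a Normal model: mean +/- 2.576 standard deviations.
mu = statistics.fmean(adoption_weeks)
sigma = statistics.stdev(adoption_weeks)
print(f"adoption takes {mu:.1f} ± {2.576 * sigma:.1f} weeks in ~99% of cases")
```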

Figure 2: Distributions of Costs, Benefits and Initial investment in the Commodity/Utility phase.
Note: the cost of adoption is not the cost of migration. The process of migrating from, e.g., DevOps to Serverless is completely different from learning how to use Serverless.

If costs and benefits are described by Normal Distributions too, they can sit very close to each other, and each transaction is still very likely to bring value (because of the low variability, situations such as reduced revenue from a transaction combined with increased cost will not play a significant role). The narrower the curves, the smaller the difference between costs and revenues that remains acceptable.
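
A quick Monte Carlo sketch (illustrative numbers only) of why narrow curves make a thin margin safe:

```python
import random

random.seed(1)
N = 100_000

def loss_probability(benefit_sigma, cost_sigma):
    """Share of transactions where cost exceeds benefit (both assumed Normal)."""
    losses = 0
    for _ in range(N):
        benefit = random.gauss(10.0, benefit_sigma)  # mean benefit: 10
        cost = random.gauss(9.0, cost_sigma)         # mean cost: 9 (thin margin)
        if benefit - cost < 0:
            losses += 1
    return losses / N

print("narrow curves:", loss_probability(0.2, 0.2))  # losing deals are rare
print("wide curves:  ", loss_probability(2.0, 2.0))  # losses are common
```

With the same mean margin, the narrow curves lose money on a tiny fraction of transactions, while the wide curves lose on roughly a third of them.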

This is why Six Sigma is not only advised here but essential. With 30+ adoptions and a large number of transactions, you can improve your margins by reducing the deviation of costs and benefits, and if the competition presses you, you can reduce the margin without unnecessary risk.
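
The same point can be made analytically. A small sketch, assuming independent Normal costs and benefits and a ~99% no-loss target (the function and numbers are illustrative):

```python
import math

def required_margin(benefit_sigma, cost_sigma, z=2.326):
    """Smallest benefit-over-cost margin that keeps losing transactions
    below ~1% (z = 2.326), assuming independent Normal cost and benefit."""
    return z * math.sqrt(benefit_sigma ** 2 + cost_sigma ** 2)

# Halving both deviations halves the margin you need to stay safe -
# which is exactly the room to cut prices when competition presses you.
print(required_margin(2.0, 2.0))  # ~6.58
print(required_margin(1.0, 1.0))  # ~3.29
```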

Investments in improving things in this phase can be justified by measurements, statistics and finances.

The Custom-built phase is characterised by a high and relatively unpredictable initial investment. You have a rough idea of how much it will cost to build, but you are quite uncertain about the delivered value and the costs per transaction. In fact, the part of the costs that can be attributed to a single transaction is quite low, because most of the money went into the significant upfront investment.

Figure 3: Distributions of Costs, Benefits and Initial investment in the Custom-built phase.

In this phase, there are few prior implementations available, if any. Statistics cannot offer any precision here, so investment in these components should happen only if the expected value of a single transaction multiplied by the expected number of transactions is significantly higher than the investment. This is why only the biggest niches get exploited first, and why the B2B market (with potentially high-value transactions compared to consumer transactions) is favoured here.
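
A hedged sketch of that go/no-go rule (the function name, safety factor and all numbers are hypothetical):

```python
def worth_building(value_per_tx, expected_tx, investment, safety_factor=3.0):
    """Rough go/no-go for a Custom-built component: expected total value
    must exceed the investment by a wide margin, because both estimates
    are highly uncertain at this stage. Numbers are illustrative."""
    return value_per_tx * expected_tx > safety_factor * investment

# A big B2B niche: few transactions, but each one is worth a lot.
print(worth_building(value_per_tx=50_000, expected_tx=40, investment=400_000))   # True
# A consumer niche: many tiny transactions cannot cover the upfront cost.
print(worth_building(value_per_tx=2, expected_tx=100_000, investment=400_000))   # False
```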

The trick is that once you have 5 implementations available (the Rule of Five), you know quite a lot about the solution, and the following forces apply.

Figure 4: Forces affecting the shape of the distributions.

These forces are:

  1. The expected initial investment becomes more clearly defined. We become more certain of what is involved in building a solution. At the same time, we become quite certain about how much it costs to execute a single transaction, so we learn a lot about fixed and variable costs. We also get a slightly better view of the benefits delivered by each transaction.
  2. We learn a lot about how to build the solution, and each subsequent attempt becomes easier, and requires progressively lower investment.
  3. We learn how to attribute parts of the fixed costs to variable costs (and manage spare capacity).
  4. The solution is no longer revolutionary if many organisations are using it, so the perceived value reduces.
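
Assuming the Rule of Five here refers to Hubbard's Rule of Five - with just 5 random samples there is a 93.75% chance (1 − 2 × 0.5⁵) that the population median lies between the smallest and largest of them - a quick simulation confirms it on an arbitrary cost distribution:

```python
import random

random.seed(3)
TRIALS = 20_000

# An arbitrary skewed "cost" population; the rule holds for any distribution.
population = [random.lognormvariate(0, 1) for _ in range(10_001)]
median = sorted(population)[5_000]

hits = 0
for _ in range(TRIALS):
    sample = random.sample(population, 5)
    if min(sample) <= median <= max(sample):
        hits += 1

print(f"median inside min-max of 5 samples: {hits / TRIALS:.1%}")  # ~93.8%
```

Five implementations are, of course, still far short of the ~30 needed for the Normal-Distribution reasoning above, but they already bound the plausible range surprisingly well.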

As soon as we learn the investment will pay off, the component transitions to the next phase.

In the Product phase, the initial investment (and fixed costs) does not have to be low, but it is quite predictable: it is the price you pay for buying and installing the product. The costs of executing a single transaction are pretty well known, but the benefits…

The benefits have to be estimated by you, the product buyer, and in the early product phase you have no prior data to do that. You are certain about costs but uncertain about benefits, and that is why we sometimes feel buyer’s remorse - when we miscalculate (or are misled about) the value of the benefits.

Figure 5: Distributions of costs, benefits and initial investment in the product space.

Early in this phase, especially in the B2B market, you will find companies that do not publish the price of their product, because they do not want to reduce their profits. They can look at a potential customer, evaluate that customer’s potential benefits, and put a price tag on the product equivalent to the added value. Such a price has nothing to do with costs.

Once someone figures out how to reduce the initial investment and take the risk out of the picture for the buyer, we quickly move to the Commodity/Utility space.

Genesis means running fixed-budget experiments which always yield some value (knowledge), and sometimes that knowledge can be commercialised. So it is a fixed cost with a completely unpredictable output for any single experiment. A series of experiments run over time around a certain topic can inform expectations about benefits, indicate new directions of research, or yield nothing at all. There is no point in analysing it from any perspective other than whether or not you can afford to do the research.

So, all those considerations lead to this table, which may complement Simon’s cheat sheet in the future. Who knows? :slight_smile:

| | Genesis | Custom-built | Product/Rental | Commodity/Utility |
|---|---|---|---|---|
| Initial investment + fixed costs | Fixed | High & uncertain (little prior data) | High but certain | Low & predictable |
| Value per transaction | Any | Predictable to some degree | Getting more predictable | Very predictable |
| Number of transactions | Few | Small (compared to what may be possible) | Rising quickly at first, high later | High |
| Cost per transaction | - | Difficult to estimate (we are using fixed costs) | Predictable | Very predictable |

Interesting consequences of this approach:

  • there is a chance this approach will work forwards. A product (such as a military tank) is in the product space even if we do not know what the Commodity/Utility version of it will look like.
  • some components may reach the end of their life before they reach Commodity/Utility, if they are replaced by another approach producing the same output, or if the value chain they are part of is replaced with a totally new approach
  • Tacit knowledge will never get to the Commodity/Utility phase, because the initial investment will always be high.
  • Timing is important. You can’t jump from Custom-built to Product without taking significant risk or having a number of observations. Similarly, the investment to transition from Product to Commodity needs to be economically justified.

Flagship Examples:

| | Gold bar | Mobile phone (hard to talk about transactions here; "interactions" fits better) |
|---|---|---|
| Initial investment + fixed costs | Low (you do not have to do anything particularly expensive to buy gold) | High (buy the phone - unless leased) |
| Value per transaction | Very predictable | Predictable to some degree |
| Number of transactions | High (compared to the market size) | Very high |
| Cost per transaction | Low (your worth does not change a lot) | Low and predictable (time) |
| Verdict | Commodity/Utility | Late product - less mature than the gold bar |

CC-BY-SA by @krzysztof.daniel, based on the work of @simonduckwardley, reviewed by @john.grant.


Given sample sizes are proportional to the stage of evolution, could the central limit theorem be used to dispute or exaggerate low or high uncertainty?


@john.grant - killer question. I do not feel confident in this space, but let’s try.

Before I get to the Central Limit Theorem, I have to mention that the proportionality of sample size and level of Evolution seems to be a bit misleading. In the Custom-built phase, the entire population of solutions is extremely small (I’d say 1-5), and it starts growing rapidly in the Product phase. That said, I would not try to use any large-sample approach for anything less mature than a Product sold by more than one vendor.

Now, the Central Limit Theorem is

a statistical theory [that] states that given a sufficiently large sample size from a population with a finite level of variance, the mean of all samples from the same population will be approximately equal to the mean of the population.

and its power, according to Quora, is that if you test a lot of samples, you get a normal distribution of results. So, if your samples are large enough, then by having just a few of them you could calculate, for example, the mean cost of the initial investment into adopting a Commodity/Utility service. There is, however, a practical question here - in what circumstances will you get access to those independent, large samples?
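
A small, illustrative simulation of the theorem: draw many samples from a flat, decidedly non-Normal population and watch the sample means cluster normally around the population mean:

```python
import random
import statistics

random.seed(11)

def sample_mean(n):
    # One sample of n draws from a flat (uniform) population on [0, 20].
    return statistics.fmean(random.uniform(0, 20) for _ in range(n))

# Central Limit Theorem in action: single draws land anywhere in [0, 20],
# yet means of samples of 100 cluster tightly around the population
# mean (10), with a spread that shrinks like sigma / sqrt(n).
means = [sample_mean(100) for _ in range(2_000)]
print(f"mean of sample means: {statistics.fmean(means):.2f}")
print(f"spread of the means:  {statistics.stdev(means):.2f}")
```

The spread of the means comes out near 0.58, i.e. the population deviation (20/√12 ≈ 5.77) divided by √100, as the theorem predicts.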


I think using the normal distribution illustrates the intuitions being explained; however, there is a problem with deciding that these should be normal distributions in the first place. The fact that normal distributions are extensively studied and well known could simply mean they are easy to study and know.

These probability distributions are not directly observable, which makes any risk calculation suspicious since it hinges on knowledge about these distributions. Do we have enough data? If the distribution is, say, the traditional bell-shaped Gaussian, then yes, we may say that we have sufficient data. But if the distribution is not from such well-bred family, then we do not have enough data. But how do we know which distribution we have on our hands? Well, from the data itself. If one needs a probability distribution to gauge knowledge about the future behavior of the distribution from its past results, and if, at the same time, one needs the past to derive a probability distribution in the first place, then we are facing a severe regress loop–a problem of self reference akin to that of Epimenides the Cretan saying whether the Cretans are liars or not liars. And this self-reference problem is only the […]

– from “On the Unfortunate Problem of the Nonobservability of the Probability Distribution”


@tristan.slominski - I know it will not be easy. Before the late Product/Rental phase we certainly get no Normal Distributions - we simply do not have enough data, and, more likely, we may experience price reductions. There is more work to be done in this space to figure out the proper statistical tools.

Later, we have enough data to filter out changes over time, and we should get much more data, so I expect the distribution to become normal.

@john.grant - I think that as soon as you get any numbers on custom-built solution uncertainty, you are much better off than without any numbers at all: you can at least match your investment, risk and expected revenue, or cancel the project if the risk is outside your appetite.

I know there are many mistakes that can be made in this space; I just see some potential in this approach, especially because Digital Twins may sooner or later be used to track costs and delivered value, and that is one step from automating the investment selection process in your value chain. It sounds worth exploring. :slight_smile:


Here’s the “different way” version of the mapping helper: https://mappingevolutiondifferently.com
