Archive 27/02/2024.

Time to Information

jesper

Hi all,

Here’s a map I have used to understand the concept of Time to Information. See blogpost below.

It shows the relationship between proximity and the evolution of the information.

I would like to discuss this map… which might be more of a derivative model. :slight_smile:

Best regards,
Jesper

chris.daniel

Hi @jesper,

this is a fantastic and very novel thought!

Have you had a chance to think about how this approach could work for systems? I know it adds tons of complexity, but I am really curious. Also, how could one use the formula to improve company operations?

I will tag @alexander.simovic as he was trying to create formulas for Inertia, so he might be interested in this topic.

alexander.simovic

@jesper love it!
So briefly
MTTI^2 = f(visibility)^2 + f(evolution)^2
?

I’m keen to learn more and to share the discussion @chris.daniel and I had about calculating inertia.

@chris.daniel Thank you for mentioning me, this is very cool!

Would you (both) be interested in a discussion next week, maybe (or some other week)?

jesper

Basic Pythagoras or vector math really :slight_smile: so it should apply to systems maps on a 2D plane too. Let’s not overthink it, but it does seem to follow that the distance from A to B depends on both axes.

It also aligns with Chris McDermott & Marc Burgauer’s talk “Maturity Mapping: Using Wardley Maps and Cynefin to create context specific maturity models” (on Vimeo, from Lean Agile Scotland 2019).
An element (Practice) can be:

  • Reconfigured, to the left, to a less evolved product
  • Developed, to the right, to a more evolved product
  • Hidden, down, to a less visible product
  • Exposed, up, to a more visible product

Each direction affects the length of the vector, i.e. the cost of the relationship.
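
To make the vector idea concrete, here is a minimal sketch in Python. The coordinates are made up (just a 0..1 map) and this only illustrates the MTTI^2 = f(visibility)^2 + f(evolution)^2 idea Alexander wrote above, not a finished formula:

```python
import math

def mtti(a, b):
    """Time to information as the Euclidean distance between two map positions,
    per MTTI^2 = f(visibility)^2 + f(evolution)^2.
    Positions are (evolution, visibility) pairs on a 0..1 map."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

# Made-up positions: a requester A and an information source B.
A = (0.30, 0.80)  # (evolution, visibility)
B = (0.70, 0.40)

print(round(mtti(A, B), 3))             # baseline cost of the A->B relationship

# Moving B in each of the four directions changes the vector length, i.e. the cost:
print(round(mtti(A, (0.90, 0.40)), 3))  # developed (right, more evolved)   -> longer
print(round(mtti(A, (0.50, 0.40)), 3))  # reconfigured (left, less evolved) -> shorter
print(round(mtti(A, (0.70, 0.70)), 3))  # exposed (up, more visible)        -> shorter
print(round(mtti(A, (0.70, 0.20)), 3))  # hidden (down, less visible)       -> longer
```

Whether plain Euclidean distance is the right metric at all is of course part of what we should discuss.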

Sure, let’s chat about this topic. I have DM’ed my details to Alexander. :slight_smile:

julian.everett

Hi @jesper, have you seen Max Boisot’s ISpace model - that was an influence on Cynefin and covers some of this domain too? https://warwick.ac.uk/fac/soc/wbs/conf/olkc/archive/oklc4/papers/oklc2003_boisot.pdf

My one comment on your current proposal is that it is arguably too generalised: whilst it holds true as an average overview, the time to information in most cases will be highly subjective and dependent on the requester. I think what you are really attempting to calculate here is the information diffusion path traversal time. However, the more novel and hidden the information is, the more the diffusion path depends on the degree of connectivity between the information requester and the source in the network. For such information within a large organisation, the time it takes to access it might be minutes or it might be months, depending on whether I have an established informal network of trusted contacts I can IM or whether I need to go via “formal channels”. Similarly, pushing decision making to the edge of any network increases adaptability because it reduces the length of the information diffusion paths between sensors/sources and decision makers.

I think there is a generally applicable point that uncodified information entails additional time for decontextualisation and recontextualisation, but apart from that the generalised average for the time to information is only really useful for public/diffused information (where the average value applies to everyone equally). The further you go in the novel/hidden direction, the less you can rely on averages and the more you need to know about the context and specific diffusion path in this instance.
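
To sketch what I mean by diffusion path length (a toy contact graph, entirely made up, nothing from Jesper’s map): if you model the organisation as a network of people/sources, the time to information behaves more like a shortest-path length between requester and source than like a single averaged map distance:

```python
from collections import deque

# Made-up contact graph: who can reach whom directly (IM, shared team, etc.).
org = {
    "requester":        ["teammate", "manager"],
    "teammate":         ["requester", "data_owner"],        # informal route to the source
    "manager":          ["requester", "department_head"],
    "department_head":  ["manager", "governance_board"],
    "governance_board": ["department_head", "data_owner"],  # formal channel
    "data_owner":       ["teammate", "governance_board"],
}

def path_length(graph, start, goal):
    """Hops on the shortest diffusion path (breadth-first search), or None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

print(path_length(org, "requester", "data_owner"))  # 2 hops via the informal contact

# Remove the informal contact and the same request travels the formal route instead:
org["teammate"].remove("data_owner")
org["data_owner"].remove("teammate")
print(path_length(org, "requester", "data_owner"))  # 4 hops via manager -> head -> board
```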

jesper

hi @julian.everett

Great feedback, appreciate it - it makes me think, which is really what the map is about :slight_smile:
I suspected there might be more dimensions to knowledge besides proximity and evolution. Evolution in the above is mostly thought of as “how to produce” - it might relate to codification in the paper.

Come to think of it, there’s a trap here: we don’t know the scales of the dimensions. They might be logarithmic or unevenly distributed. In other words, the weight of A->B might depend on where on the map A and B are. @alexander.simovic and @chris.daniel, something to ponder.
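
To illustrate the trap, here is a toy example (the logarithmic weighting is invented purely for illustration): the same nominal step costs very different amounts depending on where on the axis it happens:

```python
import math

def evolution_step_cost(x_from, x_to, eps=0.05):
    """Hypothetical position-dependent cost of a move along the evolution axis.
    Treating the axis as logarithmic means a step near genesis (x close to 0)
    costs more than the same nominal step near commodity (x close to 1)."""
    return abs(math.log(x_to + eps) - math.log(x_from + eps))

# The same 0.2 nominal displacement, taken at different places on the map:
print(round(evolution_step_cost(0.05, 0.25), 3))  # near genesis   -> ~1.099
print(round(evolution_step_cost(0.70, 0.90), 3))  # near commodity -> ~0.236
```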

julian.everett

Hi @jesper, no problem. I still think there is lots of mileage in exploring this idea further :slight_smile: - it also brings to mind Chris Matts’ work on Feature Injection and the notion of IT delivery as an information arrival process. Maybe the lack of scales is not such an issue: if your approach is used to estimate relative distances/times, e.g. for prioritisation purposes, then it could still be super useful?
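
For example (hypothetical items and relative distances, just a sketch): even uncalibrated distances are enough to rank which information will be slowest to arrive, which is all prioritisation really needs:

```python
# Hypothetical information needs with relative (uncalibrated) map distances.
needs = [
    ("pricing data",               0.2),
    ("legacy batch job internals", 0.9),
    ("API rate limits",            0.5),
]

# Rank by estimated time to information, slowest first, to decide what to chase earliest.
for name, distance in sorted(needs, key=lambda n: n[1], reverse=True):
    print(f"{distance:.1f}  {name}")
```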

julian.everett

Some more thoughts on the idea of subjective proximity. In FAIR data terms, maybe we are talking about what information is Findable and Accessible to the team doing the mapping? So some y-axis examples might be:

  1. Public - e.g. available online
  2. Privileged, accessible - e.g. available on intranet, discoverable via a functional search engine
  3. Privileged, contingent - e.g. someone on the team reckons they can find the answer in <24hrs via their informal networks/comms channels
  4. Private - e.g. someone on the team knows the information exists but they don’t know where to find it, so getting hold of it will require time-consuming escalations/engagement with bureaucracy.
  5. Unknown - no-one on the team knows whether or not the information exists, so the team must reasonably proceed on the assumption that it doesn’t (hence a standard driver for proliferating duplicated solutions, etc.)
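
If someone wants to plug these levels into the distance idea, they could be encoded as example y-axis values (the numbers below are entirely illustrative):

```python
# Illustrative mapping of the accessibility levels above to y-axis values
# (1.0 = fully findable/visible, 0.0 = unknown). The numbers are made up.
PROXIMITY = {
    "public":                 1.0,  # available online
    "privileged_accessible":  0.8,  # on the intranet, findable via search
    "privileged_contingent":  0.6,  # reachable in <24hrs through informal networks
    "private":                0.3,  # known to exist, needs escalations/bureaucracy
    "unknown":                0.0,  # may not exist at all
}

print(PROXIMITY["privileged_contingent"])  # 0.6
```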