The evolution of understanding

I'm working on my next book about human and organisational sensemaking, researching themes that range from the fundamentals of communication to the impact of AI. It’s a luxury to spend months on topics such as linguistics, cultural transmission, and organisational theory—especially when compared to Studio D's fast-paced consulting projects.

Two years into the writing process (and probably another five to go), the research has already reframed how I think about the world. To give a simple (giddy?) example of how wonderfully versatile humans are at parsing information: there are around 122 million native Japanese speakers, each with different body shapes, bone density, lung capacity, nasal passage and mouth structures, teeth and tongue sizes—most of whom have learned how to speak, listen, and sufficiently comprehend one another despite the range of sounds the human body can produce. That we've evolved as a species to accomplish this is a testament to what makes us uniquely human.

My motivation in going back to the fundamentals of communication is to lay the groundwork for understanding why our practices have evolved to where they are today, and to consider how they will change in the future. Whilst the book is focussed on what affects human decision making, in these articles I'll explore the role of photos in organisational communication. It might seem a bit niche—only some of you use photography in your work—but I think the lessons learned can be applied far more broadly.

A bit of context: most of our consulting work is for corporate tech clients and is company confidential, but these two public domain deliverables highlight how Studio D uses photography in research: Paddy to Plate, and When It Rains It Pours, conducted with and for the wonderful team at Proximity Designs.

Today I'll introduce a couple of concepts that have shaped my thinking. 

The evolution of understanding

The evolution of understanding framework (Figure 1), which I first referenced in The Field Study Handbook, breaks down the stages of understanding researchers typically move through, from an initial hypothesis or hunch through to attaining wisdom. In discussions, colleagues and clients often use the term “data” interchangeably with “information”, “knowledge”, and “insight”. To make the distinctions clearer, here I use lowercase “data” for atomistic units: a raw, unprocessed data point, the first pass of an interview transcript, an unedited video recording or photo. Uppercase “Data” encompasses any stage in this evolution of understanding that has gone through some form of sensemaking process.

Figure 1.

It’s worth noting two things:

  1. Skipping steps in this evolution fundamentally undermines our ability to ascertain the veracity of Data, and therefore to effectively apply what was learned (see the sketch after this list), and,

  2. “wisdom” is rarely part of a project deliverable; rather, it is generated weeks or even years later from the failed and successful application of insight to the original project ask. Failure is critical to wisdom generation because it starts to put boundaries around the limits of how insight can be applied.
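A minimal sketch of that first point, expressed in Python (my own illustration, not from the book; the stage names follow Figure 1, and the one-step-at-a-time guard is purely illustrative):

    from enum import IntEnum

    class Stage(IntEnum):
        """Ordered stages in the evolution of understanding."""
        DATA = 1         # raw, unprocessed: transcripts, unedited photos/video
        INFORMATION = 2  # data organised and given context
        KNOWLEDGE = 3    # patterns synthesised across information
        INSIGHT = 4      # actionable implications drawn from knowledge
        WISDOM = 5       # earned later, from applying insight (and failing)

    def advance(current: Stage, proposed: Stage) -> Stage:
        """Move one stage at a time; a skipped stage means the
        veracity of the resulting Data cannot be gauged."""
        if proposed - current != 1:
            raise ValueError(f"cannot jump from {current.name} to {proposed.name}")
        return proposed

    stage = advance(Stage.DATA, Stage.INFORMATION)   # fine
    # advance(stage, Stage.INSIGHT)                  # raises: KNOWLEDGE was skipped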

We can adapt this framework to consider how generative AI tools such as ChatGPT affect the evolution of our understanding (Figure 2, below). In this example, a reliance on generative AI output enables a person either to condense steps from weeks to minutes, or to skip some steps entirely. However, the black-box nature of the underlying AI models and their training data means that it is impossible to gauge the veracity of outputs, or to mitigate biases inherent in the data used to train the AI.

Figure 2.

Practitioners invested in a systematic approach to understanding a particular topic will likely adopt generative tools to articulate an informed hypothesis, but for the layperson wanting a quick answer the generative AI output will often suffice. The cost of reliance on unchecked outputs should be obvious:

  • a perception of having attained “knowledge” without effectively gauging its veracity;

  • a superficial sense of what constitutes insight; and,

  • a growing inability to build wisdom, because of limitations in understanding the Data.

There are many brilliant researchers working on and drawing attention to related issues such as bias in training data and transparency. As a starting point, I recommend reading research from Kate Crawford, Timnit Gebru, and others.

This is a fast-moving space, with frequent launches and updates to tools, including some that provide better context to original sources, making the edges of the black box somewhat less opaque. Fundamentally, though, a reliance on generative tools currently trades convenience against our ability to gauge Data veracity.

The weight and fluidity of Data

The second framework I’ll share today explores the concepts of Data “weight” and “fluidity”.

Figure 3.

Data weight encompasses two things: 

  1. the practical weight: the time and effort required to obtain, manage, make sense of, and apply Data, e.g. the collection of data analytics, or (in Studio D’s case) running international research projects with interview transcripts, video, photos, and other informative artefacts; and,

  2. the psychological “weight”: the burden of collecting data without the time to put it through a rigorous sensemaking process, which undermines a team’s confidence in its own abilities and leads to related issues such as low morale, shaky confidence when presenting research deliverables, and an inability to effectively address critical audience questions.

An excessive psychological weight of Data is mostly witnessed in inexperienced teams that over-collect data to the detriment of systematic sensemaking (a framework to help address this is shared in the footnotes).

The fluidity of Data is the ease with which it travels through (and sometimes beyond) an organisation: how easily it is discovered, how it is shared, and how it is introduced into conversations.

Photo. Three generation household, Lashio, Myanmar.

On our projects the highest fluidity is often found in a single photo, a photo + quote, or a photo + insight—these often become entry points for a wider audience to engage with "heavier" deliverables. The example given is from field research, but the concept applies equally to quant and analytics, and of course there are other important properties to explore too (which I'll cover in my next book).

On any research project we’re aiming to generate a portfolio of deliverables that combine high weight (usually the main report) with assets that have high fluidity (usually including photos).
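To illustrate that portfolio thinking with a toy example (the scores and thresholds below are invented for the sketch, not a Studio D metric):

    from dataclasses import dataclass

    @dataclass
    class Deliverable:
        name: str
        weight: float    # 0-1: effort to obtain, manage, and make sense of
        fluidity: float  # 0-1: ease of discovery, sharing, and conversation

    portfolio = [
        Deliverable("Main field report", weight=0.9, fluidity=0.2),
        Deliverable("Photo + quote", weight=0.1, fluidity=0.9),
        Deliverable("Photo + insight", weight=0.2, fluidity=0.8),
    ]

    # A healthy portfolio pairs a high-weight anchor with high-fluidity
    # entry points that lead audiences back to it.
    assert any(d.weight >= 0.8 for d in portfolio), "no high-weight anchor"
    assert any(d.fluidity >= 0.8 for d in portfolio), "no high-fluidity entry point"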

Whilst strong photographic assets are a mainstay of Studio D project deliverables, I recognise that readers and their organisations have very different comfort levels with photography. In future articles I’ll unpack our process and how we make it work, but for now I’ll share that we’ve only used dedicated photographers twice in the last decade—preferring instead to engage all team members in generating high-quality photo assets.

What’s next?

In the next article I’ll unpack the concept of “organisational data metabolism” before exploring the fundamentals of photography and why it often has the highest return on investment of any data type. This should set us up nicely for the final article(s), which will explore the impact of generative AI tools and advances in computational processing on how photos are collected, managed, and applied to downstream activities, from product creation to organisational communication.

Footnotes

To address overly "heavy" data collection, the data-collection to sensemaking ratio framework makes it easier to reflect on where time was spent in the research process, and to adjust this ratio on future projects. It recognises that sensemaking occurs throughout the evolution of understanding, starting with the hypothesis, regardless of how we label sensemaking to sell projects.

Figure 4.

In the Sensemaking for Impact Masterclass, which we’ve run over 35 times with diverse groups of practitioners, I ask attendees their optimal data-collection to sensemaking ratio, which for fieldwork typically comes out at 1:3 or 1:4 (although over the years outliers have included 200:1 and 1:40). There is no one “right” ratio—what is optimal for you depends on the domain you work in, your approach, tools, resources, etc. Whatever your situation, reflecting on past ratios helps refine your approach in the future.
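If you want to run this reflection on your own projects, the arithmetic is trivial. Here is a sketch (hours logged per activity is my assumed input, not how the masterclass frames it):

    def collection_to_sensemaking(collection_hours: float,
                                  sensemaking_hours: float) -> str:
        """Express logged time as a collection:sensemaking ratio,
        normalised so the smaller side reads as 1."""
        if collection_hours <= 0 or sensemaking_hours <= 0:
            raise ValueError("both activities need some time logged")
        if collection_hours <= sensemaking_hours:
            return f"1:{sensemaking_hours / collection_hours:g}"
        return f"{collection_hours / sensemaking_hours:g}:1"

    # e.g. two weeks in the field, six weeks of synthesis
    print(collection_to_sensemaking(80, 240))  # -> 1:3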

Whilst a 1:4 ratio is common in field research, projects are often sold as having a 1:1 ratio, a reflection of stakeholder priorities, of where the money is being spent, and of how "sensemaking" is defined.


You might also appreciate a related article: Considering AI as belief system. It is not a screed in favour of artificial general intelligence (AGI); it merely argues that people’s diverse situations, education levels, intelligence, and vulnerabilities will lead some to consider even today’s rudimentary AIs as comparable to other foundational belief systems, and to be explicit—it points to the need for greater transparency and stronger regulation.


Studio D has just launched our 2024 masterclass line-up, which we'll run in May, with early bird tickets now available, including a new session focussed on Photography in the Paradigm Shift.


To stay abreast of what Studio D publishes and finds interesting, join our community of 8,748 ethically grounded, internationally minded subscribers to our ~monthly Radar mailing list.
