Of bullet points, knowledge, and organizational ethnography

What a Sad Way to Earn a Living, by Phil Shirley (Creative Commons via Flickr)

Recently, I experienced the following scene during a client meeting to review preliminary research results from the “ethnography” phase of a short service design consulting engagement:

The research team, having been deeply engaged in fieldwork for several weeks, has worked up some preliminary findings as “themes.” These are presented to a small team from the client organization consisting of no more than a handful of people, but including an executive and several senior managers. Because we (the research team) are excited to share our discoveries, we have decided to call out only the “most obvious” themes for this early presentation and smooth over some of the rough edges in the data. We are providing just enough anecdotal evidence — basically, amusing, sad, or otherwise affecting narratives — to illustrate our themes and bring them to life a little, but we are keeping it at a high level. We are using sticky notes attached to whiteboards, partly because whiteboards are a little less definitive than slides, partly because this is how we have chosen to work on this project. Sticky notes work well for analyzing data collaboratively as a team.

As we go through these initial findings, the client’s responses seem “flat” and not especially interested. When we finish, someone from the client side says, “There’s actually not a whole lot here that’s new or surprising to us. In fact, we have had many of these themes as bullet points on strategy slides for a while now.”

It was a startling moment. Fortunately, we recovered from it, and the project was completed successfully. But it was a standout experience for a number of reasons, and it has stayed on my mind since we finished the project. I’d like to tell the “theoretical story” of what happened here, how we recovered, and what role ethnography and theory played in that recovery. I don’t need to (and won’t) name any client specifics, because they aren’t required for the story. I’ve seen stories like this before; it’s the pattern I’m interested in.

More often than not, organizations that hire ethnographers (whether directly or as part of a design agency) have very specific expectations about the scope of ethnography. Academically trained ethnographers already know that their critical perspectives on neoliberalism will, for the most part, not be welcome. But for some, especially those who are new to the world of work and organizations, it still comes as a surprise to realize that their work is expected to fit narrowly into the realm of “data gathering,” and that “analysis” and “interpretation” (drawing conclusions, deciding what’s valuable, determining action) will be left to others. In part, this is clearly a “domain” issue: people working for client organizations tend to think that, since they are ultimately responsible for the project’s outcome, they should be the ones interpreting the data and deciding what happens next. Another reason is that many organizational actors believe their understanding of the context is far richer than a researcher’s could ever be, a belief based on possibly erroneous assumptions about the nature and purpose of ethnography.

One expectation that I’ve often encountered is that an outside researcher, particularly one with an “unusual” method like ethnography, had better produce data that the client didn’t already know about. In other words, the cost of doing ethnography should be justifiable in terms that organizations can quantify and understand. Concepts like “new,” “measurable,” and “transformative” insights are readily invoked. This creates a transactional, even antagonistic relationship from the start, because it puts the ethnographer — whose method by definition cannot declare its anticipated outcomes before the work begins, at least not in quantifiable terms — in a defensive position.

So when my research team presented its findings and the client responded that they were nothing new, it sounded as if we were being told we had failed. Even though we had made it clear that these were only preliminary findings, the client was skeptical: if our most standout discoveries were all things that had already appeared as bullet points on strategy slides, then what had we really achieved? On one hand, one could say that we had “independently” verified some of the organization’s issues. On the other, it seemed that our method hadn’t produced anything “new.” Judged by the terms organizations use to measure success, we had failed, because we had produced themes identical to already existing organizational knowledge.

But let’s dig a little deeper to understand what was really going on here. I will take a short theoretical excursion to set up what I’m going to say next. Bear with me if you’re familiar with this part (or if you don’t care much about social science theory).

Bruno Latour in 2015, by G. Garitan (Creative Commons via Wikimedia Commons)

The French philosopher and anthropologist Bruno Latour has spent most of his career studying how knowledge is created. For his early, career-defining work, he focused on scientists, conducting ethnographic fieldwork in laboratories or on field trips with “natural” scientists. As part of an influential group of like-minded social scientists working in the 1980s and ’90s, Latour was instrumental in showing us, in detail, not only that knowledge is constructed rather than discovered, but how.

This process of construction involves two different kinds of actors — humans and nonhumans — as well as many processes and dependencies that cannot easily be classified as “science” (e.g., funding, office politics, “craft,” “skill,” etc.). There are many examples of nonhuman actors participating in research, but an easy-to-grasp one is the laboratory instrument. Constructed by others (presumably humans), it enables, animates, and constrains laboratory processes aimed at producing new knowledge. The laboratory instrument is often a “black box”: a tool whose own process of coming into being is no longer understood by its current users.

The work of making new knowledge is messy work. Far from the ideal that “natural” scientists (in fact, most scientists, even social scientists) would like us to believe about their work (that dimensions, truths, and relationships in nature are simply there, waiting to be discovered by a brilliant researcher), the scientific method is fundamentally contingent, variable, and about as far from a straight line as possible. Latour refers to chains or networks of knowledge creation: the impermanent and changing relationships between their elements ultimately produce outcomes that can be presented as facts or knowledge. This process is known as mediation, and its typical end point is the appearance of a new fact. At this point of appearance, known as translation, the “messy” network or chain that produced the fact is severed and disappears from view, leaving just the fact visible as a new truth that was “discovered” by brilliant human agency.

There are a number of reasons why this mechanism works this particular way, many of them constitutive of our particular moment in history. For example, the “shortcuts” offered by pre-produced facts and black boxes allow us to proceed quickly in producing new knowledge, while the removal of the messiness underneath it all affords us certainty about the nature of the world and our place in it, giving us confidence to produce further knowledge, expand the economy, and so on. (For the purposes of this blog post, I’ll leave it at this extremely high-level gloss of one of Latour’s main ideas. He has many more that are equally good or better.)

Back to our bullet points. As ethnographers, we sense that Latour is right: that knowledge is very much produced, that we create it in dialogue with our research participants, the research site, the specific questions we choose to ask, and so on. (This is also why we should be skeptical of the idea that we could simply be “gatherers” of field data, to be interpreted and transformed into “transformative insights” by someone else.) So when we were told that most of what we had presented was already part of the organization’s “official” strategy slides, we naturally started to wonder what processes of knowledge production had led to the construction of those facts. In other words, who had determined that these were the organization’s problems? And how? Had other, prior research been conducted? What methods had been used? In this case, clear answers were not forthcoming, and in my experience this is fairly typical of organizational knowledge, which often presents as a “severe case” of Latourian translation. Since knowledge in organizations — despite decades of “knowledge management” initiatives — is not valued in the same way that it is by academic researchers, religious practitioners, or community archivists, the memory of its genealogy is often quickly lost.

This is how those bullet points ended up on those slides. Something else important happened once these truths became documented strategies in someone’s PowerPoint presentation: they turned into one particular person’s or group’s problem. That they had been identified and written down as strategy objectives also meant that everyone else could begin to wash their hands of any imperative to do something about them. They turned into fixed, inactive, ossified organizational knowledge. Figuratively speaking, looking at those slides made everyone feel that “someone’s got this,” and that the problems would eventually be addressed.

And it is this comforting assumption that we needed to challenge in our project. Knowing what was wrong, in the manner of a bullet point on a strategy slide, was never going to result in meaningful change — in fact, this kind of knowing may actively prevent change from occurring. We sense instinctively that this is true: once a concrete insight has been restated as an abstract strategic goal, it loses much of its potential for action. I think this happens because making change (whether corporate improvement or political activism, the thrust is the same) demands a large surface area to find purchase and traction. In other words, changing an organization (even one aspect of an organization) typically requires an array of different ideas, solutions to test, field trials, evaluations of successes and failures, and so on. Making change is not a one-shot affair in which research somehow produces the one breakthrough insight that leads to the obvious thing to do. Instead, change — just like research itself, just like creating a new product or service, or like a relationship between humans — demands a trial-and-error stance, a willingness to try a variety of things until one solution, or several in combination, brings about the desired result.

To enable designers to find and test the right kinds of solutions, ethnographic research needs to make visible a wide variety of data and possible interpretations. These can be ambiguous or even contradictory, as long as they are related to the problem. Most complex symptoms have multiple causes, so identifying a single cause is either a foreclosure of what is really there, or a sleight of hand whose purpose and payoff should be critically examined. In either case, the result is fiction, not research.

In the project this case comes from, we stood our ground and continued our work as planned. We used a visual, tactile data analysis method that allowed us to “physically” document our process and our messy interim discoveries, some of which contradicted expected outcomes. Even after we distilled our findings into “final” themes, we preserved the full data set and our analytical and interpretive decisions, so that future readers could go back and understand how we arrived at them. This was not gratuitous transparency, reflexivity, or academic navel-gazing, but a systematic attempt to keep open as many surfaces for future design solutions as possible. The client should — theoretically at least — be able to benefit from our ethnographic study for many years to come, not because it produced particularly “new” or “transformative” insights, but because it made visible a large surface area of “problem space” that current and future solution designers can work with.

As we think about the ethnographic method and its value and impact in organizational work, this is useful to remember. Ethnographic research is not about somehow magically distilling the one transformative insight that all previous research failed to produce. It is also not merely about data gathering and leaving the interpretation to someone else. It is about making visible the multivalent, ambiguous, and contradictory complexity of what we discover, and preserving the messy reality of how we arrive at our insights, so that change becomes possible along multiple vectors. Regardless of the kind of change we are looking to enable, it will require many small steps, some of which will fail. Ethnography, among other things, helps us prepare for the inevitable failure of putting all our eggs in one basket.
