

Illustration by Kevin Cornell

Seeing the Elephant: Defragmenting User Research

by Lou Rosenfeld August 27, 2013
Published in Information Architecture, Interaction Design, Usability, User Research

A WTF moment in Silicon Valley

Like everyone else who 1) has performed user research, and 2) is over age 40, I spent the requisite decade or two wandering a wilderness inhabited by misguided folks who assumed that, at best, users' behaviors and opinions were but minor considerations in the design process.

So imagine my shock when, about five years ago, I found myself trolling (AKA consulting) down the corridors of a large Silicon Valley tech company. You most definitely know this company; in fact, you've likely complained bitterly about your experience with their products. Naturally, I expected to find precious little sensitivity there to users' needs, much less any actual user research.
Instead I encountered a series of robust, expensive, well-staffed teams of researchers, many with doctorates, employing just about every imaginable method to study the user experience, including (but not limited to):

Brand architecture research

Call center log analysis
Card sorting research

Clickstream analysis

Field studies

Focus groups

Market research

Mental model mapping

Net Promoter Score (http://www.netpromoter.com/) surveys

Search analytics

Usability testing

Voice of the customer research

The company had all this research into what their users were thinking
and doing. And yet their products were still universally despised.


The fable of the blind men and the elephant

You've heard this one before. Some blind men walk into a bar… Later, they happen upon an elephant. One feels the trunk and pronounces it a snake. Another feels a leg and claims it's a tree. And so on. None can see the Big Picture.
Each of those teams is like one of those blind men. Each does an amazing job at studying and
analyzing its trunk or leg, but none can see the elephant. The result is a disjointed, expensive
collection of partial answers, and a glaring lack of insight.

Forget Big Data; right now, our bigger problem is fragmented data that comes from siloed user research teams. Here's a simple example: one team may rely upon behavioral data, like a shopping cart's conversion rate, to diagnose a major problem with their site. But they can't come up with a solution. Meanwhile, just down the hall, another team has the tools to generate, design, and evaluate the required solution. Unfortunately, they don't know about the problem. How come?

Because these two teams may not know that the other exists. Or they aren't encouraged by their organization to communicate. Or they don't share enough common cultural references and vocabulary to have a reasonable dialogue, even if they wanted to. So synthesis doesn't happen, the opportunity for game-changing insight is missed, and products and services continue to suck.

I've since encountered the same problem in all sorts of industries and places outside the Valley. Even relatively small companies like Aarron Walter's MailChimp (/article/connectedux) struggle with fragmented user research.

Organizations that now invest in user research must resist the urge to congratulate themselves; they've only achieved Level 1 status. How can we help them reach a higher stage in their evolution, one where the goal isn't simply to generate research, but to achieve insight that actually solves real design problems?

I wish there were a pat answer. There simply isn't.

But we can create conditions that get those blind men talking together. Consciously exploring and addressing the following four themes (balance, cadence, conversation, and perspective) may help researchers and designers solve the problems all that precious (and expensive) user research uncovers, even when their organizations aren't on board.

Balance: Avoiding a research monoculture

Just as we favor the research tools that we find familiar and comfortable, large organizations often use research methods that reflect their own internal selection biases. For example, an engineering-driven organization may invest far more in its toolsy analytics platform than in what may appear to them as nebulous ethnographic studies.

If you're only listening to one blind man, you'll be stuck with an incomplete and unbalanced view of your customers and the world they inhabit. That's risky organizational behavior: you'll miss out on detecting (and confirming) interesting patterns that emerge concurrently from different research silos. And you likely won't learn something new and important.

A healthy balance of research methods and tools will give you a chance to really see the elephant. Sounds simple, but it's sadly uncommon in large organizations for two reasons:

1. We don't know what we don't know. For example, you might have done dozens of field studies, but know nothing about A/B testing.

2. We don't know what to use when. There are so many potential approaches that it's hard to know which to use and how to optimally combine research methods.

Plenty of good books can introduce you to user research methods outside your comfort zone. For example, Observing the User Experience (http://www.amazon.com/ObservingUserExperienceSecondPractitioners/dp/0123848695/) and Universal Methods of Design (http://www.amazon.com/UniversalMethodsDesignInnovativeEffective/dp/1592537561/) will help you inventory research methods from the human-computer interaction world, while Web Analytics: An Hour a Day (http://www.amazon.com/Day/dp/0470130652/) will do the same for web analytics methods.

But a laundry list of different research methods won't, by itself, tell you which methods you should use to achieve balance. To make sense of the big picture, many smart researchers have also begun to map out the canon.

One of the most extensive and useful maps is Christian Rohrer's Landscape of User Research Methods (http://www.nngroup.com/articles/which-ux-research-methods/). It depicts research methods within four quadrants delineated by two axes: qualitative versus quantitative, and attitudinal (what people say) versus behavioral (what they do).
Use Christian's landscape as an auditing tool for your user research program. Start with what you already have, using this diagram first to inventory your organization's existing user research toolkit. Then identify gaps in your research methodology. If, for example, all of your user research methods are clustered in one of these quadrants, you need to find yourself some more (and some different) blind men.
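The audit itself can be as simple as tagging each method with its two axis labels and looking for empty quadrants. Here's a minimal sketch in Python; the inventory below is entirely hypothetical, stands in for whatever methods your own teams actually use, and the quadrant labels come straight from Christian's two axes:

```python
from collections import defaultdict

# Hypothetical inventory of one organization's methods, each tagged with
# Rohrer's two axes: (attitudinal|behavioral, qualitative|quantitative).
inventory = {
    "field study": ("behavioral", "qualitative"),
    "usability testing": ("behavioral", "qualitative"),
    "clickstream analysis": ("behavioral", "quantitative"),
    "user survey": ("attitudinal", "quantitative"),
}

# Bucket the methods into the four quadrants.
quadrants = defaultdict(list)
for method, axes in inventory.items():
    quadrants[axes].append(method)

# Any empty quadrant is a gap: a "blind man" you still need to recruit.
all_quadrants = [(a, q) for a in ("attitudinal", "behavioral")
                 for q in ("qualitative", "quantitative")]
gaps = [axes for axes in all_quadrants if not quadrants[axes]]
for a, q in gaps:
    print(f"gap: no {a}/{q} methods in the toolkit")
```

With this toy inventory, the script reports that the attitudinal/qualitative quadrant (interviews, diary studies, and the like) is empty, which is exactly the kind of blind spot the audit is meant to surface.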

Cadence: The rhythm of questions and answers

User research, like any other kind of effort to better understand reality, doesn't work well if it happens only once in a while. Your users' reality is constantly in flux, and your research process needs to keep up. So what research should happen when?

Just as a map like Christian's can help you make sense of user research methods spatially, a research cadence can help you understand them in the context of time. A cadence describes the frequency and duration of a set of user experience methods. Here's a simple example from user researcher and author Whitney Quesenbery (http://www.wqusability.com/):
Whitney's cadence incorporates a mix of research methods, gives us a sense of their duration, and, most importantly, maps out how frequently we should perform them. It helps us know what to expect from an organization's upcoming research activities, and figure out how other types of research might fit timewise.

To establish a cadence, first prioritize your organization's research methods by effort and cost. Simple, inexpensive methods can be performed more frequently. You might also take a shortcut: look for (and consolidate) the de facto cadences already employed within your organization's various user research silos.

Then consider how frequently each method could be employed in a useful way, given budget,
staffing, and other resource constraints. Also look for gaps in timing: if your research is coming in
on only a daily or annual basis, look for opportunities to gather new data monthly or quarterly.

Here's a sample cadence. Given that your organization will employ a different mix of research methods, your mileage will vary:


Call center data trend analysis: 2–4 hours (behavioral/quantitative)

Task analysis: 4–6 hours (behavioral/quantitative)

Exploratory analysis of site analytics data: 8–10 hours (behavioral/qualitative)

User survey: 16–24 hours (attitudinal/quantitative)

Net Promoter Score study: 3–4 days (attitudinal/quantitative)

Field study: 4–5 days (behavioral/qualitative)

I've added in the categories from Christian's two axes to ensure that our cadence maintains balance.
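One practical reason to write a cadence down as data rather than prose is that you can then ask it questions, such as how many research hours per year it actually implies, before committing a budget to it. A minimal sketch, using made-up frequencies and efforts purely for illustration (not a recommendation):

```python
from dataclasses import dataclass

@dataclass
class CadenceEntry:
    method: str
    every_days: int     # how often the method runs
    effort_hours: int   # rough cost per run
    axes: tuple         # labels from Rohrer's two axes

# Illustrative numbers only; your organization's mix will differ.
cadence = [
    CadenceEntry("call center trend analysis", 7, 3, ("behavioral", "quantitative")),
    CadenceEntry("user survey", 30, 20, ("attitudinal", "quantitative")),
    CadenceEntry("field study", 90, 36, ("behavioral", "qualitative")),
]

def yearly_effort(entries):
    """Approximate research hours per year implied by the cadence."""
    return sum((365 // e.every_days) * e.effort_hours for e in entries)

print(f"cadence implies ~{yearly_effort(cadence)} hours of research per year")
```

Because each entry also carries its axis labels, the same structure can feed the balance audit described earlier: one small table answers both "how often?" and "are we lopsided?"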

Balance and cadence can help organizations get the right mix of blind men talking, and make sure they're talking regularly. But how do we enable dialogue between different researchers and get them to actually share and synthesize their work?

Conversation: Getting researchers talking

Getting people to talk is easier said than done. If your user researchers have HCI backgrounds and
your analytics team is mostly engineers, their languages and frames of reference may be so different
that they crush any hope of productive conversation.

To make that conversation more likely to succeed, it's helpful to identify at least a few shared references and vocabulary. In effect, look to develop something of a user research pidgin that enables researchers from different backgrounds to understand each other and, eventually, collaborate.
A concept from sociology, boundary objects (http://en.wikipedia.org/wiki/Boundary_object), can be useful here. Boundary objects are two items from different fields that, while not exactly the same thing, are similar enough that they can enable a productive conversation between groups. For example, personas and market segments, or goals and KPIs, could be considered boundary objects.

Dave Gray (http://www.davegrayinfo.com/), co-author of Gamestorming and The Connected Company, has taken the idea further, developing a simple process (http://www.gogamestorm.com/?p=58) for identifying a fuller boundary matrix of common concepts.
While Dave's process will help you determine common concepts and vocabulary, it's still a Big Win to get broad acknowledgment that, while you and your colleagues may be speaking (for example) English, you're really not speaking the same language when it comes to user research. That realization will make it much easier to meet each other halfway.


Common language makes it easier to have an effective interdisciplinary dialogue. So do stories that demonstrate the value of that dialogue. Can you tell a story that shows the power of getting the blind men to talk? Here's one Jared Spool (https://www.uie.com/about/), a master storyteller for sure, told me a decade or so ago:

The analytics team at a large U.S. clothing retailer found, when analyzing its site search logs, that there were many queries for the company's product SKUs, and that they were all retrieving zero results. Horrified, they quickly added SKUs to their catalog's product pages (an easy fix for a big problem) but they still couldn't understand how customers were finding the SKUs in the first place. After all, they weren't displayed anywhere on the site.

The analytics team could tell what was going on, but not why. So they enlisted the team responsible for performing field studies to explore this issue further. The field study revealed that customers were actually relying on paper catalogs (an old, familiar standby) to browse products and obtain SKUs, and then entering their orders via the newfangled website, which was deemed safer and easier than ordering via a toll-free number.

The story may be an interesting example of cross-channel user experience. But for our purposes, it's a great way to show how two very different user research methods (search analytics and field studies, wielded by completely separate teams) deliver compounded value when used together.
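The analytics half of that story is also a nice reminder of how cheap the first step can be: finding zero-result queries that match a pattern is a few lines of code. Here's a toy sketch; the log format and the "AB-12345" SKU shape are assumptions for illustration, not the retailer's real scheme:

```python
import re
from collections import Counter

# Toy search log of (query, number_of_results) pairs; in practice this
# would come from your site search analytics export.
search_log = [
    ("rain jacket", 48),
    ("KH-10234", 0),
    ("KH-10234", 0),
    ("TR-55120", 0),
    ("wool socks", 31),
]

# Hypothetical SKU shape: two uppercase letters, a dash, five digits.
SKU_PATTERN = re.compile(r"^[A-Z]{2}-\d{5}$")

# Count zero-result queries that look like SKUs -- the analytics team's
# first clue that something was wrong.
zero_result_skus = Counter(
    query for query, hits in search_log
    if hits == 0 and SKU_PATTERN.match(query)
)

for sku, n in zero_result_skus.most_common():
    print(f"{sku}: {n} failed searches")
```

What this sketch can't tell you, of course, is why anyone searched for a SKU at all; that answer took a field study, which is precisely the article's point.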


Of course, sometimes it's not that hard to get interdisciplinary dialogue going; you just might need to resort to some innocent bribery.

Samantha Starmer (http://www.linkedin.com/in/samanthastarmer), who led design, information architecture, and user experience groups for years at REI, relates her experience in creating dialogue with her counterparts in the marketing department. Samantha made a point of regularly trekking across the REI campus over to their building to peek at the research they had posted in their war rooms and on cubicle walls. She would even buy candy for the marketing people she wanted to get to know. She did whatever she could to get them talking, and sharing, in an informal, human way.

Samantha's guerrilla efforts soon bore fruit: her team developed relationships not just with marketing, but with everyone touching the customer experience. Informal lunches led to regular cross-departmental meetings and, more importantly, to sharing research data, new projects, and customer-facing design work across multiple teams. Ultimately, Samantha's prospecting helped lead to the creation of a centralized customer insights team that unified web analytics, market research, and voice of the customer work across print, digital, call center, and in-store channels.

Perspective: Making sense and making function

So far, we've covered the need for a balanced set of user research tools and teams, coordinating their work through orchestration, and getting them to have better, more productive conversations. But that's quite a few moving parts: how do we make sense of the whole?
Maps like Christian Rohrer's landscape can help by making sense of an environment that we might find large and disorienting. You'll also find that the process of mapping is, in effect, an exercise in putting things together that hadn't been combined before.

But maps are also limiting: they are hard to maintain, and more importantly, you can't manipulate them. To overcome this, the MailChimp team took a very different route to sense-making, employing Evernote as a shared container for user research data and findings (see Aarron Walter's article, also in this issue of A List Apart, Connected UX (/article/connectedux)). It's actually an incredibly functional set of tools, all pointed at MailChimp's collective user research, but, unlike a map, it struggles to make visual sense of MailChimp's user research geography.

Would it make sense to combine your map and your container? Dashboards are both orientational, like maps, and functional, like containers. They're also attractive to many leaders who, when confronted with their organization's complexity, seek better ways to make sense and manage. But before you get your hopes up, remember that there's a reason you don't steer your car from its dashboard. Like any other design metaphor, dashboards tend to collapse as we overload them with functionality.
Perhaps some smart team of designers, developers, and researchers will be able to pull off some
combination of user research maps and containers, whether presented as a dashboard or something
else. In the meantime, you should be working on developing both.

Blue skies

These themes (balance, cadence, conversation, and perspective) provide a framework for positioning your organization's user research teams to talk, synthesize, and, ultimately, come up with more powerful insights. So, go make friends, have conversations, and get outside of your comfort zone. Take a step back and look at what you and your counterparts are doing, and when. Then sketch maps and other pictures of which kinds of user research are happening in your organization, and which are not.

Once you've done that, you'll be armed to bring senior leadership into the conversation. Ask them what evidence would ideally help them in their decision-making process. Then show them your map of the imperfect, siloed user research environment that's currently in place. Balance, cadence, conversation, and perspective can help make up the difference.

About the Author

Lou Rosenfeld
Lou Rosenfeld is the founder of Rosenfeld Media (http://rosenfeldmedia.com/), co-author of Information Architecture for the World Wide Web (http://oreilly.com/catalog/9780596527341/), and author of Search Analytics for Your Site. His most recent A List Apart article is Beyond Goals: Site Search Analytics from the Bottom Up (/article/beyond).

ISSN 1534-0295 · Copyright 1998–2017 A List Apart & Our Authors