This document collects my notes about key concepts and techniques covered in the
Harvard Business School MBA elective, Product Management 101 (course
description; 2014-14 syllabus). The document is a work-in-progress which I’ll be
expanding/refining over the coming months. DO NOT ASSUME THAT YOU’VE
DOWNLOADED THE FINAL VERSION. The document is deliberately dense and
meant to convey lots of information on any topic with few words. As such, it is
structured as an outline — one that won’t be fun or easy to read, because it omits
examples and smooth prose transitions between ideas.
THE PM ROLE
DESIGN THINKING
CUSTOMER DISCOVERY
SOLUTION GENERATION
SOLUTION VALIDATION/MVP TESTING
THE MRD
UX PRINCIPLES
TECHNOLOGY BASICS
PRD/DRAFTING SPECIFICATIONS
WORKING WITH DEVELOPERS
AGILE VS. WATERFALL
PROJECT MANAGEMENT
USABILITY TESTING
LAUNCH MARKETING
QA BASICS
POST-LAUNCH ANALYTICS
SQL BASICS
A/B TESTING/QUANTITATIVE EXPERIMENT DESIGN
ADWORDS BASICS
THE PM ROLE
DESIGN THINKING
o Implementation. Finally, use convergent thinking to select the most
promising solution.
o Process is iterative, so you alternate between
ideation/generation/divergence and
implementation/validation/convergence, in the process broadening
then narrowing the solution space – with a trend toward overall
narrowing (picture the breadth of solution space under exploration
on the vertical axis against time on the horizontal axis: an
accordion bigger on the left than on the right)
Critique (Buxton): “so fundamental to their practice that designers see no
more need to talk about it in a book than to mention that they also need to
eat breakfast”
Relationship to product management
CUSTOMER DISCOVERY
Research Techniques
o Often, you can learn a lot from interviewing extreme users (e.g., heavy
users, individuals who use product in unusual ways, etc.)
Customer Interviews
• Purposes: both generating and validating hypotheses
– Use interviews to generate hypotheses regarding: 1) potential
customer segments and their unmet needs; 2) potential customers’
current solutions for meeting those needs, and their reasons for
dissatisfaction with those solutions; 3) structure/motivations of
decision making unit; and 4) potential barriers to customers adopting
your envisioned solution (e.g., budget constraint; risk aversion).
– You have validated hypotheses when multiple interviews converge on
the same conclusions
• “Validation” doesn’t mean you have found “truth” – it means
you have, at least for now, failed to falsify the hypothesis and
you can proceed in the same direction with some confidence
• Limitations
– Interviews shouldn’t be used to collect feature lists or solicit product
design advice; rather, you should ask how envisioned solution might
help better perform a specific task
– Interviews are not good for pricing research, due to self-interested
behavior or lack of buying authority
– With radical innovation, potential users may not be able to express
preferences, but they can still describe their problems with current
solutions
• Guidelines for conducting customer interviews
– Set 2-3 explicit research objectives; develop interview guide
– Select sample and recruit interviewees (see above for guidelines)
– Decide who else will join you
• Who needs input: engineers? sales?
• 2+ is better: one moderator + one note taker
• However, consider risk of disjointed jumps with
multiple interviewers, so coordinate roles
– Brace yourself emotionally to hear things you don’t want to hear (Giff
Constable)
– Conduct interview
• Determine approach: on site with customer is preferable if
physical environment plays a role in problem/solution;
otherwise, phone interview is typically more efficient
• Decide if you’ll record (Alvarez p. 83). Pros: capture
everything; you can listen more carefully. Cons:
awkward request to start; increases analysis time;
makes some interviewees more cautious.
• Length: 45 minutes is ideal according to Alvarez, p. 100,
but Cespedes says 1-2 hours is appropriate for field
interview
• Number of interviews: Alvarez (p. 116) suggests you
tend to hit diminishing returns in terms of hypothesis
generation/validation after 15-20 interviews.
• Choose first questions carefully; don’t overdo icebreakers
• Alvarez, pp. 88-89: make interviewee confident she’ll be
helpful; set expectations (“I’ll mostly be listening”); get
interviewee talking by being quiet yourself after posing
first substantive question.
• Disarm politeness: tell people you need honest answers,
including criticism (Giff Constable)
• Define terms
• Use mostly open ended questions and listen WAY more than
you talk
• Restate answers to confirm understanding
• Some basic customer discovery questions (Alvarez p. 60)
• Tell me how you do [task] today.
• Do you use any [tools/tricks/apps] to do the task?
• In completing the task, if you could wave a magic wand
and do anything you can’t do today, what would it be?
• Last time you did task, what were you doing right
before and right afterward?
• Should I have asked anything else?
• Justin Wilcox’s script:
1. What’s the hardest part about [problem context]? [Context should not be too
narrow or broad]
2. Can you tell me about the last time that happened? [stories are especially vivid]
3. Why was that hard?
4. What, if anything, have you done to solve that problem? [if they aren’t looking for
a solution already, we don’t have a problem worth solving!]
5. What don’t you love about the solutions you’ve tried? [this becomes your Unique
Selling Proposition]
Other questions
How often do you experience this problem?
How much are you spending to solve this problem now?
Where do you find information about [problem context]?
interviews, process changes (Cespedes, p. 7). Cespedes suggests after-
action review: how did we do vs. intent? How can we improve? Also
focusing on process, Alvarez suggests (pp. 102-103):
• Did opening go smoothly?
• Did I ask leading questions?
• Did I get any bland answers?
• Did I use new questions on-the-fly that worked well and should
be added to interview guide?
• Was there anything I failed to ask/learn?
• Where did interviewee show the most emotion?
• Human-centered design interview tactics
– “Show me” and “Draw it” – ask interviewees to demonstrate things
they interact with
– Toyota Production System “Five Whys”
– Ask interviewees to “think aloud” as they complete a task
Focus Groups
• Purpose: typically, exploring customers’ current usage patterns and attitudes
– Group setting can be especially effective for eliciting reactions in high-
involvement categories with emotional, status, or life-style
associations.
– Good at finding desires, motivations, values and memories (Goodman,
p. 142)
• Limitations
– Best for generating or disconfirming hypotheses (answer to “Do you
want…?” is usually “Yes”), rather than validating hypotheses
– Best for exploring usage of/attitudes toward existing products by
mainstream customers. Subjects often struggle to state preferences
for products they haven’t experienced, and early adopters’ views may
be too disparate to converge in a group discussion.
– Conducting an effective focus group requires moderator skill, so if
budget allows, rely on a professional researcher
• Guidelines: 6-10 users of a homogeneous type (to encourage people to open
up with less fear of being judged), often compensated, 60-90 minutes
– Introduce objectives & ground rules (e.g., anonymity); participant
intros; how/when they use/bought product; discussion of
likes/dislikes
– To avoid herding, ask subjects to jot down ideas first
– Use whiteboard to guide discussion; never disagree or take sides
(“How do others feel about that…”)
Observation/Ethnography
IDEO: you can learn a lot from extreme users (e.g., an eight- or
eighty-year-old struggling with a can opener)
Likewise, look for “hacks” and workarounds to identify unmet needs.
Synthesize observations using empathy map with four quadrants: DO (lower
left), SAY (upper left), THINK (upper right), FEEL (lower right). Then look for
new/surprising/contradictory patterns to gain insight on latent needs
(Kelley & Kelley, p. 223)
A journey map describes the steps a customer undertakes in solving a
problem/completing a task, arrayed in a timeline, “with special attention to
emotional highs and lows and the meaning that the experience holds for the
customer” (Liedtka, “Ten Tools for Design Thinking,” UVA case BP-0550)
– Goal: identify pain points/unmet needs
– Include even small steps (Kelley & Kelley, p. 234)
– Can be completed for different personas (see below)
Customer Surveys
• Purpose: assess 1) usage patterns; 2) past purchase behavior; 3) satisfaction
with existing solutions; 4) strength of needs; 5) feature preferences (but be
wary of responses!); 6) purchase intent (again, be wary of responses!); 7)
correlation of above with demographic/behavioral/psychographic attributes
(for segmentation)
– Responses can provide input for estimate of potential market size
• Limitations
– Must understand problem well enough to frame appropriate
questions, so surveys should typically be done only after a round of
customer interviews
– Survey responses regarding purchase intent for radically new
products are not reliable
– More broadly, questions should focus on past/present rather than
expected future behavior, especially when exploring an aspirational
product (e.g., “Would you like to go to the gym more often?”)
• Guidelines
– Conduct pilot testing to ensure question clarity and avoid excessive
length
– As with interviews, avoid leading questions such as “Would you like
XYZ?”
– Ensure adequate sample size; avoid convenience sampling
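The “adequate sample size” guideline can be made concrete with the standard formula for estimating a proportion within a chosen margin of error. A minimal sketch (the 95% z-value and the example margins are illustrative assumptions, not from the course notes):

```python
import math

def survey_sample_size(margin_of_error, p=0.5, z=1.96):
    """Minimum number of responses needed to estimate a proportion
    within +/- margin_of_error at the confidence level implied by z
    (z=1.96 is roughly 95%). p=0.5 is the worst case, yielding the
    largest required n."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# e.g., to estimate "% dissatisfied with current solution"
# within +/- 5 points at 95% confidence:
n = survey_sample_size(0.05)  # 385 responses
```

A tighter margin grows n quadratically (halving the margin roughly quadruples the required responses), which is one reason pilot testing question clarity first is cheap insurance.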
Competitor Analysis
Competitor feature audit, including benchmarking (e.g., with letter grades,
“moon charts”)
Surveys can uncover frequency of use, features used, strength of loyalty, etc.
Competitor usability testing (see section below for “How To” guidance) can
uncover unmet needs for existing solutions
Personas
What Are Personas?
• A persona is a precise description of a representative user of a given type.
o How are personas different from customer segments? Customer
segments, which typically reflect shared
demographic/behavioral/attitudinal characteristics revealed through
large sample quantitative analysis, are used to target marketing
efforts. Personas reflect personal stories/goals/motivations, as
revealed through one-on-one interviews and ethnographic research;
they typically are used to provide context on real customers’ needs in
the product design process.
• Persona description might include: 1) image and name (NAME is crucial!); 2)
key demographic and behavioral information or job description; 3) key
needs/worries/motivations/GOALS and their implications for a solution; 4)
quotes
o Some generic needs/tradeoffs to consider in developing personas
(Alvarez, pp. 25-26)
Preference for time vs. money
Risk aversion/prefer predictability vs. novelty seeking
Decision maker vs. taker
Independence vs. reliance on others’ views
How tech savvy?
• 3-7 personas are typical for a business; 1-3 should be primary. Having too
many suggests a product that’s too diffuse. Cooper, 135: Sometimes good to
include personas of potential users you are NOT designing for. Can have
personas for influencers/buyers, but you don’t design UI for them.
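The persona fields listed above can be captured as a lightweight data structure, which some teams find handy for keeping personas visible in specs and reviews. A sketch only; the field names and the sample persona are hypothetical, not from the course:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str                   # a memorable NAME is crucial
    image: str                  # path/URL to a representative photo
    demographics: str           # key demographic/behavioral info or job description
    goals: list = field(default_factory=list)         # needs/worries/motivations/GOALS
    implications: list = field(default_factory=list)  # what the goals imply for a solution
    quotes: list = field(default_factory=list)
    is_primary: bool = False    # 1-3 of your 3-7 personas should be primary
    design_target: bool = True  # influencer/buyer personas exist, but don't drive UI

# hypothetical example persona
casey = Persona(
    name="Commuter Casey",
    image="personas/casey.png",
    demographics="32, urban renter, rides transit 10x/week",
    goals=["minimize time spent planning trips"],
    implications=["one-tap access to saved routes"],
    quotes=["I just want to know when the next bus comes."],
    is_primary=True,
)
```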
Example Persona
SOLUTION GENERATION
more human because they see humans as imprecise and imperfect
computing devices.
o Norman, 42: use root cause/5-Whys analysis to understand true goals.
Levitt: users don’t want quarter inch drill, they want a quarter inch
hole. Norman, true, but not root cause: users want to hang a picture.
Or, they want to decorate their house.
Brainstorming/Structured Ideation
Goal: generate LOTS of new solution concepts, leveraging design criteria and
insights about personas and their needs from exploration phase
IDEO rules for brainstorming: defer judgment; encourage wild ideas; stay
focused on the topic; build on the ideas of others (Brown, Change by Design,
p. 78)
o Brown: the “Butterfly test” gives each participant small post-its, which
are used to vote on which ideas should move forward. [Note that this
approach is vulnerable to social pressure.]
o Start with a crisp problem statement and share relevant information
on personas, competitors’ efforts to solve problem, etc.; number ideas;
use space and cover walls; avoid deference to HiPPO – highest paid
person’s opinion (Kelley, Art of Innovation, p. 56)
Techniques to encourage creative problem solving
o Envision how a negative can be a positive (Liedtka)
o Pretend to be someone else (Liedtka)
o Think about the opposite (Kelley & Kelley, p. 102)
“Design Studio” technique (Jeff Gothelf, Lean UX, pp. 37-41)
o Multidisciplinary team of 5-8
o Problem definition and constraints (30 minutes)
o Individual idea generation (10 minutes): ask each team member for
six ideas. Encourage sketching.
o Presentation/critique of ideas: each member states persona and pain
point their idea addresses; critique seeks to clarify intentions rather
than evaluate idea
o Iterate and refine: individually, members spend 10 minutes
developing a single “big idea,” then the group critiques these
o Team idea generation, converging on a solution with best prospects
for success (45 minutes)
Goldenberg et al. (HBR March 2003) patterns for generating product
innovation ideas. Start by breaking product into components, then explore
patterns
o Subtraction especially to avoid feature creep/excessive complexity,
e.g., Philips removing all buttons from DVD player
o Multiplication with altered copies, e.g., Gillette double-bladed razor
(with 2nd blade at different angle to improve the shave)
o Division into components then reintegrate parts in new way, e.g.,
stereo components broken out of integrated HiFi units
o Unification of task previously performed by separate elements into
one element, e.g., defroster that serves as radio antenna. Good when
cost control is imperative.
o Should be disposable. This helps avoid premature attachment to
specific solutions.
Prototypes vary in terms of fidelity, i.e., closeness to eventual design
o Early, low fidelity prototypes focus on assessing value to potential
customers; later stage prototypes are often of higher fidelity and are
intended to resolve technical issues, gain clarity on workflow or
integration, secure feedback on visual design, etc.
o Brown, Change by Design, p. 91: surface details may require attention
so that potential customers are not distracted (i.e., false negative risk)
o Prototypes should be clean/clear/obvious. They should include some
images. But too much visual polish in prototypes (e.g., obsessing over
colors, font size, etc.) is likely to entail wasted effort as designs
change. (Klein, p. 129)
Also, having a polished design may make you sub-consciously
less likely to make major and necessary changes.
Likewise, a polished design feels more “done,” so testers are
more inclined to be polite in giving feedback. Or, they may be
distracted by visual elements (color choices etc.) and not focus
enough on ease of interaction etc.
o HH: high vs low fidelity distinction can be misleading. Focus on
purpose of prototype and intended audience, not tool used to create it.
It isn’t always the case that a finished-looking prototype is used late in
the design process, and vice versa. Depends in part on audience. E.g.,
Apple Knowledge Navigator video was meant to make the concept
concrete for a broad audience
o Buxton pp. 383+ cites research showing that quality of prototypes
didn’t impact test subjects’ perceptions of usability
Buxton 383: subjects are reluctant to criticize designs because it gives
impression they are negative people and they don’t want to hurt tester’s
feelings.
o BUT!!! When shown 3-4 designs, the willingness to criticize goes
away. Same design gets much higher rating when shown alone than
when shown in tandem with rival design ideas.
Formats for early stage software prototypes:
Sketches, which Buxton distinguishes from prototypes
Sketches (vs prototypes): evoke and explore in tentative manner
(vs describe, refine, test)
Storyboards to demonstrate user experience, often with post-its or note
cards allowing easy manipulation
Wireframes — skeletal representations of a software application or
website that map key features/functions, navigation elements, and links
between screens
Paper prototypes – can rely on manual replacement of “screens” to
simulate interaction
o Pros: first iteration is fast
o Con: rapid iteration can be tedious; simulation of
interaction is artificial
Interactive wireframes (as with Balsamiq) allow users to click on
elements and be transported to the destination screen
When wireframing, writing copy for navigation elements forces
you to understand what a screen is about (Klein, 124)
How Does Prototype Testing Relate to Next Section’s Topic, MVP Testing? A
prototype is a representation of the envisioned system, and may be functional or
non-functional. Product prototypes can be used in ways that conform to the
definition of an MVP. Soliciting user reaction to a prototype (“Would this software
help you do XYZ?”) is one way to validate a solution. Likewise, functional prototypes
can be used to confirm that key technologies will work as intended.
How do the MVP categories in the next section relate to prototypes? “Limited
Use MVPs” require a functional prototype; typically, so do “Wizard of Oz
MVPs,” which have a user-facing interface that resembles that of the
envisioned application but an invisible backend. Concierge MVPs tend to rely
on manual front- and backend processes; they are not really faithful
representations of the envisioned system. Smoke tests may or may not
attempt to represent the envisioned system. A landing page test may share
only a text description of the planned service. A video MVP – like the one
used to demonstrate Dropbox before its beta test – will often depict a
prototype.
SOLUTION VALIDATION THROUGH MVP TESTING
Definitions
Lean startup: A new venture that tests business model hypotheses using
Minimum Viable Product tests. “Lean” does not necessarily imply “low cost”;
rather, it refers to an imperative to “avoid waste.”
Minimum Viable Product: The smallest set of product functionality and
operational capabilities — including the possibility of a non-functional
product simulacrum — required to rigorously disprove a business model
hypothesis.
Falsifiable hypothesis: A hypothesis is falsifiable when it can be
disconfirmed through an experiment. For example, NOT “Our product will
grow through word of mouth”; RATHER, “Our viral coefficient will exceed 0.5.”
False negative: A test result that indicates a hypothesis has been
disconfirmed when in reality it is valid. Common cause: reaction to badly-
designed MVP
False positive: A test result that fails to disconfirm a hypothesis that in
reality is not valid. Common cause: sampling friends or super-enthusiasts
who aren’t representative of “typical” early adopters
Pivot: Changing some business model elements while retaining others.
Product-market fit: Occurs when the venture has the right product for the
market: one with demonstrated demand from early adopters and with solid
profit potential. Lean startups do not commence scaling until they achieve
product-market fit
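The falsifiable-hypothesis example in the definitions above (“Our viral coefficient will exceed 0.5”) can be checked with simple arithmetic once an MVP test yields data. A sketch; the user and invite counts are illustrative assumptions:

```python
def viral_coefficient(users, invites_sent, invite_conversion_rate):
    """K = average invites per user x conversion rate of those invites.
    Each cohort of existing users yields K new users per cycle; K > 1
    means self-sustaining growth, but even K < 1 amplifies other
    acquisition channels."""
    invites_per_user = invites_sent / users
    return invites_per_user * invite_conversion_rate

# hypothetical MVP test: 1,000 users sent 4,000 invites; 15% converted
k = viral_coefficient(1000, 4000, 0.15)  # 0.6
failed_to_falsify = k > 0.5  # proceed with some confidence, not "truth"
```

Note the framing matches the interview section: a passing result doesn’t prove the hypothesis; it merely fails to falsify it for now.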
– Industries with small numbers of large customers who regularly
compare notes on vendors
– Products that are especially vulnerable to IP theft (e.g., distinctive
designs that can be easily/legally copied)
MVP Purpose
Typical concerns
Reputation damage due to poor quality and/or limited
functionality/capability. But: 1) quality won’t necessarily be poor – it should
be set at a level that yields reliable test feedback; and 2) small numbers limit
impact.
Early launch will lead to concept theft. Maybe, but value of learning typically
more than offsets this risk.
MVP Types
Limited Use MVP reduces product functionality relative to envisioned solution, e.g.,
Intuit SnapTax (Ries, pp. 29-31); IMVU (Stanford case E254)
Smoke Test (Alvarez p. 134) radically constrains both functionality and operational
capability, and thereby gauges commitment before product is built, ideally by
securing an actual pre-order. Capturing an email demonstrates less commitment.
Variants include:
Landing page test
Button to nowhere: click on button/link that sends you to a “coming soon”
message. Modest cost: the user’s reaction, “What else can’t I trust here?”
Video MVP (e.g., Kickstarter)
o Tips for creating a video prototype (via Toy Lab in Kelley & Kelley, p.
135)
Use a script and a shot list; mix camera angles and shot styles
to avoid boring rhythm
Use voiceovers to convey meaning/backstory
Pay attention to lighting and sound quality
Get early feedback
Keep it short -- < 2 minutes
Letter of Intent: often used with enterprise software
o Typically legally non-binding statement that if product with XYZ
specifications is delivered by XYZ date, customer intends to buy at
XYZ price. Even though this isn’t a legal contract, it still signals some
level of commitment, because customers don’t want to have to deal
with a vendor who can cite the LOI and say, “You reneged on a
promise.” There could be reputational consequences from reneging.
o Entrepreneur/PM can learn a lot about customer needs by negotiating
the language of an LOI.
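Smoke-test results are usually read as a commitment funnel: a click on a button-to-nowhere signals less commitment than a captured email, which signals less than an actual pre-order. A sketch of the arithmetic (all input numbers are hypothetical):

```python
def smoke_test_funnel(visitors, clicks, emails, preorders):
    """Conversion at each commitment level of a smoke test.
    Deeper steps signal progressively stronger commitment."""
    return {
        "click_rate": clicks / visitors,       # weakest signal
        "email_rate": emails / visitors,       # moderate commitment
        "preorder_rate": preorders / visitors, # strongest commitment
    }

# hypothetical landing-page test results
funnel = smoke_test_funnel(visitors=2000, clicks=300, emails=120, preorders=18)
# 15% clicked, 6% left an email, 0.9% placed an actual pre-order
```

Stating in advance which rate must clear which threshold is what turns the smoke test into a falsifiable-hypothesis experiment rather than an anecdote.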
tests”; design thinking practitioners embrace rapid prototyping. LS
builds on many antecedents, but adds some distinctive ideas, in
particular, testing all elements of a business model using a series of
MVPs.
• Myth: Only for software/web/mobile products
• Reality: Rapid iteration is easier with software-based businesses, but
3D printing and other prototyping tools are speeding product
development cycles in hardware.
• Myth: LS is only for startups
• Reality: “Fast and frugal” testing when launching new ventures makes
just as much sense for big corporations as for startups.
MRD
• What:
• Why:
• Vision
– Use positioning statement template from Moore or others
• Customer Value Proposition
– Unmet Needs to be Addressed: Hypotheses
– Customer Segments to be Targeted: Hypotheses
– Key Use Cases: Hypotheses
– Existing Solutions and Their Shortcomings
– Key Requirements for a New Solution: Hypotheses
– Expected Sources of Differentiation
– Why Now?
• Evidence Supporting Hypotheses
– Evidence for Unmet Needs
– Evidence of Demand for Proposed Solution
• Market Size
• Go-To-Market Approach
• Risks and Key Dependencies
• Strategic Considerations
• Team Members
• Go/No Go Recommendation
UX PRINCIPLES
Cooper: be wary (esp. with programmers not trained in design) of self-
referential trap, i.e., imagining yourself as user
Norman ch 5 analyzes user error. Categorizes causes, including deliberate
error (e.g., propping open a door; exceeding speed limit); multitasking;
fatigue; carelessness due to boredom; time stress. Design lessons:
o Distinguish between slips (correct goal, flawed execution) and
mistakes (wrong goal)
Slips happen due to lower-level cognition problems during
execution. They are action-based (wrong action, e.g., pour
orange juice instead of milk into coffee) or memory lapse-
based (forget to do an action, e.g., forget to turn off stove burners)
Experts are more vulnerable to slips because they go on
autopilot
Capture slips happen when procedures have similar
opening actions then diverge. You’ve been doing
procedure A a lot, then switch to B, but slip because you
keep following action path for A
Description slips (e.g., throw dirty shirt into toilet,
instead of laundry basket) happen with controls that
are too similar. Important for different functions to have
different looking controls/displays (often violated in
airplane cockpit)
Memory-lapse slips e.g., leave card in ATM or originals
in copier. Often caused by interruptions or cognitive
load of so many steps. Avoid with vivid reminders or
forcing, e.g., can’t complete transaction until you recover
your card.
Mode-error slips, e.g., turn off wrong component in
home entertainment system. Exacerbated when one
control has multiple functions to save money.
Mistakes are due to problems at higher levels of cognition (i.e.,
planning, and comparing results against expectations). They
are rule based (wrong rule applied despite correct diagnosis of
situation); knowledge based (lack knowledge to diagnose, e.g.,
cargo measured in pounds not kilos, or novel situation with no
set rule), or memory lapse based (e.g., distracted tech botches
diagnosis)
Good design provides clear, readily accessible display of
system state.
Memory lapse at higher cognitive level leads to mistakes, at
lower level to slips
Providing “Undo” option at all stages is really important.
Constraints can prevent slips, e.g., different coloration of
liquids to be added to reservoir and different container designs
so container spouts only fit one intake.
Confirmation statements can help with slips, but with mistakes
the user is typically sure that they want to proceed, so error
may not be corrected.
Checklists help. Have two people execute them.
Detecting mistakes is hard, because it’s difficult to ascertain
goals. Likewise, memory lapse is hard to detect.
o Use 5 whys to analyze, with awareness that many errors have
multiple (and often nested) causes
Krug, Don’t Make Me Think: things that diminish goodwill
o Hiding key info like customer service phone number or shipping costs
o Insisting that info conforms to site’s format preference (e.g., dashes in
SS number)
o Asking for unnecessary information
o Sizzle in the way of steak (time consuming Flash prelude)
o Amateurish looking site
Weinschenk = W: 100 Things Designers Need to Know About People; Krug =
K: Don’t Make Me Think
Layout
o Group similar items together, and use white space to create
patterns: our brains scan for patterns -W
o People respond strongly to images of faces-W
o People who use our alphabet read web page top left to bottom right,
so locate key info on top or in middle of page-W
o Shading on button creates perceived affordance-W
Color
o Blue and red don’t go together – hot and cold colors-W
o 9% of men are color blind-W
o Color meaning varies by culture, e.g., white is color of death for
some, purity for others-W
Text/Reading
o Titles and headlines are CRITICAL-W
o Use all caps – which is harder to read -- sparingly, to get attention-W
o Notwithstanding strong opinions, research shows no difference in
reading comprehension between serif and sans serif fonts. Use them
to evoke mood etc. -W
o Some newer fonts, e.g., Tahoma and Verdana, are designed with
large “x” (i.e., letter body) heights for better screen readability-W
o Especially when reading on a screen, contrast matters: best is black
text on white background-W
o Break text into chunks with bullets, short paragraphs, pictures, etc. -
W
o People read faster with long line length but prefer shorter line
length-W
o Omit needless words, especially happy talk. Eliminate instructions
by making things so simple they aren’t necessary -K
Navigation
Progressive disclosure: don’t overwhelm with info on first page; let user
choose level of detail that meets their needs. Don’t worry about the fact that
this requires multiple clicks—if clicks make sense, the user will find
navigation easy. Crucial, however, to know what most users want at each
step/down each path –W
Make targets large enough and not too far away to avoid “motor load”
-W
People categorize info, so do it for them-W
Provide progress indicators-W
Make it easy to undo last action and entire last sequence-W
What gets attention: faces, things that move (e.g., videos), loud noises-W
Some users are “search dominant” and always start navigation with a
search; others are “link dominant” and will start by browsing, via
hierarchy –K
o Back button accounts for 30-40% of all web clicks
o Conventions for web nav include sections (with a “you are here”
bold/arrow/highlighted/color-change indicator always evident)
and 4-5 utilities (e.g., shopping cart, site map, checkout, help,
archives, customer service, etc.) in top bar; local (current level)
nav on left side (left or right adjusted – not centered); “about us”
etc. at bottom
o Persistent nav or “global” nav appears on every page (except
home page and some pages for forms), and should always include
logo/site ID (also linked to home page), home link, search box
(with the word “Search”!), sections/primary nav (ideally as tabs
with contrast coloring—different for each section--for active tab;
tabs are self evident, obvious, and suggest physical space) and
utilities
o Think through nav carefully 3-4 levels below home page
o Every page needs a PROMINENT name (e.g., “Auctions” section
and “Sell an Item” subsection, the latter appearing as a link
within the former’s top-level tab)
o Breadcrumbs (in addition to but below section nav) show current
location: You Are Here: Home > Hobbies > Book Collecting >
Welcome (each hyperlinked)
o Home page should accommodate: site identity and a 6-8 word clear
and informative/lively tagline or mission statement; how to get
started via 1) site hierarchy, 2) search, and 3) best stuff =
teases/timely content/deals; registration
o Pulldowns are tempting as a way to save space, but they reduce
serendipity in discovery and they are “twitchy” because they
disappear easily. Pulldowns are best for alpha lists of known
names, e.g., states
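The breadcrumb convention above reduces to joining the page’s ancestor sections with “>” separators, current page last. A minimal sketch (Python used purely for illustration; on a real site each segment other than the current page would be a hyperlink):

```python
def breadcrumb_trail(path):
    """Render a Krug-style breadcrumb line from a list of section
    names ordered from home page down to the current page."""
    return "You Are Here: " + " > ".join(path)

trail = breadcrumb_trail(["Home", "Hobbies", "Book Collecting", "Welcome"])
# "You Are Here: Home > Hobbies > Book Collecting > Welcome"
```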
Action/motivation/understanding
Change beliefs by getting people to make a small commitment-even one
that is forced-W
People process info best as stories, especially of the journey form
(obstacle overcome); they’ll embrace causation assertions through story-
W
People learn best from examples; photos and videos are great ways to
deliver examples-W
Sustained attention lasts about 10 minutes (hence, length of Lynda.com
videos)-W
With repeated action, there’s a tradeoff with making things too easy:
people will get bored and careless-W
People are most motivated when close to reaching goal; motivation
plummets/attrition risk rises after reaching goal-W
Variable/unpredictable rewards (rather than fixed schedule) are
powerful, because unpredictability keeps people searching –especially
when info comes in small doses (e.g., 140 character tweets)-W
People are motivated by intrinsic rewards, social connection
opportunities, and they value progress, challenge/mastery (so, include
scores, leader boards, progress indicators, etc.) and control (so,
emphasize autonomy when you want them to self service) -W
People are more motivated to compete when there are fewer
competitors-W
People let others decide when they are uncertain, so use testimonials-W
Impulse to imitate is strong, so show videos of people doing action you
want to encourage-W
Place critical relevant information in close proximity to a call to action
Krug, Don’t Make Me Think = #1 usability rule!!
For example, bad naming makes us think, e.g., clever names, marketing-
induced names, technical names, company-specific names, etc. (e.g., do you
put prepaid business reply card in box labeled “Stamped Mail” or “Metered
Mail”?)
Also: Where am I? Where should I start?
People scan/skim, so design like it’s a billboard
o Create clear visual hierarchy via prominence (larger, bolder) given
to items; grouping similar items; nesting
o Use conventions (and note that designers often resist them)
o Break pages into defined areas
o Make what’s clickable obvious
o Minimize noise/busy-ness
• Codifies interactive, visual and copy elements of a UI/software system
o For example, headers, footers, drop-down menus, grids, forms, button
logic, colors, column sizes, backgrounds, separators, etc.
o Guide specifies what elements look like, where they are placed, and
when/how they should be used
o aka pattern libraries
• Saves effort and ensures consistent experience
TECHNOLOGY BASICS
Model-View-Controller
• Model: Where’s the data?
– Changes the view
• View: Where’s the UI?
– Browser, mobile app, etc.
• Controller: Where does computation happen?
– Updates the data via algorithms etc.
• MVC goals: M = smart; V = dumb; C = thin
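A minimal sketch of the split above (illustrative names, not from the course): the model holds the data and notifies observers when it changes, the view only renders, and the controller only routes input to the model.

```python
# Minimal Model-View-Controller sketch: M = smart, V = dumb, C = thin.

class Model:
    """Holds the data ("smart"): state plus the logic that guards it."""
    def __init__(self):
        self._count = 0
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    @property
    def count(self):
        return self._count

    def increment(self):
        self._count += 1
        for notify in self._observers:  # a model change updates the view
            notify(self._count)

class View:
    """Renders the UI ("dumb"): displays state, knows no business logic."""
    def __init__(self):
        self.rendered = None

    def render(self, count):
        self.rendered = f"Count: {count}"

class Controller:
    """Routes input to the model ("thin"): no state, no rendering."""
    def __init__(self, model):
        self._model = model

    def on_click(self):
        self._model.increment()

model = Model()
view = View()
model.subscribe(view.render)
controller = Controller(model)
controller.on_click()
print(view.rendered)  # Count: 1
```

Note how the view never touches the data directly and the controller never draws anything; each piece can be swapped (e.g., a mobile view for a browser view) without changing the others.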
PRD/DRAFTING SPECIFICATIONS
WORKING WITH DEVELOPERS
Cooper, 89 + ch 7: Engineers think they can learn to design. They can’t. They are too
steeped in the deterministic world of silicon.
Jetway test: do you turn left into cockpit, to be IN CONTROL and tackle the
challenge of figuring out all the dials and buttons (i.e., a highly complex,
deterministic system), or right into cabin to SIMPLIFY and put others in
control? Programmers turn left!
Arrogance: I didn’t answer wrong, you asked the wrong question. Leads them
to take even specific designs as suggestions.
Humans, while complex, are too messy and non-deterministic
Homo logicus wants control, and complexity is the price they’ll pay for it.
Result: when programmers do design, you get feature bloat, as in Office.
Homo logicus wants to understand system, and will forfeit success (valued by
homo sapiens) to gain understanding (break a clock by taking it apart to see
how it works). They design programs so that interaction follows internal
functioning (e.g., exposing user unnecessarily to distinction between hard
disk and RAM storage)
Homo logicus obsessed with possible edge cases (however remote) and will
program in order to address them (adding to cost/UI complexity). Homo
sapiens will cope with remote risk.
Prone to mental bullying (like the physical domination of jocks)
Cooper, ch 8: programming culture
Code reuse is valued – saves time and guarantees bug-free code – but UI can
suffer if functionality is force-fit
Reverence for technical skill (and the fact that design doesn’t require as
much technical skill) leads programmers – and managers who were
programmers – to give it prominence in the product development process, so
UI comes later, if at all
LOVE difficult tasks: this is the prime motivator. The harder the better.
Rarely have contact with users, and when they do, the best programmers
usually interact only with power users, skewing their view of “average”
Work alone – programming takes uninterrupted single-minded focus. It can
take longer to check someone’s code than it took to write it, so programmers
have lots of skin in the game – functioning product depends on them alone.
“lonely work of programmer gives him a sense of power” and makes him
uncomfortable delegating – including delegating design.
View UI design as vague opinion; programmer’s opinion is at least as valid in
his own mind
actual discovery of how to win is a buzzkill. The thrill, the adrenaline, comes
from the discovery, hunt, and eventual mastery of the unknown
167 sees the world as a system that, given enough time and effort, is
completely knowable
167 control issues: sensitive to drastic changes… system-defining events
force your nerd to recognize that the world is not always nor entirely a
knowable place
168 joy in problem solving and discovery; adrenaline rush as each part of
project is completed
170 hates small talk. “I have no system for understanding this messy person
in front of me.”
AGILE VS. WATERFALL
– Stories that are not completed during a cycle are sent back to the
backlog — a time-boxed cycle is never extended.
– Agile development cycles are sometimes “feature-boxed,” i.e., they
continue until prioritized stories are complete. However, as noted by
developer Kevin Bedell, time-boxing forces sharper prioritization
decisions.
– Teams start each day with a stand-up meeting of roughly 15 minutes,
during which each member summarizes yesterday’s progress, today’s
priorities, and any expected obstacles.
– Status is publicly displayed on a board (often referred to as a
“kanban” board, adapting practices from just-in-time manufacturing)
on a wall of the team’s shared workspace. The board typically
organizes stories into sections reflecting story status, for example:
backlog, ready, coding, testing, approval, done. Stories are captured on
post-its or note cards, which often are color-coded by story type or
team member, as shown below:
• User Stories
– User stories describing specific tasks that end users need to complete
are written in everyday language by the team’s product owner in
consultation with the product team.
– While online tools are available to capture and share stories, they are
often hand-written on note cards to ensure brevity and to encourage
ongoing face-to-face dialogue between the product owner and
developers.
– Story format is often: Who? What? Why? Or equivalently, “As a <role
or persona>, I want <goal>, so that <benefit>.”
• Example: “As a user closing the app, I want to be prompted to
save anything that’s changed since the last save, so I can
preserve useful work and discard work that’s not useful.”
– The product owner must specify acceptance criteria — conditions
that must be met or quality assurance tests that must be passed in
order to declare a story “done.” The criteria are often listed on the
back of the story card, as shown below:
– Big stories, called epics (e.g., “user pays upon checkout”), are broken
into smaller stories (e.g., “user whose credit card has expired is
prompted for another card”)
• Just-in-time decomposition of epics at a cycle’s start (rather
than earlier in the development process) can avoid waste,
since a team will then have better knowledge of priorities,
along with a better understanding of the effort likely to be
required to complete stories.
• To prioritize tasks, the team must be able to estimate effort
required; decomposition helps with this.
• Interdependencies between stories complicate planning;
decomposition can lead to interdependencies.
– In waterfall planning, a use case is a “generalized description of set of
interactions between system and actors.” Compared to an agile user
story, a waterfall use case is typically:
• Larger in scope; stories must be granular to facilitate agile’s
incremental planning
• More detailed, especially with respect to user interface design
• More permanent; stories are often discarded, revised, and
decomposed
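The story card format, acceptance criteria, and kanban columns described above can be sketched as a small data structure (all names and fields hypothetical):

```python
from dataclasses import dataclass, field

# Board columns as described above: cards move left to right as work progresses.
COLUMNS = ["backlog", "ready", "coding", "testing", "approval", "done"]

@dataclass
class Story:
    """A story card: Who? What? Why? plus acceptance criteria on the back."""
    role: str
    goal: str
    benefit: str
    acceptance_criteria: list = field(default_factory=list)
    status: str = "backlog"

    def headline(self):
        return f"As a {self.role}, I want {self.goal}, so that {self.benefit}."

    def advance(self):
        """Move the card one column to the right on the kanban board."""
        self.status = COLUMNS[COLUMNS.index(self.status) + 1]

story = Story(
    role="user closing the app",
    goal="to be prompted to save anything that's changed since the last save",
    benefit="I can preserve useful work and discard work that's not useful",
    acceptance_criteria=["Prompt appears only when unsaved changes exist"],
)
story.advance()  # backlog -> ready
print(story.headline())
print(story.status)
```

In practice the card stays a physical post-it; the point of the sketch is only that a story is a role/goal/benefit triple plus acceptance criteria, with a single status drawn from the board’s columns.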
Waterfall pros:
• Avoids piecemeal design
• Can work on modules in parallel
• Can see full output at end of each phase
• Can measure progress against full plan
• Full-time product owner not needed
Waterfall cons:
• It is difficult to specify all requirements at outset
• Not flexible if requirements change
• Problems not discovered until phase is complete
Agile pros:
• Ongoing input from product owner
• Product owner buy-in via direct participation
• Flexible if requirements change
• Find/fix bugs faster
• Faster time to market with MVP
• Time-boxing yields cost predictability
Agile cons:
• Works best with small team and co-location
• Requires full-time product owner
• Flexibility can lead to feature proliferation
• Incremental approach can lead to integration problems
• Incremental approach can lead to a product that lacks vision
PROJECT MANAGEMENT
Tracking Technologies
• The best approach is usually whatever your team prefers. Any technology
(e.g., whiteboard + post-its; spreadsheets; tracking software like Asana or
Pivotal Tracker) can work if it is used in a disciplined way.
• Spreadsheets are simple, customizable, and easy to share
– Key columns on the sheet might include: item, owner, due date,
priority, time budgeted for completion, status, dependencies, cost, etc.
• Bug tracking software (e.g., JIRA), which includes prioritization functionality,
can be adapted for use in project management. This approach treats all tasks,
including bugs, as “features,” and prioritizes them in a single stream.
• Project tracking software includes Asana, Pivotal Tracker, Sprint.ly,
Basecamp
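A spreadsheet-style tracker with the columns suggested above can be sketched as rows of records (hypothetical tasks and field names); sorting by priority and then due date yields the team’s work queue:

```python
# Hypothetical spreadsheet-style tracker: one dict per row, using the
# columns suggested above (item, owner, due date, priority, budget, status,
# dependencies).
from datetime import date

tasks = [
    {"item": "Checkout flow wireframes", "owner": "PM", "due": date(2014, 3, 10),
     "priority": 2, "budget_hrs": 8, "status": "in progress", "depends_on": []},
    {"item": "Payment API integration", "owner": "Dev", "due": date(2014, 3, 7),
     "priority": 1, "budget_hrs": 16, "status": "ready", "depends_on": []},
    {"item": "Usability test session", "owner": "PM", "due": date(2014, 3, 14),
     "priority": 3, "budget_hrs": 4, "status": "backlog",
     "depends_on": ["Checkout flow wireframes"]},
]

# What should the team look at first? Priority first, then due date.
queue = sorted(tasks, key=lambda t: (t["priority"], t["due"]))
print([t["item"] for t in queue])
```

The same rows could live in an actual spreadsheet or in tracking software; what matters, per the point above, is that the columns are maintained in a disciplined way.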
Key issues
• Clarity of goals
• Features should ideally be decomposed into tasks that: 1) can be completed
by one person; 2) can be completed in a half day or less; 3) have no/minimal
dependencies
• Some slack in schedule for experiments etc.
• Allowing enough time for integration of multiple streams
• Top management buy-in
• Distributed vs. co-located teams
• Meeting format
• Prioritization scheme
• When is a feature “done”?
Team Roles
Project manager is typically found in very large projects
– Like a product manager, removes barriers and coordinates team effort
– Unlike a product manager, does NOT set vision or overall goals
Product manager
Tech lead: most senior engineer, who’s responsible for final technical decisions.
Typically does some project and product management
People manager
Email Management
Read emails immediately
Use automated filters to move and label emails
Take action, then archive/move the email immediately
Emails over 2 weeks old should be discarded
Be able to quickly scan your inbox
Get good with keyboard shortcuts
Meeting Management
Avoid status update meetings
Agenda out beforehand
o Update action items in the notes before meeting
o Email action item owners in CC
<= 8 people
Decision maker identified
Separate note taker from person running the meeting
Display the notes as the meeting is happening – everyone should agree to the
action items being assigned
25 or 50 minutes
USABILITY TESTING
Purposes
Can get respondents’ feedback on rivals’ products or on prototypes of early
solutions, helping to validate demand – in particular, whether your concept
is easy to understand
o Ask with sketch/prototype, “What do you think this is for?” (NOT “Do
you like this?”); “Does this screen make sense?”; “What would you
click on first?”; “What would you expect to see when you click on
that?”
Product usability is key focus
Klein, 41 – usability testing is NOT for:
o Finding out if people like the product or will use the product
o Finding out how to fix usability problems
Types of usability issues to explore – anything that requires multiple steps (Klein, p
30)
• Multi-step signup flows
• Purchase flows
• Searching/browsing experience
• Sharing experience
• File upload/edits
• Installation process
• Etc.
Session Management
• Do it early!!
• Recruiting representative users is overrated. Loose fit is okay (Krug Don’t
Make Me Think), since most usability issues don’t relate to domain
knowledge
• 3-5 subjects is usually enough to spot most big problems
• Do them all in one morning then debrief
• Use scripted intro (“Testing site, not you. Don’t worry about hurting our
feelings. I’ll ask you to think out loud as you complete the tasks I give you –
that’ll help us. Ask any questions you have: I may take a minute to answer,
since we want to learn what people do when they don’t have someone
nearby who’s familiar with the site. We’ll record, but we won’t share beyond
internal team – is that okay?”)
• Give a specific task (e.g., “Find a cookbook for under $14”)
• Get subject to think out loud when completing tasks
• Don’t coach, don’t give a guided tour of product
• Be willing to let the user fail
• Ask open ended and neutral/non-leading questions
o NOT “Was that easy to use?” or “Do you think that was cool?” RATHER
“How’d that go?” or “What do you think?”
• Use a screen recorder
• Review/debrief right away
o Typical problems: unclear on overall concept; seeking words that
aren’t there; too much going on
o Triage Guidelines: Ignore “kayak” problems, i.e., user gets stuck then
quickly figures things out (like kayak rolling over); be wary of feature
requests; resist impulse to add things
• Do testing once a month!
Tools
• UserTesting.com
LAUNCH MARKETING
POST-LAUNCH ANALYTICS
QA BASICS
SQL BASICS
A/B TESTING/QUANTITATIVE EXPERIMENT DESIGN
ADWORDS BASICS