App review: StatusToday

Artificial Intelligence (‘AI’) has rapidly become yet another buzzword in the tech space and I’m therefore always on the lookout for AI-based applications which add actual customer value. StatusToday could be that kind of app:

My quick summary of StatusToday before using it – I think StatusToday provides software to help manage teams of employees; I suspect this product is geared towards HR people.

How does StatusToday explain itself in the first minute? – “Understand your employees” is the strapline that catches my eye. Whilst I’m not entirely clear on the tangible benefits StatusToday delivers, I do get that it offers employee data. I presume that customers will have access to a data portal and can generate reports.

What does StatusToday do (1)? – StatusToday analyses human behaviour and generates a digital fingerprint for individual employees. The company originally started out with a sole focus on using AI for cyber security, applying dedicated algorithms to analyse internal online comms, detecting behavioural patterns in comms activity and quickly spotting any abnormal activity or negligence. For example, ‘abnormal file exploration’ and ‘access from unusual locations’ are two behaviours that StatusToday tracks for its clients.
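Just to make this concrete for myself, here’s a deliberately simple sketch of what an “access from unusual locations” check could look like. To be clear, this is my own toy illustration, not StatusToday’s actual algorithm; every name and threshold in it is made up:

```python
from collections import Counter

# Toy anomaly check: flag activity from locations a user rarely (or never) works from.
# Purely illustrative -- a real product like StatusToday will use far richer models.

def build_location_profile(events):
    """Count how often each (user, location) pair appears in historical activity."""
    return Counter((e["user"], e["location"]) for e in events)

def is_unusual(event, profile, min_seen=3):
    """Treat a location as unusual if we've seen it fewer than `min_seen` times for this user."""
    return profile[(event["user"], event["location"])] < min_seen

history = [
    {"user": "alice", "location": "London"},
    {"user": "alice", "location": "London"},
    {"user": "alice", "location": "London"},
    {"user": "alice", "location": "Leeds"},
]

profile = build_location_profile(history)
print(is_unusual({"user": "alice", "location": "London"}, profile))  # False – looks normal
print(is_unusual({"user": "alice", "location": "Kyiv"}, profile))    # True – worth a closer look
```

The value, of course, comes from doing this kind of check across many behaviours (file exploration, comms patterns, etc.) and at scale, which is where the AI angle comes in.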

What does StatusToday do (2)? – StatusToday has since started offering more generic employee insights services. By plugging into the various online tools companies may use, Google and Microsoft for example, StatusToday will start collecting employee activity data. This will help companies get better visibility of employee behaviour, as well as making the processes around data access and usage more efficient.

It makes me wonder to what extent there’s a “big brother is watching you” element to StatusToday’s products and services. For example, will the data accessible through StatusToday’s “Live Dashboard” (eventually) make it easier for companies to punish employees if they’re spending too much time on Facebook!?

Main learning point: I can see how StatusToday takes the (manual) pain out of monitoring suspicious online activity and helps companies to preempt data breaches and other ‘anomalies’.

Related links for further learning:

  1. https://techcrunch.com/2018/02/20/statustoday/
  2. https://www.youtube.com/watch?v=KhIkx8ZvA-Q
  3. https://techcrunch.com/2015/09/09/ef4/
  4. https://blog.statustoday.com/1nature-is-not-your-friend-but-ai-is-d94aaa13fd2e
  5. https://blog.statustoday.com/1your-small-business-could-be-in-big-trouble-7a34574ab42c

App review: Warby Parker

I recently listened to a podcast which was all about Warby Parker and how the company came to be. After listening to the podcast, I was keen to have a closer look at Warby Parker’s website:

My quick summary of Warby Parker before using it – Warby Parker is disrupting the way in which consumers discover and buy glasses. I expect a product which removes the need for physical opticians.

How does Warby Parker explain itself in the first minute? – Accessing https://www.warbyparker.com/ on desktop, I see a nice horizontal layout, dominated by two hero images. There are two main calls to action. Firstly, “Try frames at home – for free”, which then gives me the option to either “get started” or “browse frames”. Secondly, “Shop online”, which lets me shop for eyeglasses and sunglasses.

Getting started, what’s the process like? – After clicking on “Get started”, I can choose between styles for men and women.

Having selected “Men’s styles”, I’m pleased that there’s an option for me to skip the “What’s your fit?” screen as I’m unsure about the width of my face 🙂

Selecting a frame shape feels somewhat easier, and it’s good that I can select all three shapes if I wish. Instead, I just go for “rectangular”.

The same applies to the next screen, where I can pick colours. I select “Neutral” and “Black”, simply because I find it easier to visualise what the frames will look like in these colours.

I decide to skip the step involving the different materials to choose from. The icons on this screen do help, but I personally would have benefited from seeing some real samples of materials such as acetate and titanium, just to get a better idea.

It’s good that I’m then being asked about my last eye exam. I’m wondering if and when I’ll be asked for the results from my last eye test in order to determine the strength of the glasses I need.

The next holding screen is useful since up to this point I hadn’t been sure about how Warby Parker’s service works. The explanations are clear and simple, encouraging me to click on the “Cool! Show me my results.” call to action at the bottom of the screen. I now understand that I can upload my prescription at checkout, but I wonder if I need to go to an eye doctor or an optician first in order to get a recent (and more reliable) prescription …

I’m then presented with 15 frames to choose from. From these 15 frames, Warby Parker lets me pick 5 frames to try on at home. I like how I can view the frames in the different colours that I selected as part of step 4 (see above). If I don’t like the frames suggested to me, I can always click “Browse all Home Try-on frames” or “Retake the quiz”.

I like the look of the “Chamberlain” so I select this pair of frames and click on “Try at home for free”.

As soon as I’ve clicked on the “Try at home for free” button a small tile appears which confirms that I’ve added 1 out of 5 frames which I can try at home. I can either decide to find another frame or view my cart.

When I click on “Find another frame”, I expect to be taken back to my previous quiz results. Instead, I can now see a larger number of frames, but there’s the option to go back to my original quiz results, and the matches with my results have been highlighted.

I really like how the signup / login stage has been positioned right at the very end of my journey – i.e. at the checkout stage – and that I can just continue as a new customer.

My Warby Parker experience sadly ends when I realise that Warby Parker doesn’t ship frames to the United Kingdom. No matter how hard I try, I can only enter a US address and zip code 😦

Did Warby Parker deliver on my expectations? – Yes and no. I felt Warby Parker’s site was great with respect to discovery and customisation, but I do think there’s an opportunity to include some explanatory bits about Warby Parker’s process earlier in the journey.

Related links for further learning:

  1. https://www.stitcher.com/podcast/national-public-radio/how-i-built-this/e/48640659
  2. https://www.recode.net/2018/3/14/17115230/warby-parker-75-million-funding-t-rowe-price-ipo
  3. https://www.fastcompany.com/3041334/warby-parker-sees-the-future-of-retail

App review: Blinkist

The main driver for this app review of Blinkist is simple: I heard a fellow product manager talking about it and was intrigued (mostly by the name, I must add).

My quick summary of Blinkist (before using it) – “Big ideas in small packages” is what I read when I Google Blinkist. I expect an app which provides me with executive-type summaries of books and talks, effectively reducing them to bitesize ideas and talking points.

How does Blinkist explain itself in the first minute? – When I go into Apple’s app store and search for Blinkist, I see a strapline which reads “Big ideas from 2,000+ nonfiction books” and “Listen or read in just 15 minutes”. There’s also a mention of “Always learning” which sounds good …

Getting started, what’s the process like? (1) – I like how Blinkist lets me swipe across a few screens before deciding whether to click on the “Get started” button. The screens use Cal Newport’s “Deep Work” book as an example to demonstrate the summary Blinkist offers of the book, the 15 minute extract to read or listen to, and how one can highlight relevant bits of the extract. These sample screens give me a much better idea of what Blinkist is about, before I decide whether to sign up or not.

Getting started, what’s the process like? (2) – I use my Facebook account to sign up. After clicking on “Connect with Facebook” and providing authorisation, I land on a screen which mentions “£59.99 / year*”, followed by a whole lot of small print. Hold on a minute! I’m not sure I want to commit for a whole year; I haven’t used Blinkist’s service yet! Instead, I decide to go for the “Subscribe & try 7 days for free” option at the bottom of the screen.

Despite not wanting to pay for the Blinkist service at this stage, I’m nevertheless presented with an App Store screen which asks me to confirm payment. No way! I simply get rid of this screen and land on a – much friendlier – “Discover” screen.

To start building up my own library I need to go into the “Discover” section and pick a title. However, when I select “Getting Things Done”, which is suggested to me in the Discover section, I need to unlock this first by starting a free 7-day trial. I don’t want to do this at this stage! I just want to get a feel for the content, for what Blinkist has to offer, and for how I can best get value out of its service. I decide not to sign up at this stage and leave things here … Instead of letting me build up my library, invest in Blinkist and its content, and only then making me ‘commit’, Blinkist has gone for a free trial and subscription model. This is absolutely fine, but it doesn’t work for me unfortunately, as I just want to learn more before leaving my email address, committing to payment, etc.

Did Blinkist deliver on my expectations? – Not really; I came away disappointed.

My product management toolkit (26): PAUSE and LISTEN

If there are two things I can definitely improve, I’d say they would be my ability to “pause” and my ability to “listen”. Too often, I’ve made the mistake of not listening to what the other person is saying. Instead, I’m thinking of what to say myself or making assumptions, completely ‘steamrolling’ the other person in the process …

Looking back, I guess my inability to listen was closely linked to my focus on products over people. This focus effectively meant that I cared more about products, and less about building relationships with people. For example, I felt at times that internal stakeholders were more of a necessary evil whose sole purpose was to hinder product development. Fortunately, I no longer adhere to this point of view and I’ve come to realise how critical collaboration is to building great products.

I had to think back to this transition when I recently read “Practical Empathy”, which Indi Young, an independent UX consultant, published in 2015. In this great book, Young explains that empathy is all about understanding what’s going on in other people’s minds. She describes “listening” as a vital tool to create a deeper understanding and talks about “a new way to listen”:

  • Listen for reasoning (inner thinking) – What is going through someone else’s mind?
  • Listen for reactions – Reactions often go hand in hand with reasoning. For instance, I could express an emotional response when I describe why I decided to make a career change.
  • Listen for guiding principles – A guiding principle is a philosophy or belief that someone uses to decide what action to take, what to choose, how to act, etc.

I found that for people like me – i.e. with lots of opinions, thoughts, ideas and assumptions – listening can be incredibly difficult. In the Netherlands, where I was born, voicing your opinion and standing up for yourself are highly regarded attributes. I’ve had to learn – and am still learning – to “pause” a lot more. Instead of jumping to conclusions or simply getting my two cents in, I’ve learned to breathe and pause first before deciding to say something or to simply listen. I try to remind myself continuously that each time I forget to listen, I forfeit the opportunity to understand what’s driving the other person.

Let me share a real life example with you, to illustrate how listening can help to develop a better understanding of where the other person is coming from:

Example

A while ago, I was talking about team performance with an engineer and he mentioned that “we have to be careful, because most engineers are delicate beings”.

Me – before understanding a single thing about listening and empathy

With a comment like the one the engineer had made, I’d have jumped straight in there and would have said: “What are you talking about! Surely, not all developers are delicate human beings! I’ve worked with some developers who made me look like a wallflower!” and so on and so forth.

Me – with the beginnings of an understanding about listening and empathy

In this real life example, I didn’t respond at all. I paused and listened. By allowing the engineer to speak, I started to understand that he cared deeply about the developers in the team and considered himself their mentor, wanting to make sure they fully enjoyed their day job.

The moral of this story is that listening starts with taking that split second to stop the innate desire to respond immediately. I learned a lot about listening from reading and practising the insights from “Active Listening”, another valuable book. In it, Josh Gibson and Fynn Walker describe the four components of active listening:

  1. Acceptance – Acceptance is about respecting the person that you’re talking to; irrespective of what the other person has to say but purely because you’re talking to another human being. Accepting means trying to avoid expressing agreement or disagreement with what the other person is saying, at least initially. I’ve often made this mistake; being too keen to express my views and thus encouraging the speaker to take a very defensive stance in the conversation.
  2. Honesty – Honesty comes down to being open about your reactions to what you’ve heard. Similar to the acceptance component, honest reactions given too soon can easily stifle further explanation on the part of the speaker.
  3. Empathy – Empathy is about your ability to understand the speaker’s situation on an emotional level, based on your own view. Basing your understanding on your own view instead of on a sense of what should be felt, creates empathy instead of sympathy. Empathy can also be defined as your desire to feel the speaker’s emotions, regardless of your own experience.
  4. Specifics – Specifics refers to the need to deal in details rather than generalities. The point here is that for communication to be worthwhile, you should ask the speaker to be more specific, encouraging the speaker to open up more or “own” the problem that they’re trying to raise.

The thing with empathy, as Young points out in her book, is that it isn’t about having to feel warmth for the other person or fully agreeing with him / her. Instead, empathy means understanding and comprehending the other person. It takes time and skill to be able to drop into a neutral frame of mind to:

  • Resist the urge to demonstrate how smart you are – Well known technology exec and investor Bill Krause used to write down “DNT” in his notepad during meetings. “DNT” stood for “Do Not Talk”, and he used it as a practical tool to stop himself from saying something or showing how smart he was. Young also provides some good tips on how to stop yourself from talking or saying too much (see Fig. 2 below).
  • Develop the knowledge to gain empathy – In “Practical Empathy”, Young suggests a few simple questions that can help you build a better understanding of the other person’s deeper reasoning and principles. Readers are encouraged to use the fewest number of words possible when asking questions (see Fig. 1 below).
  • Apply empathy to both customers and colleagues – UX expert Erika Hall recently made a great point about the importance of applying empathy to co-workers (listen to the Aurelius podcast here for the interview with Erika). Again, I used to make the mistake of solely applying customer empathy, whilst not paying enough attention or respect to colleagues. Showing empathy towards colleagues can be as simple as understanding someone else’s workload or OKRs.

Main learning point: Empathy – both inward and outward – is SO SO important. Pausing and listening are your first tools on the path towards developing empathy. Yes, I look back on mistakes made and people that I’ve upset in the past due to a lack of empathy, but I feel that I’ve learned a lot since then (whilst I appreciate I still have got a long way to go). If you want to get started on developing and showing more empathy, I’d highly recommend reading “Practical Empathy” by Indi Young and “Active Listening” by Josh Gibson and Fynn Walker.

Fig. 1 – Use the fewest number of words possible – Taken from: Indi Young – Practical Empathy, p. 60

  • “Why’s that?”
  • “What were you thinking?”
  • “What’s your reasoning?”
  • “Tell me more about <her phrase>.”
  • “Because?”

Fig. 2 – Summary of “a new way to listen” – Taken from: Indi Young – Practical Empathy, p. 77

What to listen for:

Reasoning: Thinking, decision-making, motivations, thought processes, rationalisation.

Reaction: Responses to something – mostly emotional, some behavioural.

Guiding Principle: Belief that guides decisions.

Follow the peaks and valleys:

  • Start with a broad topic
  • Let the speaker keep choosing the direction
  • Dig into the last few remarks
  • Use the fewest number of words possible
  • Reiterate a topic to show attention, verify your understanding and ask for more
  • Avoid introducing words the speaker hasn’t used
  • Try not to say “I”

Be supportive:

  • Don’t fake it – react, be present
  • Never switch abruptly
  • Adapt yourself to the mood
  • Don’t cause doubt or worry

Be respectful:

  • Be the undermind, not the overmind
  • Resist the urge to demonstrate how smart you are
  • Avoid implying or telling the speaker she is wrong

Neutralise your reactions:

  • Learn how to notice your emotional reactions
  • Dissipate your reactions and judgments

App review: Steemit

Steemit.com is one of those products that feels super complex at first sight. I think it’s a content platform, but I need to give it a much closer look in order to understand how Steemit works:

My quick summary of Steemit (before using it): I reckon Steemit is a content creation and sharing platform, but I’m not sure what technology it’s built on or how it works.

How does the app explain itself in the first minute? “Your voice is worth something” is the first thing I see. When I continue reading above the fold, it says “Get paid for good content. Post and upvote articles on Steemit to get your share of the daily rewards pool.”

Getting started, what’s the process like (1)? The first thing I do is click on the “Learn more” button on the Steemit homepage. I then land on a useful FAQ page which covers the typical questions and answers you’d expect. Steemit enables “the crowd to reward the crowd for their content.” The platform is connected to the Steem blockchain, which is decentralised and open. Content contributors to Steemit are rewarded with STEEM, dependent on the attention their content gets from other Steemit users.
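To get my head around the “crowd rewards the crowd” idea, I sketched a deliberately simplified version of how a daily rewards pool could be split in proportion to the upvotes each post attracts. The real Steem protocol uses vote weights, curation rules and payout splits that are far more involved, so treat this purely as an illustration with made-up numbers:

```python
# Simplified illustration of a daily rewards pool being split by upvote share.
# The actual Steem blockchain applies vote weights and curation curves that are
# much more complex -- this only conveys the basic "attention earns rewards" dynamic.

def distribute_rewards(daily_pool_steem, posts):
    """Split the pool across posts in proportion to the upvotes each one received."""
    total_votes = sum(p["upvotes"] for p in posts)
    if total_votes == 0:
        return {p["author"]: 0.0 for p in posts}
    return {p["author"]: daily_pool_steem * p["upvotes"] / total_votes for p in posts}

posts = [
    {"author": "author_a", "upvotes": 250},
    {"author": "author_b", "upvotes": 100},
    {"author": "author_c", "upvotes": 50},
]

print(distribute_rewards(daily_pool_steem=1000, posts=posts))
# {'author_a': 625.0, 'author_b': 250.0, 'author_c': 125.0}
```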

Getting started, what’s the process like (2)? Signing up is very straightforward, nothing out of the ordinary. A nice progress bar, two-factor authentication and I now have to wait for Steemit to validate my sign-up request.

What can I do in the meantime? – I have a little nose around the Steemit platform, to learn about the content people publish. For example, I come across Dan Dicks, who has published 71 posts on Steemit and has (so far) received $123.86 for his latest post.

Main learning point: Steemit feels very similar to Quora and Reddit, with the main difference being the underlying blockchain and cryptocurrency element. Once my signup request has been approved, I’ll no doubt get a better sense of how the platform actually works. Currently, I’m not entirely clear on the dynamics in terms of being rewarded for your Steemit content.

Related links for further learning:

  1. https://steemit.com/steemit/@mindover/steemit-for-dummies-like-me-everything-you-need-know-to-get-started
  2. https://steemit.com/faq.html
  3. https://steemit.com/exploring/@kebin/what-is-steem-and-what-is-sbd
  4. https://steem.io/SteemWhitePaper.pdf

Book review: “Designing with Data”

I’d been looking forward to Rochelle King writing her book about using data to inform designs (I wrote about using data to inform product decisions a few years ago, in a post which followed a great conversation with Rochelle).

Earlier this year, Rochelle published “Designing with Data: Improving the User Experience with A/B Testing”, together with Elizabeth F. Churchill and Caitlin Tan. The main theme of “Designing with Data” is the authors’ belief that data capture, management, and analysis is the best way to bridge design, user experience, and business relevance:

  1. Data aware — In the book, King, Churchill and Tan distinguish between three different ways to think about data: data driven; data informed and data aware (see Fig. 1 below). The third way listed, being ‘data aware’, is introduced by the authors: “In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions.” If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.
  2. How much data to collect? — The authors make an important distinction between “small sample research” and “large sample research”. Small sample research tends to be good for identifying usability problems, because “you don’t need to quantify exactly how many in the population will share that confusion to know it’s a problem with your design.” It reminded me of Jakob Nielsen’s point about how the best results come from testing with no more than five people. In contrast, collecting data from a large group of participants, i.e. large sample research, can give you more precise quantity and frequency information: how many people feel a certain way, what percentage of users will take this action, etc. A/B tests are one way of collecting data at scale, with the data being “statistically significant” and not just anecdotal. Statistical significance is the likelihood that the difference in conversion rates between a given variation and the baseline is not due to random chance.
  3. Running A/B tests: online experiments — The book does a great job of explaining what is required to successfully run A/B tests online, providing tips on how to sample users online and key metrics to measure (see Fig. 2 below).
  4. Minimum Detectable Effect — There’s an important distinction between statistical significance — which measures whether there’s a difference — and “effect”, which quantifies how big that difference is. The book explains how to determine the “Minimum Detectable Effect” when planning online A/B tests. The Minimum Detectable Effect is the minimum effect we want to observe between our test condition and control condition in order to call the A/B test a success. It can be positive or negative, but you want to see a clear difference in order to be able to call the test a success or a failure. (I’ve put a rough worked example of significance and Minimum Detectable Effect just after this list.)
  5. Know what you need to learn — The book covers hypotheses as an important way to figure out what it is that you want to learn through the A/B test, and to identify what success will look like. In addition, you can look at learnings beyond the outcomes of your A/B test (see Fig. 3 below).
  6. Experimentation framework — For me, the most useful section of the book was Chapter 3, in which the authors introduce an experimentation framework that helps you plan your A/B test in a more structured fashion (see Fig. 4 below). They describe three main phases — Definition, Execution and Analysis — which feed into the experimentation framework. The ‘Definition’ phase covers the definition of a goal, the articulation of a problem / opportunity and the drafting of a testable hypothesis. The ‘Execution’ phase is all about designing and building the A/B test, “designing to learn” in other words. In the final ‘Analysis’ phase you’re getting answers from your experiments. These results can be either “positive” and expected or “negative” and unexpected (see Fig. 5 and Fig. 6 below).
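To make points 2 and 4 above a bit more tangible, here’s a rough worked example of my own (not taken from the book): a standard two-proportion z-test to check statistical significance, plus a back-of-the-envelope estimate of how many users per variant you’d need for a given Minimum Detectable Effect. The conversion numbers are invented:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
    return z, p_value

def sample_size_per_variant(baseline, mde, z_alpha=1.96, z_beta=0.84):
    """Rough users needed per variant to detect an absolute lift of `mde`
    at ~95% confidence and ~80% power."""
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    n = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar)) +
         z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / mde ** 2
    return math.ceil(n)

# Control converts at 10% (500/5,000), variation at 11.5% (575/5,000)
z, p = z_test_two_proportions(conv_a=500, n_a=5000, conv_b=575, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05, so the difference is unlikely to be random chance

# How many users per variant to reliably detect a 1.5 percentage point lift on a 10% baseline?
print(sample_size_per_variant(baseline=0.10, mde=0.015))  # roughly 6,700 per variant
```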

Main learning point: “Designing with Data” made me realise again how much thinking and designing needs to happen before running a successful online A/B test. “Successful” in this context means achieving clear learning outcomes. The book provides a comprehensive overview of the key considerations to take into account in order to optimise your learning.

Fig. 1 — Three ways to think about data — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 3–9

  • Data driven — With a purely data driven approach, it’s data that determine the fate of a product; based solely on data outcomes businesses can optimise continuously for the biggest impact on their key metric. You can be data driven if you’ve done the work of knowing exactly what your goal is, and you have a very precise and unambiguous question that you want to understand.
  • Data informed — With a data informed approach, you weigh up data alongside a variety of other variables such as strategic considerations, user experience, intuition, resources, regulation and competition. So adopting a data-informed perspective means that you may not be as targeted and directed in what you’re trying to understand. Instead, what you’re trying to do is inform the way you think about the problem and the problem space.
  • Data aware — In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions. If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.

Fig. 2 — Generating a representative sample — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 45–53

  • Cohorts and segments — A cohort is a group of users who have a shared experience. Alternatively, you can also segment your user base into different groups based on more stable characteristics such as demographic factors (e.g. gender, age, country of residence), or you may want to segment them by their behaviour (e.g. new user, power user).
  • New users versus existing users — Data can help you learn more about both your existing and prospective future users, and determining whether you want to sample from new or existing users is an important consideration in A/B testing. Existing users are people who have prior experience with your product or service. Because of this, they come into the experience with a preconceived notion of how your product or service works. Thus, it’s important to be careful about whether your test is with new or existing users, as these learned habits and behaviours about how your product used to work in the past can bias your A/B test.

Fig. 3 — Know what you want to learn — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, p. 67

  • If you fail, what did you learn that you will apply to future designs?
  • If you succeed, what did you learn that you will apply to future designs?
  • How much work are you willing to put into your testing in order to get this learning?

Fig. 4 — Experimentation framework — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 83–85

  1. Goal — First you define the goal that you want to achieve; usually this is something that is directly tied to the success of your business. Note that you might also articulate this goal as an ideal user experience that you want to provide. This is often because you believe that delivering that ideal experience will ultimately lead to business success.
  2. Problem/opportunity area — You’ll then identify an area of focus for achieving that goal, either by addressing a problem that you want to solve for your users or by finding an opportunity area to offer your users something that didn’t exist before or is a new way of satisfying their needs.
  3. Hypothesis — After that, you’ll create a hypothesis statement which is a structured way of describing the belief about your users and product that you want to test. You may pursue one hypothesis or many concurrently.
  4. Test — Next, you’ll create your test by designing the actual experience that represents your idea. You’ll run your test by launching the experience to a subset of your users.
  5. Results — Finally, you’ll end by getting the reaction to your test from your users and doing analysis on the results that you get. You’ll take these results and make decisions about what to do next.
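Reading these steps, I find it helps to imagine actually writing the ‘Definition’ phase down as a small, structured experiment plan before touching any designs. The sketch below is my own illustration of what that could look like; the structure and field values are made up, not prescribed by the book:

```python
from dataclasses import dataclass

# A lightweight way to capture the 'Definition' phase of the experimentation
# framework: goal, problem/opportunity area, hypothesis and success criteria.
# Field values are invented purely for illustration.

@dataclass
class ExperimentPlan:
    goal: str                          # business (or experience) goal the test ties back to
    problem_or_opportunity: str        # the area of focus for achieving that goal
    hypothesis: str                    # the belief about your users/product you want to test
    primary_metric: str                # how you'll judge success in the 'Analysis' phase
    minimum_detectable_effect: float   # smallest lift you'd act on

plan = ExperimentPlan(
    goal="Increase completed sign-ups",
    problem_or_opportunity="Drop-off on the pricing step of onboarding",
    hypothesis="If we surface the free-trial option first, more new users will complete sign-up",
    primary_metric="Sign-up completion rate",
    minimum_detectable_effect=0.015,   # 1.5 percentage points
)

print(plan)
```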

Fig. 5 — Expected (“positive”) results — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 227–228

  • How large of an effect will your changes have on users? Will this new experience require any new training or support? Will the new experience slow down the workflow for anyone who has become accustomed to how your current experience is?
  • How much work will it take to maintain?
  • Did you take any “shortcuts” in the process of running the test that you need to go back and address before you roll it out to a larger audience (e.g. edge cases or fine-tuning details)?
  • Are you planning on doing additional testing and if so, what is the time frame you’ve established for that? If you have other large changes that are planned for the future, then you may not want to roll your first positive test out to users right away.

Fig. 6 — Unexpected and undesirable (“negative”) results — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 228–231

  • Are they using the feature the way you think they do?
  • Do they care about different things than you think they do?
  • Are you focusing on something that only appeals to a small segment of the base but not the majority?

Related links for further learning:

  1. https://www.ted.com/watch/ted-institute/ted-bcg/rochelle-king-the-complex-relationship-between-data-and-design-in-ux
  2. http://andrewchen.co/know-the-difference-between-data-informed-and-versus-data-driven/
  3. https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
  4. https://vwo.com/ab-split-test-significance-calculator/
  5. https://www.kissmetrics.com/growth-tools/ab-significance-test/
  6. https://select-statistics.co.uk/blog/importance-effect-sample-size/
  7. https://www.optimizely.com/optimization-glossary/statistical-significance/
  8. https://medium.com/airbnb-engineering/experiment-reporting-framework-4e3fcd29e6c0
  9. https://medium.com/@Pinterest_Engineering/building-pinterests-a-b-testing-platform-ab4934ace9f4
  10. https://medium.com/airbnb-engineering/https-medium-com-jonathan-parks-scaling-erf-23fd17c91166

Book review: “Customers Included”

In the book “Customers Included”, Mark Hurst and Phil Terry make a great case for listening to the customer. In the book, Hurst and Terry look at why customers get overlooked by companies and explain how to best engage with customers:

  1. Why do customers get overlooked? – “The problem with customers is that they don’t always know what’s best for them” is a quote from Netflix CEO Reed Hastings referred to in the book. Similarly, Harvard Business School professor Clayton Christensen warns that paying too much attention to today’s customers could lead a company to avoid the necessary step of disrupting itself to prepare for tomorrow’s market. These are common reasons why customers don’t always get involved or listened to when it comes to creating or improving products.
  2. Listening and disrupting can go hand in hand – Hurst and Terry argue that listening to customers isn’t as black and white as the likes of Hastings and Christensen portray it to be. There’s room for nuance; accounting for different types of customers and different ways of listening to them. They make the point that “being disruptive requires knowing how to listen, in the right ways, to the right customers.” I totally agree that even in disruptive environments, it’s still essential to include the customer. The point being that innovation should be focused on creating benefits for the customer, measuring innovation by its impact on the customer.
  3. Difference between what people think and what they actually do – There’s typically a big difference between what people think (or say they think) and what they actually do. In my experience, this phenomenon raises its head particularly in focus groups, where people get together to give their feedback on a product. Hurst and Terry make the point that the very structure of a focus group fails to approximate real-world usage of a product, simply because having a number of people talking about a product doesn’t equal actual usage.
  4. The power of direct observations – The risk with research methods like focus groups is that customers give hypothetical answers, speculating about how they might behave, or how they could feel. I don’t find this feedback particularly helpful as it doesn’t give me a reliable indication of how people actually behave or how they really feel. This is the key reason why Hurst and Terry advocate the use of direct observations; observing people in the appropriate environment, watching what they (don’t) do. For example, if you’re looking to learn more about people’s grocery shopping behaviours, you’re most likely to learn the most from observing people whilst they’re shopping at the supermarket.
  5. Doubts about personas – Hurst and Terry argue that “personas prioritise the hypothetical over the actual, and fiction over fact”. A user persona is a fictitious person with a fictitious profile. These aren’t real life people and I agree that if you do work with personas, you should always validate your made up user traits with real people. If you don’t do this validation, there’s a big risk of making product decisions solely based on hypothetical data.
  6. Limitations of task-based usability testing – Similar to the aforementioned point about personas, Hurst and Terry explain the limitations of task-based usability testing (see Fig. 1 below). The overarching problem with only doing usability testing is that you might miss out on larger, more strategic insights. At its core, usability testing is tactical; it helps you learn how people use your product and identify any points of friction.
  7. Discovering unmet needs – “Unmet needs” are the antidote to the concept of “customers not knowing what they want” or “build it and they (customers) will come.” By just focusing on set usability tasks, Hurst and Terry argue, you’re unlikely to develop more strategic insights into your customers and their needs. To solve this, Hurst and Terry suggest direct observations and so-called “listening labs” as a better way of uncovering unmet needs.

Main learning point: “Customers Included” offers some good primers to use when convincing others of the importance of engaging with customers. More than that, the book also provides a ‘nuanced’ overview of the different user research methods to use, explaining pros and cons of each method.

Fig. 1 – Drawbacks of task-based usability testing – Taken from: Mark Hurst and Phil Terry, Customers Included, pp. 70-71

  1. The user tasks are all determined by the researchers beforehand
  2. The insights gained from the usability test are limited by those tasks
  3. The focus of task-based usability testing is on tactical design elements