App review: StatusToday

Artificial Intelligence (‘AI’) has rapidly become yet another buzzword in the tech space, so I’m always on the lookout for AI-based applications which add actual customer value. StatusToday could be that kind of app:

My quick summary of StatusToday before using it – I think StatusToday provides software to help manage teams of employees; I suspect this product is geared towards HR people.

How does StatusToday explain itself in the first minute? – “Understand your employees” is the strapline that catches my eye. Whilst I’m not entirely clear on the tangible benefits StatusToday delivers, I do get that it offers employee data. I presume that customers will have access to a data portal and can generate reports.

What does StatusToday do (1)? – StatusToday analyses human behaviour and generates a digital fingerprint for individual employees. The company originally started out with a sole focus on using AI for cyber security, applying designated algorithms to analyse internal online comms, detecting behavioural patterns in comms activity and quickly spotting any abnormal activity or negligence. For example, ‘abnormal file exploration’ and ‘access from unusual locations’ are two behaviours that StatusToday will be tracking for its clients.

What does StatusToday do (2)? – StatusToday has since started offering more generic employee insights services. By plugging into the various online tools companies may use, Google and Microsoft for example, StatusToday will start collecting employee activity data. This will help companies get better visibility of employee behaviour as well as make the processes around data access and usage more efficient.

It makes me wonder to what extent there’s a “big brother is watching you” element to StatusToday’s products and services. For example, will the data accessible through StatusToday’s “Live Dashboard” (eventually) make it easier for companies to punish employees if they’re spending too much time on Facebook?

Main learning point: I can see how StatusToday takes the (manual) pain out of monitoring suspicious online activity and helps companies to preempt data breaches and other ‘anomalies’.

 

Related links for further learning:

  1. https://techcrunch.com/2018/02/20/statustoday/
  2. https://www.youtube.com/watch?v=KhIkx8ZvA-Q
  3. https://techcrunch.com/2015/09/09/ef4/
  4. https://blog.statustoday.com/1nature-is-not-your-friend-but-ai-is-d94aaa13fd2e
  5. https://blog.statustoday.com/1your-small-business-could-be-in-big-trouble-7a34574ab42c

App review: Warby Parker

I recently listened to a podcast which was all about Warby Parker and its origins. After listening to the podcast, I was keen to have a closer look at Warby Parker’s website:

My quick summary of Warby Parker before using it – Warby Parker is disrupting the way in which consumers discover and buy glasses. I expect a product which removes the need for physical opticians.

How does Warby Parker explain itself in the first minute? – Accessing https://www.warbyparker.com/ on desktop, I see a nice horizontal layout, dominated by two hero images. There are two main calls to action. Firstly, “Try frames at home – for free”, which then offers me to either “get started” or “browse frames”. Secondly, “Shop online” which lets me shop for eyeglasses and sunglasses.

Getting started, what’s the process like? – After clicking on “Get started”, I can choose between styles for men and women.

Having selected “Men’s styles”, I’m pleased that there’s an option for me to skip the “What’s your fit?” screen as I’m unsure about the width of my face 🙂

Selecting a shape of frames feels somewhat easier, and it’s good that I can select all three shapes if I wish. In the end, I go for “rectangular”.

The same applies for the next screen, where I can pick colours and I select “Neutral” and “Black” simply because I find it easier to visualise what the frames will look like in these colours.

I decide to skip the step where I can choose between different materials. The icons on this screen do help, but I personally would have benefited from seeing some real samples of materials such as acetate and titanium, just to get a better idea.

It’s good that I’m then asked about my last eye exam. I wonder if and when I’ll be asked for the results from my last eye test in order to determine the strength of the glasses I need.

The next holding screen is useful since up to this point I hadn’t been sure about how Warby Parker’s service works. The explanations are clear and simple, encouraging me to click on the “Cool! Show me my results.” call to action at the bottom of the screen. I now understand that I can upload my prescription at checkout, but I wonder if I need to go to an eye doctor or an optician first in order to get a recent (and more reliable) prescription …

I’m then presented with 15 frames to choose from. From these 15 frames, Warby Parker lets me pick 5 frames to try on at home. I like how I can view the frames in the different colours that I selected as part of step 4 (see above). If I don’t like the frames suggested to me, I can always click “Browse all Home Try-on frames” or “Retake the quiz”.

I like the look of the “Chamberlain” so I select this pair of frames and click on “Try at home for free”.

As soon as I’ve clicked on the “Try at home for free” button a small tile appears which confirms that I’ve added 1 out of 5 frames which I can try at home. I can either decide to find another frame or view my cart.

When I click on “Find another frame” I expect to be taken back to my previous quiz results. Instead, I can now see a larger number of frames, but there’s the option to go back to my original quiz results, and matches with my results have been highlighted.

I really like how the signup / login stage has been positioned right at the very end of my journey – i.e. at the checkout stage – and that I can just continue as a new customer.

My Warby Parker experience sadly ends when I realise that Warby Parker doesn’t ship frames to the United Kingdom. No matter how I hard I try, I can only enter a US address and zip code 😦

 

Did Warby Parker deliver on my expectations? – Yes and no. I felt Warby Parker’s site was great with respect to discovery and customisation, but I do think there’s an opportunity to include some explanatory bits about Warby Parker’s process.

 

Related links for further learning:

  1. https://www.stitcher.com/podcast/national-public-radio/how-i-built-this/e/48640659
  2. https://www.recode.net/2018/3/14/17115230/warby-parker-75-million-funding-t-rowe-price-ipo
  3. https://www.fastcompany.com/3041334/warby-parker-sees-the-future-of-retail

My product management toolkit (27): checklists

If you knew me personally, you’d know that I love a good list. Making lists helps me to outline my thoughts, see connections and prioritise. Three years ago, I wrote about “The Checklist Manifesto” by Atul Gawande, which is a great book about the importance of checklists and the ingredients of a good checklist (see Fig. 1 below).

 

Fig. 1 – Key learnings from “The Checklist Manifesto” – Taken from: https://marcabraham.com/2015/07/01/book-review-the-checklist-manifesto/, 1 July 2015:

  1. Why checklists? – As individuals, the volume and complexity of the know-how that we carry around in our heads or (personal) systems is increasingly becoming unmanageable. Gawande points out that it’s becoming very hard for individuals to deliver the benefits of their know-how correctly. We therefore need a strategy for overcoming (human) failure. On the one hand, this strategy needs to build on people’s experience and take advantage of their knowledge. On the other hand, however, this strategy needs to take into account human inadequacies. Checklists are a very useful part of this strategy.
  2. What makes a good checklist? – Gawande stresses that the checklist can’t be lengthy. A rule of thumb that some people use is to have between 5 and 9 items on a checklist in order to keep things manageable. The book contains some good real-life examples of how people go about starting their checklists. For example, looking at lessons learned from previous projects or the errors known to occur at any point.
  3. How to use a checklist – I believe that the key thing to bear in mind when using checklists is that they aren’t supposed to tell you what to do. As the book explains, a checklist isn’t a magic formula. Instead, having a checklist helps you at every step of the way, making sure you’ve got all the crucial info or data required at each step. Also, a checklist is a critical communication tool, as it outlines who you need to talk to (and why, and about what) at each step of the way. Gawande also highlights the value of the ‘discipline’ that comes with having a checklist, the routine that’s involved in having a checklist. I’d add to this that a checklist can be a great way of identifying and mitigating risk upfront.

 

I’ve since read Leander Kahney’s biography of Jony Ive, Apple’s design honcho. The book contains a quote about Apple’s New Product Process (‘ANPP’):

“In the world according to Steve Jobs, the ANPP would rapidly evolve into a well-defined process for bringing new products to market by laying out in extreme detail every stage of product development.

Embodied in a program that runs on the company’s internal network, the ANPP resembled a giant checklist. It detailed exactly what everyone was to do at every stage for every product, with instructions for every department ranging from hardware to software, and on to operations, finance, marketing, even the support teams that troubleshoot and repair the product after it goes to market.”

I perked up when I read how the ANPP resembles “a giant checklist”, detailing what needs to happen at each stage of the product development process. Apple’s process covers all stages, from concept to market launch. The ANPP itself is understandably kept secret, but I believe that we don’t need to know the ins and outs of Apple’s product development process to look at the use of checklists when developing and managing products:

Checklists aren’t the same as Gantt Charts! – It’s easy to confuse a short checklist with a Gantt Chart. Over the years, I’ve observed how people can derive a lot of certainty from creating and viewing detailed Gantt Charts or roadmaps (see my previous thoughts on this topic here). In my view, a super detailed checklist defeats the object. Instead, I encourage you to have short checklists that highlight both basic and critical steps to go through when developing / launching / managing products.

Checklists are evolving – Checklists are evolving in the sense that they’re likely to be different per product / team / project / etc. I find, for example, that each time I work with a new team of people or on a new product, the checklist reflects the specific steps that need to be checked, tailored to the team’s way of working or the specific product at hand.

Checklists are a collective effort – I’m currently onboarding a new UX designer into my team, and he’s keen for us to look at the ‘design checklist’ together, as he’s got some suggestions on how to make it work better. This might mean that the existing design checklist (see Fig. 2 below), underpinned by my preferred dual track approach, might be binned or adapted accordingly. Both are fine, as I expect checklists to be formed by those people who are responsible for checking the different list items. I’ve seen people treat their individually developed checklists as a decree … which others had to follow blindly. My simple reaction to that kind of approach: no, no, no, no.

 

Fig. 2 – Sample ”design checklist”:

Democratise the sign-off process (1) – Often, quality assurance people come up with and drive the best checklists. However, the risk I’ve observed is that these QAs or the product managers become the single sign-off point for the checklist in question. I go into companies and look at their Scrum and Kanban boards, which have cards stating “Pete sign-off” or “Jackie sign-off”. I recently spoke to product managers at a company where the CEO wanted to sign off each feature.

Democratise the sign-off process (2) – Whilst nothing seems wrong with this approach on the face of it, there are two reasons why I feel uncomfortable with this ‘single sign-off’ approach. Firstly, to me, a hallmark of a truly self-organising and empowered team is that everyone feels empowered to ‘sign off’ on the end result (and its individual components). Secondly, I’m also worried about what happens if the designated sign-off person isn’t available. What happens if Pete and Christina are off ill or in never-ending meetings? What if the CEO isn’t available for sign-off? Does the feature or product not get released to market? In short, I’m worried about creating another bottleneck or ‘single point of failure’, or forfeiting speed to market.

Don’t forget the basic steps – How often have you been in a situation where you’ve just launched a new product or feature and realised that you forgot to test the styling of the images, content and calls to action? Sense checking things like these sounds like a basic step, but it’s one that’s easily forgotten in the excitement (and haste) to launch. Having basic steps like ‘check content’ included in your ‘pre-launch checklist’ will make sure that things don’t get overlooked (see Fig. 3 below).
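As a sketch, such a pre-launch checklist can even live as data next to your release process, so the basic steps can’t be silently skipped. The items below are hypothetical examples, not a prescribed list:

```python
# A hypothetical pre-launch 'hygiene checklist' kept as data, so basic
# steps can't be silently skipped before a release.
HYGIENE_CHECKLIST = [
    "Check content and copy",
    "Check styling of images",
    "Check calls to action",
    "Train the customer support team",
]

def release_ready(completed: set) -> list:
    """Return the checklist items still outstanding; release only when empty."""
    return [item for item in HYGIENE_CHECKLIST if item not in completed]

# Only one item done so far, so three steps still block the launch
outstanding = release_ready({"Check content and copy"})
print(outstanding)
```

The point isn’t the code itself but the discipline: the list is short, visible, and checked the same way every time.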

 

Fig. 3 – Sample ”hygiene checklist”:

 

Include critical steps, lessons learned – I’ve found checklists to be a good way to incorporate key lessons learned on a continuous basis. The risk with post-mortem sessions or retrospectives is that lessons learned don’t get actioned and tend to be forgotten. Including a learning in a checklist is a simple but effective way of ensuring that a learning sticks. For example, “training the customer support team” (on a new product, user flow or feature) was a critical step that I used to forget consistently. Including this item in a ‘go-to-market checklist’ helped me and my team make sure this step wouldn’t be forgotten about anymore.

Put your checklist on a wall – Finally, I’d recommend making your checklist as visible and shareable as possible. You can stick your checklist on an office wall or, if the team aren’t all working in the same space, in collaboration software products like Confluence and Trello (see Fig. 4 – 5 below).

 

Fig. 4 – Put your checklist on a wall – Taken from: https://www.superestudio.co.uk/wall-checklist

 

 

Fig. 5 – Add your checklist in Trello – Taken from: https://www.addictivetips.com/internet-tips/trello-an-online-pinboard-for-task-organization-collaboration/

 

 

Main learning point: As long as you don’t confuse them with highly detailed project plans or roadmaps, checklists can be a valuable tool in making sure you and your team don’t overlook key steps when developing products!

 

Related links for further learning:

  1. https://www.interaction-design.org/literature/article/apple-s-product-development-process-inside-the-world-s-greatest-design-organization
  2. https://qz.com/183861/any-company-can-copy-the-keystone-of-apples-design-process/
  3. http://www.theequitykicker.com/2014/03/06/apples-new-product-process-long-checklist/
  4. https://www.quora.com/What-is-Apple-s-product-development-process
  5. https://blog.toggl.com/gantt-chart/
  6. http://datainsightsideas.com/post/18502350035
  7. https://en.wikipedia.org/wiki/Single_point_of_failure

App review: Blinkist

The main driver for this app review of Blinkist is simple: I heard a fellow product manager talking about it and was intrigued (mostly by the name, I must add).

My quick summary of Blinkist (before using it) – “Big ideas in small packages” is what I read when I Google Blinkist. I expect an app which provides me with executive-type summaries of books and talks, effectively reducing them to bitesize ideas and talking points.

How does Blinkist explain itself in the first minute? – When I go into Apple’s app store and search for Blinkist, I see a strapline which reads “Big ideas from 2,000+ nonfiction books” and “Listen or read in just 15 minutes”. There’s also a mention of “Always learning” which sounds good …

 

 

Getting started, what’s the process like? (1) – I like how Blinkist lets me swipe across a few screens before deciding whether to click on the “Get started” button. The screens use Cal Newport’s “Deep Work” book as an example to demonstrate the summary Blinkist offers of the book, the 15-minute extract to read or listen to, and how one can highlight relevant bits of the extract. These sample screens give me a much better idea of what Blinkist is about, before I decide whether to sign up or not.

 

 

Getting started, what’s the process like? (2) – I use my Facebook account to sign up. After clicking on “Connect with Facebook” and providing authorisation, I land on a screen which mentions “£59.99 / year*”, followed by a whole lot of small print. Hold on a minute! I’m not sure I want to commit for a whole year; I haven’t used Blinkist’s service yet! Instead, I decide to go for the “Subscribe & try 7 days for free” option at the bottom of the screen.

 

Despite my not wanting to pay for the Blinkist service at this stage, I’m nevertheless presented with an App Store screen which asks me to confirm payment. No way! I simply dismiss this screen and land on a – much friendlier – “Discover” screen.

 

 

To start building up my own library I need to go into the “Discover” section and pick a title. However, when I select “Getting Things Done”, which is suggested to me in the Discover section, I need to unlock it first by starting a free 7-day trial. I don’t want to do this at this stage! I just want to get a feel for the content, for what Blinkist has to offer, and for how I can best get value out of its service. I decide not to sign up at this stage and leave things here … Instead of letting me build up my library and invest in Blinkist and its content, and only then making me ‘commit’, Blinkist has gone for a free trial and subscription model. This is absolutely fine, but it unfortunately doesn’t work for me, as I just want to learn more before leaving my email address, committing to payment, etc.

 

 

Did Blinkist deliver on my expectations? – No; I came away disappointed.

 

 

 

Book review: “Designing with Data”

I’d been looking forward to Rochelle King writing her book about using data to inform designs (I wrote about using data to inform product decisions a few years ago, in a post which followed a great conversation with Rochelle).

Earlier this year, Rochelle published “Designing with Data: Improving the User Experience with A/B Testing”, together with Elizabeth F. Churchill and Caitlin Tan. The main theme of “Designing with Data” is the authors’ belief that data capture, management, and analysis are the best way to bridge between design, user experience, and business relevance:

  1. Data aware — In the book, King, Churchill and Tan distinguish between three different ways to think about data: data driven, data informed and data aware (see Fig. 1 below). The third way, ‘data aware’, is introduced by the authors: “In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions.” If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.
  2. How much data to collect? — The authors make an important distinction between “small sample research” and “large sample research”. Small sample research tends to be good for identifying usability problems, because “you don’t need to quantify exactly how many in the population will share that confusion to know it’s a problem with your design.” It reminded me of Jakob Nielsen’s point about how the best results come from testing with no more than five people. In contrast, collecting data from a large group of participants, i.e. large sample research, can give you more precise quantity and frequency information: how many people feel a certain way, what percentage of users will take this action, etc. A/B tests are one way of collecting data at scale, with the data being “statistically significant” and not just anecdotal. Statistical significance is the likelihood that the difference in conversion rates between a given variation and the baseline is not due to random chance.
  3. Running A/B tests: online experiments — The book does a great job of explaining what is required to successfully run A/B tests online, providing tips on how to sample users online and key metrics to measure (see Fig. 2).
  4. Minimum Detectable Effect — There’s an important distinction between statistical significance — which measures whether there’s a difference — and “effect”, which quantifies how big that difference is. The book explains how to determine the “Minimum Detectable Effect” when planning online A/B tests. The Minimum Detectable Effect is the minimum effect we want to observe between our test condition and control condition in order to call the A/B test a success. It can be positive or negative, but you want to see a clear difference in order to be able to call the test a success or a failure.
  5. Know what you need to learn — The book covers hypotheses as an important way to figure out what it is that you want to learn through the A/B test, and to identify what success will look like. In addition, you can look at learnings beyond the outcomes of your A/B test (see Fig. 3 below).
  6. Experimentation framework — For me, the most useful section of the book was Chapter 3, in which the authors introduce an experimentation framework that helps you plan your A/B test in a more structured fashion (see Fig. 4 below). They describe three main phases — Definition, Execution and Analysis — which feed into the experimentation framework. The ‘Definition’ phase covers the definition of a goal, the articulation of a problem / opportunity and the drafting of a testable hypothesis. The ‘Execution’ phase is all about designing and building the A/B test, “designing to learn” in other words. In the final ‘Analysis’ phase you’re getting answers from your experiments. These results can be either “positive” and expected or “negative” and unexpected (see Fig. 5–6 below).
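To make the statistical significance point above concrete, here is a minimal sketch of a two-proportion z-test on conversion rates in Python. This is my own illustrative example (with made-up figures), not code from the book:

```python
from math import sqrt
from statistics import NormalDist

def z_test_conversions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is variant B's conversion rate significantly
    different from baseline A's, or plausibly due to random chance?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
    return z, p_value

# Baseline: 100 conversions out of 1,000 users (10%)
# Variant:  130 conversions out of 1,000 users (13%)
z, p = z_test_conversions(100, 1000, 130, 1000)
print(round(z, 2), round(p, 4))  # significant at the 5% level if p < 0.05
```

The same machinery underlies the Minimum Detectable Effect: before running the test, you fix the smallest lift you care about and work backwards to the sample size needed for the p-value to be able to reach significance.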

Main learning point: “Designing with Data” made me realise again how much thinking and designing needs to happen before running a successful online A/B test. “Successful” in this context means achieving clear learning outcomes. The book provides a comprehensive overview of the key considerations to take into account in order to optimise your learning.

Fig. 1 — Three ways to think about data — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 3–9

  • Data driven — With a purely data driven approach, it’s data that determine the fate of a product; based solely on data outcomes businesses can optimise continuously for the biggest impact on their key metric. You can be data driven if you’ve done the work of knowing exactly what your goal is, and you have a very precise and unambiguous question that you want to understand.
  • Data informed — With a data informed approach, you weigh up data alongside a variety of other variables such as strategic considerations, user experience, intuition, resources, regulation and competition. So adopting a data-informed perspective means that you may not be as targeted and directed in what you’re trying to understand. Instead, what you’re trying to do is inform the way you think about the problem and the problem space.
  • Data aware — In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions. If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.

Fig. 2 — Generating a representative sample — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 45–53

  • Cohorts and segments — A cohort is a group of users who have a shared experience. Alternatively, you can also segment your user base into different groups based on more stable characteristics such as demographic factors (e.g. gender, age, country of residence), or you may want to group them by their behaviour (e.g. new user, power user).
  • New users versus existing users — Data can help you learn more about both your existing and prospective future users, and determining whether you want to sample from new or existing users is an important consideration in A/B testing. Existing users are people who have prior experience with your product or service. Because of this, they come into the experience with a preconceived notion of how your product or service works. Thus, it’s important to be careful about whether your test is with new or existing users, as these learned habits and behaviours about how your product used to be in the past can bias your A/B test.
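One practical way to assign users to a sample online (a common industry technique rather than something prescribed by the book) is deterministic hashing: a returning user always lands in the same variant, and different experiments bucket independently:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "test")):
    """Deterministically bucket a user into a variant. Hashing the
    experiment name together with the user id means the same user always
    sees the same variant, while separate experiments stay uncorrelated."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

print(assign_variant("user-42", "new-signup-flow"))
```

Because assignment is a pure function of the ids, no per-user state needs to be stored, which makes it easy to keep existing and new users in consistent cohorts for the lifetime of the experiment.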

Fig. 3 — Know what you want to learn — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, p. 67

  • If you fail, what did you learn that you will apply to future designs?
  • If you succeed, what did you learn that you will apply to future designs?
  • How much work are you willing to put into your testing in order to get this learning?

Fig. 4 — Experimentation framework — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 83–85

  1. Goal — First you define the goal that you want to achieve; usually this is something that is directly tied to the success of your business. Note that you might also articulate this goal as an ideal user experience that you want to provide. This is often because you believe that delivering that ideal experience will ultimately lead to business success.
  2. Problem/opportunity area — You’ll then identify an area of focus for achieving that goal, either by addressing a problem that you want to solve for your users or by finding an opportunity area to offer your users something that didn’t exist before or is a new way of satisfying their needs.
  3. Hypothesis — After that, you’ll create a hypothesis statement which is a structured way of describing the belief about your users and product that you want to test. You may pursue one hypothesis or many concurrently.
  4. Test — Next, you’ll create your test by designing the actual experience that represents your idea. You’ll run your test by launching the experience to a subset of your users.
  5. Results — Finally, you’ll end by getting the reaction to your test from your users and doing analysis on the results that you get. You’ll take these results and make decisions about what to do next.

Fig. 5 — Expected (“positive”) results — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 227–228

  • How large of an effect will your changes have on users? Will this new experience require any new training or support? Will the new experience slow down the workflow for anyone who has become accustomed to how your current experience is?
  • How much work will it take to maintain?
  • Did you take any “shortcuts” in the process of running the test that you need to go back and address before you roll it out to a larger audience (e.g. edge cases or fine-tuning details)?
  • Are you planning on doing additional testing and if so, what is the time frame you’ve established for that? If you have other large changes that are planned for the future, then you may not want to roll your first positive test out to users right away.

Fig. 6 — Unexpected and undesirable (“negative”) results — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 228–231

  • Are they using the feature the way you think they do?
  • Do they care about different things than you think they do?
  • Are you focusing on something that only appeals to a small segment of the base but not the majority?

Related links for further learning:

  1. https://www.ted.com/watch/ted-institute/ted-bcg/rochelle-king-the-complex-relationship-between-data-and-design-in-ux
  2. http://andrewchen.co/know-the-difference-between-data-informed-and-versus-data-driven/
  3. https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
  4. https://vwo.com/ab-split-test-significance-calculator/
  5. https://www.kissmetrics.com/growth-tools/ab-significance-test/
  6. https://select-statistics.co.uk/blog/importance-effect-sample-size/
  7. https://www.optimizely.com/optimization-glossary/statistical-significance/
  8. https://medium.com/airbnb-engineering/experiment-reporting-framework-4e3fcd29e6c0
  9. https://medium.com/@Pinterest_Engineering/building-pinterests-a-b-testing-platform-ab4934ace9f4
  10. https://medium.com/airbnb-engineering/https-medium-com-jonathan-parks-scaling-erf-23fd17c91166

 


 

 

 

 

Book review: “Just Enough Research”

Back in 2013, Erika Hall, co-founder of Mule Design, wrote “Just Enough Research”. In this book, Hall explains why good customer research is so important. She outlines what makes research effective and provides practical tips on how to best conduct research. Reading “Just Enough Research” reminded me of reading “Rocket surgery made easy” by Steve Krug and “Undercover UX” by Cennydd Bowles, since all three books do a good job at both explaining and demystifying what it takes to do customer research.

These are the main things that I learned from reading “Just Enough Research”:

  1. What is research? – Right off the bat, Hall makes the point that in order to innovate, it’s important for you to know about the current state of things and why they’re like that. Research is systematic inquiry; you want to know more about a particular topic, so you go through a process to increase your knowledge. The specific type of process depends on who you are and what you need to know. This is illustrated through a nice definition of design research by Jane Fulton Suri, partner at design consultancy IDEO (see Fig. 1).
  2. Research is not asking people what they like! – I’m fully aware of how obvious this statement probably sounds. However, customer research is NOT about asking what people do or don’t like. You might sometimes hear people ask users whether they like a particular product or feature; that isn’t what customer research is about. Instead, the focus is on exploring problem areas or new ideas, or simply testing how usable your product is.
  3. Generative or exploratory research – This is the research you do to identify the problem to solve and explore ideas. As Hall explains “this is the research you do before you even know what you’re doing.” Once you’ve gathered information, you then analyse your learnings and identify the most commonly voiced (or observed) unmet customer needs. This will in turn result in a problem statement or hypothesis to concentrate on.
  4. Descriptive and explanatory research – Descriptive research is about understanding the context of the problem that you’re looking to solve and how to best solve it. By this stage, you’ll have moved from “What’s a good problem to solve” to “What’s the best way to solve the problem I’ve identified?”
  5. Evaluative research – Usability testing is the most common form of evaluative research. With this research you test that your solution is working as expected and is solving the problem you’ve identified.
  6. Causal research – This type of research is about establishing a cause-and-effect relationship, understanding the ‘why’ behind an observation or pattern. Causal research often involves looking at analytics and carrying out A/B tests.
  7. Heuristic analysis – In the early stages of product design and development, evaluative research can be done in the form of usability testing (see point 5. above) or heuristic analysis. You can test an existing site or application before redesigning. “Heuristic” means “based on experience”. A heuristic is not a hard measure; it’s more of a qualitative guideline of best usability practice. Jakob Nielsen, arguably the founding father of usability, came up with the idea of heuristic analysis in 1990 and introduced ten heuristic principles (see Fig. 2).
  8. Usability testing – Testing the usability of a product with people is the second form of evaluative testing. Nielsen, the aforementioned usability guru, outlined five components that define usability (see Fig. 3). Hall stresses the importance of “cheap tests first, expensive tests later”; start simple – paper prototypes or sketches – and gradually up the ante.

Main learning point: “Just Enough Research” is a great, easy-to-read book which underlines the importance of customer research. The book does a great job of demonstrating that research doesn’t have to be expensive or onerous; it provides plenty of simple and practical ways to conduct ‘just enough research’.


Fig. 1 – Definition of “design research” by Jane Fulton Suri – Taken from: https://www.ideo.com/news/informing-our-intuition-design-research-for-radical-innovation

“Design research both inspires imagination and informs intuition through a variety of methods with related intents: to expose patterns underlying the rich reality of people’s behaviours and experiences, to explore reactions to probes and prototypes, and to shed light on the unknown through iterative hypothesis and experiment.”

Fig. 2 – Jakob Nielsen’s 10 Heuristics for User Interface Design (taken from: http://www.nngroup.com/articles/ten-usability-heuristics/)

  1. Visibility of system status – The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  2. Match between system and the real world – The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
  3. User control and freedom – Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  4. Consistency and standards – Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
  5. Error prevention – Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
  6. Recognition rather than recall – Minimise the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
  7. Flexibility and efficiency of use – Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  8. Aesthetic and minimalist design – Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  9. Help users recognise, diagnose, and recover from errors – Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  10. Help and documentation – Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.

Fig. 3 – Jakob Nielsen’s 5 components of usability – Taken from: Erika Hall. Just Enough Research, pp. 105-106

  • Learnability – How easy is it for users to accomplish basic tasks the first time they come across the design?
  • Efficiency – Once users have learned the design, how quickly can they perform tasks?
  • Memorability – When users return to the design after a period of not using it, how easily can they reestablish proficiency?
  • Errors – How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
  • Satisfaction – How pleasant is it to use the design?


My product management toolkit (23): customer empathy

A few weeks ago I attended the annual Mind the Product conference in San Francisco, where David Wascha delivered a great talk about some of the key lessons learned in his 20 years of product management experience. He impressed on the audience that as product managers we need to shield our teams, but ultimately we need to “protect our customer” and their needs.

Dave’s point really resonated with me and prompted me to think more about how product managers can best protect customers and their needs. I believe this begins with fully understanding your customers; “customer empathy” is what comes to mind here:

  1. What’s customer empathy (1)? – In the dictionary, empathy is typically defined as “the ability to understand and share the feelings of another.” In contrast, sympathy is about feeling bad for someone else because of something that has happened to him or her. When I think about empathising with customers, I think about truly understanding their needs or problems. To me, the ultimate example of customer empathy can be found in Change By Design, a great book by IDEO’s Tim Brown. In this book, Brown describes an IDEO employee who wanted to improve the experience of ER patients. The employee subsequently became an emergency room patient himself in order to experience first-hand what it was like to be in an ER.
  2. What’s customer empathy (2)? – I love how UX designer Irene Au describes design as “empathy made tangible”. Irene distinguishes between analytical thinking and empathic thinking, referring to a piece by Anthony Jack of Case Western University in this regard. Jack found that when people think analytically, they tend not to use the areas of the brain that allow us to understand other people’s experience. It’s great to use data to inform the design and build of your product, and any decisions you make in the process. The risk with both quantitative data (e.g. analytics and surveys) and qualitative data (e.g. user interviews and observations) is that you still end up quite removed from what the customer actually feels or thinks. We want to make sure that we really understand customer pain points and the impact of these pain points on the customers’ day-to-day lives.
  3. What’s customer empathy (3)? – I recently came across a video by the Cleveland Clinic – a non-profit academic medical centre that specialises in clinical and hospital care – which embodies customer empathy in a very inspiring and effective way (see Fig. 1 below). The underlying premise of the video is all about looking through another person’s eyes, truly trying to understand what someone else is thinking or feeling.

Fig. 1 – Cleveland Clinic Empathy: The Human Connection to Patient Care – Taken from: https://www.youtube.com/watch?v=cDDWvj_q-o8&feature=youtu.be

I see customer empathy as a skill that can be learned. In previous pieces, I’ve looked at some of the tools and techniques you can use to develop customer empathy. This is a quick recap of three simple ways to get started:

Listen. Listen. Listen – I often find myself dying to say something, to get my two cents in. I’ve learned that this desire is the first thing that needs to go if you want to develop customer empathy. Earlier this year, I learned about the four components of active listening from reading “The Art of Active Listening”. Empathy is one of the four components of active listening:

Empathy is about your ability to understand the speaker’s situation on an emotional level, based on your own view. Basing your understanding on your own view, instead of on a sense of what should be felt, creates empathy instead of sympathy. Empathy can also be defined as your desire to feel the speaker’s emotions, regardless of your own experience.

Empathy Map – I’ve found empathy mapping to be a great way of capturing your insights into another person’s thoughts, feelings, perceptions, pain, gains and behaviours (see Fig. 2 below). In my experience, empathy maps tend to be most effective when they’ve been created collectively and validated with actual customers.

Fig. 2 – Example empathy map, by Harry Brignull – Taken from: “How To Run an Empathy Mapping & User Journey Workshop” https://medium.com/@harrybr/how-to-run-an-empathy-user-journey-mapping-workshop-813f3737067

Problem Statements – To me, product management is all about – to quote Ash Maurya – “falling in love with the problem, not your solution.” Problem statements are an easy but very effective way to both capture and communicate your understanding of customer problems to solve. Here’s a quick snippet from an earlier ‘toolkit post’, dedicated to writing effective problem statements:

Standard formula:

Stakeholder (describe person using empathetic language) NEEDS A WAY TO Need (needs are verbs) BECAUSE Insight (describe what you’ve learned about the stakeholder and his need)

Some simple examples:

Richard, who loves to eat biscuits, wants to find a way to eat at least five biscuits a day without gaining weight, as he’s currently struggling to keep his weight under control.

Sandra from The Frying Pan Co. who likes using our data platform wants to be able to see the sales figures of her business for the previous three years, so that she can do accurate stock planning for the coming year.

As you can see from the simple sample problem statements above, the idea is that you put yourself in the shoes of your (target) users and ask yourself “so what …!?” What’s the impact that we’re looking to make on a user’s life? Why?
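For teams that keep their problem statements in a shared backlog, the formula above lends itself to a simple template. Here’s a hypothetical sketch in Python – the function name and the stakeholder/need/insight field names are my own assumptions, not part of the original formula – showing how the “Stakeholder NEEDS A WAY TO Need BECAUSE Insight” structure can be filled in consistently:

```python
# Hypothetical sketch: filling in the problem statement template
# "Stakeholder NEEDS A WAY TO Need BECAUSE Insight".
# Field names (stakeholder, need, insight) are illustrative assumptions.

def problem_statement(stakeholder: str, need: str, insight: str) -> str:
    """Render one problem statement from its three parts.

    stakeholder: the person, described using empathetic language
    need: what they need, phrased as a verb
    insight: what you've learned about the stakeholder and their need
    """
    return f"{stakeholder} needs a way to {need} because {insight}."

# Richard's example from above, restated via the template:
statement = problem_statement(
    stakeholder="Richard, who loves to eat biscuits,",
    need="eat biscuits every day without gaining weight",
    insight="he's currently struggling to keep his weight under control",
)
print(statement)
```

The point isn’t the code itself but the discipline it encourages: every statement forces you to name the person, the need as a verb, and the insight behind it, which keeps the focus on the problem rather than a solution.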

Main learning point: Don’t despair if you feel that you haven’t got a sense of customer empathy yet. There are numerous ways to start developing customer empathy, and listening to customers is probably the best place to start!

Related links for further learning:

  1. https://www.ideo.com/post/change-by-design
  2. https://designthinking.ideo.com/
  3. http://www.sciencedirect.com/science/article/pii/S1053811912010646
  4. http://www.insightsquared.com/2015/02/empathy-the-must-have-skill-for-all-customer-service-reps/
  5. https://www.youtube.com/watch?v=cDDWvj_q-o8&feature=youtu.be
  6. https://www.linkedin.com/pulse/20131002191226-10842349-the-secret-to-redesigning-health-care-think-big-and-small?trk=mp-reader-card
  7. https://medium.com/@harrybr/how-to-run-an-empathy-user-journey-mapping-workshop-813f3737067
  8. https://blog.leanstack.com/love-the-problem-not-your-solution-65cfbfb1916b
  9. https://www.interaction-design.org/literature/article/stage-2-in-the-design-thinking-process-define-the-problem-and-interpret-the-results
  10. https://robots.thoughtbot.com/writing-effective-problem-statements
  11. https://www.slideshare.net/felipevlima/empathy-map-and-problem-statement-for-design-thinking-action-lab