Book review: “Designing with Data”

I’d been looking forward to Rochelle King writing a book about using data to inform design (a few years ago I wrote about using data to inform product decisions, a post that followed a great conversation with Rochelle).

Earlier this year, Rochelle published “Designing with Data: Improving the User Experience with A/B Testing”, together with Elizabeth F. Churchill and Caitlin Tan. The main theme of the book is the authors’ belief that data capture, management, and analysis are the best way to bridge design, user experience, and business relevance:

  1. Data aware — In the book, King, Churchill and Tan distinguish between three different ways to think about data: data driven, data informed and data aware (see Fig. 1 below). The third way, being ‘data aware’, is introduced by the authors: “In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions.” If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.
  2. How much data to collect? — The authors make an important distinction between “small sample research” and “large sample research”. Small sample research tends to be good for identifying usability problems, because “you don’t need to quantify exactly how many in the population will share that confusion to know it’s a problem with your design.” It reminded me of Jakob Nielsen’s point about how the best results come from testing with no more than five people. In contrast, collecting data from a large group of participants, i.e. large sample research, can give you more precise quantity and frequency information: how many people feel a certain way, what percentage of users will take a given action, etc. A/B tests are one way of collecting data at scale, with the data being “statistically significant” and not just anecdotal. Statistical significance is the likelihood that the difference in conversion rates between a given variation and the baseline is not due to random chance.
  3. Running A/B tests: online experiments — The book does a great job of explaining what is required to successfully run A/B tests online, providing tips on how to sample users online and which key metrics to measure (see Fig. 2).
  4. Minimum Detectable Effect — There’s an important distinction between statistical significance — which measures whether there’s a difference — and “effect”, which quantifies how big that difference is. The book explains how to determine the “Minimum Detectable Effect” when planning online A/B tests. The Minimum Detectable Effect is the minimum effect we want to observe between our test condition and control condition in order to call the A/B test a success. It can be positive or negative, but you want to see a clear difference in order to be able to call the test a success or a failure (a worked sketch of both statistical significance and the Minimum Detectable Effect follows after this list).
  5. Know what you need to learn — The book covers hypotheses as an important way to figure out what it is that you want to learn through the A/B test, and to identify what success will look like. In addition, you can look at learnings beyond the outcomes of your A/B test (see Fig. 3 below).
  6. Experimentation framework — For me, the most useful section of the book was Chapter 3, in which the authors introduce an experimentation framework that helps you plan your A/B test in a more structured fashion (see Fig. 4 below, and the rough sketch that follows it). They describe three main phases — Definition, Execution and Analysis — which feed into the experimentation framework. The ‘Definition’ phase covers the definition of a goal, the articulation of a problem / opportunity and the drafting of a testable hypothesis. The ‘Execution’ phase is all about designing and building the A/B test — “designing to learn”, in other words. In the final ‘Analysis’ phase you’re getting answers from your experiments. These results can be either “positive” and expected or “negative” and unexpected (see Fig. 5–6 below).
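
To make the statistics behind points 2 and 4 concrete, here is a minimal sketch (my own, not taken from the book) of a two-proportion z-test for statistical significance, and of the sample size needed per variant to reliably detect a chosen Minimum Detectable Effect. The function names and example numbers are hypothetical.

```python
from math import sqrt
from statistics import NormalDist  # Python 3.8+

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value of a two-proportion z-test (baseline A vs. variation B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Users needed per variant to detect an absolute lift of `mde` over `baseline`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar)) +
          z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / mde) ** 2
    return int(round(n))

# A 5% baseline conversion rate, where only lifts of at least 1 percentage point matter:
print(sample_size_per_variant(0.05, 0.01))        # users needed per variant (95% confidence, 80% power)
print(ab_test_p_value(500, 10_000, 560, 10_000))  # ~0.06, i.e. not quite significant at the 5% level
```

The point the sketch illustrates is the same one the book makes: the smaller the effect you want to detect reliably, the more users you need in each variant before you can call a test a success or a failure.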

Main learning point: “Designing with Data” made me realise again how much thinking and designing needs to happen before running a successful online A/B test. “Successful” in this context means achieving clear learning outcomes. The book provides a comprehensive overview of the key considerations to take into account in order to optimise your learning.

Fig. 1 — Three ways to think about data — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 3–9

  • Data driven — With a purely data driven approach, it’s data that determine the fate of a product; based solely on data outcomes businesses can optimise continuously for the biggest impact on their key metric. You can be data driven if you’ve done the work of knowing exactly what your goal is, and you have a very precise and unambiguous question that you want to understand.
  • Data informed — With a data informed approach, you weigh up data alongside a variety of other variables such as strategic considerations, user experience, intuition, resources, regulation and competition. So adopting a data-informed perspective means that you may not be as targeted and directed in what you’re trying to understand. Instead, what you’re trying to do is inform the way you think about the problem and the problem space.
  • Data aware — In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions. If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.

Fig. 2 — Generating a representative sample — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 45–53

  • Cohorts and segments — A cohort is a group of users who have a shared experience. Alternatively, you can also segment your user base into different groups based on more stable characteristics such as demographic factors (e.g. gender, age, country of residence), or you may want to group them by their behaviour (e.g. new user, power user).
  • New users versus existing users — Data can help you learn more about both your existing and prospective future users, and determining whether you want to sample from new or existing users is an important consideration in A/B testing. Existing users are people who have prior experience with your product or service. Because of this, they come into the experience with a preconceived notion of how your product or service works. Thus, it’s important to be careful about whether your test is with new or existing users, as these learned habits and behaviours about how your product used to work can introduce bias into your A/B test (a small, hypothetical sketch of splitting users into cohorts follows below).
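
As a rough illustration of the cohort idea (my own sketch, not from the book; the column names and the 30-day cut-off for “new” users are assumptions), you might split a user table into new and existing cohorts before sampling for a test:

```python
import pandas as pd

# Hypothetical user table; in practice this would come from your analytics store.
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "signup_date": pd.to_datetime(
        ["2017-01-05", "2017-06-20", "2017-07-01", "2016-11-12", "2017-07-10", "2017-02-02"]),
    "country": ["GB", "US", "GB", "DE", "US", "GB"],
})

# Assumed definition: anyone who signed up in the last 30 days counts as a "new" user.
cutoff = pd.Timestamp("2017-07-15") - pd.Timedelta(days=30)
users["cohort"] = (users["signup_date"] >= cutoff).map({True: "new", False: "existing"})

# Sample only new users for the A/B test, to avoid bias from learned habits.
test_group = users[users["cohort"] == "new"].sample(frac=0.5, random_state=42)
print(test_group)
```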

Fig. 3 — Know what you want to learn — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, p. 67

  • If you fail, what did you learn that you will apply to future designs?
  • If you succeed, what did you learn that you will apply to future designs?
  • How much work are you willing to put into your testing in order to get this learning?

Fig. 4 — Experimentation framework — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 83–85

  1. Goal — First you define the goal that you want to achieve; usually this is something that is directly tied to the success of your business. Note that you might also articulate this goal as an ideal user experience that you want to provide. This is often because you believe that delivering that ideal experience will ultimately lead to business success.
  2. Problem/opportunity area — You’ll then identify an area of focus for achieving that goal, either by addressing a problem that you want to solve for your users or by finding an opportunity area to offer your users something that didn’t exist before or is a new way of satisfying their needs.
  3. Hypothesis — After that, you’ll create a hypothesis statement which is a structured way of describing the belief about your users and product that you want to test. You may pursue one hypothesis or many concurrently.
  4. Test — Next, you’ll create your test by designing the actual experience that represents your idea. You’ll run your test by launching the experience to a subset of your users.
  5. Results — Finally, you’ll end by getting the reaction to your test from your users and doing analysis on the results that you get. You’ll take these results and make decisions about what to do next.
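
Below is a hedged sketch of how this Definition → Execution → Analysis flow could be captured as a structured experiment plan; the class and field names are my own, not the book’s.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentPlan:
    goal: str                      # Definition: business or user-experience goal
    problem_or_opportunity: str    # Definition: area of focus for achieving that goal
    hypothesis: str                # Definition: testable belief about users and product
    test_design: str               # Execution: the experience shown to the test group
    audience: str                  # Execution: who is sampled and how much traffic they see
    success_metric: str            # Analysis: what will be measured
    minimum_detectable_effect: float = 0.01     # Analysis: smallest lift worth acting on
    results: dict = field(default_factory=dict) # Analysis: filled in once the test has run

# A purely hypothetical example plan:
plan = ExperimentPlan(
    goal="Increase weekly active users",
    problem_or_opportunity="New users struggle to discover relevant content",
    hypothesis="Surfacing personalised recommendations on the home screen will raise day-7 retention",
    test_design="Variant B shows a personalised recommendations rail above the fold",
    audience="New users only, 10% of sign-ups",
    success_metric="day-7 retention",
)
```

Writing the plan down in a structured way like this forces the ‘designing to learn’ questions — goal, hypothesis, audience and success metric — to be answered before a single variant is built.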

Fig. 5 — Expected (“positive”) results — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 227–228

  • How large of an effect will your changes have on users? Will this new experience require any new training or support? Will the new experience slow down the workflow for anyone who has become accustomed to your current experience?
  • How much work will it take to maintain?
  • Did you take any “shortcuts” in the process of running the test that you need to go back and address before you roll it out to a larger audience (e.g. edge cases or fine-tuning details)?
  • Are you planning on doing additional testing and if so, what is the time frame you’ve established for that? If you have other large changes that are planned for the future, then you may not want to roll your first positive test out to users right away.

Fig. 6 — Unexpected and undesirable (“negative”) results — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 228–231

  • Are they using the feature the way you think they do?
  • Do they care about different things than you think they do?
  • Are you focusing on something that only appeals to a small segment of the base but not the majority?

Related links for further learning:

  1. https://www.ted.com/watch/ted-institute/ted-bcg/rochelle-king-the-complex-relationship-between-data-and-design-in-ux
  2. http://andrewchen.co/know-the-difference-between-data-informed-and-versus-data-driven/
  3. https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
  4. https://vwo.com/ab-split-test-significance-calculator/
  5. https://www.kissmetrics.com/growth-tools/ab-significance-test/
  6. https://select-statistics.co.uk/blog/importance-effect-sample-size/
  7. https://www.optimizely.com/optimization-glossary/statistical-significance/
  8. https://medium.com/airbnb-engineering/experiment-reporting-framework-4e3fcd29e6c0
  9. https://medium.com/@Pinterest_Engineering/building-pinterests-a-b-testing-platform-ab4934ace9f4
  10. https://medium.com/airbnb-engineering/https-medium-com-jonathan-parks-scaling-erf-23fd17c91166

 


 

 

 

 

Book review: “Customers Included”

In the book “Customers Included”, Mark Hurst and Phil Terry make a great case for listening to the customer. Hurst and Terry look at why customers get overlooked by companies and explain how to best engage with them:

  1. Why do customers get overlooked? – “The problem with customers is that they don’t always know what’s best for them” is a quote from Netflix CEO Reed Hastings referred to in the book. Similarly, Harvard Business School professor Clayton Christensen warns that paying too much attention to today’s customers could lead a company to avoid the necessary step of disrupting itself to prepare for tomorrow’s market. These are common reasons why customers don’t always get involved or listened to when it comes to creating or improving products.
  2. Listening and disrupting can go hand in hand – Hurst and Terry argue that listening to customers isn’t as black and white as the likes of Hastings and Christensen portray it to be. There’s room for nuance, accounting for different types of customers and different ways of listening to them. They make the point that “being disruptive requires knowing how to listen, in the right ways, to the right customers.” I totally agree that even in disruptive environments, it’s still essential to include the customer. The point being that innovation should be focused on creating benefits for the customer, measuring innovation by its impact on the customer.
  3. Difference between what people think and what they actually do – There’s typically a big difference between what people think (or say they think) and what they actually do. In my experience, this phenomenon raises its head particularly in focus groups, where people get together to give their feedback on a product. Hurst and Terry make the point that the very structure of a focus group fails to approximate real-world usage of a product, simply because having a number of people talking about a product doesn’t equal actual usage.
  4. The power of direct observations – The risk with research methods like focus groups is that customers give hypothetical answers, speculating about how they might behave, or how they could feel. I don’t find this feedback particularly helpful as it doesn’t give me a reliable indication of how people actually behave or how they really feel. This is the key reason why Hurst and Terry advocate the use of direct observations: observing people in the appropriate environment, watching what they (don’t) do. For example, if you’re looking to learn more about people’s grocery shopping behaviours, you’re most likely to learn the most from observing people whilst they’re shopping at the supermarket.
  5. Doubts about personas – Hurst and Terry argue that “personas prioritise the hypothetical over the actual, and fiction over fact”. A user persona is a fictitious person with a fictitious profile. These aren’t real life people and I agree that if you do work with personas, you should always validate your made up user traits with real people. If you don’t do this validation, there’s a big risk of making product decisions solely based on hypothetical data.
  6. Limitations of task-based usability testing – Similar to the aforementioned point about personas, Hurst and Terry explain the limitations of task-based usability testing (see Fig. 1 below). The overarching problem with only doing usability testing is that you might miss out on larger, more strategic insights. At its core, usability testing is tactical: it helps you learn how people use your product and identify any points of friction.
  7. Discovering unmet needs – “Unmet needs” are the antidote to the concept of “customers not knowing what they want” or “build it and they (customers) will come.” By just focusing on set usability tasks, Hurst and Terry argue, you’re unlikely to develop more strategic insights into your customers and their needs. To solve this, Hurst and Terry suggest direct observations and so-called “listening labs” as a better way of uncovering unmet needs.

Main learning point: “Customers Included” offers some good primers to use when convincing others of the importance of engaging with customers. More than that, the book also provides a ‘nuanced’ overview of the different user research methods to use, explaining pros and cons of each method.

Fig. 1 – Drawbacks of task-based usability testing – Taken from: Mark Hurst and Phil Terry, Customers Included, pp. 70-71

  1. The user tasks are all determined by the researchers beforehand
  2. The insights gained from the usability test are limited by those tasks
  3. The focus of task-based usability testing is on tactical design elements

App review: Receipt Bank

It isn’t often that one of the apps I use on a regular basis attracts a large round of funding, but it happened recently with Receipt Bank, a London-based startup which “makes your bookkeeping faster, easier and more efficient.” Last month, Receipt Bank received a Series B investment worth $50 million from New York-based Insight Venture Partners.

Receipt Bank, which started in 2010, targets accountants, bookkeepers and small businesses. It offers them an online platform through which users can submit their invoices, receipts, and bills by taking a picture and uploading it through Receipt Bank’s mobile app (see Fig. 1), desktop app (see Fig. 2), or an email submission. Receipt Bank’s system then automatically extracts relevant data, sorts and categorises it. Apart from viewing your processed expenses online, Receipt Bank also publishes everything to the user’s accounting software of choice, FreshBooks or Xero for example.

Fig. 1 – Screenshot of Receipt Bank iOS app

 

 

Fig. 2 – The entry in Receipt Bank for one of my receipts

Given that I’ve been using Receipt Bank for a while now, instead of just reviewing existing functionality I’ve also had a think about how I’d use a $50m war chest to further build out the Receipt Bank product:

  1. Faster! Faster! Faster! – When I started using Receipt Bank last year, I emailed the customer support team enquiring about the wait between submitting a picture of a receipt and it being “ready for export”. I got a friendly reply explaining that “we ask for a maximum of 24 hours to process items, but we are usually much faster than that.” The customer support adviser also explained that “the turnaround time also depends on the number of items waiting to be processed by the software and also their quality.” I’m sure Receipt Bank uses some form of machine learning algorithms to automatically interpret and categorise the key data fields from the picture of a receipt. As the field of Artificial Intelligence continues to evolve, I expect Receipt Bank to be able to – eventually – process receipts and invoices within seconds, with no need for the user to add or edit any of the processed info. Because I envisage machine learning to be the core driver of Receipt Bank’s proposition, I suggest spending at least half of its latest investment on AI technology and engineers specialised in machine learning.
  2. Not just tracking my bills and invoices – Yes, everybody is jumping on the chatbot wagon (and some of the results are frankly laughable). However, I do believe that if Receipt Bank can learn a sufficient amount about its customers and their spending and accounting behaviours, it will be able to provide them with tailored advice and predictions. For example, if I pay my supplier in China a fixed amount per month to keep my stock up, I’d like to ask Receipt Bank’s future “Expense Assistant” how my supplier payments will be affected if there’s massive volatility in the exchange rate between the British Pound and the Chinese Yuan. Similarly, when I look at most of today’s finance departments, the people in these teams seem to spend a lot of their time matching the right payments received to the relevant invoice(s) sent out. I realise that the machine learning around multiple invoices wrapped into a single payment is easier said than done, but I don’t think it will be impossible, and the $25m investment into AI (see point 1. above) should help massively.
  3. What if the days of paper bills are numbered!? – Now that I’ve effectively spent $25m on AI technology, I’ve got $25m left. The first thing I’d do with this remaining money is to prepare for scenarios where invoices or receipts are no longer issued on paper but provided orally. At the moment, capabilities like the Alexa Expense Tracker are mostly used for personal expenses, but I do envisage a future where people use Alexa or Siri to add and track their expenses. Given that voice technology is still very much in its infancy, I suggest restricting Receipt Bank’s investment into this area to no more than $1m.
  4. Integrate more (and please don’t forget about Asia) – If I were Receipt Bank I’d probably use about $10m of the remaining fund to enter new geographies and integrate with additional systems. For example, I like how Sage’s Pegg hooks into any expenses you record on your mobile, whether it’s via Slack, Facebook, Skype, WhatsApp, etc. I don’t know whether Receipt Bank is looking to enter the Asian market, but I feel there’s great opportunity to integrate with messenger apps like WeChat and Hike, without spending more than $2m on development and marketing. Also, integrating with payment processors, like Finsync did recently with Worldpay, is an integration avenue worth considering! 
  5. But don’t forget about the current product! – I feel Receipt Bank would be remiss if it were to forget about improving its current platform, both in terms of functionality and user experience. For example, I can’t judge how well Receipt Bank does in retaining its customers, but I feel there are a number of ways in which it can make the existing product ‘work harder’ (see Fig. 3 below). In my experience, some of my proposed improvements and features shouldn’t break the bank. By spending about $1m on continuous improvements over a number of years, Receipt Bank should have at least $20m left in the bank, as a buffer for difficult times and any new opportunities that might arise during the product lifecycle.

Fig. 3 – Suggestions to make Receipt Bank’s existing product work harder:

  1. Some touches of gamification – I’d argue that the longevity of the relationship between Receipt Bank and an individual user is determined by how often the user uploads bills onto the platform. Since most users will most probably not view managing their expenses as fun, I think it would be good to look at ways to make the experience more enjoyable. For example, I could get a gold star from my accountant once I’ve successfully synced my month’s expenses into my accounting system. I feel that there’s plenty of room to reinforce the current gamification elements that Receipt Bank uses. For example, the message that Receipt Bank managed to save 27 minutes of my time doesn’t really do it for me (see Fig. 4 below). Instead, the focus could be on the productivity gain that I’ve made for billable work (if I’m a freelancer, for example).
  2. Better progress and status updates – Even if it does continue to take up to 24 hours to categorise and process my expenses, it would be great if Receipt Bank could make its “in progress” status more intuitive and informative.
  3. Clearer and stronger calls to action – For example, I can see that I’m not making the best use of my Receipt Bank subscription (see Fig. 5 below). However, there are no suggestions on specific actions I can take to get more value from my Receipt Bank plan.

Fig. 4 – Screenshot my Receipt Bank usage

Fig. 5 – Screenshot of my Receipt Bank “Usage summary”

Main learning point: Having thought about Receipt Bank’s current product offering and my understanding of their target market, I suggest investing a good chunk of the recent investment into optimising the machine learning algorithms in such a way that both processing speed and accuracy are significantly increased. By doing this, the customer profile and behavioural data generated will create additional opportunities to further retain customers and offer adjacent products and services.

Related links for further learning:

  1. http://uk.businessinsider.com/receipt-bank-raises-50-million-from-insight-venture-partners-2017-7
  2. https://venturebeat.com/2017/07/20/receipt-bank-raises-50-million-insight-venture-partners/
  3. https://itunes.apple.com/gb/app/receipt-bank-business-expense-scanner-tracker/id418327708?mt=8
  4. https://play.google.com/store/apps/details?id=com.receiptbank.android&hl=en_GB 
  5. https://www.forbes.com/sites/bernardmarr/2017/07/07/machine-learning-artificial-intelligence-and-the-future-of-accounting/#49bb42ac2dd1
  6. https://hellopegg.io/
  7. http://uk.pcmag.com/cloud-services/87846/feature/23-must-have-alexa-skills-for-your-small-business
  8. https://www.accountingweb.co.uk/tech/accounting-software/case-study-receipt-banks-rapid-growth
  9. https://www.finextra.com/pressarticle/70263/finsync-connects-with-worldpay-us
  10. http://www.bankingtech.com/520502/symitars-episys-core-system-integrated-with-amazon-echo-baxter-cu-an-early-taker/

Book review: “Just Enough Research”

Back in 2013, Erika Hall, co-founder of Mule Design, wrote “Just Enough Research”. In this book, Hall explains why good customer research is so important. She outlines what makes research effective and provides practical tips on how to best conduct research. Reading “Just Enough Research” reminded me of reading “Rocket Surgery Made Easy” by Steve Krug and “Undercover UX” by Cennydd Bowles, since all three books do a good job of both explaining and demystifying what it takes to do customer research.

These are the main things that I learned from reading “Just Enough Research”:

  1. What is research? – Right off the bat, Hall makes the point that in order to innovate, it’s important for you to know about the current state of things and why they’re like that. Research is systematic inquiry; you want to know more about a particular topic, so you go through a process to increase your knowledge. The specific type of process depends on who you are and what you need to know. This is illustrated through a nice definition of design research by Jane Fulton Suri, partner at design consultancy IDEO (see Fig. 1).
  2. Research is not asking people what they like! – I’m fully aware of how obvious this statement probably sounds. However, customer research is NOT about asking people what they do or don’t like. You might sometimes hear people ask users whether they like a particular product or feature; that isn’t what customer research is about. Instead, the focus is on exploring problem areas or new ideas, or simply testing how usable your product is.
  3. Generative or exploratory research – This is the research you do to identify the problem to solve and explore ideas. As Hall explains “this is the research you do before you even know what you’re doing.” Once you’ve gathered information, you then analyse your learnings and identify the most commonly voiced (or observed) unmet customer needs. This will in turn result in a problem statement or hypothesis to concentrate on.
  4. Descriptive and explanatory research – Descriptive research is about understanding the context of the problem that you’re looking to solve and how to best solve it. By this stage, you’ll have moved from “What’s a good problem to solve” to “What’s the best way to solve the problem I’ve identified?”
  5. Evaluative research – Usability testing is the most common form of evaluative research. With this research you test that your solution is working as expected and is solving the problem you’ve identified.
  6. Causal research – This type of research is about establishing a cause-and-effect relationship, understanding the ‘why’ behind an observation or pattern. Causal research often involves looking at analytics and carrying out A/B tests.
  7. Heuristic analysis – In the early stages of product design and development, evaluative research can be done in the form of usability testing (see point 5. above) or heuristic analysis. You can test an existing site or application before redesigning. “Heuristic” means “based on experience”. A heuristic is not a hard measure; it’s more of a qualitative guideline of best usability practice. Jakob Nielsen, arguably the founding father of usability, came up with the idea of heuristic analysis in 1990 and introduced ten heuristic principles (see Fig. 2).
  8. Usability testing – Testing the usability of a product with people is the second form of evaluative testing. Nielsen, the aforementioned usability guru, outlined five components that define usability (see Fig. 3). Hall stresses the importance of “cheap tests first, expensive tests later”; start simple – paper prototypes or sketches – and gradually up the ante.

Main learning point: “Just Enough Research” is a great, easy-to-read book which underlines the importance of customer research. The book does a great job of demonstrating that research doesn’t have to be very expensive or onerous; it provides plenty of simple and practical tips on how to conduct ‘just enough research’.

 

Fig. 1 – Definition of “design research” by Jane Fulton Suri – Taken from: https://www.ideo.com/news/informing-our-intuition-design-research-for-radical-innovation

“Design research both inspires imagination and informs intuition through a variety of methods with related intents: to expose patterns underlying the rich reality of people’s behaviours and experiences, to explore reactions to probes and prototypes, and to shed light on the unknown through iterative hypothesis and experiment.”

Fig. 2 – Jakob Nielsen’s 10 Heuristics for User Interface Design (taken from: http://www.nngroup.com/articles/ten-usability-heuristics/)

  1. Visibility of system status – The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  2. Match between system and the real world – The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
  3. User control and freedom – Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  4. Consistency and standards – Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
  5. Error prevention – Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
  6. Recognition rather than recall – Minimise the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
  7. Flexibility and efficiency of use – Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  8. Aesthetic and minimalist design – Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  9. Help users recognise, diagnose, and recover from errors – Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  10. Help and documentation – Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.

Fig. 3 – Jakob Nielsen’s 5 components of usability – Taken from: Erika Hall. Just Enough Research, pp. 105-106

  • Learnability – How easy is it for users to accomplish basic tasks the first time they come across the design?
  • Efficiency – Once users have learned the design, how quickly can they perform tasks?
  • Memorability – When users return to the design after a period of not using it, how easily can they reestablish proficiency?
  • Errors – How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
  • Satisfaction – How pleasant is it to use the design?

 

App review: Curve

I recently heard Shachar Bialick – Founder and CEO of Curve – talk about how the new Curve app will make it easier for small businesses to manage their financial lives. It prompted me to have a first play with the Curve app in iOS, which is currently available as a Beta release. This is what I learned:

  1. My quick summary of Curve (before using it) – I expect Curve to be able to aggregate all my (business) credit and debit cards – and related account / transaction data – into a single place.
  2. How does Curve explain itself in the first minute? – When I open the Curve app, I’m presented with two key messages: “Welcome to Connected Money” followed by “Curve combines all your cards into one smart card and smart app”. When reading these messages, “data” is the first thing that comes to mind. How will Curve combine and display all my bank data in a single place (and in a way that lets me understand at a glance what’s going on)?
  3. Getting started, what’s the process like (1)? – When I tap the “Get Started” button on the app’s landing screen (see Fig. 1), I then need to enter my email address. By continuing through the rest of Curve’s onboarding journey I automatically agree to its terms and conditions as well as its privacy policy (see Fig. 2). I like the sound of the “magic link” – Curve sending me an email which lets me sign in with one click – over having to add yet another password (see Fig. 3).
  4. Getting started, what’s the process like (2)? – The screen which shows me the different Curve packages to choose from is great. It’s a clear overview, no frills (see Fig. 4). However, I’m unsure whether I’ve got sufficient data or information to decide which package is most appropriate for me. Also, can I switch from one package to another? If so, how easy is that?
  5. Getting started, what’s the process like (3)? – From providing more detail about my business to entering my card, the user experience feels very seamless and intuitive. I did, however, wonder why I did not need to enter the name of my business after stating that I’m a business owner. I expected some link to the Companies House details of my business, similar to Tide Bank’s onboarding process. Overall, the Curve app does a good job of keeping the onboarding process as simple as possible, and the process of adding my first card feels straightforward too (see Fig. 5-6).
  6. Getting started, what’s the process like (4)? – The messaging around the card verification process is OK, but I’m nevertheless not entirely clear as to why my card issuer needs to provide security, and I’m unsure as to how long this will take (see Fig. 7). I also wonder what I can (and cannot) do whilst I’m waiting for my card to be verified. Will I need to go through a similar process when I enter an additional card that has been issued by the same provider?
  7. Getting started, what’s the process like (5)? – I’m massively intrigued by Curve’s new “Go Back in Time” feature (see Fig. 8). Curve lets its users swap purchases paid on one card to another. It lets users select the purchase(s) that they want to change the payment method for. By tapping “Go Back in Time” under “Transaction Features” to bring up the menu, users can choose their preferred card for that purchase. This feature is available for any purchase under £1,000 made with the Curve Mastercard within 14 days of purchase. I’m not 100% sure how long Curve will be able to hold on to this feature, as I can imagine the different credit card schemes getting up in arms about e.g. delayed payments or not being able to recoup initial payment transaction costs.
  8. Did Curve deliver on my expectations? – Yes. Although I haven’t yet been able to add multiple cards and see a combined view of transaction data for those different cards, Curve does a great job at explaining the onboarding process at every step of the way and uses some simple, but nice UX practices along the journey.

Fig. 1 – Screenshot of Curve’s iOS opening screen

 

Fig. 2 – Entering my email into Curve and agreeing to Curve’s T&Cs and privacy policy

 

Fig. 3 – Screenshots of Curve’s sign in process

 

Fig. 4 – Screenshot of ‘Which Curve are you?’ screen on Curve iOS

 

Fig. 5 – Information to enter during the onboarding process on Curve’s iOS app

 

Fig. 6 – Adding my first card in the Curve iOS app

 

Fig. 7 – Card verification steps on Curve’s iOS app

 

Fig. 8 – Welcome screen and Curve’s ‘Go Back in Time’ feature

 

Related links for further learning:

  1. https://breakingbanks.com/episode/removing-mystery-money-movement/
  2. https://www.imaginecurve.com/
  3. https://techcrunch.com/2017/07/03/back-to-the-future/
  4. http://blog.imaginecurve.com/go-back-in-time-with-curve/
  5. http://www.wired.co.uk/article/curve-time-travel

 

My product management toolkit (23): customer empathy

A few weeks ago I attended the annual Mind the Product conference in San Francisco, where David Wascha delivered a great talk about some of the key lessons learned in his 20 years of product management experience. He impressed on the audience that as product managers we should “protect our customer”; we need to shield our teams, but ultimately we need to protect our customers and their needs.

Dave’s point really resonated with me and prompted me to think more about how product managers can best protect customers and their needs. I believe this begins with the need to fully understand your customers; “customer empathy” is something that comes to mind here:

  1. What’s customer empathy (1)? – In the dictionary, empathy is typically defined as “the ability to understand and share the feelings of another.” In contrast, sympathy is about feeling bad for someone else because of something that has happened to him or her. When I think about empathising with customers, I think about truly understanding their needs or problems. To me, the ultimate example of customer empathy can be found in Change By Design, a great book by IDEO‘s Tim Brown. In this book, Brown describes an IDEO employee who wanted to improve the experience of ER patients. The employee subsequently became an emergency room patient himself in order to experience first hand what it was like to be in an ER.
  2. What’s customer empathy (2)? – I love how UX designer Irene Au describes design as “empathy made tangible”. Irene distinguishes between analytical thinking and empathic thinking, and refers to a piece by Anthony Jack of Case Western Reserve University in this regard. Anthony found that when people think analytically, they tend not to use those areas of the brain that allow us to understand other people’s experience. It’s great to use data to inform the design and build of your product, and any decisions you make in the process. The risk with both quantitative data (e.g. analytics and surveys) and qualitative data (e.g. user interviews and observations) is that you still end up being quite removed from what the customer actually feels or thinks. We want to make sure that we really understand customer pain points and the impact of these pain points on the customers’ day-to-day lives.
  3. What’s customer empathy (3)? – I recently came across a video by the Cleveland Clinic – a non-profit academic medical centre that specialises in clinical and hospital care – which embodies customer empathy in a very inspiring and effective way (see Fig. 1 below). The underlying premise of the video is all about looking through another person’s eyes, truly trying to understand what someone else is thinking or feeling.

Fig. 1 – Cleveland Clinic Empathy: The Human Connection to Patient Care – https://www.youtube.com/watch?v=cDDWvj_q-o8&feature=youtu.be

I see customer empathy as a skill that can be learned. In previous pieces, I’ve looked at some of the tools and techniques you can use to develop customer empathy. This is a quick recap of three simple ways to get started:

Listen. Listen. Listen – I often find myself dying to say something, getting my two cents in. I’ve learned that this desire is the first thing that needs to go if you want to develop customer empathy. Earlier this year, I learned about the four components of active listening from reading “The Art of Active Listening”. Empathy is one of the four components of active listening:

Empathy is about your ability to understand the speaker’s situation on an emotional level, based on your own view. Basing your understanding on your own view instead of on a sense of what should be felt, creates empathy instead of sympathy. Empathy can also be defined as your desire to feel the speaker’s emotions, regardless of your own experience.

Empathy Map – I’ve found empathy mapping to be a great way of capturing your insights into another person’s thoughts, feelings, perceptions, pain, gains and behaviours (see Fig. 2 below). In my experience, empathy maps tend to be most effective when they’ve been created collectively and validated with actual customers.

Fig. 2 – Example empathy map, by Harry Brignull – Taken from: “How To Run an Empathy Mapping & User Journey Workshop” https://medium.com/@harrybr/how-to-run-an-empathy-user-journey-mapping-workshop-813f3737067

Problem Statements – To me, product management is all about – to quote Ash Maurya – “falling in love with the problem, not your solution.” Problem statements are an easy but very effective way to both capture and communicate your understanding of customer problems to solve. Here’s a quick snippet from an earlier ‘toolkit post’, dedicated to writing effective problem statements:

Standard formula:

Stakeholder (describe person using empathetic language) NEEDS A WAY TO Need (needs are verbs) BECAUSE Insight (describe what you’ve learned about the stakeholder and his need)

Some simple examples:

Richard, who loves to eat biscuits, wants to find a way to eat at least 5 biscuits a day without gaining weight, as he’s currently struggling to keep his weight under control.

Sandra from The Frying Pan Co., who likes using our data platform, wants to be able to see the sales figures of her business for the previous three years, so that she can do accurate stock planning for the coming year.

As you can see from the simple sample problem statements above, the idea is that you put yourself in the shoes of your (target) users and ask yourself “so what …!?” What’s the impact that we’re looking to make on a user’s life? Why?

Main learning point: Don’t despair if you feel that you haven’t got a sense of customer empathy yet. There are numerous ways to start developing customer empathy, and listening to customers is probably the best place to start!

Related links for further learning:

  1. https://www.ideo.com/post/change-by-design
  2. https://designthinking.ideo.com/
  3. http://www.sciencedirect.com/science/article/pii/S1053811912010646
  4. http://www.insightsquared.com/2015/02/empathy-the-must-have-skill-for-all-customer-service-reps/
  5. https://www.youtube.com/watch?v=cDDWvj_q-o8&feature=youtu.be
  6. https://www.linkedin.com/pulse/20131002191226-10842349-the-secret-to-redesigning-health-care-think-big-and-small?trk=mp-reader-card
  7. https://medium.com/@harrybr/how-to-run-an-empathy-user-journey-mapping-workshop-813f3737067
  8. https://blog.leanstack.com/love-the-problem-not-your-solution-65cfbfb1916b
  9. https://www.interaction-design.org/literature/article/stage-2-in-the-design-thinking-process-define-the-problem-and-interpret-the-results
  10. https://robots.thoughtbot.com/writing-effective-problem-statements
  11. https://www.slideshare.net/felipevlima/empathy-map-and-problem-statement-for-design-thinking-action-lab

 

App review: Toutiao

Fig. 1 – Screenshot of www.toutiao.com/ homepage 

When I first heard about Toutiao I thought it might be just another news app, this one coming from China. I quickly learned, however, that Toutiao is much more than just a news app; at the time of writing, Toutiao has more than 700 million users in total, with more than 78 million users reading over 1.3 billion articles on a daily basis.

Toutiao, known officially as Jinri Toutiao, which means “Today’s Headlines”, owes a large part of its rapid rise to its ability to provide its users with a highly personalised news feed. Toutiao is a mobile platform that uses machine learning algorithms to recommend content to its users, based on the content they have previously accessed and their interaction with that content (see Fig. 2).

Fig. 2 – Screenshot of Toutiao iOS app

I identified a number of elements that contribute to Toutiao’s success:

  1. AI and machine learning – Personalisation is Toutiao’s flagship value proposition to its users, and the company has its own dedicated AI Lab to constantly further the development of the AI technology that underpins its platform. Toutiao’s algorithms learn from the types of content its users interact with and the way(s) in which they interact with this content (a simplified, generic sketch of this kind of interaction-based scoring follows after this list). Given that Toutiao users spend on average 76 minutes per day in the app, there’s a wealth of data for Toutiao’s algorithms to learn from and to base personalisations on.
  2. Variety of content types to choose from – Toutiao enables its users to upload short videos, and Toutiao’s algorithms will recommend selected videos to appropriate users (see Fig. 3). Last year, videos on Toutiao were played 1.5 billion times per day, making Toutiao China’s largest short video platform. Users can also upload pictures; similar to Instagram or Facebook, users can share their pictures, with other users being able to like or comment on this content (see Fig. 4).
  3. Third party integrations – Toutiao has got strategic partnerships in place with the likes of WeChat, a highly popular messaging app (see Fig. 5), and jd.com, a local online marketplace. It’s easy to see how Toutiao is following an approach whereby they’re inserting their news feed into a user’s broader ecosystem.
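
As a deliberately simplified, generic illustration of interaction-based scoring (my own sketch; it says nothing about how Toutiao’s actual system works), heavier interactions such as shares could be weighted more strongly than views when building a per-topic interest profile and ranking candidate articles:

```python
from collections import Counter

# Hypothetical interaction weights: heavier actions signal stronger interest.
WEIGHTS = {"view": 1.0, "like": 3.0, "comment": 4.0, "share": 5.0}

def topic_profile(interactions):
    """Build a per-topic interest score from a user's interaction history."""
    profile = Counter()
    for topic, action in interactions:
        profile[topic] += WEIGHTS.get(action, 0.0)
    return profile

def rank_candidates(profile, candidates):
    """Rank candidate articles by how well their topic matches the user's profile."""
    return sorted(candidates, key=lambda article: profile.get(article["topic"], 0.0), reverse=True)

history = [("tech", "view"), ("tech", "share"), ("sport", "view"), ("tech", "like")]
candidates = [
    {"id": 1, "topic": "sport"},
    {"id": 2, "topic": "tech"},
    {"id": 3, "topic": "finance"},
]
print(rank_candidates(topic_profile(history), candidates))  # the tech article is ranked first
```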

Main learning point: I was amazed by the scale at which Toutiao operates and the level at which its users interact with the app. We often talk about the likes of Netflix and Spotify when it comes to personalised recommendations, but with the amount of data that Toutiao gathers, I can see how they can create a highly tailored content experience for their users.

Fig. 3 – Screenshot of video section on Toutiao iOS app 

Fig. 4 – Screenshot of user generated content feed on Toutiao iOS app


Fig. 5 – Screenshot of Toutiao and WeChat integration on Toutiao iOS app

Related links for further learning:

  1. https://www.toutiao.com/
  2. https://www.crunchbase.com/organization/toutiao#/entity
  3. http://technode.com/2017/06/05/podcast-analyse-asia-187-toutiao-with-matthew-brennan/
  4. https://www.technologyreview.com/s/603351/the-insanely-popular-chinese-news-app-that-youve-never-heard-of/
  5. https://www.forbes.com/sites/ywang/2017/05/26/jinri-toutiao-how-chinas-11-billion-news-aggregator-is-no-fake/#24d401d64d8a
  6. https://en.wikipedia.org/wiki/Toutiao
  7. http://lab.toutiao.com/
  8. https://www.liftigniter.com/toutiao-making-headlines-machine-learning/
  9. https://techcrunch.com/2017/02/01/chinese-news-reading-app-toutiao-acquires-flipagram/
  10. https://lotusruan.wordpress.com/2016/03/20/cant-beat-giant-companies-on-wechatweibo-try-toutiao/
  11. https://www.chinainternetwatch.com/tag/toutiao/