My product management toolkit (36): Google’s HEART framework

Measure. Measure. Measure. Tracking the impact of a product is crucial if you wish to learn about your product and your customers. I’ve written before about the importance of spending time on defining the right metrics to measure, avoiding the risk of succumbing to data overload. That’s all well and good, but what do you do when the key things to measure aren’t so tangible!? For example, how do you measure customer feelings or opinions (a lot of which you’ll learn about during qualitative research)?

A few years ago, Kerry Rodden – whilst at Google – introduced the HEART framework which aims to solve the problem of measuring less tangible aspects of the products and experiences we create (see Fig. 1 below). The HEART framework consists of two parts:

  • The part that measures the quality of the user experience (the HEART framework)
  • The part that measures the goals of a project or product (the Goals-Signals-Metrics process)

 

Fig. 1 – The HEART framework combined with the Goals-Signals-Metrics process – Taken from: https://medium.com/@dhruvghulati/google-s-heart-framework-a-critical-evaluation-a6694421dae

 

Both parts are very helpful tools to have in one’s product management toolkit as they’ll help you to measure product performance through the lens of the person using your product:

HEART framework

  • Happiness – Measure of user attitudes, often collected via surveys or interviews. For example: satisfaction, perceived ease of use, and net promoter score.
  • Engagement – Measures the level of user involvement, typically via behavioural proxies such as frequency, intensity, or depth of interaction over some time period. Examples include the number of visits per user per week or the number of photos uploaded per user per day.
  • Adoption – New users of a product, feature or a service. For example: the number of accounts created in the last seven days, the number of people dropping off during the onboarding experience or the percentage of Gmail users who use labels.
  • Retention – The rate at which existing users are returning. For example: how many active users from a given time period are still present in some later time period? You may be more interested in the failure to retain, commonly known as “churn” (see the short sketch after this list).
  • Task success – This includes traditional behavioural metrics with respect to user experience, such as efficiency (e.g. time to complete a task), effectiveness (e.g. percent of tasks completed), and error rate. This category is most applicable to areas of your product that are very task-focused, such as search or an upload flow.
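
To make the retention and churn idea above a bit more concrete, here is a minimal Python sketch. The user IDs and time periods are made up, so treat it purely as an illustration of the calculation:

```python
# Hypothetical example: users active in two consecutive months.
active_january = {"u1", "u2", "u3", "u4", "u5"}
active_february = {"u2", "u3", "u5", "u9"}

# Retention: share of January's active users who are still active in February.
retained = active_january & active_february
retention_rate = len(retained) / len(active_january)

# Churn: share of January's active users who did not return in February.
churn_rate = 1 - retention_rate

print(f"Retention: {retention_rate:.0%}, churn: {churn_rate:.0%}")
# Retention: 60%, churn: 40%
```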

Certainly, the HEART framework isn’t bulletproof (nor does it have to be, in my humble opinion). For example, Dhruv Ghulati has written up some valid concerns about how the HEART metrics could easily contradict each other or shouldn’t be taken at face value. I do, however, believe that the HEART framework is a valuable tool for the following reasons and use cases:

  • Learning how customers feel about your product.
  • Correlating these learnings with actual customer behaviours.
  • Understanding whether the product helps achieve key customer tasks or outcomes, and why (not).
  • Deciding what to focus on, why, and how best to measure it.

The HEART framework thus works well in measuring the quality of the user experience, making intangible things such as “happiness” and “engagement” more tangible.

Goals-Signals-Metrics process

The HEART framework goes hand in hand with the Goals-Signals-Metrics process, which measures the specific goals of a product. I came across a great example of the Goals-Signals-Metrics process by Usabilla. This qualitative user research company applied the HEART framework and the Goals-Signals-Metrics process when they launched a 2-step verification feature for their users.

Fig. 2 – Usabilla’s application of the HEART framework – Taken from: https://usabilla.com/blog/how-to-prove-the-value-of-your-ux-work/

This example clearly shows how you can take ‘happiness’, a more intangible aspect of Usabilla’s authentication experience, and make it measurable:

  • Question: How to measure ‘happiness’ with respect to Usabilla’s authentication experience?
  • Goal: The overarching goal here is to ensure that Usabilla’s customers feel satisfied and secure whilst using Usabilla’s product.
  • Signals: Positive customer feedback on the feature – through a survey – is a strong signal that Usabilla’s happiness goal is being achieved.
  • Metrics: Measuring the percentage of Usabilla customers who feel satisfied and secure after using the new authentication experience.

The Usabilla example of the HEART framework clearly shows the underlying method of taking a fuzzy goal and breaking it down into something which can be measured more objectively.
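
To give a feel for how the ‘Metrics’ step turns into something you can actually compute, here is a minimal sketch with made-up survey responses; the field names are my own assumptions and not Usabilla’s actual data model:

```python
# Hypothetical post-release survey responses for the new authentication experience.
responses = [
    {"user": "a", "feels_satisfied_and_secure": True},
    {"user": "b", "feels_satisfied_and_secure": True},
    {"user": "c", "feels_satisfied_and_secure": False},
    {"user": "d", "feels_satisfied_and_secure": True},
]

# Metric: percentage of surveyed customers who feel satisfied and secure.
happy = sum(r["feels_satisfied_and_secure"] for r in responses)
happiness_metric = happy / len(responses)
print(f"{happiness_metric:.0%} of respondents feel satisfied and secure")  # 75%
```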

Main learning point: The HEART framework is a useful tool when it comes to understanding and tracking the customer impact of your product. As with everything that you’re trying to measure, make sure you’re clear about what you’re looking to learn and how to best interpret the data. However, the fact that the HEART framework looks at aspects such as ‘happiness’ and ‘engagement’ makes it a useful tool in my book!

Related links for further learning:

  1. https://www.interaction-design.org/literature/article/google-s-heart-framework-for-measuring-ux
  2. https://www.dtelepathy.com/ux-metrics/
  3. http://www.rodden.org/kerry
  4. https://medium.com/uxinthe6ix/how-we-used-the-heart-framework-to-set-the-right-ux-goals-4454df39db94
  5. https://library.gv.com/how-to-choose-the-right-ux-metrics-for-your-product-5f46359ab5be
  6. https://www.appcues.com/blog/google-improves-user-experience-with-heart-framework
  7. https://clevertap.com/blog/google-heart-framework/
  8. https://medium.com/@dhruvghulati/google-s-heart-framework-a-critical-evaluation-a6694421dae
  9. https://gofishdigital.com/heart-framwork-plan-track-measure-goals-site/
  10. https://www.youtube.com/watch?v=vyZzbsL_fsg
  11. http://www.jjg.net/elements/
  12. https://usabilitygeek.com/combining-heart-framework-with-goals-signals-metrics-process-ux-metrics/

What’s so special about SenseTime!?

Question: What do the following products have in common?

Product 1 — Smart glasses worn by Chinese police officers

https://techcrunch.com/2018/02/08/chinese-police-are-getting-smart-glasses/

These smart glasses connect to a feed which taps into China’s state database to detect potential criminals using facial recognition. Officers can identify suspects in a crowd by snapping their photo and matching it to their internal database.

Product 2 — Rong360, a peer-to-peer lending app

https://technode.com/2013/06/24/rong360-online-financial-product-search-platform/

Rong360 is a Chinese peer-to-peer lending app which aims to make obtaining a loan as simple as possible. When users of the Rong360 app enter the loan amount, period, and purpose, the platform automatically matches these criteria and outputs a list of banks or credit agencies corresponding to the users’ requests. On the list, users can find the institution names, products, interest rates, gross interest, monthly payments, and available periods. Applying for a loan can be done fully online, and the app uses facial recognition as part of the loan application process.

Product 3 — Security camera

Security cameras in public places help police officers and shopkeepers through improved ways of matching faces. Traditionally, face matching is based on describing someone’s facial features and the spatial distance between these features. Now, by extracting geometric descriptions of parts such as the eyes, nose, mouth and chin, and the structural relationships between them, a search match is performed against the feature templates stored in the database. When the similarity exceeds the set threshold, the matching results are shared (see the sketch below).

http://www.sohu.com/a/163629793_99963310
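
The threshold-based matching described above can be sketched roughly as follows. This is a generic illustration with made-up feature vectors and an arbitrary threshold, not SenseTime’s actual algorithm:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two facial feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical feature templates stored in the database (in practice these would be
# high-dimensional embeddings extracted from the eyes, nose, mouth, chin, etc.).
database = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.2, 0.8, 0.5],
}

def match(query_features, threshold=0.95):
    """Return database entries whose similarity to the query exceeds the threshold."""
    return [
        (name, score)
        for name, template in database.items()
        if (score := cosine_similarity(query_features, template)) >= threshold
    ]

print(match([0.88, 0.12, 0.31]))  # very close to person_A, so returned as a match
```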

 

Product 4 — Oppo mobile phone

https://www.notey.com/blogs/device-SLASH-accessories?page=4

 

Oppo specialises in mobile photography and uses artificial intelligence technology to enable features such as portrait photo-taking, bi-camera photo-taking, and face grouping.

Question: What do these four products have in common?

Answer: They’re all powered by SenseTime’s artificial intelligence technology.

Whether it’s “SenseTotem” — which is being used for surveillance purposes — or “SensePhoto” — which uses facial recognition technology for messaging apps and mobile cameras — it all comes from the same company: SenseTime.

The company has made a lot of progress in a relatively short space of time with respect to artificial intelligence-based (facial) recognition. The Chinese government has been investing heavily in creating an ecosystem for AI startups, with Megvii being another well-known exponent of China’s AI drive.

A project with the code name “Viper” is the latest in the range of products that SenseTime is involved in. I’m intrigued and slightly scared by this project, which is said to focus on processing thousands of live camera feeds (from CCTV to traffic cameras to ATM cameras) and tagging the people and objects in them. SenseTime is rumoured to want to sell the Viper surveillance service internationally, but I can imagine that local regulations and data protection rules might prevent this kind of ‘big brother is watching you’ approach from being rolled out anytime soon.

Main learning point: It seems that SenseTime is very advanced with respect to facial recognition, using artificial intelligence to combine thousands of (live) data sources. You could argue that SenseTime isn’t the only company building this kind of technology, but their rapid growth and technological as well as financial firepower makes them a force to be reckoned with. That, in my mind, makes SenseTime very special indeed.

Related links for further learning:

  1. The billion-dollar, Alibaba-backed AI company that’s quietly watching everyone in China – qz.com
  2. This Chinese Facial Recognition Surveillance Company Is Now the World’s Most Valuable AI Startup – fortune.com
  3. China Now Has the Most Valuable AI Startup in the World – bloomberg.com
  4. China’s SenseTime, the world’s highest valued AI startup, raises $600M – techcrunch.com
  5. Chinese police are using smart glasses to identify potential suspects – techcrunch.com
  6. Facial Recognition in China with SenseTime – nanalyze.com
  7. Rong360: Online Financial Product Search Platform – technode.com
  8. OPPO and SenseTime Jointly Build an AR Developer Platform – yicaiglobal.com
  9. About Us – OPPO Global – oppo.com
  10. Megvii, Chinese facial recognition startup with access to government database, raises $460 million – fastcompany.com

My product management toolkit (28): testing price sensitivity

Normally when I talk to other product managers about product pricing, I get slightly frightened looks in return. “Does that mean I need to set the price!?” or “am I now responsible for the commercial side of things too!?” are just some of the questions I’ve had thrown at me in the past.

“No” is the answer. I strongly believe that as product managers we run the risk of being all things to all people — see my previous post about “Product Janitors” — and I therefore believe that product people shouldn’t set prices. However, I do believe it’s critical for product people to think about pricing right from the beginning:

  • Do people want the product?
  • Why do they want it?
  • How much are they willing to pay for it?

Answers to these questions will not only affect what product is built and how it’s built, but also how it will be launched and positioned within the market. I’ve made the mistake before of not getting involved in pricing at all or too late. As a result, I felt that I was playing catchup to fully understand the product’s value proposition and customers’ appetite for it.

Fortunately, there are two tools I’ve come across which I’ve found very helpful in understanding the value a product is looking to deliver, from both a business and a customer perspective: the Van Westendorp Price Sensitivity Meter and Conjoint Analysis respectively.

The Van Westendorp Price Sensitivity Meter has helped me to learn about the kinds of pricing-related questions to ask (target) customers:

  • At what price would you consider the product to be so expensive that you would not consider buying it? (Too expensive)
  • At what price would you consider the product to be priced so low that you would feel the quality couldn’t be very good? (Too cheap)
  • At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would have to give some thought to buying it? (Expensive/High Side)
  • At what price would you consider the product to be a bargain — a great buy for the money? (Cheap/Good Value)

The aforementioned Van Westendorp questions are a good example of a so-called “direct pricing technique”, where the pricing research is underpinned by the assumption that people have a basic understanding of what a product is worth. In essence, this line of questioning comes down to asking “how much would you pay for this (product or service)?” Whilst this isn’t necessarily the best question to ask in a customer interview, it’s a nice and direct way to learn about how customers feel about pricing.

Example customer responses to the Van Westendorp questions — Taken from: http://www.5circles.com/van-westendorp-pricing-the-price-sensitivity-meter/

The insights from applying these direct questions will help in better understanding price points. The Van Westendorp method identifies four different price definitions:

Point of marginal cheapness (‘PMC’) — Below this price point, more sales volume would be lost than gained, because customers start to doubt the product’s quality rather than seeing it as a bargain.

Point of marginal expensiveness (‘PME’) — This is a price point above which the product is deemed too expensive for the perceived value customers get from it.

Optimum price point (‘OPP’) — The price point at which the number of potential customers who view the product as either too expensive or too cheap is at a minimum. At this point, the number of persons who would possibly consider purchasing the product is at a maximum.

Indifference price point (‘IPP’) — Point at which the same percentage of customers feel that the product is getting too expensive as those who feel it is at a bargain price. This is the point at which most customers are indifferent to the price of a product.

Range of acceptable pricing (‘RAI’) — This range sits between the aforementioned points of marginal cheapness and marginal expensiveness. In other words, consumers are considered likely to pay a price within this range.

Van Westendorp price sensitivity meter (example) — Taken from: https://www.qualtrics.com/uk/market-research/pricing-research/
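
To give a feel for how these price points fall out of the survey answers, here is a minimal sketch using made-up responses and only the ‘too cheap’ and ‘too expensive’ curves; a full analysis would plot all four cumulative curves:

```python
# Hypothetical answers (in £) to two of the four Van Westendorp questions.
too_cheap     = [8, 10, 12, 15, 18]   # "so cheap I'd doubt the quality"
too_expensive = [14, 18, 22, 25, 30]  # "so expensive I wouldn't buy it"

def pct_too_cheap(price):
    """Share of respondents who consider this price too cheap (price <= their answer)."""
    return sum(answer >= price for answer in too_cheap) / len(too_cheap)

def pct_too_expensive(price):
    """Share of respondents who consider this price too expensive (price >= their answer)."""
    return sum(answer <= price for answer in too_expensive) / len(too_expensive)

# The optimum price point (OPP) sits where the 'too cheap' and 'too expensive' curves cross.
opp = min(range(8, 31), key=lambda p: abs(pct_too_cheap(p) - pct_too_expensive(p)))
print(f"Approximate optimum price point: £{opp}")  # £16 for this made-up sample
```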

 

In addition to the Van Westendorp Price Sensitivity Meter, I’ve also used Conjoint Analysis to understand more about pricing. Unlike the Van Westendorp approach, conjoint analysis is an indirect pricing technique, which means that price is combined with other attributes such as size or brand. Consumers’ price sensitivity is then derived from the results of the analysis.

Sample conjoint analysis question — Taken from: https://www.questionpro.com/survey-templates/conjoint-analysis-retirement-housing/

 

When designing a conjoint analysis study, the first step is to take a product and break it down into its individual attributes. For example, we could take a car and create combinations of its different attributes to learn which combinations customers prefer:

Which of these cars would you prefer?

Option 1 – Brand: Volvo, Seats: 5, Price: £65,000

Option 2 – Brand: SsangYong, Seats: 5, Price: £20,000

Option 3 – Brand: Toyota, Seats: 7, Price: £45,000

This is an overly simplified and totally fictitious example, but it hopefully gives you a better idea of how conjoint analysis takes multiple factors into account and provides insight into how much consumers are willing to pay for a certain combination of features.
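
For illustration, here is a minimal sketch of analysing hypothetical choices between the three car options above. It uses a crude counting approach; real conjoint studies estimate part-worth utilities with statistical models, so treat this purely as a way to see how preferences can be derived from choices:

```python
from collections import Counter, defaultdict

# Hypothetical choice data: each respondent picked one of the car options shown above.
options = {
    1: {"brand": "Volvo",     "seats": 5, "price": 65_000},
    2: {"brand": "SsangYong", "seats": 5, "price": 20_000},
    3: {"brand": "Toyota",    "seats": 7, "price": 45_000},
}
choices = [2, 3, 3, 1, 3, 2, 3, 2]  # option chosen by each of 8 respondents

# Crude "counting" analysis: how often each attribute level appears among chosen options.
level_counts = defaultdict(Counter)
for choice in choices:
    for attribute, level in options[choice].items():
        level_counts[attribute][level] += 1

for attribute, counts in level_counts.items():
    print(attribute, dict(counts))
# brand {'SsangYong': 3, 'Toyota': 4, 'Volvo': 1}
# seats {5: 4, 7: 4}
# price {20000: 3, 45000: 4, 65000: 1}
```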

Main learning point: I personally don’t expect product managers to set prices for their products or design price research. However, I do think we as product managers benefit from a better understanding of the pricing model for our products and a better understanding of what constitutes ‘value for money’ for our customers. The Van Westendorp Price Sensitivity Meter and Conjoint Analysis are just two ways of testing price sensitivity, but they are in my view two good places to start if you wish to get a better handle on pricing.

Related links for further learning:

  1. Van Westendorp pricing (the Price Sensitivity Meter) – 5 Circles Research
  2. Conjoint analysis – Wikipedia
  3. Why You Should (Almost) Never Use the van Westendorp Pricing Model
  4. Van Westendorp’s Price Sensitivity Meter – Wikipedia
  5. Pricing research: A new take on the Van Westendorp model | Articles | Quirks.com
  6. Easy Guide: How To Run a Van Westendorp Pricing Analysis – Dimitry Apollonsky
  7. Van Westendorp Price Sensitivity Meter
  8. Conjoint Analysis – introduction and principles

 

My product management toolkit (25): understanding the “unit economics” of your product

As a product manager it’s important to understand the unit economics of your product, irrespective of whether you’re managing a physical or a digital product. Unit economics are the direct revenues and costs related to a specific business model, expressed on a per unit basis. These revenues and costs are the levers that impact the overall financial success of a product. There are a number of reasons why I feel it’s important for product managers to have a good grasp of the unit economics of their product:

  • Helps quantify the value of what we do – Ultimately, product success can be measured in hard metrics such as revenue and profit. Even in cases where our products don’t directly contribute to revenue, they will at least have an impact on operational cost.
  • Customer Value = Business Value – In an ideal world, there’s a perfect equilibrium between customer value and business value. If the customer is happy with your product, buys and uses it, this should result in tangible business value.
  • P&L accountability for product people (1) – Perhaps it’s to do with the fact that product management still is a relatively young discipline, but I’m nevertheless surprised by the limited number of product people I know who’ve got full P&L responsibility. I believe that having ownership over the profit & loss account helps product decision making and accountability, not just for product managers but for the product teams that we’re part of.
  • P&L accountability for product people (2) – Understandably, this can be a scary prospect and might impact the ways in which we manage products. However, owning the P&L will (1) make product managers fully accountable for product performance, (2) provide clarity and accountability for product decisions, (3) help inform investments in the product and product marketing, and (4) steep product management in data, moving to a more data-informed approach to product management.
  • Assessing opportunities based on economics – Let’s move away from assessing new business or product opportunities purely based on “gut feel”. I appreciate that at some point we have to take a leap, especially with new products or problems that haven’t been solved before. At the same time, I do believe it’s critical to use data to help inform your opportunity assessments. Tools like Ash Maurya’s Lean Canvas help to think through and communicate the economics of certain opportunities (see Fig. 1 below). In the “cost structure” part of the lean canvas, for example, you can outline the expected acquisition or distribution cost of a new product.
  • Speaking the same language – It definitely helps the collaboration with stakeholders, the board and investors if you can speak about the unit economics of your product. I know from experience that being able to talk sensibly about unit economics and gross profit, really helps the conversation.

Now that we’ve established the importance of understanding unit economics, let’s look at some of the key components of unit economics  in more detail:

Profit margin per unit = (sales price) – (cost of goods sold + manufacture cost + packaging cost + postage cost + sales cost)

Naturally the exact cost per unit will be dependent on things such as (1) product type (2) point of sale (3) delivery fees and (4) any other ‘cost inputs’.
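
As a quick worked example of the formula above, with entirely made-up numbers:

```python
# Hypothetical per-unit figures for a physical product (all values in £).
sales_price = 25.00
costs = {
    "cost_of_goods_sold": 6.50,
    "manufacture": 3.00,
    "packaging": 1.20,
    "postage": 2.80,
    "sales": 1.50,
}

profit_margin_per_unit = sales_price - sum(costs.values())
print(f"Profit margin per unit: £{profit_margin_per_unit:.2f}")  # £10.00
```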

In a digital context, the user is often the unit. For example, the Lifetime Value (‘LTV’) and Customer Acquisition Cost (‘CAC’) are core metrics for most direct to consumer (B2C) digital products and services. I learned from David Skok and Dave Kellogg about the importance of the ‘CAC to LTV’ ratio.

Granted, Skok and Kellogg apply this ratio to SaaS, but I believe customer acquisition cost (‘CAC’) and customer lifetime value (‘LTV’) are core metrics when you treat the user as a unit; you’ve got a sustainable business model if LTV (significantly) exceeds CAC. In an ideal world, for every £1 it costs to acquire a customer you want to get £3 back in terms of customer lifetime value. Consequently, the LTV:CAC ratio = 3:1.
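
Here is a simple worked example of the LTV:CAC ratio, using one common simplified LTV calculation (monthly revenue times gross margin times average customer lifetime); all numbers are made up:

```python
# Hypothetical numbers for a subscription product.
monthly_revenue_per_customer = 10.0   # £ per month
gross_margin = 0.8                    # 80% of revenue left after direct costs
average_customer_lifetime_months = 24
customer_acquisition_cost = 60.0      # £ spent on marketing/sales per new customer

ltv = monthly_revenue_per_customer * gross_margin * average_customer_lifetime_months
ratio = ltv / customer_acquisition_cost
print(f"LTV: £{ltv:.0f}, CAC: £{customer_acquisition_cost:.0f}, LTV:CAC = {ratio:.1f}:1")
# LTV: £192, CAC: £60, LTV:CAC = 3.2:1  (just above the 3:1 rule of thumb)
```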

I’ve seen companies start with high CAC in order to build scale and then lower the CAC as the business matures and relies more on word of mouth as well as higher LTV. Also, companies like Salesforce are well known for carefully designing additions (“editions”) to increase customer lifetime value (see Fig. 2 below). 

Netflix are another good example in this respect, with their long-term LTV view of their customers. Netflix position their subscription model as a viable replacement for another subscription model: cable. The average lifetime of a Netflix customer is 25 months. As a result, Netflix are happy to initially ‘lose’ money on acquiring customers, through a 1-month free trial, as these costs will be recouped soon after acquiring the customer.

Main learning point: You don’t need to be a financial expert to understand the unit economics of your products. Just knowing what the ‘levers’ are that impact your product, will put you in good stead when it comes to making product decisions and collaborating with stakeholders.

 

 Fig. 1 – Lean Canvas template by Ash Maurya – Taken from: https://blog.leanstack.com/

 

Fig. 2 – Pricing and functionality overview for Salesforce’s New Sales Cloud Lightning Editions:

 

Related links for further learning:

  1. https://soundcloud.com/saastr/saastr-142-why-cac-ltv-is-the
  2. https://inpdcenter.com/blog/understanding-product-economics-improve-product-development-success/
  3. https://people.kth.se/~msmith/ii2300_pdf/product_realization_7_2016.pdf
  4. https://www.quora.com/What-are-unit-economics
  5. https://youtu.be/RG_eyn0fRXs
  6. https://medium.com/@markroberge
  7. https://www.slideshare.net/RaviLakkundi/product-management-pricing-31102059
  8. https://www.inc.com/guides/price-your-products.html
  9. http://accountingexplained.com/managerial/cvp-analysis/cost-plus-pricing
  10. https://www.quora.com/What-are-unit-economics
  11. http://www.forentrepreneurs.com/saas-metrics-2-definitions-2/
  12. http://www.problemio.com/business/business_economics.php
  13. https://www.slideshare.net/austinneudecker/startup-unit-economics-and-financial-model
  14. https://www.linkedin.com/pulse/understanding-saas-business-model-unit-economics-ben-cotton/
  15. https://thepathforward.io/how-to-estimate-your-unit-economics-before-you-have-any-customers/
  16. https://thepathforward.io/unit-economics-by-sam-altman/
  17. http://launchingtechventures.blogspot.co.uk/2014/04/e-commerce-metrics.html
  18. https://medium.com/@parthgohil/understanding-unit-economics-of-e-commerce-9c77042a2874
  19. https://yourstory.com/2017/02/unit-economics-flipkart/
  20. https://www.entrepreneur.com/article/283878
  21. https://hbr.org/2016/08/a-quick-guide-to-value-based-pricing
  22. https://unicornomy.com/netflix-business-strategy-netflix-unit-economics/
  23. https://hbr.org/2017/04/what-most-companies-miss-about-customer-lifetime-value

Book review: “Designing with Data”

I’d been looking forward to Rochelle King writing her book about using data to inform designs (I wrote about using data to inform product decisions a few years ago, in a post that followed a great conversation with Rochelle).

Earlier this year, Rochelle published “Designing with Data: Improving the User Experience with A/B Testing”, together with Elizabeth F. Churchill and Caitlin Tan. The main theme of the book is the authors’ belief that data capture, management, and analysis are the best way to bridge design, user experience, and business relevance:

  1. Data aware — In the book, King, Churchill and Tan distinguish between three different ways to think about data: data driven, data informed and data aware (see Fig. 1 below). The third way, ‘data aware’, is introduced by the authors: “In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions.” If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.
  2. How much data to collect? — The authors make an important distinction between “small sample research” and “large sample research”. Small sample research tends to be good for identifying usability problems, because “you don’t need to quantify exactly how many in the population will share that confusion to know it’s a problem with your design.” It reminded me of Jakob Nielsen’s point about how the best results come from testing with no more than five people. In contrast, collecting data from a large group of participants, i.e. large sample research, can give you more precise quantity and frequency information: how many people feel a certain way, what percentage of users will take this action, etc. A/B tests are one way of collecting data at scale, with the data being “statistically significant” and not just anecdotal. Statistical significance is the likelihood that the difference in conversion rates between a given variation and the baseline is not due to random chance (see the short sketch after this list).
  3. Running A/B tests: online experiments — The book does a great job of explaining what is required to successfully run A/B tests online, providing tips on how to sample users online and which key metrics to measure (see Fig. 2).
  4. Minimum Detectable Effect — There’s an important distinction between statistical significance — which measures whether there’s a difference — and “effect”, which quantifies how big that difference is. The book explains how to determine the “Minimum Detectable Effect” when planning online A/B tests. The Minimum Detectable Effect is the minimum effect we want to observe between our test condition and control condition in order to call the A/B test a success. It can be positive or negative, but you want to see a clear difference in order to be able to call the test a success or a failure.
  5. Know what you need to learn — The book covers hypotheses as an important way to figure out what it is that you want to learn through the A/B test, and to identify what success will look like. In addition, you can look at learnings beyond the outcomes of your A/B test (see Fig. 3 below).
  6. Experimentation framework — For me, the most useful section of the book was Chapter 3, in which the authors introduce an experimentation framework that helps you plan your A/B test in a more structured fashion (see Fig. 4 below). They describe three main phases — Definition, Execution and Analysis — which feed into the experimentation framework. The ‘Definition’ phase covers the definition of a goal, articulation of a problem / opportunity and the drafting of a testable hypothesis. The ‘Execution’ phase is all about designing and building the A/B test, “designing to learn” in other words. In the final ‘Analysis’ phase you’re getting answers from your experiments. These results can be either “positive” and expected or “negative” and unexpected (see Fig. 5–6 below).
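
To make the statistical significance point a bit more tangible, here is a minimal sketch of a two-proportion z-test on made-up A/B test numbers; this is a generic textbook calculation rather than anything prescribed by the book:

```python
import math

def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates (z-test)."""
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (conversions_b / visitors_b - conversions_a / visitors_a) / se
    # Convert the z-score to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical A/B test: baseline converts at 10%, variation at 12%.
p = ab_test_p_value(conversions_a=500, visitors_a=5000, conversions_b=600, visitors_b=5000)
print(f"p-value: {p:.3f}")  # roughly 0.001, i.e. unlikely to be random chance at the 5% level
```

The same normal approximation can be turned around to estimate how large a sample you need in order to detect a given Minimum Detectable Effect before you launch the test.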

Main learning point: “Designing with Data” made me realise again how much thinking and designing needs to happen before running a successful online A/B test. “Successful” in this context means achieving clear learning outcomes. The book provides a comprehensive overview of the key considerations to take into account in order to optimise your learning.

Fig. 1 — Three ways to think about data — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 3–9

  • Data driven — With a purely data driven approach, it’s data that determine the fate of a product; based solely on data outcomes businesses can optimise continuously for the biggest impact on their key metric. You can be data driven if you’ve done the work of knowing exactly what your goal is, and you have a very precise and unambiguous question that you want to understand.
  • Data informed — With a data informed approach, you weigh up data alongside a variety of other variables such as strategic considerations, user experience, intuition, resources, regulation and competition. So adopting a data-informed perspective means that you may not be as targeted and directed in what you’re trying to understand. Instead, what you’re trying to do is inform the way you think about the problem and the problem space.
  • Data aware — In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions. If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.

Fig. 2 — Generating a representative sample — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 45–53

  • Cohorts and segments — A cohort is a group of users who have a shared experience. Alternatively, you can also segment your user base into different groups based on more stable characteristics such as demographic factors (e.g. gender, age, country of residence) or you may want to group them by their behaviour (e.g. new user, power user).
  • New users versus existing users — Data can help you learn more about both your existing and prospective future users, and determining whether you want to sample from new or existing users is an important consideration in A/B testing. Existing users are people who have prior experience with your product or service. Because of this, they come into the experience with a preconceived notion of how your product or service works. Thus, it’s important to be careful about whether your test is with new or existing users, as these learned habits and behaviours about how your product used to be in the past can introduce bias into your A/B test.

Fig. 3 — Know what you want to learn — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, p. 67

  • If you fail, what did you learn that you will apply to future designs?
  • If you succeed, what did you learn that you will apply to future designs?
  • How much work are you willing to put into your testing in order to get this learning?

Fig. 4 — Experimentation framework — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 83–85

  1. Goal — First you define the goal that you want to achieve; usually this is something that is directly tied to the success of your business. Note that you might also articulate this goal as an ideal user experience that you want to provide. This is often because you believe that delivering that ideal experience will ultimately lead to business success.
  2. Problem/opportunity area — You’ll then identify an area of focus for achieving that goal, either by addressing a problem that you want to solve for your users or by finding an opportunity area to offer your users something that didn’t exist before or is a new way of satisfying their needs.
  3. Hypothesis — After that, you’ll create a hypothesis statement which is a structured way of describing the belief about your users and product that you want to test. You may pursue one hypothesis or many concurrently.
  4. Test — Next, you’ll create your test by designing the actual experience that represents your idea. You’ll run your test by launching the experience to a subset of your users.
  5. Results — Finally, you’ll end by getting the reaction to your test from your users and doing analysis on the results that you get. You’ll take these results and make decisions about what to do next.

Fig. 5 — Expected (“positive”) results — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 227–228

  • How large of an effect will your changes have on users? Will this new experience require any new training or support? Will the new experience slow down the workflow for anyone who has become accustomed to how your current experience is?
  • How much work will it take to maintain?
  • Did you take any “shortcuts” in the process of running the test that you need to go back and address before you roll it out to a larger audience (e.g. edge cases or fine-tuning details)?
  • Are you planning on doing additional testing and if so, what is the time frame you’ve established for that? If you have other large changes that are planned for the future, then you may not want to roll your first positive test out to users right away.

Fig. 6 — Unexpected and undesirable (“negative”) results — Taken from: Rochelle King, Elizabeth F. Churchill and Caitlin Tan — Designing with Data. O’Reilly 2017, pp. 228–231

  • Are they using the feature the way you think they do?
  • Do they care about different things than you think they do?
  • Are you focusing on something that only appeals to a small segment of the base but not the majority?

Related links for further learning:

  1. https://www.ted.com/watch/ted-institute/ted-bcg/rochelle-king-the-complex-relationship-between-data-and-design-in-ux
  2. http://andrewchen.co/know-the-difference-between-data-informed-and-versus-data-driven/
  3. https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
  4. https://vwo.com/ab-split-test-significance-calculator/
  5. https://www.kissmetrics.com/growth-tools/ab-significance-test/
  6. https://select-statistics.co.uk/blog/importance-effect-sample-size/
  7. https://www.optimizely.com/optimization-glossary/statistical-significance/
  8. https://medium.com/airbnb-engineering/experiment-reporting-framework-4e3fcd29e6c0
  9. https://medium.com/@Pinterest_Engineering/building-pinterests-a-b-testing-platform-ab4934ace9f4
  10. https://medium.com/airbnb-engineering/https-medium-com-jonathan-parks-scaling-erf-23fd17c91166

 


My product management toolkit (17): Assess market viability

Whether you’re a product manager or are in a commercial or strategic role, I’m sure you’ll have to assess market viability at some point in your career. For that reason, I wrote previously about assessing markets, suggesting tools that you can use to decide on whether to enter a market or not.

A few weeks ago, I listened to a podcast interview in which Christophe Gillet, VP of Product Management at Vimeo, gave some great pointers on how best to assess market viability. Christophe shared his thoughts on things to explore when considering a new market, and I’ve added my own sample questions related to some of the points he made:

  1. Is there a market? – This should be the first validation in my opinion; is there a demand for my product or service? Which market void will our product help to fill and why? What are the characteristics of my target market?
  2. Is there viability within that market? – Once you’ve established that there’s a potential market for your product, this doesn’t automatically mean that the market is viable. For example, regulatory constraints can make it hard to launch or properly establish your product in a market.
  3. Total addressable market – The total addressable market – or total available market – is all about the revenue opportunity available for a particular product or service (see Fig. 1 below). A way to work out the total addressable market is to first define the total market space and then look at the percentage of the market which has already been served (see the simple sketch after this list).
  4. Problem to solve – Similar to some of the questions to ask as part of point 1. above, it’s important to validate early and often whether there’s an actual problem that your product or service is solving.
  5. Understand prior failures (by competitors) – I’ve found that looking at previous competitor attempts can be an easy thing to overlook. However, understanding who already tried to conquer your market of choice and whether they’ve been successful can help you avoid some pitfalls that others encountered before you.
  6. Talk to individual users – I feel this is almost a given if you’re looking to validate whether there’s a market and a problem to solve (see points 1. and 4. above). Make sure that you sense check your market and problem assumptions with your target customers.
  7. Strong mission statement and objectives of what you’re looking to achieve – In my experience, having a clear mission statement helps to articulate and communicate what it is that you’re looking to achieve and why. These mission statements are typically quite aspirational but should offer a good insight into your aspirations for a particular market (see the example of outdoor clothing company Patagonia in Fig. 2 below).
  8. Business goals – Having clear, measurable objectives in place for a new market that you’re considering is absolutely critical. In my view, there’s nothing worse than looking at new markets without a clear definition of what market success looks like and why.
  9. How to get people to use your product – I really liked how Christophe spoke about the need to think about a promotion and an adoption strategy. Too often, I encounter a ‘build it and they will come’ kind of mentality which I believe can be deadly if you’re looking to enter new markets. Having a clear go-to-market strategy is almost just as important as developing a great product or service. What’s the point of an awesome product that no one knows about or doesn’t know where to get!?
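
As a simple illustration of the total addressable market arithmetic mentioned in point 3 above (all figures made up):

```python
# Hypothetical market-sizing arithmetic.
potential_customers = 2_000_000      # businesses that could conceivably use the product
annual_revenue_per_customer = 500.0  # £ per year

total_addressable_market = potential_customers * annual_revenue_per_customer
already_served_share = 0.35          # share of the market already served by competitors
unserved_opportunity = total_addressable_market * (1 - already_served_share)

print(f"TAM: £{total_addressable_market:,.0f}")               # £1,000,000,000
print(f"Unserved opportunity: £{unserved_opportunity:,.0f}")  # £650,000,000
```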

Main learning point: Listening to the interview with Christophe Gillet reinforced for me the importance of being able to assess market viability. Being able to ask and explore some critical questions when considering new markets will help avoid failed launches or at least gain a shared understanding of what market success will look like.

 

Fig. 1 – Total available market – Taken from: https://en.wikipedia.org/wiki/Total_addressable_market


Fig. 2 – Patagonia’s mission statement – Taken from: http://www.patagonia.com/company-info.html


Related links for further learning:

  1. http://www.thisisproductmanagement.com/episodes/assessing-market-viability
  2. http://www.mindtheproduct.com/2013/05/poem-framework/
  3. http://smallbusiness.chron.com/determine-market-viability-product-service-40757.html
  4. https://en.wikipedia.org/wiki/Total_addressable_market
  5. https://blog.hubspot.com/marketing/inspiring-company-mission-statements

Lending revisited: Bond Street

Bond Street lends to small businesses that might typically struggle to get a loan from traditional banks. In a recent talk on an MIT Fintech course that I was doing, David Haber – Bond Street’s CEO/Founder – mentioned how Bond Street saw a clear niche in the market for small business loans and acted on it. Haber encountered a problem that seemed pretty common for early stage, online small businesses: banks or other financial services offering only small loans, for short durations and at high rates. To resolve this problem, Bond Street offers loans ranging between $50k and $500k, for terms of 1-3 years and with rates starting at 6% (see Fig. 1 below).

Fig. 1 – Loan size, rate and terms comparison between Bond Street and other small business lenders – Taken from: https://bondstreet.com/


Fig. 2 – Overview of Bond Street positioning – Taken from: https://bondstreet.com/blog/an-introduction-to-small-business-financing/


In the MIT talk, Haber mentioned that OnDeck – a direct competitor of Bond Street – offers small business loans for an average amount of $35k, over an average duration of 10 months, and charges an Annual Percentage Rate (‘APR’) of around 40%. Bond Street competes on rate and speed, but as Haber explained, the business is very focused on “offering more value beyond the economics of a loan, since capital is essentially a commodity.”
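
As a rough illustration of why that difference in APR matters, here is a simplified comparison that ignores amortisation schedules and fees, using a made-up $50k loan for both lenders:

```python
# Rough, simplified interest comparison (simple interest, no amortisation or fees).
loan_amount = 50_000        # $ borrowed

# Bond Street-style loan: ~6% APR over 12 months.
bond_street_interest = loan_amount * 0.06 * 1.0

# OnDeck-style loan as described above: ~40% APR over 10 months.
ondeck_interest = loan_amount * 0.40 * (10 / 12)

print(f"~6% APR for a year:     ${bond_street_interest:,.0f} in interest")  # $3,000
print(f"~40% APR for 10 months: ${ondeck_interest:,.0f} in interest")       # $16,667
```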

Haber then explained that technology allows Bond Street to not just innovate on the loan transaction itself, but to provide a great customer experience on either side of the transaction. For example, by offering a borrower data about similar size businesses, the borrower can then make a better informed decision about taking up a loan.

Fig. 3 – Screenshot of Bond Street online loan application form – Taken from: https://www.nav.com/blog/376-decoding-a-loan-offer-from-bondstreet-4788/


Haber mentioned one other thing which really resonated with me: “building an ecosystem around your business.” By, for example, leveraging data on an entrepreneur across a network of (similar) entrepreneurs, Bond Street and others can really help people grow their businesses. This doesn’t mean violating data protection rules, but using data to build an ongoing relationship with one’s customers, and being able to warn them about potential risks or suggest new market opportunities.

A great example is how easy Bond Street makes it for its customers to link to their accounting packages (see Fig. 4 below). I see this as a simple but good example of creating an ecosystem where data is combined in such a way that people and businesses can derive tangible benefits from it. By linking to your accounting package as part of the loan application process, businesses save a lot of precious time and effort, since they no longer have to manually input all kinds of financial data.

Fig. 4 – Screenshot of Bond Street’s functionality which links to one’s accounting software – Taken from: https://www.nav.com/blog/376-decoding-a-loan-offer-from-bondstreet-4788/


 

Main learning point: Even though lending isn’t a new proposition, I really like what Bond Street are doing when it comes to offering loans to small businesses. It has carved out a specific market niche – small, early stage businesses – that it targets with a compelling proposition and an intuitive customer experience to match.

Related links for further learning:

  1. https://www.thebalance.com/what-does-apr-mean-315004
  2. https://bondstreet.com/blog/category/resources/
  3. http://www.forbes.com/sites/laurashin/2015/06/18/6616/
  4. http://www.peeriq.com/p2p-explosion-business-models-may-change-risks-still-need-managed/
  5. https://bondstreet.com/blog/an-introduction-to-small-business-financing/
  6. https://bondstreet.com/blog/a-beginners-guide-to-cloud-based-accounting-software-ii/
  7. https://www.fundera.com/blog/2016/06/01/application-process-works-bond-street
  8. https://angel.co/bond-street
  9. https://www.nav.com/blog/376-decoding-a-loan-offer-from-bondstreet-4788/
  10. https://www.fundera.com/blog/2016/06/01/application-process-works-bond-street