It’s All Science: Building Better Products With Growth Theory
A marketer turned product leader bridges the gap.
Before I built software, I sold it. When I first moved into product management, I was hesitant to talk about my marketing background. I feel like marketing often gets a bad rap. When I would tell people what I did for a living, their responses brought to mind an image of a shady puppet master manipulating the masses. While this may seem dramatic, it’s also indicative of a divide I’ve seen in product thinking. Product managers make a lot of noise about understanding users, but there’s often a pretty wide gap between the people building products and the people using them. Marketers can be the bridge between these groups, and there’s a lot that product teams can learn from them.
Growth marketing is a sub-domain of marketing. While there’s no single widely accepted definition, for the purposes of this article let’s call growth marketing the discipline of growing an audience by acquiring, engaging, and retaining users. An axiom of growth marketing is that these goals should be approached scientifically, with data, experimentation, and rigor. The application of the scientific method to the practice of growing a user base is often referred to as growth theory.
So what does growth theory have to do with building products, and how can product teams learn from the marketers who take their products to the masses? Alliteration sticks, so let’s break this down into what I’ll call the Three M’s of Growth-Based Product Thinking: Mindset, Methodology, and Maneuvering.
Mindset
Above all else, growth theory requires a growth mindset rooted in the scientific method, which the Encyclopedia Britannica defines as “the process of observing, asking questions, and seeking answers through tests and experiments.” In a product sense, this translates to an agile (not necessarily Agile) approach to testing and iterating. As product teams, we have an opportunity to solve problems for our users. To approach these problems scientifically, we must adhere to the following axioms:
- Every assumption is a hypothesis to be tested.
- Embrace humility and fallibility: Disproved hypotheses are learning opportunities.
- Test early and often: Embrace minimum viable product to prove or disprove hypotheses.
- Ask clear questions: Define tests with isolated variables and clear success criteria.
- Trust quantitative over qualitative data.
A good hypothesis
Many of the above points center on the importance of a good hypothesis. In this case, good has nothing to do with accuracy: a marketing hypothesis can be ludicrously off-base and still be good, so long as the isolated variable being tested and the result being sought are clearly defined. As long as the x and y of the hypothesis, “If I do x, it will have y impact,” are clearly identified, there is no such thing as a bad hypothesis (within reason).
Measuring impact
Impact should be tied to a single, clearly measurable KPI, and tests should be run in isolation, with only one variable tested at a time. A/B and multivariate tests are a great way to quickly gather this type of data. Even in cases where a product is not yet testable, this mindset can help avoid the pitfalls of assumption making by rooting all decisions in agile humility. Never assume you’re right about what a user wants (or, for that matter, that a user is right about what they want), and test as quickly and inexpensively as you can.
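To make this concrete, here’s a minimal sketch, in Python and using only the standard library, of how the outcome of a simple A/B test might be evaluated with a two-proportion z-test. The function name and the traffic numbers are hypothetical, invented purely for illustration.

```python
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for the difference between two conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert identically.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return rate_a, rate_b, z, p_value

# Hypothetical test: control (A) against a version with one isolated change (B).
rate_a, rate_b, z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```

If the p-value falls below whatever significance threshold you chose up front (0.05 is a common default), the difference is unlikely to be noise, and because only one variable changed, the result maps cleanly back onto the hypothesis.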
Methodology
Methodology, in this case, refers to putting into practice the mindset of growth theory. Once we start thinking of all decisions as hypotheses to be tested, how can we go about testing these hypotheses? The market is full of fantastic products for testing and analysis, many of which are well suited to a scientific approach to product development.
Testing
On the testing side, tools like Unbounce, Google Optimize, Sumo, and VWO (among many, many others) allow for agile testing of hypotheses through low-lift MVPs and rapid A/B or multivariate tests. Using these tools before committing code can save time and money in the long run and allow teams to test assumptions against users before building costly solutions. This can look like building out MVPs in low-code environments, but it can also be even simpler. I’ve run many tests that use lead capture or other upstream, low-lift user actions to gauge user interest in downstream product decisions. In other words, you don’t have to build a product to find out whether users will want to use it (though wanting to use a product and actually using it are often very different).
My analytics checklist
On the analysis side, tools like Google Analytics, Amplitude, and Mixpanel (again, among others) allow for variable isolation and clear definition of success. A good analytics setup is the foundation upon which everything else rests.
One of my functions at Postlight is to serve as the unofficial in-house Google Analytics rep. Analytics tools are gargantuan in scope and can easily become overwhelming. When approaching an analytics environment, I advise my peers to always go in prepared. Data discovery is much like spelunking: aimless wandering can be dangerous and disorienting.
When I’m preparing to dive into an analytics environment, I go through a relatively simple checklist:
- First, I confirm that my analytics setup is correct, or (in most cases) I identify which data points I can trust and which I cannot. For those unfamiliar with auditing an analytics setup and interested in learning more, I love this blog post by my friend Mike Taylor.
- Next, I identify the data sources that correspond to my hypothesis. I make sure to look at a reasonable amount of data, taking into account seasonality and statistical significance.
- Once my data sources are clear, it’s usually simple enough to look at the data and confirm or challenge my hypothesis. If I’ve followed the scientific method well, I’ll even have a clear test script baked into my hypothesis.
- Bad hypothesis: “Adding onboarding and better images to my app will be good.”
- Okay hypothesis: “Adding onboarding to my app will increase user retention.”
- Great hypothesis: “Adding an onboarding workflow to my app will increase conversion rate by 20% or more.” This hypothesis is great for two reasons: it has a single variable on each side of the equation (onboarding and CVR), and it defines an exact success criterion (a ≥20% improvement in CVR), one that can be checked mechanically, as in the sketch below.
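As a companion to that great hypothesis, here’s a small Python sketch of how its exact success criterion might be checked. The function and the before/after CVRs are hypothetical and purely illustrative.

```python
def meets_success_criterion(cvr_before, cvr_after, min_relative_lift=0.20):
    """Return the relative CVR lift and whether it clears the >=20% bar."""
    lift = (cvr_after - cvr_before) / cvr_before
    return lift, lift >= min_relative_lift

# Hypothetical CVRs measured before and after shipping the onboarding workflow.
lift, confirmed = meets_success_criterion(cvr_before=0.050, cvr_after=0.065)
print(f"Relative lift: {lift:.0%} -> hypothesis {'confirmed' if confirmed else 'disproved'}")
```

In practice, a check like this should be paired with a significance test like the one sketched earlier, so that a lift that clears the bar on too little data isn’t mistaken for a confirmed hypothesis.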
Maneuvering
Maneuvering shifts from the exploratory to the defined. In essence: do your research, and don’t reinvent the wheel. When we think about testing everything, it’s easy to get carried away. I’ve seen more resources burned on silly tests (I’m looking at you, CTA button color testing) than I’d like to admit, and I have fallen victim myself to starting from scratch where there is a ton of research to build on. The field of marketing psychology is well defined, and those working in it have done a huge amount to help us understand user behavior. While there is value in testing, there is also a wealth of information already available that can save product teams a lot of time.
Sidenote: This is the side of marketing that is easy to misuse and misunderstand. Marketing psychology can be manipulative, and product teams should be scrupulous about not using it in those ways. Used ethically, marketing psychology is a tool for better understanding and communicating with users, always with the goal of serving the user.
There are so many books worth reading on this subject, and a quick Google search of “marketing psychology tips” will get you a long way. Here’s a list of some of the tricks I regularly bring into design sessions when building products:
Eyes and images
- Users tend to scan information in an F-shaped pattern, so cluster your most important content on the left and toward the top.
- A picture is worth a thousand words: an image tends to motivate users and drive better engagement than a block of text.
- Based on the way the brain processes information, it’s most effective to place text on the right and images on the left.
- In images or illustrations with eyes, users tend to follow those eyes. Eyes facing the user will stop them in place; eyes looking toward a button or another important place in a product will draw attention to that spot.
Information
- Analysis paralysis: Users get overwhelmed by too much information and too many choices. In most cases, fewer options make decisions more likely.
- Along the same lines, clustering related content helps users absorb more information without becoming overwhelmed or fatigued.
- When communicating information, language that evokes an emotional response is most effective; benefit-driven language is less powerful, and feature-driven language less still.
Decision-making
- Users are more likely to make a large decision if they are first able to make a small, lower-friction decision. This creates momentum and early buy-in.
- Scarcity motivates decision-making. Fear of missing out and loss aversion are both tactically sound ways to increase the likelihood of a user taking an action.
- Users trust other users. Introducing social proof can be a great way to reassure users and increase engagement.
- Colors influence mood (even CTA color testing…fine). Brush up on color theory to better understand the way your color choices affect your users’ perceptions.
Pricing
- People anchor to the first value they see, which is why showing a discount alongside the original price can be a tremendously effective way to increase perceived value.
- The decoy effect refers to the practice of using an expensive decoy option (like a lobster on a diner menu) to encourage customers to spend more on lower-priced items. Studies have shown that customers are more likely to spend $30 on a menu item if there is a $50 item on that menu, when that $30 item might otherwise be perceived as too expensive (just make sure you know how to cook that lobster if you’re offering it).
- Showing annual prices as monthly installments rather than a single lump sum reduces friction.
Most of these points were researched in order to better understand behaviors around conversions, so they may seem most relevant to certain types of product screens. That research, however, can be just as valuable in non-conversion scenarios. Understanding user behavior is crucial when building products, and there’s a ton of research out there that can support product teams in forming better and deeper hypotheses.
I’ve seen firsthand the impact these principles can have on product development. An agile scientific approach makes for better decisions, stronger products, and quicker timelines. Additionally, a hypothesis-based approach to decision-making can create more psychological safety and lead to significantly more interesting questions. By leveraging these Three M’s, product teams can learn from marketers and create products that better fit their users’ needs.
Reed Whitmont (he/him) is a Senior Product Manager at Postlight. Say hello at reed.whitmont@postlight.com.