Understanding the Role UX Design Plays in Growth Experiments
Updated: Apr 20
“Ah, the classic designer’s conundrum — user vs. business. For a Growth Designer, knowing how to balance the two is paramount.” May Wang
Having recently moved into a growth PM role at my company, one question has remained front-of-mind for me: “What role does UX design play in growth experiments?” At the heart of this question lies what I consider to be values that can sometimes be at odds with one another: the needs of the user and a business’s bottom line. What does an experiment potentially “causing harm” mean from the perspective of the user experience (and not in terms of its impact on revenue)? Have we considered other testing methods? How do we weigh questions the data can’t answer, such as why one version did better with users than another? How do we design experiments with accessibility and ethics in mind? What role does qualitative research and data play in growth?
Growth (also known as “growth engineering”, “growth marketing”, or previously as “growth hacking”) is a function within organizations focused on growing the company’s user base. The foundation of all growth is data; growth teams are decidedly data-driven and are tasked with conducting “experiments” (think: A/B or multivariate testing), determining statistical significance for tests, analyzing incoming data, and extracting insights from the resulting data.
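To make "determining statistical significance" a little more concrete: a common approach for comparing conversion rates between two variants is a two-proportion z-test. The Python sketch below is my own minimal illustration (not something from any particular experimentation platform), showing roughly the kind of calculation these tools perform under the hood:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of variants A and B.

    conv_a / conv_b: number of conversions in each variant
    n_a / n_b:       number of visitors shown each variant
    Returns (z_statistic, p_value) using the standard pooled-proportion
    large-sample approximation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: 200/1000 conversions for A vs. 240/1000 for B
z, p = two_proportion_z_test(200, 1000, 240, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers, the p-value comes out below the conventional 0.05 threshold, so the difference would typically be called statistically significant; with smaller samples, the same lift might not be.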
My previous experiences with A/B testing were limited to email. Many years ago, I was responsible for crafting all the emails that my company sent out. We A/B tested different emails each week to determine which version increased open rates or click-through rates. I would often write seven subject lines for every email I prepared (50 or more subject lines per week, depending on the workload), and my manager would choose the ones to A/B test. Sometimes we also tested headers in the body of the email or the copy that appeared on the large call-to-action (CTA) buttons. Copy (or text) changes are one of many things that can be tested, of course, as can visual design treatments, the placement of site elements, and site navigation itself.
I’m most familiar with research and testing methodologies from having studied UX design in grad school. The scale of each research type differs (in other words: the number of participants needed for a focus group or a survey to yield meaningful results differs from the number needed for a usability test). The same is true for growth experiments; teams use data to determine sample sizes, choose metrics to focus on, and define statistical significance for a particular experiment. Major companies are able to scale their growth efforts to dizzying numbers, leveraging tools like experimentation platforms, site telemetry, and analytics platforms to run and analyze their experiments in real time.
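The sample-size question mentioned above is itself a calculation. As a rough sketch (my own illustration, using the standard two-proportion power formula rather than any specific team's tooling), here's how you might estimate how many visitors each variant needs in order to detect a given lift:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect an absolute
    lift of `mde` over a baseline conversion rate `p_baseline`,
    with a two-sided test at significance level `alpha` and the
    given statistical power.
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = nd.inv_cdf(power)           # value for the desired power
    p1, p2 = p_baseline, p_baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (mde ** 2)
    return math.ceil(n)

# Hypothetical example: 10% baseline conversion, hoping to detect a
# 2-percentage-point absolute lift
print(sample_size_per_variant(0.10, 0.02))
```

One takeaway this formula makes obvious: halving the minimum detectable effect roughly quadruples the required sample size, which is why small teams with modest traffic can't always run the experiments large companies can.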
I’ve since learned that growth no longer falls under the sole purview of marketing teams. Today, it’s a cross-functional activity comprised of individuals from different backgrounds working together specifically on growth. (The team I’m on is in engineering). There are even growth product designers embedded in growth teams. May Wang’s blog post “The Product Designer’s Guide to Growth” goes into more detail about what it means to be a growth product designer and the role they play in the growth process: in her role, her team “makes sure our quantitative knowledge is always backed by qualitative user research, and that we combine the two to drive and inform our design decisions.”
User research was the cornerstone of my graduate program, and I was taught that obtaining informed consent before initiating any research activity was essential to building trust. In other words, we explicitly told participants why we were doing the research activity, how the research would be used, what information we were collecting, and who would have access. We’d let participants know they could stop us at any time with questions or if they felt uncomfortable. After reading through the terms, we’d ask participants for their explicit consent (a signature, or verbal consent if we were recording video) before starting the activity. If a participant chose not to move forward, the activity could not move forward.
This is not to say that UX practitioners always act in the best interests of their users. Here are two excellent reads on the responsibility designers have to behave ethically: “Subverted Design” by product designer Joel Califa, and a research paper on dark patterns by researchers at Princeton University and the University of Chicago:
“The main threshold is whether the risk exceeds that of “minimal risk”. Minimal risk is defined as the probability and magnitude of harm that a participant would encounter in normal daily life. The harm considered encompasses physical, psychological and emotional, social, and economic concerns. If the risk exceeds minimal risk, then informed consent is required. In most, but not all, online experiments, it can certainly be debated as to whether any of the experiments lead to anything beyond minimal risk.”
(The Udacity course recommends that every employee involved in A/B testing “be educated about the ethics and the protection of the participants”, a recommendation I wholeheartedly agree with.)
There’s a lot more to this topic, and I highly recommend reading the following articles for a deeper, more nuanced look into ethical A/B testing and what it means for growth teams:
“I’m not naive — I know that corporations don’t prioritize user needs unless those needs already align with company goals. I also accept the limited agency of any given designer to effect change within an organization. But I don’t accept that it isn’t our responsibility. In an ideal world, products would be made ethically by default. But this isn’t an ideal world and big revolutions start out small. I think Designers are really well suited for the task.” Joel Califa
Finding the Right Tool for the Job
Finally, there’s the question of how growth experiments that affect the product experience fit alongside other testing methods.
In the last chapter of the terrific book Just Enough Research, Erika Hall says the following:
“[Split] testing can be seductive because it seems to promise mathematical certitude and a set-it-and-forget-it level of automation, even though human decision-making is still necessary and the results remain open to interpretation within the larger context. The best response to a user-interface question is not necessarily a test. Additionally, these are activities that affect the live site itself, which presents some risk.” Erika Hall
This came to mind last week in a chat with a colleague about an experiment idea. The experiment would test against a very specific structural element, and I asked my colleague if she knew about it or was involved, since she has the requisite expertise to help guide stakeholders in the right direction. It turns out she was: project stakeholders had met with her to understand how this change would affect the rest of the site and what other testing methods they should also explore. I think this is a win; we need more people in the room with different perspectives on the product and the business to help teams make the right call.
A/B testing need not be isolated from other research methods, and growth teams should also be in a position to offer different alternatives depending on an experiment’s goals. (Udacity’s “Intro to A/B Testing” course goes into detail about what makes for good A/B tests and what A/B testing isn’t as good for.)
The more I learn about growth, the better. I have the opportunity to work with business analysts and growth strategists who have extensive experience with experimentation. Part of my job is to understand as much as I can about growth experiments, from data to implementation and analysis. The tension between advocating for user needs and supporting business goals is always there, and I believe that is a good thing; it’s important to think about the end-user experience and to consider whether or not a particular test can help meet stated goals. Developing a deep understanding of both quantitative and qualitative research and data collection methods, and how these work in concert, makes for stronger growth teams.
I created the Pocket Data 101 list to help me keep track of all the recommended resources I’m using to learn more about data (Excel, SQL, and statistics). The Pocket UX list is a collection of resources (some free, others paid) and tools I used during grad school: