COMPANY
Intuit Mailchimp
YEAR
2023
ROLES
Senior UX Researcher
Program Manager
KEY SKILLS
Research Findings
Quick Insights
Stakeholder Engagement
Measuring Success
Mailchimp: Rapid Research Program
Problem
Our product teams needed a way to conduct experiments and gather customer feedback quickly.
HMW Statement: How might we streamline the process for product teams to conduct experiments and collect customer feedback efficiently, ensuring that insights are gathered and implemented rapidly before launching new features or experiences?
Solution: While at Mailchimp, I worked closely with design partners and key stakeholders to develop an early version of that solution: a rapid research program that gives product teams a structured, efficient way to collect customer feedback through moderated usability testing before introducing new features or experiences.
Initiation
In the initiation phase, we can involve up to three squads in simultaneous usability testing. Before each project kick-off, I draft a work-in-progress research plan from the information provided during intake to support effective collaboration with stakeholders and design partners. During the kick-off call, my primary goal is to align everyone on key dates, recruitment criteria, business goals, and success metrics, ensuring everyone's input is considered.
Communication
Clear documentation and communication are essential when working with multiple design partners. As a program manager and UX practitioner, I establish a structured agenda for each meeting to keep discussions focused and productive. I also facilitate asynchronous communication to allow flexibility across time zones, and by providing clear next steps for every research effort, I ensure everyone stays on the same page without confusion or ambiguity.
XFN Feedback
I strongly encourage feedback from our design partners between meetings, as their insights are invaluable to a project's success. I have found that a private channel with regular status updates works best: this dedicated space lets the group focus on the details of the project without being overwhelmed by noise from other channels, and ensures each partner's feedback is not just heard but actively shapes the project's direction.
Planning
After an initial walkthrough of the test materials with the design partners, I finalize all research plans and discussion guides (test scripts), ensuring we are fully prepared for the final walkthrough of the test materials.
Recruitment
Once the final walkthrough of all test materials is complete, I make revisions based on design partners' feedback and begin outbound communication to prepare our recruited test participants. I make sure all necessary documentation is reviewed and signed before testing.
Observation Rooms
To keep stakeholders and design partners included in the research process, I set up observation rooms so they can watch sessions live. In my experience, involving the team from the start leads to a more efficient readout. I also create a FigJam board where design partners can add notes on their observations during testing.
Execution
Once all testing is complete, I begin synthesizing the data. Every researcher approaches synthesis differently; I let the desired outcome guide my approach, because being able to share my work with design partners and key stakeholders is a priority for continuing to build their trust. Their insights are invaluable in guiding design changes, so I prioritize those first: our goal within the program is to move quickly while exceeding our design partners' expectations.
Research Summary
As outlined in our research plans, I answer any questions my design partners have before the readout. This collaborative approach, where everyone's input is actively sought, lets us focus the presentation on what surprised us in the work, surfacing insights that may not have been initially apparent and fostering a strong sense of shared discovery.
Feedback
I consider a readout successful if it prompts few or no clarifying questions. To check the accuracy of my assumptions, I share a survey to gather feedback from my design partners. Asking for feedback regularly has been crucial to my professional development, and while getting responses from busy stakeholders can be difficult, any response is helpful.
Success Metrics
To better understand the impact of the program, we track the following success metrics:
1. Increase in Experiment Volume
Metric: Tracking the number of experiments or usability tests conducted per quarter before and after the program’s introduction.
Goal: Showing a measurable increase in the number of experiments conducted, indicating enhanced capacity for rapid testing.
2. Improvement in Feature Adoption Rates
Metric: Comparing the user adoption rates of features tested with the rapid research program against those developed without its insights.
Goal: Demonstrating higher adoption rates for features informed by rapid research, suggesting the program’s effectiveness in refining product decisions.
3. Enhancement in Customer Satisfaction Scores
Metric: Monitoring changes in customer satisfaction (CSAT) scores for features or experiences tested through the program.
Goal: Achieving an improvement in satisfaction scores, indicating that user feedback has positively impacted product development.
4. Increased Stakeholder Engagement
Metric: Measuring the engagement level of design partners and stakeholders in the research process, such as participation rates in observed testing sessions or in decision-making informed by test results.
Goal: Showing a significant increase in stakeholder engagement, reflecting the program's value in fostering collaboration and informed decision-making.
5. Feedback Quality and Actionability
Metric: Evaluating the quality and actionability of feedback collected through structured assessments or stakeholder surveys.
Goal: Achieving high feedback quality and actionability scores, indicating that the insights generated directly contribute to product improvements.
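To make these metrics concrete, the first three can be reduced to simple before/after comparisons. The sketch below shows one way to compute them; all figures and thresholds are illustrative assumptions, not real Mailchimp data.

```python
# Illustrative sketch of the program's success-metric calculations.
# Every number here is an assumption for demonstration purposes only.

def pct_change(before: float, after: float) -> float:
    """Percentage change relative to a baseline value."""
    return (after - before) / before * 100

# 1. Experiment volume: usability tests run per quarter (assumed counts).
experiments_before = 4
experiments_after = 10
volume_lift = pct_change(experiments_before, experiments_after)

# 2. Feature adoption: mean adoption rate of features tested through the
#    program vs. features shipped without its insights (assumed rates).
adoption_tested = [0.42, 0.51, 0.38]
adoption_untested = [0.30, 0.33, 0.28]
adoption_gap = (sum(adoption_tested) / len(adoption_tested)
                - sum(adoption_untested) / len(adoption_untested))

# 3. Customer satisfaction: CSAT delta for a tested feature
#    (assumed scores on a 5-point scale).
csat_before, csat_after = 3.9, 4.3
csat_delta = csat_after - csat_before

print(f"Experiment volume change: {volume_lift:+.0f}%")
print(f"Adoption gap (tested - untested): {adoption_gap:+.2%}")
print(f"CSAT delta: {csat_delta:+.1f}")
```

Engagement and feedback quality (metrics 4 and 5) come from participation logs and survey scores rather than product telemetry, so they would be aggregated the same way from those sources.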