Going from feature factory to continuous discovery in 1 year
The specific process changes we made each quarter and what they led to.

👋 This is a summary of a talk I gave at Dallas Product Camp. View the entire talk on YouTube →
Changing process is hard. It introduces risk, requires behavior change, and needs to be maintained. Typically we already have processes in place, which creates a natural pull back to the old, familiar way of doing things. I call this process inertia.
For this reason, I like to make changes incrementally. This helps prevent us from shocking the system too much, overwhelming the team, and ultimately succumbing to process inertia.
Although process change can be risky, the payoff can also be huge. In our case, shifting from feature factory to continuous discovery has been a massive win.
This is not a playbook. Rather, this is my account of my experience with my team. I focus on the specific organizational and process changes we made over the course of 1 year.
Overview
2019 Q4
- 🏭 Feature Factory
- 👥 Siloed by role
- 🚢 Output-focused goals

2020 Q1
- 🤝 Autonomous, cross-functional teams
- 📜 Introduced “Team Charters” for vision/strategic direction at the team level

2020 Q2
- 🔄 Dual-Track Agile
- 🏆 Outcome-focused goals
- 📈 Team-level KPIs

2020 Q3
- 🚀 Rapid experimentation
- 🏃‍♀️ Design sprints
2019 Q4
At the end of 2019, this is roughly how our product and engineering organization was structured:
Although collaboration and agile things were happening… when you zoomed out, we were a bit siloed by role, and ideas tended to flow from left to right.
Leadership typically came up with ideas, then partnered with UX to flesh them out. From there, a feature was assigned to a PM, who would then partner with Engineering to plan development. This is when things got agile.
We iterated through releases and, to be fair, we did remain agile in how we built each feature. The problem was that we were already committed to what we were going to build.
At a high level, our process looked something like this:
This is not all bad. There were pros and cons to this way of working.
Pros
- We got really good at building features
- We got really good at shipping… I mean, really good
Cons
- Stressful for our Engineering Leads, who were each managing the development of 2–3 features across a single team
- Hard to “turn the ship.” Once we started being agile, we were already committed to the feature
- Our goals were output-focused. We had a predefined solution going into a goal, and success was measured by the delivery of that solution
2020 Q1
At the beginning of 2020, we made one fundamental organizational change: we introduced Cross-Functional Product Teams.
This change was owned by our Product and Technology leadership and directors. It’s something we had been discussing for 2+ years at this point, and finally, we did it 👏
Rather than being siloed by role, we split product and engineering into teams made up of product management, UX, engineering, QA, business analysts, and account managers. There were two primary drivers of this change: 1) empower the teams with autonomy, and 2) reduce the stress on our engineering leads.
Another thing we introduced at this time was Team Charters. These are living documents that capture the purpose, strategy, and vision for the team. They serve three primary functions:
1. Alignment within the team
2. Coordination across teams
3. Accountability to executives
Outcomes
- Teams started setting their own roadmaps (with plenty of input from stakeholders)
- Product Management, UX, and Engineering started collaborating more at a strategic level
- Product ideas started to come in from more roles, e.g. Business Analysts and Account Managers
2020 Q2
In Q2 we introduced three process changes:
1. Dual-Track Agile 🔄
2. Outcome-focused goals 🏆
3. Team KPIs and metrics 📈
Dual-Track Agile
In order to “continuously discover” we needed to separate discovery from delivery, and dual-track is what accomplishes this. Essentially you have two Kanban boards, one for each track. As user stories flow through discovery, they are either validated or deprioritized. Validated stories then go into the backlog for your delivery track.
Our process started to look less linear and more cyclical. There are many cycles within discovery as well as within delivery. In addition, there are feedback loops between discovery and delivery. It’s difficult to capture the collaboration dynamics in a simplified model (like the one above); however, the main point is that there are many opportunities for cross-pollination of ideas and data between the two tracks.
Outcome-focused goals
These are goals that focus on the outcome of the work, rather than the solution itself. They do not specify what exactly is going to be built; rather, they specify which metric will move.
For example:
Output-focused goal → “Ship v1 of the Satisfaction Pulse”
Outcome-focused goal → “Increase the percentage of weekly completed surveys by 25%”
Don’t give the team a solution and say, “build this.” Instead, give them a goal and let them figure out how to get there.
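An outcome-focused goal only works if everyone agrees on exactly how the metric is defined and measured. As a minimal sketch of what that might look like for the example above (the event fields, data, and numbers here are hypothetical, not our actual survey schema), a weekly completion percentage and its target could be computed like this:

```python
# Hypothetical sketch only: field names, data, and the 25% lift target are
# illustrative, not our real survey schema or numbers.
from datetime import date

# Each record: (user_id, week_start, started_survey, completed_survey)
survey_events = [
    ("u1", date(2020, 6, 1), True, True),
    ("u2", date(2020, 6, 1), True, False),
    ("u3", date(2020, 6, 1), True, True),
    ("u4", date(2020, 6, 8), True, True),
    ("u5", date(2020, 6, 8), True, False),
]

def weekly_completion_rate(events, week_start):
    """Percent of surveys started in the given week that were completed."""
    started = [e for e in events if e[1] == week_start and e[2]]
    completed = [e for e in started if e[3]]
    return 100.0 * len(completed) / len(started) if started else 0.0

baseline = weekly_completion_rate(survey_events, date(2020, 6, 1))
target = baseline * 1.25  # "increase ... by 25%" read as a relative lift
print(f"baseline: {baseline:.1f}%  target: {target:.1f}%")
```

The code itself is not the point; the agreement is. Everyone on the team knows exactly which number has to move and by how much.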
Team KPIs and Metrics
We already had KPIs and metrics at the departmental level; however, we needed to build them out for the teams.
This is absolutely essential if you’re going to adopt outcome-focused goals, because you need a way to measure your outcome. Ideally, the team KPIs are designed as leading indicators for one or more departmental-level KPIs. This helps ensure the teams are coordinated towards a shared outcome.
To define these, we held a North Star Metric workshop to identify our leading and lagging indicators at a team and departmental level.
Outcomes
- The discovery track allowed us to descope lots of ideas before building them. Through discovery, we were able to identify the shortcomings of an idea early.
- We started to iterate several times on an idea before building it, ensuring that we were satisfied with the design and reducing rework.
- Engineering started to build feasibility prototypes. When our designer’s plate was full and we wanted to prototype an idea, engineering started to jump into discovery and build prototypes. These became inputs into the designer’s work when their bandwidth opened back up.
2020 Q3
At this point, we had laid much of the foundation necessary to move through discovery at a rapid pace. We had aligned, autonomous teams, a formalized discovery track, and metrics in place to measure outcomes.
In this quarter we introduced two discovery processes:
1. Rapid experimentation 🚀
2. Design sprints 🏃‍♀️
Experimentation Principles
Our goal in experimentation is to gather quantitative data to validate ideas. Over the course of Q3 we developed a short list of experimentation principles:
- Ownership → Take ownership of any experiment you’re working on. We aim to move fast, and in order to do this we need to have a bias towards action.
- Communication → We need to be aligned within the team and also across teams when running experiments. Communication is key.
- Iterate & Innovate → We default to iterating on what already exists in an innovative way, because we can learn more rapidly this way. We build on top of recent experiments and features already in production. Sometimes we also test a completely new concept, but this is more costly, so we reserve it for the occasional experiment.
- Learn every week → Our goal is to learn something every week. Ideally we are learning from experiments we launched. However, if we fail to launch an experiment one week, it is still valuable to learn from the missed launch and try to improve our process.
- Purposeful → Experiments should move us towards our goals, they should be relevant and timely, and they should only run as long as they need to. We try to end an experiment once we’ve captured a learning so that we don’t forget about experiments running in the background.
- Share results → We need to share our learnings with other teams and other departments. This helps us broadcast the value of this approach as well as build a culture of learning.
Key Experimentation Roles
In addition to experimentation principles, we also defined a list of key experimentation roles that are necessary for us to launch experiments successfully:
- Technician → responsible for implementing the experiment and ensuring that it works properly.
- Experiment Designer → defines the test, the variations, any associated segmentation, and the desired outcome of the experiment.
- Analyst → ensures the necessary metrics are in place to measure the experiment results; if they are not, builds them out or handles the post-experiment analysis (a simple sketch of this kind of analysis follows this list).
- Copywriter → owns and gets the necessary sign-off on any copy in the experiment. Copy became a common blocker for us when it lacked clear ownership, so we made it its own role.
- Coordinator → coordinates with other teams and departments, ensuring the necessary stakeholders are aware of the experiment and that any concerns have been addressed prior to launch.
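For illustration, here is a minimal sketch of the kind of post-experiment analysis the Analyst role covers: comparing completion rates between a control and a variant with a simple two-proportion z-test. The counts below are made up, and an experimentation platform will often do this calculation for you.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical results: control vs. variant survey completions
p_a, p_b, z, p = two_proportion_ztest(successes_a=180, n_a=1000,
                                       successes_b=225, n_b=1000)
print(f"control: {p_a:.1%}  variant: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```

However you run the numbers, the point is that the Analyst makes the result legible to the rest of the team before the next experiment starts.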
Design Sprints
I’m not going to write a lot about design sprints (there’s a book!). However, I will note that we expanded our design sprint from 1 week to 3 weeks. There were a few reasons for this:
- Everyone’s busy
- We had other responsibilities to maintain during the design sprint
- We could not afford to set everything else aside while running the sprint
Conclusion
It has been quite a transformation in the span of one year, and the outcomes have been incredible. For example, through experimentation we were able to beat our all-time KPI record by 260%. Truly amazing!
I will say, however, practicing continuous discovery is in some ways harder than working as a feature factory. It requires that you rest in the mystery and become even more comfortable with ambiguity. For example, we rarely know what we’re going to be building in three months. Rather, we have a general idea of the direction we’re heading in, and we’re rapidly iterating along that path while closely monitoring success metrics.
Practicing continuous discovery requires more communication as well. Because there is less long-term clarity around what you’re building, all members of the team need to stay in consistent communication.
Although continuous discovery is harder in some ways, it’s also more fun and the payoff is bigger. When you’re working as a feature factory, you know what you’re building but you may not care if it’s successful. This can be a bit discouraging. In continuous discovery, you’re constantly measuring your impact and have a greater sense of ownership over your work. I’ve found that this generally elevates motivation and increases professional fulfillment.
I hope this story is helpful for you. I can’t wait to hear your questions and comments!
This is a summary of a talk I gave at Dallas Product Camp. View the entire talk on YouTube