Hey, I’m Colin! I help PMs and business leaders improve their technical skills through real-world case studies. For more, check out my live courses, Technical Foundations and AI Prototyping.
In this post, I try to frame up processes teams should adopt when leveraging AI for prototyping. I include the lifecycle of a prototype, its purpose, and some best practices when adopting AI in a team setting.
Let’s dive in!
Hypothesis → Prototype(s) → Measurement → Implementation
Product is already rife with frameworks that promote iterative processes, like the Double Diamond design process and Agile planning. It turns out we didn’t invent these ideas – frameworks like Plan-Do-Check-Act from Deming’s quality-management work and Hypothesis-Experiment-Analysis-Conclusion from the scientific method long predate our patterns.
These frameworks all share one common pattern — iterative, data-supported experimentation to achieve a clearly stated goal.
AI tools can enable PMs to get their ideas in front of users in minutes instead of hours, but how do we measure if those ideas are actually good? It starts with a clear hypothesis.
Hypothesis
“Strong opinions, loosely held” is one of the more common phrases I’ve heard to describe high-caliber PMs and leaders. It means you have conviction in the path forward but are still capable of changing your mind when shown evidence you’re incorrect. Said another way, it means you have a hypothesis (and the confidence to back it up).
Oftentimes PRDs start with a simple hypothesis. They state how things are now and how they will be in the future if we implement the change. They outline a vision for the customer’s life being improved and for the business’s growth.
For prototyping, I don’t see a need to change this. As long as you have a clear hypothesis you want to test, any form of documentation, from PRFAQ to Opportunity Solution Tree, will do.
Prototypes
This is where AI dramatically transforms the traditional workflow. Instead of settling for a single solution due to time constraints, product managers can generate several AI prototypes (typically three to five) that explore different approaches to solving the identified problem.
It’s key to remember that these are not intended to capture the exact user interactions or your exact design system. These prototypes are used to communicate the idea itself – does anyone even want this feature? How do people use it when it’s actually in their hands?
Measurement
Measurement is the step most often missing from discussions of AI prototyping. How do we actually know if a prototype helps us achieve our goal?
The main ways to measure are quantitative and qualitative feedback.
User interviews are the most common way to collect qualitative feedback. You observe any friction points, questions, and uncertainties that come up while the user interacts with the prototype. This firsthand experience is invaluable and allows you to further refine your approach.
On the quantitative side, AI prototypes allow you to set up actual product analytics directly on your prototype, from click metrics to user session recordings. Ideally you use this to measure and compare many user experiences at once. Which prototype did the best job of achieving the goal? Is there evidence that your hypothesis is correct?
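To make the comparison concrete, here is a minimal sketch of how you might score several prototype variants against each other from exported click events. The event names (`viewed`, `completed_task`) and the flat `(variant, event)` schema are assumptions for illustration – real analytics tools export richer records, but the comparison logic is the same.

```python
from collections import Counter

# Hypothetical event log: each record is (prototype_variant, event_name).
# In practice these would come from your analytics tool's export.
events = [
    ("variant_a", "viewed"), ("variant_a", "viewed"), ("variant_a", "completed_task"),
    ("variant_b", "viewed"), ("variant_b", "viewed"), ("variant_b", "viewed"),
    ("variant_b", "completed_task"), ("variant_b", "completed_task"),
]

def conversion_rates(events):
    """Task completions per view, for each prototype variant."""
    views = Counter(v for v, e in events if e == "viewed")
    completions = Counter(v for v, e in events if e == "completed_task")
    return {v: completions[v] / views[v] for v in views}

rates = conversion_rates(events)
best = max(rates, key=rates.get)
print(best, round(rates[best], 2))  # variant_b converts at 0.67 vs. variant_a's 0.5
```

Whatever metric you pick (completion rate, time on task, drop-off), the point is the same: define it before user testing so every prototype is judged against the hypothesis, not against your favorite design.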
If none of your prototypes drive improvements, you can either attempt to refine your approach further, or kill the project and move on (which just saved you, your designer, and your engineers a massive amount of time).
Implementation
After we’ve finalized the approach, we can actually implement the feature. Once again, our prototype comes in handy.
When working with designers, you already have a clearly defined vision for what the feature should accomplish. You shouldn’t get too attached to the exact designs or interactions in your prototype – chances are your designer will do a better job.
With engineering, you have a clickable, visual artifact that can be used to drive discussion. From here, we can follow the typical development lifecycle.
Organization & Optimization
These ideas are not just theory – product teams are applying AI with these patterns today. But three main questions keep coming up:
How does the team stay organized?
How does the team collaborate and reduce rework?
How fast can you run this cycle?
I’m still working on solutions in this space, but for now I’ll provide a few best practices:
Create a baseline prototype library: Instead of reimplementing the same starting point each time, create a prototype that matches your existing application, then copy it. Add your new feature on top of the copy. Whenever you need to prototype something new, create a new copy of the baseline.
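The copy step can even be scripted. This is a small sketch, not a prescribed workflow – the directory names (`baseline`, `checkout-redesign`) are hypothetical, and the demo uses a throwaway temp directory so it runs anywhere:

```python
import shutil
import tempfile
from pathlib import Path

def new_prototype(baseline: Path, name: str) -> Path:
    """Copy the baseline prototype into a sibling directory for a new feature."""
    target = baseline.parent / name
    shutil.copytree(baseline, target)
    return target

# Demo with a throwaway baseline; in practice this would be the checked-in
# prototype that mirrors your existing application.
root = Path(tempfile.mkdtemp())
(root / "baseline" / "src").mkdir(parents=True)
(root / "baseline" / "src" / "app.tsx").write_text("// shared starting point")
copy = new_prototype(root / "baseline", "checkout-redesign")
print(sorted(p.name for p in copy.rglob("*")))  # ['app.tsx', 'src']
```

If your prototyping tool has a native "duplicate project" feature, that serves the same purpose – the key is that nobody rebuilds the starting point from scratch.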
Build design systems: Whether you import directly from Figma or use screenshots, a great asset to build is reusable components. This design system can be leveraged to build prototypes across team members so that everything has the same look and feel.
Think AI first: If you’re building a genAI feature, it’s very challenging to prototype in a static design tool. Leverage AI prototyping to test new ideas with real customers by actually using an LLM in your user testing.
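To illustrate why this matters, here is a minimal sketch of a genAI feature wired into a prototype. The `call_llm` function is a stub standing in for a real API request to your model provider (so the sketch runs without credentials); in an actual user test you would swap in a live call, so testers see real model behavior rather than canned mock text:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g. an API request to your LLM
    provider). Stubbed here so the sketch runs without credentials."""
    return f"Summary: {prompt.split(': ', 1)[-1]}"

def summarize_feedback(raw_note: str) -> str:
    """The hypothetical genAI feature under test: turn a raw customer note
    into a one-line summary. This is what testers interact with directly."""
    prompt = f"Summarize this customer note in one sentence: {raw_note}"
    return call_llm(prompt)

print(summarize_feedback("Checkout kept timing out on mobile"))
```

A static mockup can only show a hard-coded response; an interactive prototype like this surfaces the messy, variable outputs an LLM actually produces, which is exactly what you need users to react to.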
Putting It All Together
Once you build the skill of prototyping with AI, the next challenge is adoption. Putting clear processes in place to measure, implement, and organize your team’s prototypes is the first step for product leaders who want to drive AI adoption across their organizations.
To move faster, consider building baseline prototypes and design systems you can copy. Use AI prototyping first when implementing genAI solutions to provide customers with a more realistic feeling for the product.