From Information to Evidence: How Context Informs Product Discovery Decisions
Feb 11, 2025
Product Discovery is meant to reduce uncertainty by gathering evidence—but let’s be honest, too much of a good thing can backfire. It’s easy for Product Teams to get stuck in an endless loop of experiments and interviews, all chasing the mythical “perfect” insight that’ll magically solve everything. To avoid spinning in circles, teams need to focus not just on collecting data but on understanding the context behind it.
Think of Product Discovery like driving on a highway toward a destination that was only vaguely described to you—and your GPS isn’t working. You’re overwhelmed with incoming signs and traffic patterns. You’re anxious about missing the right exit, but if you take the first one too soon, you risk committing to a solution that’s not actually solving a real problem.
You start asking yourself:
Does this landscape look like the place we’re headed?
Why are all these cars getting off here?
Have we seen that same exit name three times already?
This is the trap many Product Teams fall into—taking a premature exit based on weak assumptions, or staying on the highway too long, endlessly analyzing without ever delivering something valuable.
Most of what Product Teams gather during Discovery is just information. But to make confident decisions, you need more than just noise—you need evidence.
This article will walk through how to evaluate the strength of your Discovery inputs by looking at two key dimensions: proximity and commitment. We’ll also share practical examples of how these concepts apply in real-world Discovery scenarios.
Here’s what we’ll cover:
How to distinguish between information and evidence
Why context—especially proximity and commitment—matters
A simple evidence mapping framework
Three case studies showing how to apply this in practice
A reminder on how to avoid endlessly chasing “more data”
Differentiating Information from Evidence
One of the toughest questions during Discovery is: what should we act on?
Do we redesign the homepage just because a few users emailed support about a missing button?
Does a 5-day A/B test tell us enough to solve mobile churn?
Should we build that flashy new feature just because a competitor launched it?
Claude Shannon famously said, “Information is the resolution of uncertainty.” But in product work, that needs a tweak:
“Evidence leads to a reduction of uncertainty.”
Here’s how we’ll define the difference:
Information is raw, often context-free input. It might be a user opinion, a trend, or an observation. Discovery surfaces a lot of this as teams move through interviews, tests, and research.
Evidence is qualitative or quantitative data that either supports or challenges a clear assumption. What makes it useful is context: how it relates to the Discovery goal you’re currently working on.

Why Context Turns Information Into Evidence
The real difference between information and evidence? Context. Without it, data is just noise. With it, that same data can help Product Teams make stronger, more confident decisions.
Product Discovery isn’t just about collecting facts about your users—it’s about figuring out which insights actually help you reduce uncertainty and move forward. It’s not the volume of research that leads to better decisions; it’s how well that research fits the situation you’re in.
Understanding Context: Proximity and Commitment
Yes, teams need to gather useful information about their users. But they also need a clear, shared understanding of which types of insights actually matter in their environment. This isn’t about following some rigid formula—it’s about agreeing on a few solid principles that can guide better decisions.
Two questions help teams sort the useful signals from the noise:
1. How close are you to the source?
Not all insights are created equal. There’s a big difference between hearing about a user problem secondhand and observing it directly. Vague competitor updates or “someone said this in a meeting once” are not strong signals. The closer you are to the actual behavior or conversation, the more valuable the insight becomes.
2. How serious is the commitment?
Saying “I’d definitely use this” isn’t a commitment—it’s wishful thinking. Comments like that, or last-minute requests from executives, are easy to collect but often weak signals. Real commitment looks like behavior: clicking, signing up, paying, switching. The more someone actually does, the more weight their feedback should carry.
By mapping insights across these two axes—proximity and commitment—teams can get a clearer picture of what counts as real evidence. That’s the idea behind Evidence Mapping.

The Four Evidence Quadrants: Mapping Signal vs. Noise
Once you start mapping evidence along the axes of proximity and commitment, patterns emerge. One of the clearest? First-hand data almost always beats secondhand anecdotes. But it's not absolute: what counts as strong evidence still depends on the decision you're trying to make and the data you have available. That's why teams need to build a shared understanding of which signals they weight more heavily in their context: say, firsthand user interviews over competitor feature releases.
A Closer Look at the Quadrants of Evidence Mapping
Let’s break down each quadrant of the evidence grid and what kind of signals live there.
1. First-Hand + Serious Commitment: Your Gold Standard
This is the most reliable type of evidence. You collected it yourself, and it’s backed by real user behavior or decisions. A few examples:
After saying they liked your product, someone went on to recommend it via LinkedIn DMs to five colleagues.
Analytics showing how your top-priority user segment behaves on your product page, compared to what they said they preferred in a survey.
This quadrant helps you make decisions with real confidence.
2. First-Hand + Lip Service: Seems Promising, But Not Enough
Here’s where teams often get tripped up. Just because someone says something in person doesn’t mean it’s a strong signal. This quadrant includes feedback that looks promising but lacks follow-through. Examples:
Feature requests submitted through forms or casual online reviews.
Expert opinions based on personal experience—but not tied to actual user behavior.
To level up these insights, nudge users toward real action: pre-orders, trial sign-ups, referrals. Once they care enough to commit, the feedback starts to carry real weight.
3. Anecdotal + Serious Commitment: Real Effort, Wrong Source
These insights can look strong from the outside—there’s real action involved—but they’re disconnected from your actual users or product context. Examples:
A competitor launched a new feature. It required serious work on their end—but you have no idea what drove the decision.
A government regulation drops. It’s important, sure—but probably not based on user behavior.
In both cases, you’re reacting to someone else’s signals, not your own data. That’s risky.
4. Anecdotal + Lip Service: The Noisy Bottom Left
This is where weak, indirect input lives. It might sound urgent, but it’s vague, low-stakes, and disconnected from actual users. For example:
A user says a feature “might be nice someday” during a support chat, and now it’s a Jira ticket.
A stakeholder brings in a laundry list of UI feedback from a cousin over dinner.
Sure, it’s all well-meaning. But it’s not the kind of insight you want driving product decisions.
The takeaway? Not all data points are equal. And unless you start evaluating the context behind the evidence, you’ll waste time chasing the wrong signals.
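If your team keeps a running list of Discovery inputs, the quadrant logic above is easy to make explicit. Below is a minimal sketch in TypeScript, assuming each insight is simply tagged with the two context dimensions; the field names, quadrant labels, and example inputs are illustrative assumptions, not part of any particular tool or the frameworks cited here.

```typescript
// A minimal sketch of Evidence Mapping: classify an insight by the two
// context dimensions discussed above. All names here are illustrative.
type Proximity = "first-hand" | "anecdotal";
type Commitment = "serious" | "lip-service";

interface Insight {
  summary: string;
  proximity: Proximity;   // How close are you to the source?
  commitment: Commitment; // How serious is the commitment behind it?
}

function quadrant(insight: Insight): string {
  if (insight.proximity === "first-hand" && insight.commitment === "serious") {
    return "Gold standard: act on it with confidence";
  }
  if (insight.proximity === "first-hand" && insight.commitment === "lip-service") {
    return "Promising but unproven: nudge toward real commitment";
  }
  if (insight.proximity === "anecdotal" && insight.commitment === "serious") {
    return "Real effort, wrong source: verify with your own users";
  }
  return "Noise: park it unless other signals point the same way";
}

// Usage: tag incoming inputs and see where they land.
const inputs: Insight[] = [
  { summary: "User referred five colleagues via LinkedIn DMs", proximity: "first-hand", commitment: "serious" },
  { summary: "Feature request from a casual online review", proximity: "first-hand", commitment: "lip-service" },
  { summary: "Competitor shipped a similar feature", proximity: "anecdotal", commitment: "serious" },
  { summary: "A stakeholder's cousin wants a darker UI", proximity: "anecdotal", commitment: "lip-service" },
];
inputs.forEach((i) => console.log(`${i.summary} -> ${quadrant(i)}`));
```

The point isn't the code itself but the habit: forcing every input through the same two questions before it earns a place in your Discovery work.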

Case Studies: Applying Evidence in Product Discovery
To ground this in real-world practice, here are three case studies that show how teams move from weak inputs to strong evidence in Product Discovery.
Note: The following example is based on a fictional company, Medisync, and anonymized to respect NDA agreements.
From Sales Proxies to Co-Creating Evidence in B2B Discovery
At Medisync—a fictional B2B health-tech company—the team’s starting point was a product-level North Star Metric: the number of online bookings made by hospitals.
Since Medisync acquired its hospital customers through an internal sales team, product management received a constant flow of feature suggestions the sales reps claimed hospitals "needed." These inputs were anecdotal at best (secondhand and unverified), but they offered a starting point, especially given how difficult it was to talk directly with end-users inside hospitals.
To move forward, the product team used a MECE (mutually exclusive, collectively exhaustive) tree to break down what could be driving their North Star Metric. They combined these hypotheses with the analytics data they already had. This gave them a directional sense of what was happening based on their own first-hand insights, but it still wasn't real evidence of which features would move the metric.
Eventually, they found one area with real overlap: hospital agendas that weren’t yet available online, but could be. This created a shared space between what sales was hearing and what the product team could investigate further.

From Co-Creation to Real-World Validation
With a direction now grounded in both internal signals and early analytics, the Product Team at Medisync could move forward with confidence. They started facilitating co-creation workshops with actual users—hospital staff and administrators—to validate their assumptions and refine potential solutions directly on-site. This helped turn vague hypotheses into grounded, testable insights backed by real-world interaction.
Shipping to Learn, Not Just to Ship
In most cases, it makes sense to delay building during Discovery as long as you can. But sometimes, shipping early is the fastest way to get to real evidence—if you’re doing it to learn, not just to launch.
Note: The following example is based on a fictional company, CuraLink, and anonymized to respect NDA agreements.
At CuraLink—a fictional B2C healthcare platform—the team faced intense pressure during the early stages of the COVID-19 pandemic. Government mandates around vaccination appointments were shifting by the day. There wasn’t time for weeks of upfront qualitative research.
So the team launched a first version of their appointment-booking flow based on limited, anecdotal information about the new regulations. But they didn’t stop there. They immediately tapped into their doctor network to gather first-hand feedback while the feature was live.
“The speed at which we needed to validate assumptions was unlike anything I’d seen before. We had to ship first, then validate in real time based on user behavior,” one product lead shared.
Once the solution was in place and usage data started rolling in, they used that momentum to gather deeper qualitative insights. This flipped the usual process on its head: rather than asking users how a solution might feel in theory, they could now observe how it worked in practice—and make informed decisions based on actual behavior.

Letting Context Drive Decisions, Not Rigid Playbooks
This example illustrates a critical point in Product Discovery: context shapes how we interpret and act on evidence. Conventional frameworks might push teams through the same cycle of interviews, tests, and feedback loops every time. But Discovery isn’t about blindly following checklists—it’s about reducing uncertainty in a way that fits the situation.
Sometimes that means skipping a phase. Sometimes it means diving deeper into an unexpected insight. In the end, Discovery is a toolkit, not a sequence. And the right tool depends on where you are and what you’re trying to figure out.
Turning Feature Suggestions into Discovery Inputs, Not Backlog Items
Note: The following example is based on a fictional company, CookNest, and anonymized to respect NDA agreements.
At CookNest, a digital recipe platform, the team experienced a noticeable shift in how they approached new feature ideas. In the past, everything went into a standard prioritization framework—one big stack of unrelated suggestions, ranked top to bottom. That approach had its limits.
They moved to a more structured process. Now, all incoming ideas are submitted through a central repository. But—and this is key—just because something is submitted doesn’t mean it gets built.
Instead, the team uses a theme-based roadmap. They group suggestions into broader themes and focus their Discovery efforts on one theme at a time. From there, they explore ideas within that theme based on existing insights.
For example:
They start by checking for stored evidence, such as past user interviews (qualitative) or behavioral data (quantitative).
If the existing data is thin, they go deeper into problem-space research.
If there’s already a strong foundation, they move forward with prototype testing or targeted experiments to validate specific ideas.
This structured-yet-flexible approach helps the team stay aligned on what matters while making space for creativity and learning.
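To make that triage concrete, here is a small TypeScript sketch in the same illustrative spirit as the earlier example; the evidence counts, the "enough evidence" threshold, and the field names are invented for the example and not taken from CookNest's actual process.

```typescript
// Illustrative sketch of the theme-based triage described above.
// The "enough evidence" threshold is an arbitrary placeholder.
interface Theme {
  name: string;
  storedInterviews: number;   // past qualitative insights already on file
  behavioralSignals: number;  // quantitative data points already on file
}

function nextDiscoveryStep(theme: Theme): string {
  const storedEvidence = theme.storedInterviews + theme.behavioralSignals;
  if (storedEvidence < 3) {
    return `${theme.name}: evidence is thin, go deeper into problem-space research`;
  }
  return `${theme.name}: strong foundation, move to prototype tests or targeted experiments`;
}

// Usage: triage two hypothetical themes.
console.log(nextDiscoveryStep({ name: "Meal planning", storedInterviews: 1, behavioralSignals: 0 }));
console.log(nextDiscoveryStep({ name: "Shopping lists", storedInterviews: 4, behavioralSignals: 12 }));
```

In practice the "enough evidence" bar is a team judgment call, not a fixed number.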

Here's how a Senior Product Lead puts it:
“Adding an idea to a theme is not a commitment to building it. Every idea is kind of treated similarly. Some ideas have a lot of information. Others are just like, I thought about this. It could be cool. That doesn't matter so much at this time for us, because what we do with these ideas is we just use these ideas as a signal that there is a theme emerging. The ideas can still be changed a lot, but at least if someone has this idea or observed some kind of user feedback, it's a signal that something is maybe happening in that direction that we want to look into.”
By treating ideas as signals instead of a delivery backlog, the Discovery team at CookNest can dig into the actual problem and explore potential solutions with an open mind, rather than locking into specific feature work too early.
Considering Context Helps Avoid the Trap of Endless Evidence
It’s easy to worry about doing too little Product Discovery. But doing too much can be just as risky. Think back to that highway example: you don’t want to take the first exit too fast, but you also don’t want to drive in circles forever.
So how do you know when you've done enough Discovery?
The key is putting your evidence in context by evaluating it based on proximity and commitment. As seen in the B2B example at Medisync, even the weakest signals can be valuable once you understand how much they're really worth.
Discovery is about reducing uncertainty—not chasing perfect answers.
Every decision about your next Discovery move should be based on your current level of uncertainty, which depends on your product, industry, lifecycle stage, available resources, team skills, and company culture. As Douglas Hubbard puts it in How to Measure Anything, you shouldn’t focus on answering every possible question—you should focus on reducing uncertainty based on what you already know. Then act on that new clarity to decide what to do next.