Just Thoughts #57: What makes a stage talk worth a founder’s time?
A comprehensive scoring framework for evaluating speaker personas, content density, and program relevance.
The first rule of reading these thoughts is always the same: Why should I think this way?
Because the older I get, the more I realize that how we think shapes how we tell our story. And how we tell our story shapes how others understand what we build.
In the last post, I explored the overlaps among athletes, politicians, and artists at the startup stage. Half provocation, half an honest attempt to answer the question:
“Who actually belongs on stage?”
This post continues that thread, but adds the dimension I work with every day now:
The story behind why people speak the way they do.
Because evaluating a talk is, ultimately, assessing a story.
And evaluating a story is assessing the thinking behind it.
This time, we explore both:
A framework for the stage, and a framework for the mind.
Headlines in this edition:
The Honest Confession
The Master Prompt: A Framework for My Mind (and for Storytelling)
Running Slush Founder Stage 2025 Through the Machine (Gemini Experiment)
The Two Talks That Actually Told a Story (Slush Founder Stage 2025)
Building the Framework for who should be on Startup Stages and why
Scoring Talks before booking
Decision Tree for Stage Program Evaluation
The real lesson: Frameworks help assess the quality of the Story
A New Call to Action: New Services to help you tell your story.
1. The Honest Confession (Continued)
I still rarely watch stage programs live. I watch them when the noise has settled, when I can pause, rewind, and take notes.
Distance creates clarity.
Clarity reveals narrative.
And narrative reveals truth.
After writing the previous post, I wanted to test my own thinking more rigorously.
Just Thoughts #56: Why Inspiration Isn’t Enough for a Startup Event
The first rule of Just Thoughts: Why should I think that way?
So I ran the entire Slush 2025 founder stage program through Google Gemini, using a scoring rubric built upon the thinking of the last piece, this time paying equal attention to each talk's storytelling quality.
Who tells a clear story?
Who tells a valuable story?
Who tells a story that actually unblocks someone else?
According to the data, only two talks stood out. And yes… I watched both.
Before we get there, let me explain the tool I used to listen.
2. The Master Prompt: A Framework for My Mind (and for Storytelling)
For years, I’ve been refining what a Master Prompt for program directing looks like. In my mind, it is a tacit way of thinking. In generative AI terms, you’d ask the AI to create a prompt that does the evaluation work for you.
However, generative AI still feels like a tool, not the thinker. It still needs a ground truth, and mine is what I’m laying out in this article.
What I’m trying to say is that a Master Prompt does in seconds what, for me, is a slower thinking ritual for designing stage programs:
A scaffold for clarity.
A way to shape raw intuition into coherent thought.
A filter for noise.
It forces me to:
Anchor to the fundamental question
Strip away verbal decoration
Find the narrative throughline
Listen for what isn’t being said
Translate instinct into structure
And stay consistent with my own logic
It also reflects the work I now do formally: helping founders, operators, and leaders tell their stories through A Founder’s Friend, the service I’ve launched to help people articulate what they truly mean.
And once I began evaluating Slush talks using both the Evaluation Framework and the “Master Prompt”, something clicked:
Talks that work are stories that work.
Stories that work come from thinking that works.
The inverse is also true.
3. Running Slush Through the Machine (Gemini Experiment)
Gemini ingested the program → categorized speakers → evaluated skill overlap → mapped founder relevance → and scored each talk.
This time, I layered story structure into the scoring:
clarity → conflict → insight → resolution → relevance.
Most talks scored fine. Two scored exceptionally well, and only those two held up when viewed through all lenses at once.
The actual decision trees for evaluation and selection frameworks can be found at the end of this post.
But before you skip ahead, here are the results.
4. The Two Talks That Actually Told a Story
Best of Day 1 - “The Unbreakable Founder: Mental Resilience & The Long Game”
Score: 46/50
Speaker: Hanno Renner (CEO, Co-Founder at Personio)
Timestamp: 07:59:01
This talk wasn’t a highlight reel.
It was a truth reel.
No ego.
No bravado.
No manufactured inspiration.
Instead, Hanno speaks openly about:
Headwinds
The pressure to sell prematurely
Nervous system regulation
Breathwork and ice bathing
The discipline required to build over decades
The talk had everything a real founder story needs:
Conflict: psychological and operational
Vulnerability: honesty about fear
Insight: why founders burn out
Utility: actual techniques
A narrative arc: from pressure → to clarity → to endurance
This wasn’t a talk.
It was a recalibration.
As a human listener, there was one caveat: it followed an almost pure question-and-answer format. There was no real dialogue: all beef, no fat.
For a founder who can’t deliver a great talk, or doesn’t have time to practice one, this is the perfect format. Simply answer the questions in plain English; no need to think about structure.
As a human watching people on stage, though, you expect dialogue, the way the best podcasters go back and forth with the people they interview.
However, I think we can conclude that every format has its proper time and place. Given the circumstances, I think this one nailed it.
Best of Day 2 - “The Post-App Era: Community-Led Hardware”
Score: 45/50
Speaker: Carl Pei (Founder, Nothing)
Timestamp: 46:16
Carl delivers something many talks attempt but rarely achieve:
A coherent narrative about the future.
His thesis is bold:
“Hardware isn’t dead. The App Store model is.”
Supporting argument:
AI will generate interfaces dynamically
Apps as static icons will fade
Hardware becomes a stage for intelligence, not a host for apps
Community isn’t a marketing channel. It’s a governance model (Nothing literally elected a community member onto its Board).
Why it scored so high:
He didn’t simply describe the future.
He positioned Nothing inside it.
Clarity.
Originality.
Conviction.
Narrative.
For the human watching, the format is the same as the first, but with more dialogue, and what feels like less prep from Tom, who seems to have a lot to say. Still, Tom does a better job than Harry (who interviewed Hanno) of hyping the founder.
I strongly disagreed with one thing Tom mentioned: that the future Carl is describing allows people to move up Maslow’s Hierarchy.
Maslow’s hierarchy is about reaching self-actualization, which sometimes comes through doing the grunt work yourself. Think of Mr. Miyagi teaching the Karate Kid through “wax on, wax off.”
Even if AI can do the work, finding your way means doing it manually first. That’s me: teaching myself by writing up to 20 hours a week, without fully automated AI assistance, for one year straight.
5. Building the Framework for who should be on Startup Stages and why
Everything from the previous post still holds: these are the personas founders can, and should, learn from:
Athletes (15–25% Overlap)
3 Things Athletes Teach Founders:
Elite mentality & discipline
Performing under pressure
Normalizing failure and bouncing back
Primary Value Type:
Mindset
Major Limitations:
Haven’t built products or served users
Optimize themselves, not user needs
When They Belong on Stage:
When they’ve crossed into venture building
Serena Williams (Serena Ventures)
Kevin Durant (35V)
Nico Rosberg (Rosberg Ventures)
When They Don’t:
When the talk is pure inspiration without applicability
Politicians (≈5% Overlap)
3 Things Politicians Teach Founders:
Media mastery & attention dynamics
Rhetoric that sticks
Empathy at scale (talking to the “single voter”)
Primary Value Type:
Policy intelligence + attention architecture
Major Limitations:
Not builders
High risk of fluff or generic speeches
When They Belong on Stage:
When they unblock ecosystems
EU regulation
Funding frameworks
Innovation policy
Procurement pathways
When They Don’t:
When asked to “inspire,” motivate, or share personal journeys
Artists & Creatives (15–40% Overlap)
3 Things Artists Teach Founders:
Brand and identity building
Narrative creation and a story that travels
Distribution understanding
Primary Value Type:
Identity + narrative + distribution
Major Limitations:
If they’ve never treated their art as a business
No distribution or operational experience
When They Belong on Stage:
When they understand distribution & have built ventures
Jay-Z (streaming/business empire)
Dr. Dre (Beats)
Wu-Tang Clan (strategic scarcity case study)
When They Don’t:
When their craft never extended into business or audience-building
Scientists (25–50% Overlap)
3 Things Scientists Teach Founders:
Structured uncertainty
Deep problem solving
Data-driven reasoning
Primary Value Type:
Deep systems insight
Major Limitations:
Often weak in distribution, branding, or GTM
When They Belong on Stage:
Hard tech, frontier tech, AI, biotech
When They Don’t:
Sessions requiring brand, narrative, or user-facing insight
Chefs (20–40% Overlap)
3 Things Chefs Teach Founders:
Execution under chaos
Repeatable excellence
Experience and sensory-driven design
Primary Value Type:
Operations and quality
Major Limitations:
Not digital-native builders
Limited scaling parallels
When They Belong on Stage:
Leadership
Operations
Customer experience and hospitality
When They Don’t:
Product strategy or technical scaling discussions
Military Leaders (20–35% Overlap)
3 Things Military Leaders Teach Founders:
Clarity under uncertainty
Team cohesion and morale
Decision-making with incomplete information
Primary Value Type:
Leadership under pressure
Major Limitations:
Not customer-centric
Not necessarily experienced in creative or iterative processes
When They Belong on Stage:
Crisis leadership
Execution frameworks
High-pressure operating models
When They Don’t:
Creative, identity, or consumer-facing topics
When you look at speakers through narrative logic, the question shifts from:
“Who is interesting?”
to
“Who helps founders think better?”
That’s who belongs on startup stages.
6. Scoring Talks before booking
A scoring system for evaluating fit based on proposals.
Topic Relevance (0–5)
Is the subject aligned with what founders in the room genuinely need at their current stage, market, and context?
Specificity (0–5)
Is the talk narrowly and concretely defined?
(Not “How to Succeed” but “How We Found PMF in Enterprise AI After 6 Failed Pilots.”)
Speaker Fit (0–5)
Is this speaker actually the right person for this topic?
Do they have lived experience that matches the subject?
Insight Density (0–5)
Does the outline contain real data, frameworks, specifics, examples, and details?
Not vibes, not philosophy—actual substance.
Non-Obviousness (0–5)
Does the talk surface things founders rarely hear?
New ideas, uncomfortable truths, contrarian lessons, or candid reflections?
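Taken together, the five criteria form a simple 0–25 booking score. Here is a minimal sketch in Python; the criterion keys and the validation are my own naming, and since the post defines no booking cutoff at this stage, the function only sums and validates:

```python
# Hypothetical keys for the five proposal criteria described above.
CRITERIA = [
    "topic_relevance",
    "specificity",
    "speaker_fit",
    "insight_density",
    "non_obviousness",
]

def proposal_score(scores: dict[str, int]) -> int:
    """Sum the five 0-5 proposal criteria into a 0-25 booking score."""
    for name in CRITERIA:
        value = scores.get(name)
        if value is None or not 0 <= value <= 5:
            raise ValueError(f"{name} must be scored 0-5")
    return sum(scores[name] for name in CRITERIA)
```

With a dict of five scores, `proposal_score` returns the total, which you can then use to rank proposals against each other.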
7. Decision tree for stage program evaluation
This is for evaluating the quality of presenters who have been on stage.
Step 1 — Who Is On Stage?
1. Are they a founder/operator? If yes → proceed. If no → evaluate cross-over relevance.
2. Are they a cross-over athlete/artist/politician who has built something? If yes → proceed.
3. Are they a domain thought leader in their field (scientist/chef/military/etc)? If yes → proceed.
4. Are they a pure celebrity? If yes → decline.
5. Do the experiences they want to share map directly to founder challenges? If unclear → decline.
Step 2 — What Is the Talk About?
1. Is it actionable? If no → reject.
2. Is it specific? If no → reject.
3. Based on real experience? If no → reject.
4. Does it match the audience's stage? If no → reassign.
5. Is it non-obvious? If no → downscore.
6. Does it unblock many founders? If yes → higher score.
7. High value per minute? If yes → approve.
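The two steps above can be sketched as a small filter function. This is a minimal sketch rather than the author’s actual tooling: the field names and returned labels are my own, while the branching follows the steps in the post.

```python
from dataclasses import dataclass

@dataclass
class Speaker:
    is_founder_or_operator: bool
    is_crossover_builder: bool      # athlete/artist/politician who has built something
    is_domain_thought_leader: bool  # scientist/chef/military leader, etc.
    is_pure_celebrity: bool
    maps_to_founder_challenges: bool

@dataclass
class Talk:
    actionable: bool
    specific: bool
    real_experience: bool
    matches_audience_stage: bool

def evaluate(speaker: Speaker, talk: Talk) -> str:
    # Step 1 — Who is on stage?
    if speaker.is_pure_celebrity:
        return "decline"
    if not (speaker.is_founder_or_operator
            or speaker.is_crossover_builder
            or speaker.is_domain_thought_leader):
        return "decline"
    if not speaker.maps_to_founder_challenges:
        return "decline"
    # Step 2 — What is the talk about?
    if not (talk.actionable and talk.specific and talk.real_experience):
        return "reject"
    if not talk.matches_audience_stage:
        return "reassign"
    return "proceed to scoring"
```

A talk that survives both steps then moves on to the 0–50 scoring frameworks below; non-obviousness and value-per-minute adjust the score rather than gate the decision.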
7.1 FOUNDER / VC TALK EVALUATION FRAMEWORK (0–50 POINTS)
For evaluating talks given by VCs and founders who are active operators, builders, and decision-makers in the startup ecosystem.
This system measures whether the talk delivers real value to other founders, not PR, not mythology, not polished success narratives.
1. Actionability (0–5)
Does the talk give founders practical, concrete insights they can apply next week?
Not theory, but usable tools.
2. Transparency & Honesty (0–5)
Does the speaker reveal how things actually worked?
Failures, doubts, wrong paths, tradeoffs, not a polished hero story.
3. Depth of Insight (0–5)
Does the talk go beyond clichés?
Does it unpack real mechanics, decisions, frameworks, or mental models?
4. Founder-Relevance Fit (0–5)
Is the content relevant to the founders in the room? Is it relevant stage-wise, domain-wise, and context-wise?
5. Experience Credibility (0–5)
Is this person qualified to speak on this exact topic based on lived experience, not reputation?
6. Originality (0–5)
Does the talk offer something new, unexpected, or non-obvious?
Avoids recycled conference wisdom.
7. Clarity & Communication (0–5)
Is the talk structured, clear, logical, and easy to follow?
Can the audience retain the message?
8. User / Audience Grounding (0–5)
Does the speaker anchor ideas in real user, market, or team experiences?
Do founders recognize themselves in the examples?
9. Value Per Minute (0–5)
How much insight is delivered per minute?
Does the talk respect the audience’s limited time and attention?
10. Ego-to-Value Ratio (0–5)
Is the talk about helping the audience, not about glorifying the speaker’s narrative or personal mythology?
Score:
42–50 → Should’ve been on the mainstage
35–41 → Should’ve been on topic-specific stage
25–34 → Should’ve been a discussion
<25 → Should’ve been declined
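The score bands above map directly to a placement function. A minimal sketch, assuming the total has already been summed from the ten 0–5 criteria (the function and label names are my own):

```python
def total_score(criteria: dict[str, int]) -> int:
    """Sum ten 0-5 criteria into a 0-50 talk score."""
    if len(criteria) != 10 or any(not 0 <= v <= 5 for v in criteria.values()):
        raise ValueError("expected ten criteria, each scored 0-5")
    return sum(criteria.values())

def placement(total: int) -> str:
    """Map a 0-50 talk score to the placement bands above."""
    if total >= 42:
        return "mainstage"
    if total >= 35:
        return "topic-specific stage"
    if total >= 25:
        return "discussion"
    return "decline"
```

By these bands, both Slush talks discussed above (46/50 and 45/50) land on the mainstage.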
7.2 NON-VC / NON-FOUNDER TALK EVALUATION FRAMEWORK (0–50 POINTS)
For evaluating talks given by athletes, politicians, artists, scientists, chefs, military leaders, creators, journalists, and other non-operator personas.
This system measures whether the talk gives founders real value, not inspiration, not personality, not career nostalgia.
1. Transferability of Insight (0–5)
How directly can the lesson be applied to building a company?
Does it map cleanly into founder reality?
2. Relevance to Founder Challenges (0–5)
Does the talk address a genuine founder problem?
Examples: pressure, identity, narrative, distribution, leadership, execution, policy barriers.
3. Depth of Expertise (0–5)
Is the speaker showing mastery of their domain?
Not surface-level or anecdotal.
4. Practical Extraction (0–5)
Can the audience extract actionable principles from the talk?
Can they write down three things they can do differently on Monday?
5. Domain-to-Founder Bridge (0–5)
Does the speaker successfully connect their world to the founder world?
Athlete → discipline
Artist → distribution
Politician → attention
Scientist → structured uncertainty
Chef → operations under chaos
Military → clarity under pressure
6. Originality of Perspective (0–5)
Is the talk non-obvious, fresh, unexpected?
Avoids clichés (“resilience,” “believe,” “follow your passion”).
7. Narrative Coherence (0–5)
Is the talk structured, clear, and easy to follow?
Story arc, logic, clarity, pace.
8. Humility-to-Self-Myth Ratio (0–5)
Does the speaker avoid “my heroic journey” storytelling?
Is it about the audience, not their ego?
9. Audience Engagement / Relatability (0–5)
Does the founder audience connect with the examples?
Do the metaphors fit a startup context?
10. Value per Minute (0–5)
How dense is the insight?
Is the talk worth the founders’ time?
Because non-founder speakers can add real value, but only if:
Their insights are transferable
Their perspective is relevant
Their talk is actionable
Their ego doesn’t overshadow the learning
They can bridge their world to ours
This filters out:
Celebrity monologues
Politician self-promotion
Artist mystique stories
Athlete “work harder” clichés
Scientific theory lectures
Military war stories without translation
And ensures only meaningful, usable, high-ROI content reaches founder stages.
8. The Real Lesson: Frameworks help assess the quality of the Story
The Evaluation Framework gives structure to how I evaluate others. The way I use it provides structure to my self-evaluation. AI helps with clarity, reduces bias through scoring (hopefully), and scales assessments at speed.
That translates into the lessons.
Do not take this at face value; every program curator should create their own evaluation framework, one that fits their event and the feel they want to create for their specific audience, online or offline. Curating the framework has now become the work.
AI does not substitute for human perception and experience. It can simply optimize the things you ask it to optimize, while completely missing something obvious to you as a human.
Nonetheless, somewhere between frameworks, intuition, and evaluations sits the work I now do every day:
Helping founders turn their thinking into stories…
and their stories into clarity.
Watching the two Slush founder stage talks reminded me of something simple:
Good stories reveal good thinking.
Good thinking reveals good leadership.
Good leadership reveals good design.
This is why storytelling is not branding.
Not fluff.
Not style.
It’s coherence.
It’s meaning.
It’s identity.
It’s the thing we build long before the product exists.
9. A New Call to Action
I’ve launched a new site — A Founder’s Friend — focused entirely on helping founders, leaders, and operators tell better stories:
👉 https://www.afoundersfriend.com/services
If you explore the page and see something you think you can help with, or if you’re curious about working together, reach out.
Until next time! Keep thinking, keep questioning! Some unique storytelling is coming up with fellow Substack writers Rob Snyder & Richard Makara about GTM. And no, I have not yet run my analysis on it, so it will be fun to see whether that chat would have landed on a startup mainstage or not!