AI Symposium 2026: simulating a conference with agentic personas
· Daniel Rosehill


An experiment in perspective synthesis — using AI agents to simulate a conference where diverse personas deliver speeches on AI's impact.

What would happen if you assembled a room full of people -- HR professionals, CEOs, doctors, artists, hardened skeptics, breathless AI enthusiasts -- and had each of them stand up and deliver a three-minute speech about AI's impact from their unique vantage point? In real life, organizing something like that would be a logistical nightmare: coordinating schedules across industries, booking a venue, dealing with the inevitable cancellations and ego management. But with AI agents, you can spin up the whole conference in an afternoon. That's the premise behind AI Symposium 2026.

Perspective synthesis through agentic AI

AI Symposium 2026 is an experiment in what I've started calling perspective synthesis -- using AI agents to generate diverse viewpoints on a common theme. The concept is intentionally clean: an orchestration agent distributes prompts to sub-agents, each configured with a distinct persona and professional worldview, and each agent delivers a speech. No rebuttals, no debates, no complex interaction choreography. Just a collection of perspectives, compiled into conference proceedings.
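The fan-out described above can be sketched in a few lines. This is a minimal illustration, not the repo's actual implementation: the `generate` callable stands in for a real LLM API call, and all names here are my own assumptions.

```python
# Minimal sketch of the orchestration pattern: one coordinator fans a shared
# theme out to persona-configured sub-agents, each of which speaks once.
# No inter-agent interaction, no rebuttals -- just parallel one-shot speeches.
from dataclasses import dataclass


@dataclass
class Persona:
    name: str
    background: str  # the professional worldview baked into the prompt


def build_prompt(persona: Persona, theme: str) -> str:
    # Each sub-agent receives the shared theme plus its own persona framing.
    return (
        f"You are {persona.name}, {persona.background}. "
        f"Deliver a three-minute conference speech on: {theme}"
    )


def run_symposium(personas: list[Persona], theme: str, generate) -> dict[str, str]:
    # One-shot fan-out: every speaker talks once, independently of the others.
    return {p.name: generate(build_prompt(p, theme)) for p in personas}


# Usage with a stubbed generator in place of a real model call:
personas = [
    Persona("Dr. Adams", "a rural GP reflecting on AI diagnostic tools"),
    Persona("Ms. Chen", "a skeptical HR director at a mid-size firm"),
]
speeches = run_symposium(
    personas, "AI's impact on work", generate=lambda prompt: f"[speech for: {prompt}]"
)
```

Because the agents never read each other's output, the whole loop is embarrassingly parallel: swap the dict comprehension for a thread pool and the conference scales to hundreds of speakers.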

GitHub: danielrosehill/Claude-AI-Conference

I chose the conference format deliberately, and it's worth explaining why. A panel discussion needs rebuttal orchestration -- agents have to read and respond to each other's arguments, which introduces a lot of complexity around turn-taking, relevance, and coherence. A think tank needs institutional memory -- agents need to build on previous sessions and maintain consistent positions over time. A conference is a one-shot event: everyone speaks once, in parallel, and you get diversity of viewpoint without requiring the agents to interact with each other at all. It's the lowest-overhead format for maximum perspective diversity.

The output formats I'm excited about

I'm planning two output tracks. The first is a PDF with all the speeches concatenated, each preceded by a brief introduction of the agent's persona and professional background. This is the straightforward reading experience -- flip through and absorb diverse perspectives at your own pace.

The second, and the one I'm genuinely most excited about, is an audio version rendered through TTS. Picture this: an MC agent introduces each speaker, there's voice diversity across the different personas, and you can listen to the whole conference as a podcast. A conference that never happened, with speakers who don't exist, producing perspectives that are genuinely novel. There's something delightfully surreal about that, and I think the audio format makes the experience far more immersive than reading text on a screen.
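The audio track reduces to an ordered script: an MC segment before each speech, with each persona mapped to its own TTS voice. A rough sketch of that assembly step, with all function and voice names being my own placeholders rather than anything from the project:

```python
# Sketch of the audio-script assembly for the TTS track: an MC intro
# precedes each speech, and every persona gets a distinct voice. Each
# (voice, text) pair would later be fed to a real TTS engine.
def build_audio_script(
    speeches: dict[str, str], voices: dict[str, str]
) -> list[tuple[str, str]]:
    segments = []
    for speaker, text in speeches.items():
        # The MC agent introduces each speaker in a dedicated voice.
        segments.append(("mc_voice", f"Please welcome our next speaker, {speaker}."))
        # The speech itself is rendered in the persona's assigned voice.
        segments.append((voices[speaker], text))
    return segments
```

Rendering each segment separately and concatenating the audio keeps voice assignment trivial and lets individual speeches be re-synthesized without redoing the whole conference.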

Part of a broader exploration

This project doesn't exist in isolation. I've been running a whole series of experiments with multi-agent AI for perspective synthesis, each exploring a different interaction format. There's Panel of Claudes (multi-agent debate with rebuttal mechanics), AI Agent UN (simulated General Assembly with structured voting), and Peace in the Middle East (geopolitical ideation challenge). Each format teaches me something different about how multi-agent interactions produce emergent value -- or don't.

GitHub: danielrosehill/Panel-Of-Claude · danielrosehill/AI-Agent-UN · danielrosehill/Peace-In-The-Middle-East

The conference format sits at the simplest end of this spectrum, which is exactly its strength. When you don't need agents to interact, you can focus entirely on the quality of individual perspectives and the diversity of the overall collection. The orchestration complexity drops to nearly zero, letting you scale to dozens or even hundreds of speakers without worrying about conversational coherence.

What I've learned so far

The project is currently in its design phase -- I'm defining agents, refining the methodology, and working out the implementation approach. One early lesson: the quality of persona definition matters enormously. A prompt that says "you are a doctor" produces generic output. A prompt that says "you are a rural GP in her 50s who has watched AI diagnostic tools change her practice over the past three years" produces something with real texture and specificity. The personas need backstory, professional context, and emotional stakes to generate speeches that feel like they come from a real conference rather than a prompt factory.
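That contrast between flat and textured personas can be made concrete. The field names below are my own invention for illustration; the repo may structure its persona definitions entirely differently:

```python
# Illustrative contrast between a flat persona and a textured one.
# A flat persona yields generic output; backstory, professional context,
# and emotional stakes give the resulting speech real specificity.
generic = {"role": "doctor"}

textured = {
    "role": "rural GP",
    "backstory": "in her 50s, practicing in the same small town for decades",
    "professional_context": (
        "has watched AI diagnostic tools change her practice "
        "over the past three years"
    ),
    "emotional_stakes": (
        "torn between the efficiency gains and the erosion of patient rapport"
    ),
}


def to_system_prompt(persona: dict) -> str:
    # Flatten the persona fields into a single system prompt string.
    parts = [f"You are a {persona['role']}"]
    parts += [value for key, value in persona.items() if key != "role"]
    return ", ".join(parts) + "."
```

The point is less the data structure than the discipline: every persona should answer who the speaker is, what they have lived through professionally, and what they stand to lose or gain.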

The project is open source, and I'd love contributions -- especially from people who have ideas for interesting personas or want to help build the TTS audio pipeline. Check it out on GitHub.

GitHub: danielrosehill/Claude-AI-Conference