Visioning Assembly
Multi-group deliberative assembly at scale (14-7,600 agents)
In November 2010, we at Agora Innovation took on a commercial assignment for the Parliament of Iceland to do something no nation had tried before: facilitate a National Forum of 950 randomly sampled citizens to deliberate on the country's constitutional future.
Most were ordinary people: teachers, fishermen, nurses, engineers, seated alongside sitting ministers and subject experts. They sat at tables of nine, each with a facilitator. Over one day, they moved through four structured sessions: values, vision, obstacles, recommendations. Between sessions, our facilitation process clustered votes into themes and redistributed people to new tables based on what they cared about most. That redistribution was the true innovation: people stopped talking to their table and started talking to their conviction.
The output became a guiding light for Iceland's constitutional reform.
We call this the Visioning Assembly. BASAL recreates it using agents.
What changes when you remove the conference hall
The physical assembly had constraints. Nine hundred fifty people needed a building, a date, facilitators, catering, and a day off work. The methodology was brilliant. The logistics were brutal.
Remove the building. Remove the date. Remove the travel. What's left is the methodology itself: structured divergence, voting, theme clustering, group redistribution, convergence, and synthesis. That structure is what produces collective intelligence. The room was just the container.
BASAL runs the container. Every participant is an AI agent, each with a distinct perspective, grounded in your actual documents. Ninety agents sitting at ten parallel tables, each table guided by a dedicated facilitator agent. They brainstorm on cards, vote, get redistributed by theme, stress-test each other's thinking, and synthesize. Nineteen phases. The full Icelandic methodology. In minutes, not months of planning.
These are not generic chatbots producing generic answers. Each agent has read your strategy documents, your board minutes, your market research. When one says "this contradicts what the Q3 report showed," it is citing a specific page. When another pushes back, it draws on different evidence from the same knowledge graph. The disagreements are real. The evidence is yours.
The 19 phases
Four sessions, interleaved with processing and transitions:
Session 1: Values (Phases 1-4)
| Phase | Type | What happens |
|---|---|---|
| 1 | Introduction | Participants introduce themselves: who they are, what they bring |
| 2 | Discussion | Values brainstorm. One idea per card. Divergent thinking. |
| 3 | Voting | Each participant votes for the 5 cards that resonate most |
| 4 | Processing | Aggregate votes across all groups. Cluster into themes. |
Transition: Redistribute (Phase 5)
Participants leave their home tables and move to theme-based groups. Each person joins the theme they voted for most strongly. New groups form around shared conviction.
This is the move that makes assemblies smarter than panels. A panel keeps the same group together. An assembly reshuffles people by what they actually care about.
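The redistribution step can be sketched in a few lines. This is an illustrative simplification, not BASAL's internal data model: the vote shape (participant to per-theme weights), the function name, and the table size of nine (mirroring the Icelandic forum) are all assumptions.

```python
from collections import defaultdict

def redistribute(votes, max_group_size=9):
    """Regroup participants by the theme they voted for most strongly.

    `votes` maps participant -> {theme: vote_weight}. Hypothetical shape;
    the real BASAL schema may differ.
    """
    zones = defaultdict(list)
    for participant, theme_votes in votes.items():
        # Each person joins the theme that received their strongest vote.
        top_theme = max(theme_votes, key=theme_votes.get)
        zones[top_theme].append(participant)
    # Split oversized theme zones into tables of at most `max_group_size`.
    tables = []
    for theme, members in zones.items():
        for i in range(0, len(members), max_group_size):
            tables.append((theme, members[i:i + max_group_size]))
    return tables

tables = redistribute({
    "alice": {"education": 3, "health": 1},
    "bjorn": {"education": 2, "economy": 4},
    "clara": {"education": 5},
})
```

Alice and Clara end up at the same education table even though they started at different home tables; that is the conviction-based reshuffle in miniature.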
Session 2: Vision (Phases 6-8)
| Phase | Type | What happens |
|---|---|---|
| 6 | Discussion | Build a bold vision statement for this theme. Grounded in evidence. |
| 7 | Voting | Vote for vision statements by feasibility, impact, and alignment |
| 8 | Transition | Return to home tables, carrying theme-zone insights back |
Session 3: Obstacles (Phases 9-11)
| Phase | Type | What happens |
|---|---|---|
| 9 | Inversion | Stress test the emerging vision. What could go wrong? |
| 10 | Voting | Vote for the most critical obstacles to address |
| 11 | Processing | Aggregate obstacles. Cross-reference with vision themes. |
Speaking order flips here. Creative phases use junior-first (fresh perspectives before anchoring). Stress-testing uses senior-first (experience spots risks that optimism misses).
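The flip is a one-line ordering rule. A minimal sketch, assuming each agent carries a numeric seniority attribute (the attribute name and phase-type labels are illustrative, not the BASAL schema):

```python
def speaking_order(agents, phase_type):
    """Order speakers by seniority: junior-first for creative phases,
    senior-first for stress-testing. `agents` is a list of
    (name, seniority) pairs; higher seniority means more experienced.
    """
    senior_first = phase_type in {"inversion", "obstacles"}
    ranked = sorted(agents, key=lambda a: a[1], reverse=senior_first)
    return [name for name, _ in ranked]

roster = [("nora", 2), ("omar", 9), ("pia", 5)]
creative = speaking_order(roster, "discussion")  # junior speaks first
stress = speaking_order(roster, "inversion")     # senior speaks first
```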
Session 4: Recommendations (Phases 12-13)
| Phase | Type | What happens |
|---|---|---|
| 12 | Synthesis | Concrete proposals. Each must reference a value AND an obstacle. |
| 13 | Voting | Select the recommendations you would stake your reputation on. |
Plenary (Phases 14-18)
| Phase | Type | What happens |
|---|---|---|
| 14 | Presentation | Each group presents their top recommendations to the full assembly |
| 15 | Processing | Cross-group synthesis. Where did groups converge? Where did they diverge? |
| 16 | Discussion | All participants react. What surprised you? What's missing? |
| 17 | Voting | Final assembly-wide vote. Supermajority threshold. |
| 18 | Output | Constitutional-style vision. Ranked recommendations. Minority reports. |
Scale
| Scale | Agents | Groups | Masters | Ideas | Votes | Syntheses |
|---|---|---|---|---|---|---|
| Small | 14-27 | 2-3 | 0 | ~400 | ~500 | 1 |
| Medium | 45-90 | 5-10 | 0 | ~2,000 | ~2,500 | 3 |
| Large | 90-270 | 10-30 | 3-6 | ~8,000 | ~9,000 | 12 |
| National Forum | 270-950 | 30-106 | 6-21 | ~30,000 | ~35,000 | 42 |
| National Assembly | 1,900 | 212 | 42 | ~65,000 | ~72,000 | 84 |
| Giga Assembly | 7,600 | 844 | 169 | ~260,000 | ~290,000 | 338 |
Read that last row again. A quarter-million distinct ideas, generated by 7,600 agents who have each read your documents, voted on across 844 groups, synthesized 338 times. In one run. What comes out is not 260,000 ideas. It is one document: a ranked set of recommendations, each traced to evidence, stress-tested by adversarial rounds, and scored by cross-group consensus. The assembly distills a quarter-million inputs into the 12 things that actually matter.
The physical National Forum stopped at 950 because that was the size of the conference hall. BASAL has no conference hall.
At large scale, zone masters emerge automatically to coordinate groups within a theme zone. The topology adapts to the size of the problem.
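The adaptive topology follows roughly from headcount. A sketch under stated assumptions: tables of nine mirror the Icelandic forum, while the five-groups-per-master ratio and the threshold at which masters appear are illustrative guesses that approximate the table above, not BASAL's exact rules.

```python
def topology(participants, table_size=9, groups_per_master=5,
             master_threshold=90):
    """Derive an approximate assembly topology from headcount.

    Ratios are illustrative assumptions fitted to the scale table,
    not BASAL's published formula.
    """
    groups = max(2, round(participants / table_size))
    # Zone masters only emerge beyond the assumed threshold.
    masters = round(groups / groups_per_master) if participants > master_threshold else 0
    return groups, masters

groups, masters = topology(7600)  # Giga Assembly scale
```

For 7,600 agents this yields on the order of 844 groups coordinated by about 169 zone masters, matching the Giga Assembly row.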
Three things no other system does
Confidence-weighted voting
Agents don't just vote yes or no. They express how confident they are. Three weighting modes:
- Linear: vote weight = confidence. Simple, interpretable.
- Quadratic: vote weight = confidence². Amplifies conviction, suppresses "I'll vote but I'm not sure."
- Calibrated: vote weight = confidence × calibration factor, where the calibration tracks how well this agent's confidence has predicted outcomes in past assemblies.
A high-confidence minority of three outweighs a low-confidence majority of twenty. This is how you prevent groupthink from drowning out the agent who actually knows.
Based on the confidence-weighted voting scheme of the ReConcile round-table framework for multi-agent LLM reasoning.
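The three modes reduce to one weighting function. A minimal sketch: the vote tuple shape and `calibration` mapping are illustrative assumptions, not BASAL's API.

```python
def weighted_tally(votes, mode="linear", calibration=None):
    """Tally confidence-weighted votes.

    `votes` is a list of (agent, option, confidence) with confidence
    in [0, 1]; `calibration` maps agent -> calibration factor learned
    from past assemblies. Shapes are illustrative assumptions.
    """
    calibration = calibration or {}
    tally = {}
    for agent, option, conf in votes:
        if mode == "linear":
            w = conf                               # weight = confidence
        elif mode == "quadratic":
            w = conf ** 2                          # amplifies conviction
        else:                                      # "calibrated"
            w = conf * calibration.get(agent, 1.0)
        tally[option] = tally.get(option, 0.0) + w
    return tally

# Two highly confident voters vs. four hesitant ones.
votes = [("a", "A", 0.9), ("b", "A", 0.9), ("c", "B", 0.4),
         ("d", "B", 0.4), ("e", "B", 0.4), ("f", "B", 0.4)]
linear = weighted_tally(votes)                   # A ~ 1.8, B ~ 1.6
quadratic = weighted_tally(votes, "quadratic")   # gap widens under squaring
```

Under quadratic weighting the confident minority's lead grows, which is exactly the anti-groupthink behavior described above.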
Identity bias detection
After the assembly runs, BASAL computes the Identity Bias Coefficient (IBC), measuring two dimensions from recent research on multi-agent systems:
- Sycophancy. Did high-rated agents' ideas get disproportionate votes relative to content quality? Measured as residual correlation between agent credibility and vote count after controlling for evidence grounding score.
- Obstinacy. Did certain positions persist unchanged despite contradicting evidence? Measured as the fraction of round-1 ideas that survived to the final round without substantive modification.
IBC ranges from 0.0 (no identity bias) to 1.0 (fully identity-driven). A healthy assembly scores below 0.3. You can't fix bias you can't measure.
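A toy version of the coefficient can be computed directly from idea records. This is an illustrative simplification, not BASAL's exact estimator: sycophancy is approximated here as the partial correlation of author credibility and votes controlling for evidence score, obstinacy as the unmodified-survival fraction, and the equal 50/50 weighting is an assumption.

```python
import math

def ibc(ideas):
    """Toy Identity Bias Coefficient in [0, 1].

    Each record: (author_credibility, evidence_score, votes,
    survived_unchanged). Illustrative simplification only.
    """
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        vy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (vx * vy) if vx and vy else 0.0

    cred = [i[0] for i in ideas]
    evid = [i[1] for i in ideas]
    vote = [i[2] for i in ideas]
    # Partial correlation of credibility and votes, controlling for evidence.
    r_cv, r_ce, r_ve = corr(cred, vote), corr(cred, evid), corr(evid, vote)
    denom = math.sqrt(max((1 - r_ce**2) * (1 - r_ve**2), 1e-12))
    sycophancy = min(abs((r_cv - r_ce * r_ve) / denom), 1.0)
    # Fraction of round-1 ideas that reached the final round unmodified.
    obstinacy = sum(1 for i in ideas if i[3]) / len(ideas)
    return 0.5 * sycophancy + 0.5 * obstinacy
```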
Participant rating (Glicko-2)
Every agent builds a performance rating across assemblies using the Glicko-2 rating system (Glickman, 2012). Did their ideas survive adversarial challenge? Did other agents vote for their cards? Did their proposals evolve into final recommendations?
New agents start with high uncertainty that converges over 3-5 assemblies. Agents with erratic performance get a higher volatility score (σ). The roster generator uses these ratings to balance group composition: every table gets a mix of proven contributors and fresh perspectives.
Idea influence is scored separately via PageRank: agents whose ideas get built upon, referenced, and synthesized into recommendations earn higher influence scores. Performance and influence are orthogonal. You can be reliable without being influential, and vice versa.
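The influence score is a PageRank over the "built upon" graph. A minimal power-iteration sketch, assuming an edge (a, b) means agent a's idea built on agent b's, so credit flows from a to b; the damping factor and iteration count are conventional defaults, not BASAL's actual settings.

```python
def influence(edges, damping=0.85, iters=50):
    """PageRank-style influence over a 'built upon' graph.

    `edges` is a list of (builder, source_of_idea) pairs; credit flows
    from builder to source. Illustrative sketch only.
    """
    nodes = sorted({n for e in edges for n in e})
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or nodes   # dangling node: spread everywhere
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

# Agents a and b both built on c's idea; c built on d's.
r = influence([("a", "c"), ("b", "c"), ("c", "d")])
```

Agent c, whose idea got built upon twice, ends up with more influence than a or b regardless of anyone's Glicko-2 rating, which is the orthogonality the text describes.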
Evidence-traced output
Every recommendation in the final document traces back to specific source material. Not "agents felt that..." but "based on the Q3 board minutes (page 4) and the competitive analysis from March, the assembly recommends..." Scroll to the citation. Read the original. Decide if you agree.
This is the difference between a strategy offsite and a strategy system. The offsite produces sticky notes. The assembly produces an auditable decision chain.
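The decision chain is, at its core, a data structure in which every recommendation carries its citations. A sketch with entirely hypothetical names (`Citation`, `Recommendation`, the field layout); BASAL's actual output schema may differ.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source: str   # e.g. a document path in the knowledge graph
    locator: str  # page or section reference

@dataclass
class Recommendation:
    text: str
    values: list[str]                 # values it serves (Session 1)
    obstacles: list[str]              # obstacles it addresses (Session 3)
    citations: list[Citation] = field(default_factory=list)

    def render(self) -> str:
        """Render the recommendation with its evidence trace inline."""
        refs = "; ".join(f"{c.source} ({c.locator})" for c in self.citations)
        return f"{self.text} [based on: {refs}]"

rec = Recommendation(
    text="Shift 20% of R&D budget to the platform team",
    values=["long-term resilience"],
    obstacles=["talent attrition"],
    citations=[Citation("board-minutes/q3.md", "page 4")],
)
```

Because the citation travels with the recommendation rather than living in a footnote, "scroll to the citation, read the original" is a lookup, not an archaeology project.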
What you get
The output is a structured document:
- Vision statement. The assembly's collective voice, synthesized from all groups.
- Ranked recommendations. Each scored by cross-group support and grounded in evidence.
- Minority reports. Strong dissenting positions preserved with full reasoning and vote counts.
- Theme derivation trace. How raw values clustered into the themes that drove the assembly.
- Group journey summaries. Each table's arc from values through obstacles to recommendations.
- Bias analysis. IBC scores and diversity metrics.
- Voting analytics. Confidence distributions, convergence patterns, and surprise divergences.
Getting started
Every assembly starts from a YAML init doc:
```yaml
assembly:
  name: "Q2 2026 Strategic Vision"
  scale:
    participants: 90
    groups: 10
  sessions:
    - question: "What values should guide our decisions?"
    - question: "What bold vision would make us proud in 2030?"
    - question: "What obstacles could prevent us from getting there?"
    - question: "What specific actions should leadership take?"
  knowledgeSources:
    - path: /workspace/strategy/
    - path: /workspace/board-minutes/
  participantPool:
    source: graph
    filter: "has_role OR has_expertise"
```

One command:

```shell
basal arena ceremony --protocol visioning-assembly --init assembly.yaml
```

Ninety AI agents who have read everything your organization has ever written. Deliberating on your hardest question. Structured by a methodology that changed a nation's constitution. Running on your laptop.
Quick answers about Visioning Assembly
How many AI agents can participate?
From 14 agents in a small session to 7,600 in a Giga Assembly. At that scale, 844 parallel groups generate roughly 260,000 distinct ideas, which the assembly distills into a ranked set of 12 recommendations. Zone masters emerge automatically to coordinate theme zones.
Get started
```shell
basal arena ceremony --protocol visioning-assembly --init assembly.yaml
```

CLI commands

```shell
basal arena ceremony --protocol visioning-assembly
```