MetaSPN × Reaction Canvas
Research Preview
Open Research Initiative

Help shape the future of audience intelligence

Every audience has a hidden structure. We're building the instrument to reveal it — and we need researchers, creators, and builders to help us understand what's possible.

[Figure: Perspective map — live simulation. Audience perspective landscape (simulated data). Axes: Skeptical ↔ Aligned, Less engaged ↔ More engaged.]
Cohort members: 0
Active experiments: 4
Perspectives mapped
Results published: 100%

A new instrument for understanding audiences

Every presentation, film, livestream, and classroom lecture produces a reaction. That reaction has structure — clusters of people who respond similarly, fault lines of disagreement, moments of unexpected unity. We've never had the instrument to see it in real time.

Reaction Canvas changes that. Thumbs on phones, synchronized to a shared experience, dimensionally reduced to a live perspective map. The audience's hidden structure made visible — and navigable.
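
For the technically curious, here is one plausible reading of that reduction step, sketched in TypeScript: each participant's reaction trajectory becomes a feature vector, and the cohort's vectors are projected onto their two principal axes. PCA via power iteration stands in for whatever reduction the production pipeline actually uses, and every name below is illustrative, not the real system's API.

```ts
// Sketch: project each participant's reaction trajectory to a 2D map point.
// PCA by power iteration is a stand-in for the pipeline's actual reduction.

function mean(rows: number[][]): number[] {
  const m = new Array(rows[0].length).fill(0);
  for (const r of rows) r.forEach((v, j) => (m[j] += v / rows.length));
  return m;
}

function dot(a: number[], b: number[]): number {
  return a.reduce((s, v, i) => s + v * b[i], 0);
}

// Leading principal axis of centered data, found by power iteration on X^T X.
function principalAxis(centered: number[][], iters = 100): number[] {
  const d = centered[0].length;
  let v = Array.from({ length: d }, () => Math.random() - 0.5);
  for (let k = 0; k < iters; k++) {
    const next = new Array(d).fill(0);
    for (const row of centered) {
      const proj = dot(row, v);
      row.forEach((val, j) => (next[j] += proj * val)); // accumulate (X^T X) v
    }
    const norm = Math.hypot(...next);
    if (norm === 0) break; // degenerate data; keep the current direction
    v = next.map((x) => x / norm);
  }
  return v;
}

// Each row: one participant's trajectory, sampled at fixed timestamps.
function perspectiveMap(trajectories: number[][]): [number, number][] {
  const m = mean(trajectories);
  const centered = trajectories.map((r) => r.map((v, j) => v - m[j]));
  const pc1 = principalAxis(centered);
  // Deflate: remove the first component, then find the second axis.
  const deflated = centered.map((r) => {
    const p = dot(r, pc1);
    return r.map((v, j) => v - p * pc1[j]);
  });
  const pc2 = principalAxis(deflated);
  return centered.map((r) => [dot(r, pc1), dot(r, pc2)] as [number, number]);
}
```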

Layer 5 · Generative loop (Frontier). Audience reaction shapes the next generated scene in real time.
Layer 4 · Temporal dynamics (Soon). The braid — how perspective clusters form, tangle, and reunite.
Layer 3 · Perspective map (Now). Dimensional reduction reveals the hidden structure of the room.
Layer 2 · Reaction canvas (You are here). Real-time signal from every audience member, synchronized.
Layer 1 · Shared experience (Foundation). The film, talk, or stream the audience is collectively watching.

Three signals. One instrument.

When you join a research session, you watch a film or live talk with a canvas open on your phone. You're not voting. You're playing an instrument — and the map of where everyone sits is the data.

👍 Engaged / Agree: You're with it. This resonates. Move your signal left — toward this idea.

Neutral / Pass: No strong signal. You're watching, processing, not committed yet. Centre holds.

🤔 Skeptical / Question: Something's off. You're not convinced. Move right — push back on it.
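
A minimal sketch of how one reaction sample might be encoded, assuming the horizontal axis is normalized to [-1, 1] with agree on the left and question on the right. The type names and the dead-zone threshold are illustrative assumptions, not the production schema.

```ts
// Hypothetical encoding of one reaction sample. Axis convention follows the
// canvas: left (agree) is negative, right (question) is positive.
type Signal = "agree" | "neutral" | "question";

interface ReactionSample {
  participantId: string;
  t: number; // milliseconds since the shared experience started
  x: number; // horizontal position, normalized to [-1, 1]
}

// Bucket a continuous position into the three named signals.
function classify(x: number, deadzone = 0.15): Signal {
  if (x <= -deadzone) return "agree";
  if (x >= deadzone) return "question";
  return "neutral";
}
```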

[Interactive demo: Reaction Canvas — live. Try dragging the dot: ◀ agree · neutral · question ▶]

Early access to every experiment

Research cohort members don't just participate — they see the data. After every session, you get your position in the perspective map and the aggregate structure of the room.

🗺️ Your position in the map

After each session, see exactly where you sat in the perspective landscape — and how you compared to the clusters that formed.

🔬 Raw research data

Access to session data, perspective maps, and findings before they're published anywhere else. You're inside the experiment.

🎬 First screenings

Every film we produce using this pipeline plays for the research cohort first. Your reactions shape what gets made next.

📡 SOTA proximity

Direct access to the teams building at the intersection of AI, generative media, and collective intelligence. Not a newsletter — a seat at the table.

How a research session works

1. You get an invitation

We send cohort members a session link 24–48 hours before each experiment. Every session has a specific research question — "does this framing move this audience toward this action?" — stated before the film plays.

2. You open the canvas on your phone

A simple URL. No app. The canvas shows a 2D field. You drag your signal in real time as the film plays — left for engaged, right for skeptical, centre for neutral.
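
A rough sketch of what that client loop could look like, assuming a WebSocket transport. The endpoint URL, message shape, and 10 Hz send rate are all illustrative assumptions; the actual protocol isn't documented here.

```ts
// Hypothetical client loop: stream the dragged position to the session.
// URL, payload shape, and cadence are assumptions, not the real protocol.
const socket = new WebSocket("wss://sessions.example.com/canvas/SESSION_ID");

let x = 0; // horizontal position in [-1, 1]: agree (left) ... question (right)
let y = 0; // vertical position in [-1, 1]

document.addEventListener("pointermove", (e: PointerEvent) => {
  // Map screen coordinates to the normalized canvas field.
  x = Math.max(-1, Math.min(1, (e.clientX / window.innerWidth) * 2 - 1));
  y = Math.max(-1, Math.min(1, (e.clientY / window.innerHeight) * 2 - 1));
});

// Send on a fixed cadence rather than per pointer event, to bound bandwidth.
setInterval(() => {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ t: Date.now(), x, y }));
  }
}, 100);
```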

3. The map forms in real time

Everyone's signal feeds into a live perspective map. Clusters emerge. Fault lines appear. Moments of unity. The structure of the room becomes visible as the film plays.
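
One way clusters like these could be detected on the live map, sketched here with plain k-means over the 2D points. The real pipeline's clustering method isn't specified anywhere in this preview; this is a stand-in to make the idea concrete.

```ts
// Sketch: find clusters in the live 2D map with k-means.
// Assumes points.length >= k; initialization and k are illustrative choices.
type Point = [number, number];

function dist2(a: Point, b: Point): number {
  return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2;
}

function kmeans(
  points: Point[],
  k: number,
  iters = 50
): { centroids: Point[]; labels: number[] } {
  let centroids: Point[] = points.slice(0, k).map((p) => [...p] as Point);
  let labels: number[] = new Array(points.length).fill(0);
  for (let it = 0; it < iters; it++) {
    // Assign each point to its nearest centroid.
    labels = points.map((p) => {
      let best = 0;
      for (let c = 1; c < k; c++) {
        if (dist2(p, centroids[c]) < dist2(p, centroids[best])) best = c;
      }
      return best;
    });
    // Move each centroid to the mean of its assigned points.
    centroids = centroids.map((old, c) => {
      const mine = points.filter((_, i) => labels[i] === c);
      if (mine.length === 0) return old; // empty cluster: keep the old centre
      const sx = mine.reduce((s, p) => s + p[0], 0) / mine.length;
      const sy = mine.reduce((s, p) => s + p[1], 0) / mine.length;
      return [sx, sy] as Point;
    });
  }
  return { centroids, labels };
}
```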

4. You see your position — and the whole picture

After the session, you get your data: where you sat in the map, what cluster you were closest to, how your reaction trajectory compared to others. Then the full aggregate findings.
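
To make that deliverable concrete, here is a hypothetical shape for the post-session payload; none of these field names come from the actual system.

```ts
// Hypothetical post-session report for one cohort member.
interface SessionReport {
  sessionId: string;
  position: [number, number]; // your final coordinates in the perspective map
  nearestCluster: number; // index of the cluster you sat closest to
  trajectory: [number, number][]; // your path through the map over time
  cohortCentroids: [number, number][]; // aggregate cluster centres
}
```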

5. We publish the results regardless

Every experiment produces a finding. We publish misses with the same weight as hits. The public timestamp is the integrity guarantee. You'll know what we learned before anyone else does.