Adaptive Music Middleware

Feltwork

By StoneKey

Music that thinks human.

Adaptive game music middleware trained on human improvisational intelligence. Built for the compound emotional states no current system can handle.

Incorporated: March 25, 2026
Sessions Complete: 14 sessions — 135 rows
Inter-Rater Agreement: 83% — Experiment 1 passed
Predictive Model: 80.0% accuracy — Experiment 2 passed
Seeking Technical Co-Founder
The Problem

Game music has never felt the player

Current adaptive music systems tag a scene as "tense" or "calm", trigger a pre-composed segment, and loop it. The emotional logic is algorithmic. It sounds like it.


What these systems cannot handle are compound emotional states. Grief alongside beauty. Loneliness shifting into connection. Fragile hope coexisting with unresolved fear. These are the states that make a scene unforgettable. They require a human being who has felt them.

"An AI trained on finished recordings learns what music sounds like. It does not learn how a musician decides what comes next when the emotional ground shifts."
How It Works

Decisions, not descriptions.

Each training session captures what no public dataset has ever contained. A musician improvises. At the same moment, every emotional decision is narrated in real time — not reconstructed afterward. The result is a record of musical intelligence at the moment it happens.

01 · Performance Audio

Live piano recorded as high-quality audio. Every dynamic, timing, and register decision captured in the recording — not just what notes were played, but how they were felt.

02 · Narrated Emotional Reasoning

The musician narrates every emotional transition in real time as it happens. Labels are time-aligned to the musical event, not reconstructed afterward. The decision captured at the moment it is made.

03 · A Proprietary Annotation Method

A structured labelling protocol developed specifically for this problem. The methodology is what makes this dataset different from any other narrated performance recording.

04 · Independent Validation

Sessions are labelled by independent annotators without access to the original labels. Agreement rate between annotators is the primary measure of schema stability.
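The pipeline above can be sketched in miniature. The row fields and labels below are illustrative inventions, not the actual Feltwork schema, and the agreement measure shown is simple percent agreement, the most basic form of the inter-rater check described above:

```python
from dataclasses import dataclass

# Hypothetical shape of one annotated row. Field names are illustrative,
# not the actual proprietary schema.
@dataclass
class AnnotatedEvent:
    timestamp_s: float   # offset into the session audio
    tier1_label: str     # coarse emotional state, e.g. "grief", "warmth"
    narration: str       # the musician's real-time spoken reasoning

event = AnnotatedEvent(142.6, "grief", "holding the left hand back, letting it ache")

def tier1_agreement(a: list[str], b: list[str]) -> float:
    """Percent agreement between two annotators' aligned Tier 1 labels."""
    if len(a) != len(b) or not a:
        raise ValueError("label sequences must be non-empty and aligned")
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Toy aligned label sequences from two independent annotators.
annotator_1 = ["grief", "grief", "warmth", "hope", "fear", "warmth"]
annotator_2 = ["grief", "warmth", "warmth", "hope", "fear", "warmth"]
print(f"Tier 1 agreement: {tier1_agreement(annotator_1, annotator_2):.0%}")
```

In practice a chance-corrected statistic such as Cohen's kappa would usually accompany raw percent agreement; the sketch keeps only the simplest measure.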

The Demo

Before the methodology: the practice it is built on.

What you are about to watch is not a demonstration of Feltwork. It is proof of the human capability Feltwork is designed to capture.


Nicholas Clarke improvises on grand piano. Stevie Clarke narrates a scene in real time. Neither has foreknowledge of where the other will go. The emotional transitions you hear are real, unscripted, and unrehearsed. This is the practice the annotation schema is built to encode — the ability to navigate emotional territory in real time, at the moment of decision, without a script.

Performed live by Nicholas Clarke. Single take. The emotional arc emerged in the moment — it was not pre-planned.

Blind Listening Test & Exploratory Analysis — March 2026

Three independent listeners. One minute of piano. No context given.

A structured improvisation was performed with deliberate emotional architecture. Listeners were asked only: what emotional shifts do you hear?

Sadness / Loneliness ✓ Calm / Warmth ✓ Joy / Excitement ✓ Uncertainty / Question ✓

All three listeners identified each of the four intended emotional states, indicating that the performances encode emotionally recoverable structure.

Session 001 was then submitted to Gemini 3.1 Pro Preview with no labels or guidance as an independent structural check. The model identified 13 emotional transitions, 4 compound emotional states, and 3 unexpected musical responses — consistent with the annotator's labels. This is a plausibility signal, not a model validation.

The signal does not need to be argued. It is in the recording.

Research Demo — In Production

Structured sessions. Pre-stated arcs. Real-time transition narration.

The research demonstration — short structured sessions with the emotional arc stated on camera before recording, transitions narrated in real time as they happen, and independent annotators labelling without access to the original labels — is currently in production as part of Phase 1 dataset construction.

Session 001 is encoded. The schema is locked. Recording is underway.

If you want to be notified when the research demo is available, use the contact form below.

The Moat

The dataset no one else can build

The moat is not the music. It is not the middleware. It is the dataset.


A proprietary corpus of narrated real-time improvisational decisions, built through structured recording sessions using a methodology developed for this specific purpose. Timestamped, emotionally labelled, and built on a practice that takes years to develop. Not a budget. Not a scraper. Years.


Most AI startups begin with zero proprietary data. StoneKey begins with a years-long archive of improvised performances and a structured methodology for building a dataset no one else can replicate.

14
Sessions complete — Schema B locked

14 sessions encoded, annotated, and locked. 135 data rows. No detected annotation inconsistencies following full audit. April 2026.

83%
Inter-rater agreement — Experiment 1 Passed

Independent annotators reached 83% Tier 1 agreement across Sessions 008-010. The schema is not a private language.

80%
Model accuracy — Experiment 2 Passed

Random Forest classifier — 80.0% accuracy, p=0.0002, majority class baseline 69.6%. Decisive result. March 24, 2026.
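The evaluation logic behind a result like this follows a standard recipe: score the classifier against the majority-class baseline, then test significance with a label-permutation test. The sketch below uses synthetic labels and a made-up predictor, not the Feltwork dataset or model; only the recipe is the point:

```python
import random

random.seed(7)

def accuracy(pred, true):
    return sum(p == t for p, t in zip(pred, true)) / len(true)

# Synthetic ground truth with a ~70% majority class, loosely mirroring
# the 69.6% baseline quoted above. Predictions are invented.
y_true = ["transition"] * 32 + ["hold"] * 14
y_pred = ["transition"] * 40 + ["hold"] * 6

baseline = max(y_true.count(c) for c in set(y_true)) / len(y_true)
observed = accuracy(y_pred, y_true)

# Permutation test: how often do shuffled labels score at least as well?
n_perm = 10_000
hits = sum(
    accuracy(y_pred, random.sample(y_true, len(y_true))) >= observed
    for _ in range(n_perm)
)
p_value = hits / n_perm

print(f"majority baseline:   {baseline:.1%}")
print(f"observed accuracy:   {observed:.1%}")
print(f"permutation p-value: {p_value:.4f}")
```

Beating the majority baseline with a small permutation p-value is what separates a real signal from a classifier that has merely learned the class distribution.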

0
Datasets like it

No public dataset contains real-time emotional decision narration alongside live performance. The corpus is the moat.

The Methodology

Noetic Cartography

Feltwork is the first application of a broader scientific methodology: Noetic Cartography — the project of mapping expert navigational intelligence in a form that can be transferred, studied, and learned from.


Every expert carries decision architecture that lives below the threshold of explicit articulation. It cannot be taught from manuals. It disappears when the expert retires. Noetic Cartography captures it in real time, time-aligned to action, independently verifiable across observers with zero coordination.


Music improvisation is the founding domain currently being mapped. Cross-domain pilot sessions, completed in April 2026 across physiotherapy, military and public health, and psychiatric nursing, produced 15 independent convergence points across two frontier AI systems from different developers. The same three navigational patterns were confirmed across all four domains.


Explore the methodology →
The Corpus

Narrated expert decisions, time-aligned to action. The raw data of navigational intelligence.

The Schema

Machine-readable navigation labels. The structured vocabulary that makes the corpus learnable by AI.

The Convergence

Independent observers arriving at the same signal with zero coordination. The verification mechanism that proves the schema is not a private language.

Architecture

A conductor, not a composer

Feltwork outputs musical steering instructions rather than generating raw audio. Low compute, real-time compatible, integrates into existing middleware. Studios do not rebuild their pipeline. They add an emotional conductor to what they already have.

Proprietary Dataset: Structured recording sessions capturing real-time emotional decisions. Phase 1 target: 40,000+ annotated transitions.
Transition Model: Sequence model trained on decision data — learns how music navigates the space between emotional states.
Steering Output: A sequence of musical control parameters that guide the transition — generated in real time, auditable before execution.
Game Engine Integration: Wwise · FMOD · Unreal Engine 5 · Unity — no pipeline rebuild required.
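A "conductor, not composer" output can be pictured as a small, serialisable plan of control parameters rather than audio. Everything below is a hypothetical shape for such an instruction; the parameter names and the JSON bridge are assumptions for illustration, not Feltwork's actual format or API:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical steering instruction. Parameter names illustrate the idea
# of low-compute control output; they are not the real Feltwork schema.
@dataclass
class SteeringInstruction:
    at_beat: float        # when the change takes effect
    target_state: str     # e.g. "grief+beauty" (compound states allowed)
    tempo_scale: float    # 1.0 = no change
    dynamic_level: float  # 0.0 (silent) .. 1.0 (full)
    register_bias: float  # -1.0 (low) .. +1.0 (high)

def to_middleware_params(plan: list[SteeringInstruction]) -> str:
    """Serialise a transition plan for an RTPC-style middleware bridge,
    such as Wwise or FMOD game parameters, so it can be audited before
    execution."""
    return json.dumps([asdict(step) for step in plan], indent=2)

plan = [
    SteeringInstruction(0.0, "grief+beauty", 0.92, 0.35, -0.4),
    SteeringInstruction(8.0, "fragile-hope", 1.00, 0.50, 0.1),
]
print(to_middleware_params(plan))
```

Because the output is a handful of parameters per transition rather than generated audio, it stays cheap to compute, human-auditable, and easy to route into the game-parameter systems studios already run.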

AI can generate music that sounds human.
Feltwork generates music that thinks human.

Roadmap

From dataset to standard

Now
Foundation: StoneKey Music Technology Incorporated — March 25, 2026. 14 sessions encoded, 135 rows. Experiments 1 and 2 passed. Technical co-founder search active — equal founding partnership offered.
Q3 2026
Build: Dataset Phase 1 complete. Model architecture established. First training runs. Initial studio outreach. Accelerator: CDL games/AI stream.
Q4 2026
Prototype: Working prototype demonstrated to studios. First integration pilot agreed.
2027
First Licence: First paid studio licence. Dataset expansion to additional practitioners.
2028+
Platform: Multiple licences active. First non-gaming vertical. Feltwork becomes the emotional intelligence middleware standard.
Get in Touch

Start a conversation.

Investors, studios, co-founder candidates, collaborators. If you felt something watching the demo and you want to talk about what comes next, fill out the form.


Currently seeking: a technical co-founder with a background in sequence modelling, audio ML, or reinforcement learning. Equal founding partnership. Remote-friendly.

Improvisation Archive: @Stonekeymusic on YouTube
Location: Charlottetown, PEI, Canada

Send a message directly to nicholas@stonekeymusic.com — include your name, your background, and what brought you here. Every serious inquiry receives a reply.

Send an Email