1. Background & Problems
Background

Cognitive neuroscience research on music cognition seeks to understand how musical experiences shape neural dynamics, emotion, and memory, particularly in cognitive decline associated with Alzheimer's disease and dementia.

Core Problem

Despite advances in EEG and behavioral methods, current systems lack the temporal precision required for real-time neural–musical synchronization, leading to fragmented interpretations of brain–sound dynamics.

→ Addressed in 3. Results
Root Cause 1

Reliance on expert-annotated music analysis persists due to the absence of an integrated, fully automatic, real-time harmonic analysis system in computational music theory.

→ Addressed in 2B Musical Processing
Root Cause 2

Structural music-theoretical analysis is largely excluded from health science research on music and memory.

→ Addressed in 2E Physiological Processing
2. Method
2A. System Architecture
Concurrent Data Streams

DAW, BCI, and wearable-sensor integration.
Python backend signal processing.
JavaScript real-time WebSocket layer.
Concurrent multi-stream data acquisition.

🎛️ DAW
🧠 BCI
📡 WBS
🐍 Python
JS JavaScript
⚙️ OpenBCI
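The concurrent acquisition pattern above can be sketched in a few lines of Python `asyncio`. This is a minimal illustration, not the system's actual code: the stream names and intervals are hypothetical stand-ins for the DAW, BCI, and wearable-sensor inputs, and a simple queue stands in for the WebSocket layer.

```python
import asyncio
import time

async def mock_stream(name, interval, queue, n=3):
    """Hypothetical stand-in for one device stream (DAW / BCI / WBS)."""
    for i in range(n):
        await asyncio.sleep(interval)
        # Each sample is stamped on arrival, mimicking corpus-row timestamping.
        await queue.put({"stream": name, "sample": i, "t": time.monotonic()})

async def acquire(streams):
    """Run all streams concurrently and fan them into one shared queue."""
    queue = asyncio.Queue()
    await asyncio.gather(*(mock_stream(name, dt, queue) for name, dt in streams))
    rows = []
    while not queue.empty():
        rows.append(queue.get_nowait())
    return rows

rows = asyncio.run(acquire([("DAW", 0.01), ("BCI", 0.005), ("WBS", 0.02)]))
```

Because every producer writes into the same queue with its own monotonic timestamp, downstream consumers can merge the streams by time rather than by arrival order.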
2B. Musical Processing ✦ Solves Root Cause 1
Real-Time Functional Harmonic Analysis — Fully Automatic, No Human Annotator

Musical Processing Pipeline
How RC1 is resolved:
Live MIDI stream ingested via OSC → Python; no human annotator required
Pitch-class set built in a tempo-adaptive sensory-memory window (1 beat → 2 bars, auto-scaling)
211-entry lookup tensor maps any pitch-class configuration to a Roman numeral token in O(1)
Bass / soprano / chord / root tokens emitted at millisecond precision per musical beat
Time-signature–sensitive windows auto-adjust when tempo changes live in Ableton
Complete harmonic analysis runs without any pre-labeled corpus or post-hoc annotation
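The pitch-class-set → Roman-numeral lookup can be illustrated with a toy version of the table. This miniature covers only a handful of C-major triads (the real system's 211-entry tensor covers all pitch-class configurations); the `TONIC`, `LOOKUP`, and `roman_numeral` names are illustrative, not the system's API.

```python
TONIC = 0  # C major, for illustration only

# Hypothetical miniature of the 211-entry table: tonic-relative
# pitch-class sets mapped to Roman numeral tokens.
LOOKUP = {
    frozenset({0, 4, 7}): "I",
    frozenset({2, 5, 9}): "ii",
    frozenset({5, 9, 0}): "IV",
    frozenset({7, 11, 2}): "V",
    frozenset({9, 0, 4}): "vi",
}

def roman_numeral(midi_notes, tonic=TONIC):
    """Reduce live MIDI notes to a tonic-relative pitch-class set, then
    resolve it with a single O(1) dictionary lookup."""
    pcs = frozenset((n - tonic) % 12 for n in midi_notes)
    return LOOKUP.get(pcs, "?")  # unseen configurations fall through

# G major triad (G3, B3, D4) heard in C major:
print(roman_numeral([55, 59, 62]))  # "V"
```

The key point is that the analysis step is a constant-time table hit per beat window, which is what makes millisecond-precision emission feasible without an annotator.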
2C. Audio Processing
Live Audio Input Pipeline

Live FFT spectral analysis.
Browser-based Web Audio API.
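The live spectral-analysis step can be sketched offline with NumPy (the browser side uses the Web Audio API, but the math is the same windowed real FFT). Frame size, sample rate, and the `spectrum` helper are illustrative choices, not the system's parameters.

```python
import numpy as np

def spectrum(frame, sr=44100):
    """Magnitude spectrum of one audio frame (Hann-windowed real FFT)."""
    windowed = frame * np.hanning(len(frame))
    mags = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    return freqs, mags

# One 2048-sample frame of a 440 Hz test tone
sr, n = 44100, 2048
t = np.arange(n) / sr
freqs, mags = spectrum(np.sin(2 * np.pi * 440 * t), sr)
peak_hz = freqs[np.argmax(mags)]  # lands on the FFT bin nearest 440 Hz
```

At this frame size the bin spacing is about 21.5 Hz, so the detected peak sits within one bin of the true tone frequency.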
Audio Processing
2D. Behavioral Processing ✦ Real-Time Annotation
Concurrent Valence, Emotion & Response Capture

Valence & emotion buttons (Pos/Neu/Neg + 26-category Russell circumplex) time-stamp listener responses directly onto each multimodal corpus row, supporting both internal and external annotation modes
Go/No-Go paradigm captures keystroke reaction times per harmonic event; all behavioral markers are synchronized with EEG beat boundaries and exported as structured CSV fields alongside harmonic tokens
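The time-stamped behavioral capture above can be sketched as a CSV logging routine. The key bindings, column names, and `log_response` helper here are hypothetical illustrations of the approach, not the system's actual schema.

```python
import csv
import io
import time

VALENCE = {"p": "Pos", "n": "Neu", "g": "Neg"}  # hypothetical key bindings

def log_response(writer, beat_index, key, onset_t, press_t):
    """Stamp one listener response onto a corpus row beside its harmonic beat,
    including the Go/No-Go reaction time in milliseconds."""
    writer.writerow({
        "beat": beat_index,
        "valence": VALENCE.get(key, ""),
        "rt_ms": round((press_t - onset_t) * 1000, 1),
        "t": press_t,
    })

buf = io.StringIO()
w = csv.DictWriter(buf, fieldnames=["beat", "valence", "rt_ms", "t"])
w.writeheader()
t0 = time.monotonic()
log_response(w, 12, "p", t0, t0 + 0.35)  # "Pos" pressed 350 ms after beat onset
```

Writing the reaction time relative to the harmonic event onset (rather than wall-clock time alone) is what lets the behavioral markers line up with EEG beat boundaries downstream.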
2E. Physiological Processing ✦ Solves Root Cause 2
Physiological–Musical Synchronization — Bridging Music Theory & Health Science

How RC2 is resolved:
Roman numeral tokens (harmonic function) are time-stamped and mapped directly onto EEG segments
EEG band powers (δ/θ/α/β/γ) extracted per electrode, locked to each musical measure boundary
HR / HRV (Polar H9 via BLE) synchronized to musical beat cycles via the same temporal model
Cycle-locked physiological snapshots enable direct structural music analysis in health research
Concurrent WebSocket streams deliver real-time multimodal corpus rows with ms-level timestamps
Framework is reproducible & exportable — each row captures full harmonic + physiological state
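The measure-locked band-power extraction can be sketched with a plain periodogram. This is a simplified illustration (synthetic single-channel signal, FFT-based power estimate); the band edges follow the δ/θ/α/β/γ convention named above, while the sampling rate and `band_powers` helper are assumptions.

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # Hz

def band_powers(segment, fs=250):
    """Mean spectral power per EEG band for one measure-locked segment."""
    freqs = np.fft.rfftfreq(len(segment), 1 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

# One corpus row: harmonic token joined with cycle-locked band powers.
# Synthetic 10 Hz signal, i.e. an alpha-dominant segment.
fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs
eeg = np.sin(2 * np.pi * 10 * t)
row = {"roman": "V", **band_powers(eeg, fs)}
```

Joining the Roman numeral token and the band powers into one row is the structural bridge the section describes: each exported row carries both the harmonic function and the physiological state for that measure.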
Physiological Dashboard
3. Results

RC1 resolved (2B): A fully automatic real-time harmonic analysis pipeline — using a 211-entry lookup tensor — emits Roman numeral tokens at millisecond precision without any human annotator or pre-labeled corpus.

RC2 resolved (2E): Harmonic tokens are directly time-stamped and mapped onto EEG band powers, HR/HRV, and behavioral responses — synchronizing musical structure with physiological signals in a single exportable multimodal corpus row.

Data Categories
4. Conclusion

This tool provides a reproducible infrastructure for examining how harmonic features engage neural, cardiovascular, and behavioral mechanisms of emotion and memory, supporting future applications in music therapy and translational research in cognitive neuroscience.

Acknowledgment
I sincerely thank Dr. Chris White, Dr. Lisa Sanders, and Dr. Alan Reese for their guidance and support.