A neuroscientist and a musician are an unlikely duo. One analyzes electrical signals in the brain, while the other writes music that can shape those signals. Cognitive neuroscientist Kim Awa and musician Chaz Knapp came together at James Turrell’s The Way of Color to explore a simple question: Can brainwaves be transformed into music?
The project’s inspiration
Located at Crystal Bridges Museum of American Art in Bentonville, Arkansas, The Way of Color draws visitors into a calm, multicolored sky for 30 to 40 minutes. The show stimulates sensory circuits in the brain, which in turn produce electrical activity.
Electroencephalography, commonly known as EEG, collects and visualizes these electrical signals using electrode patches or caps placed on the scalp. The technique is especially useful in clinical contexts such as sleep studies and epilepsy research, where patterns linked to attention, rest and emotion can be measured for diagnosis. Awa and Knapp, however, showed how versatile this machinery can be by creating music out of these signals.
The science
Once these signals are recorded, EEG software separates the input into five frequency bands: delta, theta, alpha, beta and gamma, each oscillating at a different speed. Where most people would see only an incomprehensible pattern of lines drawn across a screen, Awa had a different view of the data. “I treat the data as a continuous time-based signal, almost like a musical score unfolding moment by moment,” she said.
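The band separation Awa describes can be sketched in a few lines. This is an illustrative example rather than the project's actual pipeline: the sampling rate and the FFT-based power measure are assumptions, though the five bands and their approximate frequency ranges are standard in EEG research.

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz; real EEG systems vary

# conventional EEG frequency bands (approximate edges, in Hz)
BANDS = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "gamma": (30, 45),
}

def band_powers(signal, fs=FS):
    """Estimate how much energy a signal carries in each EEG band."""
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# synthetic test signal: a 10 Hz oscillation (alpha range) plus noise
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

powers = band_powers(signal)
print(max(powers, key=powers.get))  # prints "alpha"
```

Because a 10 Hz oscillation lands squarely in the alpha range, the alpha band dominates the power estimate for this synthetic signal.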
Awa and Knapp then organized the data for verification. While Knapp calls the Python scripting needed to clean the data a “technical barrier,” he also found the process “slower and more deliberate, but also more rewarding.”
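Cleaning EEG data can involve many techniques; one of the simplest, sketched below, is rejecting time windows whose amplitude spikes suggest eye blinks or muscle movement. The window size and threshold here are illustrative assumptions, not the project's actual values.

```python
def reject_artifacts(samples, window=256, threshold=100.0):
    """Split `samples` into fixed-size windows, keeping only quiet ones."""
    kept = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        # a large spike in a window likely means a blink or muscle artifact
        if max(abs(x) for x in chunk) < threshold:
            kept.append(chunk)
    return kept

clean = [10.0] * 256              # quiet window: kept
blink = [10.0] * 255 + [500.0]    # window with one big spike: rejected
print(len(reject_artifacts(clean + blink)))  # prints 1
```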
Much like sound waves, brainwaves vary in intensity and rhythm. This carefully organized data became the basis for the final compositions. Awa remarked, “At that point, the data itself is not ‘sound,’ it is still visual and numerical.”
Behind the scenes: musical production
Knapp, used to improvising in different settings, worked his same magic on the rhythms of these waves. Awa explained that different brainwaves were assigned specific sounds, which were then rearranged into patterns. This posed an interesting challenge, as Knapp needed to balance a pleasing musical arrangement against fidelity to the original dataset.
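One way to picture the assignment Awa describes, where each band gets a sound and the data decides how those sounds unfold, is a simple mapping from band power to note events. The pitches and scaling below are hypothetical choices for illustration; the project's actual sound design was Knapp's own.

```python
# one assumed sound (a MIDI note number) per frequency band
BAND_PITCH = {"delta": 36, "theta": 48, "alpha": 60, "beta": 72, "gamma": 84}

def sonify(frames):
    """frames: list of dicts mapping band name -> power, one per time window.
    Returns (pitch, velocity) events; stronger bands play louder notes."""
    events = []
    for frame in frames:
        total = sum(frame.values()) or 1.0  # avoid dividing by zero
        for band, power in frame.items():
            velocity = round(127 * power / total)  # scale to MIDI 0-127
            if velocity > 0:
                events.append((BAND_PITCH[band], velocity))
    return events

# two hypothetical time windows: relaxed (alpha-heavy), then alert (beta)
demo = [{"alpha": 0.8, "theta": 0.2}, {"beta": 1.0}]
print(sonify(demo))  # prints [(60, 102), (48, 25), (72, 127)]
```

The data still drives the music here: the numbers decide which notes sound and how loudly, while the composer controls the palette of available sounds.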
While the duo initially wanted to transform all the data into a single piece, Knapp notes, “Breaking the work into individual pieces felt more personal and allowed each participant’s experience to exist on its own terms.”
Every participant came into the skyspace with their own mental state, and some states resembled those of other participants. Even so, each composition remained distinct through slight differences in frequency, timing and intensity, producing a “combination of overlapping patterns” that Awa believes made each piece unique.

First impressions & key takeaways
When Awa introduced these pieces to her colleagues, she noticed they all felt emotionally overwhelmed upon realizing that what they were hearing was “not a composed soundscape, but a mirrored reflection of another person’s inner experience in a specific moment.” Awa stresses that the music is best experienced less on a critical level; instead, it encourages listeners to hear the parts that resonate with them specifically, an indirect understanding of someone else’s mental space in relation to their own.
On a cultural level, Knapp homed in on this feeling of indirect connection, noting that even those who didn’t directly participate were able to contribute creative input to the project just by listening. Knapp hopes listeners at home will realize art can be experienced in day-to-day life rather than solely through a streaming service or the radio.
Knapp and Awa consider the project a success at bridging the gap between art and science, while also challenging the way we look at each field in its own right.
Visit Chaz Knapp’s official YouTube channel and the Art Bridges Foundation’s channel to listen to these recordings and for more information.
Q&A with project authors
The following is an interview transcript with musician Chaz Knapp.
Q. What was it like working with Kim Awa on the project?
A. Working with Kim was genuinely inspiring. She has a brilliant, curious mind and is an incredibly thoughtful artist, which made collaboration feel very natural from the start. When she first approached me with the idea, I was immediately intrigued by both the concept and her openness. She trusted my interpretations completely and gave me the creative freedom that a project like this really requires. Because the territory was largely unexplored for both of us, there was no rigid blueprint to follow.
Q. What inspired you to join in on the project?
A. What initially drew me in was the novelty of the idea. Creating music from brain data is not something you encounter often, and that alone made it compelling. Most of the music I make is rooted in improvisation — capturing a moment as it happens. From that perspective, using brain data to generate sound felt deeply personal, even though the data wasn’t my own. These were individual moments outside of my own lived experience, and the idea of translating someone else’s internal state into music fascinated me. It felt like an extension of the same practice I use in my own work, just viewed through another person’s consciousness.
Q. Describe how it was working with raw brain data to create music.
A. Working with raw brain data was challenging in ways I hadn’t experienced before. Kim and I both had to write Python scripts to begin shaping the material into something musical, which was new territory for me. That technical barrier made the process slower and more deliberate, but also more rewarding. There were apparent limitations. I wanted the brainwaves to speak for themselves, but the question quickly became: how do you guide chaos without flattening it? A lot of the work involved listening closely, finding moments within the output that felt meaningful, and allowing those moments to become the foundation of each piece. Instrumentation was another challenge. Deciding what sounds best honors the data without overpowering it. While the project was initially envisioned as a single long piece combining all 40 stems, I realized that even five stems carried so much emotional weight. Breaking the work into individual pieces felt more personal and allowed each participant’s experience to exist on its own terms.
Q. What will listeners gain from the experience of listening to music created with their brainwaves?
A. I’m hesitant to say what anyone should gain from listening. Everyone brings their own experiences into the act of listening. If anything, I hope it encourages people to think differently about sound, about consciousness, and about the quiet ways art exists in everyday life. Art is present whether we are actively aware of it or not, and this project simply makes that presence a little more visible, or perhaps more audible.
Interview with cognitive neuroscientist Kim Awa
Q. What information does EEG actually capture, and what does it typically miss?
A. Our bodies communicate through a combination of electrical and chemical signals. Every thought, movement or perception involves tiny electrical changes in the brain. EEG, short for electroencephalography, is a technique that measures this electrical activity as it can be detected at the scalp.
By recording these signals in real time, EEG helps researchers understand when the brain is active and how patterns of activity change during different mental states. What EEG does especially well is tracking timing. It can show when brain activity fluctuates and how quickly the brain responds to an experience, down to the millisecond. However, EEG typically misses precise location and can only indicate broad regions of activity, such as the left or right frontal areas of the brain. It cannot reliably pinpoint activity to very specific structures deep inside the brain. Other methods, like fMRI, provide more detailed spatial information and help build upon our understanding of what is happening inside the brain when we engage with art and creative practices.
Q. When brainwaves are turned into music, what’s being translated: signal, pattern, or interpretation?
A. In EEG research, scientists follow very specific steps to make sure the data we collect is as clean and accurate as possible, and this project follows those same standards. First, the raw EEG signals go through a process called preprocessing, which is similar in spirit to audio engineering. This includes filtering out unwanted noise and removing signals caused by things like eye blinks or muscle movement, so we know we are truly working with brain activity.
Once the data is cleaned and verified, it is typically separated into five frequency bands commonly studied in EEG research: delta, theta, alpha, beta and gamma. Each of these bands has been linked to different kinds of brain activity, such as wakefulness, attention, relaxation, imagination or focused thinking.
Q. Can two people with different mental states produce similar EEG-based music?
A. The short answer is yes and no. EEG data is organized into frequency bands, such as delta, theta, alpha, beta and gamma, and each of these bands spans a range of electrical frequencies. For example, delta activity typically falls between about 0-4 Hertz.
Even when two people are in what we might label as the same mental state, the fine-grained details of how their brains express that state can differ. Those small differences influence how the music sounds. It is also important to note that the brain does not operate in just one band at a time. All of these frequencies are active simultaneously, interacting with one another in complex ways.
So, while two people’s brain-derived music might share certain qualities or textures, the underlying patterns are not identical, reflecting the unique and dynamic nature of each person’s brain activity.
Featured image: Photo courtesy Dutchess Braincore & Wellness Center
Edited by James Sutton & Kester Kafeero