The year is 2023. A young couple cruises down the highway in a car. The radio is tuned to piano music as they discuss a hiking trip planned for the next month. Suddenly, the tempo picks up, and the couple exchange a knowing look.
Imagine, for a moment, that the sudden accelerando is not simply a musical device (though it works quite nicely within the piece). Imagine that the quickening of notes is actually a subtle warning that a solar flare is imminent, and that those with sensitive electronics should take precautions.
This reality is not only possible—it’s in progress. Through a process called data sonification, researchers at the University of Michigan have collaborated with the Experiential Music Lab to create a prototype of such a scientifically based radio, available on YouTube.
This sample, based on stored rather than live-streamed data, uses characteristics of solar wind from the Advanced Composition Explorer (ACE) satellite to adjust the loudness, tempo, and dissonance of an algorithmically generated piano piece. Given the right information, anyone could interpret the status of the solar wind based on the music playing from their car radio.
Sonification—the process of turning scientific data into sound, including music—takes many forms. Though the idea may seem alien, the well-known Geiger counter relies on principles of sonification. The concept is simple: as levels of radiation increase, the rate of clicking increases. Sonification is also applied to more complex data sets, like earthquakes, solar radiation, and even simulated particle collisions.
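As a toy version of that principle, the short Python sketch below generates clicks whose rate rises with a made-up count rate and writes them to an audio file. Every number in it is an illustrative choice, not a detail of any real instrument.

```python
# Toy Geiger-counter sonification: the click rate tracks the count rate.
# The count rates, Poisson click timing, and 8 kHz output are illustrative.
import numpy as np
from scipy.io import wavfile

rng = np.random.default_rng(0)
sample_rate = 8000
count_rates = [2, 5, 20, 80]   # hypothetical counts per second, "radiation" rising

audio = []
for rate in count_rates:
    second = np.zeros(sample_rate, dtype=np.float32)
    n_clicks = rng.poisson(rate)                      # clicks in this one-second window
    positions = rng.integers(0, sample_rate, n_clicks)
    second[positions] = 1.0                           # each click is a one-sample impulse
    audio.append(second)

wavfile.write("geiger.wav", sample_rate, np.concatenate(audio))
```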
According to NASA Goddard Space Flight Center fellow Robert Alexander, some have even tried to apply sonification to stock market trends. "People have experimented with sonifying the stock market, the idea being that if you can detect new patterns in the stock market you can potentially make a lot of money," said Alexander in an interview with Earthzine. He admits that nobody has used sonification to correctly predict the stock market—yet.
Methods: audification and parameter mapping
So how does a huge digital file of data become a piece of sound or music that can yield useful information?
There are two commonly used methods for turning data into sound. The simpler one is audification: writing data directly to a sound file. This method is usually employed in cases where there is one data set with a huge number of points. For example, the earthquake and solar radiation sonifications mentioned above were made using audification.
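To make audification concrete, here is a minimal Python sketch that writes a long one-dimensional data series straight to a WAV file. The file name and playback rate are illustrative assumptions, not details of the earthquake or solar radiation work mentioned above.

```python
# Minimal audification sketch: each data point becomes one audio sample.
# "measurements.csv" and the 44.1 kHz rate are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

data = np.loadtxt("measurements.csv")        # one long column of measurements

# Center and scale into the [-1, 1] range expected for floating-point audio.
data = data - data.mean()
data = data / np.max(np.abs(data))

# At 44,100 samples per second, millions of measurements play back in seconds
# or minutes, which is what makes audification practical for huge data sets.
wavfile.write("audified.wav", 44100, data.astype(np.float32))
```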
Alexander, who specializes in data sonification for his doctoral studies at the University of Michigan, said that sonification can help especially in areas where a visual representation of the data would be overwhelming or difficult to interpret. "If you think about what's happening underneath the hood, everything looks like squiggly lines—whether it's a wave form from a satellite observation, a needle moving back and forth on a seismograph, or measurements of microvoltages on the scalp from an electroencephalograph," Alexander said.
"Or Stravinsky's 'Firebird Suite,'" added Aaron Roberts, a heliophysicist who works in data sonification at NASA Goddard Space Flight Center.
The other standard method is parameter mapping, in which different aspects of a data set are related to distinct musical effects. The solar wind radio, built by Fabio Morreale and the Experiential Music Lab in collaboration with researchers at the University of Michigan, uses parameter mapping: the bulk speed of the solar wind sets the beats per minute, the proton density alters the volume, and the ion temperature changes the mode, consonance, and melody direction. "Lower ion temperatures result in music that is generally more dissonant, while higher temperatures result in more consonant and pleasant sonorities," Alexander explained.
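A rough sketch of that kind of mapping might look like the Python below. The value ranges, thresholds, and interval sets are assumptions made for illustration, not the rules actually used by the solar wind radio.

```python
# Illustrative parameter mapping in the spirit of the solar wind radio.
# The input ranges, output ranges, and interval sets are assumptions made
# for this sketch, not Morreale's actual mapping.

def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi], clamping."""
    x = min(max(x, in_lo), in_hi)
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def sonify_solar_wind(bulk_speed_km_s, proton_density_cm3, ion_temp_K):
    # Faster wind -> faster music (hypothetical 300-800 km/s range).
    bpm = map_range(bulk_speed_km_s, 300, 800, 60, 160)
    # Denser plasma -> louder output (hypothetical 0-20 protons/cm^3 range).
    volume = map_range(proton_density_cm3, 0, 20, 0.2, 1.0)
    # Cooler ions -> more dissonant pitch material, per Alexander's description.
    if ion_temp_K < 1e5:                  # hypothetical threshold
        intervals = [0, 1, 6, 11]         # semitones and tritones: dissonant
    else:
        intervals = [0, 4, 7, 12]         # major triad plus octave: consonant
    return {"bpm": bpm, "volume": volume, "intervals": intervals}

print(sonify_solar_wind(bulk_speed_km_s=450, proton_density_cm3=5, ion_temp_K=8e4))
```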
Alexander said the process for turning a data file into a useful sound file is similar to creating false-color images of celestial bodies. "We can transpose [the waves] into the range of human hearing, kind of like a false color image. We make an aesthetic decision about how to adjust the data so that we can appreciate what's happening," Alexander said.
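In practice, that aesthetic decision can be as simple as choosing a speed-up factor that carries the frequencies of interest into the audible band. The arithmetic below is hypothetical: the 0.01 Hz feature and the 440 Hz target are made-up numbers for illustration.

```python
# Choosing a transposition factor, in the spirit of Alexander's "false color"
# analogy. The 0.01 Hz feature and the 440 Hz target are hypothetical.
feature_hz = 0.01        # a slow oscillation present in the original record
target_hz = 440.0        # where we would like to hear it (concert A)

speedup = target_hz / feature_hz          # 44,000x faster playback
native_rate = 1.0                         # original record: one sample per second
playback_rate = int(native_rate * speedup)

print(f"Play the record back at {playback_rate} samples/s "
      f"to hear the {feature_hz} Hz feature near {target_hz} Hz.")
```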
Lily Asquith, a particle physicist at CERN, was one of the leads on the LHCSound project, in which scientists and musicians used simulated particle collision data to create music. The team used parameter mapping to create the musical pieces on their website. "We would have it where, for example, a louder sound is a larger object, or a higher frequency would mean something is moving more quickly," said Asquith. "There are probably 20-40 parameters for a data set. We found that a certain 5 parameters were most useful, so we used those in the parameter mapping."
Asquith, Alexander, Roberts, and their colleagues aren't the only ones interested in sonifying data—the International Community for Auditory Display (ICAD), founded in 1992, has hosted 19 conferences in 11 different countries, all focused on exploring how data can be represented in sound. ICAD held its most recent conference July 6-10, 2013, in Lodz, Poland.
Sonification for analysis?
Data sonification is interesting, of course, but can it actually be useful to scientists? Asquith gives a conditional "no"—at least for her field.
"Particle physics data analysis is statistical, so sonification probably couldn't yield any new insight," said Asquith. "I'm more interested in sonification for outreach purposes."
On the other hand, Roberts is more optimistic about the potential for sonification as an analysis tool. "I think of it in terms of data mining. You've got algorithms for data mining now that go in and calculate a whole bunch of statistics about this data set and look for patterns in those statistics, and you can do really well sometimes. (Sonification) is data mining with your ear. You're trained for those patterns already because you've spent a lifetime listening," said Roberts in an interview with Earthzine.

Alexander says that sonifying data may be a way to make use of some of the abundance of unused data available. "That's the crux of the problem—there's so much data," Alexander said. "There are missions planned that will be an order of magnitude higher in terms of data collection." Last year, the White House announced a Big Data initiative to foster the use and analysis of collected data, including funding for the EarthCube project.
Alexander has already had some success with sonification. Working with a research group at the University of Michigan, Alexander noticed an oddity in an audified data set that turned researchers on to an important line of inquiry. "I sonified something like 15 different data parameters from ACE. I heard some harmonics that sounded particularly strong and potentially interesting, but the group wasn't looking at that data." Alexander asked the researchers what could be causing the harmonics, and the ensuing investigation led them to a new discovery.
One of Roberts' and Alexander's goals is to determine exactly how quantitative auditory analysis can be. "Part of my research is to flesh out what the strengths of the ear are, quantitatively and qualitatively," Alexander said. "I've conducted some initial studies in which we have people listen to solar wind data sets and then visually observe a spectrogram of the exact same data set. There's a relatively high correlation between the assessments people make."
What data will work?
Like Asquith, Roberts and Alexander acknowledge the need to use data whose structure is conducive to sonification. "We're trying to determine what types of data are best suited to auditory analysis," Alexander said. "There are many aspects of the human auditory system that we need to consider—for instance, one sound can mask or block another. If you've ever tried to talk to someone at the side of a busy street, you have a hard time trying to hold a conversation if a big truck goes by."
Alexander said that the same concept applies to sonification of data, explaining that multiple sources in the same data set could overpower one another and impede analysis. "We don't hear things at equal loudness at all frequencies. Above or below a certain frequency, we have to turn a sound up for it to seem like it's at equal loudness," Alexander said.
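That frequency dependence can be quantified, at least roughly, with the standard A-weighting curve, which approximates how the ear's sensitivity falls off away from about 1 kHz. Using it to pre-compensate a sonification, as in the Python sketch below, is an illustrative suggestion rather than a method the researchers describe.

```python
# Approximate A-weighting (dB relative to 1 kHz), a standard rough model of
# how perceived loudness varies with frequency. Applying it as a gain
# correction in a sonification is an illustrative idea, not the researchers'
# stated method.
import math

def a_weighting_db(f):
    """A-weighting in dB at frequency f (Hz), normalized to 0 dB at 1 kHz."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (100, 1000, 8000):
    print(f"{f:>5} Hz: {a_weighting_db(f):+.1f} dB")
# A 100 Hz tone comes out around -19 dB, meaning it needs a substantial boost
# to sound as loud as the same signal level at 1 kHz.
```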
One of the biggest challenges to successful use of sonification is providing context to the audio. "It's just like if you were to open up the Astrophysical Journal to any random page, show it to someone on the street, and ask if they could learn anything from a random visual diagram. If they don't understand what's being represented, if they don't understand what the colors mean, if they don't understand the axes, they can't extract any of the information presented there," Alexander said.
Despite the challenges, Alexander and Roberts are hopeful that sonification can be used to make scientific discoveries. "I don't think we've fully exploited the potential," Roberts said.
"Ninety-nine percent of the time it's easy enough to explain what you're hearing, but that small fraction of the time where you hear something and it hasn't been documented before, that's really exciting," Alexander said.