Agnieszka Roginska - Immersed in Audio
Agnieszka Roginska is Professor of Music Technology and the Vice-Chair of the Music and Performing Arts Professions Department at New York University. She’s also this year’s President of the AES - the Audio Engineering Society, the global professional association for audio technologists with over 12,000 members.
Agnieszka’s specialism is 3D and immersive sound, and her research career has touched on every imaginable aspect of what sound can do in acoustic space - from its applications in gaming, VR and computing to the military, and from monitoring traffic to tracking the migration of birds.
Dubber Hi, I’m Andrew Dubber. I’m the director of Music Tech Fest, and this is the MTF Podcast. Your ears are amazing, and your brain, that’s amazing too. Sound, and particularly how we perceive sound in three-dimensional space, is this incredible phenomenon that we take for granted a little bit. But when you stop to think about it, as Agnieszka Roginska does, well, it’ll blow your mind, and maybe even become something of an obsession.
Agnieszka is a professor in the Music Technology Department at NYU, and this year she’s also the president of the AES, the Audio Engineering Society. And her specialism is immersive 3D audio for games, for VR, for movies, for music, and for a whole lot of stuff you might not have thought of, from the military to traffic management, educational psychology to bird migration. I spoke to Agnieszka from her lockdown in Woodstock, and even though it was just a Zoom call she managed to make it seem pretty immersive all the same. This is professor, president, Agnieszka Roginska. Enjoy.
Dubber Agnieszka Roginska, thanks so much for joining us for the MTF Podcast today.
Agnieszka Thank you for the invitation, I’m happy to be here.
Dubber You’re sitting… I can see you’re surrounded by nature, you’re not surrounded by studio equipment. I take it that you’re in lockdown but not in New York itself.
Agnieszka Yeah, unfortunately not. So normally I would be in New York City in Manhattan, in Greenwich Village, but given COVID-19 we left the city a couple of months ago. And we’ve been isolated, completely isolated, in the woods of Woodstock, New York, where it’s lovely here, surrounded by trees and deer and wild turkeys.
Dubber Fantastic. And I guess you get to spend your time being busy being the president of AES remotely. Do you want to tell us a little bit about what that means and what AES is?
Agnieszka Sure. That’s one of the hats that I wear, yes, is the president of the Audio Engineering Society. The AES is the largest organisation of professionals in audio and music technology, including producers, recording engineers, people who are interested in signal processing, acoustics, gaming, spatial audio, immersive sound… The list is very, very, very long. It’s a society that has been around for over 75 years, and traditionally it was started for people who were professionals in the field of audio engineering, which back then was very limited to recording, broadcast, and so on. And over the past almost eight decades now, the society has truly evolved to encompass the broader meaning of audio engineering, which of course includes gaming now. And the industry is growing at a tremendous rate, and so is the meaning of the Audio Engineering Society.
So it’s a society, it’s an organisation, that has over 12,000 members worldwide. In fact, we’re on all continents except Antarctica. And we have student members, from a very young age, in fact, to professionals who have been in this industry for a very long time, and some of them have retired and they’re giving back to the community. And it’s a really wonderful community of people who come together for the sole reason that they’re interested in audio, and they work in audio. That is their passion, and that is their work. And so we have both student sections and professional sections around the world, over 120 of them, and they’re growing every month, every year. And it’s a way for the community to come together and connect with professionals, and that means different things to different people, right?
So if you’re a student, it’s a wonderful way for you to connect to people who are professionals in this field, so that you can learn. You can learn things that are beyond everything that you’re learning in your academic institution, and connect with professionals at a very early age, and start to network and learn from all these wonderful people who are around you. If you’re at a… If you’re just graduating from college or you’re in your early part of the career, it’s an amazing way, again, to network and to get integrated into the community. To find job opportunities, and to really learn about what the industry is about. If you’re a little bit later on, mid-career or so… As you know, the industry keeps changing at a tremendous rate.
Dubber That’s for sure, yeah.
Agnieszka And I think it’s faster and faster as we move on with time, and you absolutely need to stay relevant. You need to keep learning. It’s not something that you learn how to do your craft once and then you’re set for life. You really need to keep learning, educating yourself. And so the AES provides you with this infrastructure, with this community, with this education, that takes you and helps you develop. Helps you develop as a professional and helps you develop your expertise and evolve as an audio engineer.
And then of course if you’re later on in your career, it’s also a great way if you’re in your retirement, or thinking about retirement, it’s a wonderful way to give back to the community. And it keeps the connection with the students or young professionals who are just starting out, and it’s a really wonderful way to keep in contact with the community.
So the AES is just this very large community of people who are around the world, and we’re here to help people connect and help people learn. And education, I would say, is at the forefront of what the Audio Engineering Society is about.
And of course, there is a whole aspect to it that deals with standards and setting standards for the audio, and we know about all these standards that the AES has set throughout the past many, many decades. So I’ve been… I am the president this year, so this year from January 1st.
Dubber So it’s a one-year gig, and you’re voted in, presumably?
Agnieszka Yes, so I was voted in. And in fact, the way the society is organised, there are always three presidents at any one time. There is the president-elect, which means that’s the person who will be the acting president next year. There is the acting president. So that’s my year now, as acting president. And then there’s the past president, who was the president last year. And the three of us work together with the board of directors, and also with the board of governors, who are representatives from the leadership around the world. And the AES also has a full-time staff and an executive director, Colleen Harper, and people who work with her to run the society from an infrastructure perspective.
Dubber And you’re also a professor.
Agnieszka I am indeed a professor. I’m a professor of Music Technology at New York University, in New York City.
Dubber Focussing on 3D audio, particularly.
Agnieszka Yeah. So that’s what I’ve been doing for the past… Gosh, about 25 years now I’ve been in the field of 3D audio, which as you can imagine looked a lot different 25 years ago than it does today.
Dubber What did 3D audio look like 25 years ago?
Agnieszka I will say that about 25 years ago… So, yeah, the mid-90s was when 3D audio started to gain momentum. Although immersive sound, 3D audio, dates back to prehistoric times. Our ancestors, when they lived in caves… Well, during the day they would be more or less in an anechoic, free-field environment. So when they went back to their caves they were just absolutely mesmerised by the sounds that they heard, and there’s a lot of evidence that points to this. The drawings that we now find in caves, dating back to prehistoric times, a lot of them are in acoustically important places. A lot of them are in places where the acoustics are just wonderful. And you can imagine, you go into these caves, you don’t have a flashlight, you don’t have your iPhone to guide you. So you really just have the sound of your voice to guide you, most often. So when you come into these areas where the acoustics are so amazing and enchanting and magical, that’s where they spent a lot of their time. So immersive sound dates way, way back.
Dubber Great space for storytelling, I would have thought, as well. With that sort of context, yeah.
Agnieszka Yeah, absolutely.
Dubber So, yeah, it’s got a history. But the technology of 3D audio, I think of things like quadraphonic sound, I guess, in the 60s. But then where are we at when you come into the picture?
Agnieszka So in the mid-90s, as I mentioned, this is where 3D sound really starts to gain momentum. Specifically for gaming, right? So this is where we have our Sound Blaster card. The computers are still not powerful enough to do any kind of meaningful real-time 3D audio processing. Most of the CPU is spent on graphics back then, but 3D audio gains a little bit of the importance, and this is where we start to develop a lot of the algorithms.
First of all, we start to gain a much greater understanding of how it is that we hear in three-dimensional space. How is it that we only have two ears, really just two channels on our heads, and yet we can tell where sounds are coming from? From the front and the back, and up and down, and near and far. And it was during that time, in the 80s and the 90s, that we were doing a lot of research about the fundamentals of spatial hearing. And this is really important because we first have to understand how we hear in three-dimensional space if we’re to have any possibility of recreating these kinds of illusions and simulations synthetically. So we’re doing a lot of research, and the computers are starting to become a little bit more powerful.
We’re also starting to understand that we can measure the filtering characteristics of human bodies. You know how we have, of course, a head that’s a certain shape. But we also have these funky flappy things on each side of the head, our ears, what are called our pinnae, and those contain the acoustical fingerprint of each location. And soon we start to realise that we are able to hear sounds in three-dimensional space because our bodies are basically these very complex directionally-dependent equalisers. And so when a sound comes from a specific direction it picks up an acoustic fingerprint based on the shape and size of your head, and specifically your ears. And we also start to realise that everybody’s ears and heads are unique, and everybody has a different acoustic fingerprint for each one of those locations. So we started to realise that we need to be mindful of what kind of filtering we do, so that your experience of 3D sound is excellent, but it’s going to be different from my experience of 3D sound because we have these different characteristics.
And of course, because machines are not powerful enough in the 90s, we’re doing a lot of research on, well, how can we make this so that everybody has a good experience of 3D audio? Especially in gaming, which was a really important application back then. So that’s where we were in the mid-90s. I did a PhD at Northwestern University, and this was basically what I was studying: how to acoustically measure these HRTFs, or head-related transfer functions, and superimpose them on any sound to create the illusion of hearing sound in a three-dimensional environment.
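The HRTF processing described here, measuring a pair of spatial filters for a direction and superimposing them on a sound, is at its core a convolution. A minimal sketch in Python with NumPy; the impulse responses below are invented placeholders for illustration, not real measured HRIRs:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a pair of head-related impulse
    responses (HRIRs) to place it at the direction they were measured for."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # 2-channel binaural signal

# Placeholder HRIRs: in practice these come from acoustic measurements
# of a listener's (or a dummy head's) ears for one azimuth/elevation.
hrir_l = np.array([0.0, 1.0, 0.3])   # nearer ear: earlier, louder
hrir_r = np.array([0.0, 0.0, 0.5])   # farther ear: delayed, attenuated
signal = np.array([1.0, 0.5, 0.25, 0.125])

binaural = render_binaural(signal, hrir_l, hrir_r)
```

In a real renderer the HRIR pair would be selected (or interpolated) per source direction from a measured set, which is exactly why individual differences in ears matter: the filters are personal.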
Dubber And is this what we call binaural audio?
Agnieszka Exactly, yeah. The true definition of binaural audio is two-channel audio that has spatial characteristics superimposed on it. So it’s not stereo, although it’s just two channels. It’s two channels that have these acoustically superimposed characteristics that give you information about not just what sounds you’re listening to but where the sounds are in azimuth, elevation, and also distance.
Dubber Right, okay. And what’s the… Apart from being able to replicate the sounds that are around us, what are the practical implications of that? How do you then apply that research that you’re doing?
Agnieszka So the applications are manifold. Right now we’re really starting to use binaural sound basically everywhere, especially with the emergence of VR and AR in a much more consumer-based way. Because VR has been around, again, for decades. When I started doing 3D audio we were doing VR applications as well, though VR in the 90s looked very different than it does today. So the applications are manifold. They are still in gaming, very much so. In music, we’re starting to see a lot of applications of binaural sound, and of immersive sound generally.
And I want to make a distinction between binaural sound and the overall definition of immersive sound because binaural is something that’s very specific: two channels with spatial characteristics superimposed on the sound. Whereas immersive audio can mean different things to different people. An immersive experience for somebody who’s a recording engineer, who is recording a concert, an orchestra, and creates the impression of you being in that concert space, that’s very different than for somebody who’s playing a first-person shooter game and is in an interactive environment where the sound has to change as your position and what you’re doing changes. And that’s also very different than if you’re doing an application in education, for example, and you want to superimpose virtual objects so that students can learn better about the topic they’re studying, and there has to be a component of interaction. Maybe you’re out in an augmented reality space, which has its own sound complexities, where now you have to be very mindful of how you integrate real sound objects with virtual sound objects, and how they blend together. So the application itself becomes very, very important.
Before I came to New York University I was working more in what’s called the mission-critical space, which is applications of… In my case, it was audio, and specifically 3D audio, but for military applications. I was working a lot with the army and the navy and NASA for various applications of 3D sound, and how they can be used to augment or improve the situational awareness and augment how a person perceives the information. Or how a person perhaps has better ability to listen to multiple communication lines coming to them at the same time. So depending on what the application is, you really have to be thinking about how… What kind of technology you use, and what kind of manipulations you have to use to make the experience meaningful for the application that you’re working with.
Dubber You’ve got a wonderful soundtrack of a bird that’s tweeting along with you in the background, and it makes me think about the quality of the microphones that we have available to us in everyday life. Because the fact that I can sit here in the north of Sweden and listen to a bird in your garden in Woodstock is phenomenal, and that it sounds so clear and so bright. Are microphones getting better? And particularly for things like immersive audio and 3D sound, is the technology improving for recording that?
Agnieszka Well it depends what you mean by improving, right? There are certainly technical advances being made. I’m specifically thinking of higher-order ambisonics microphones, microphones that are able to capture sounds from multiple directions at the same time. We now have these technologies, and their quality is getting better and better. We also certainly have a much greater availability of the types of microphones that are around for the applications that we’re working with.
So for example in the past few years, ambisonics, as you may know, has seen a tremendous resurgence. And it’s not because ambisonics is new. It’s been around since the 70s, since Michael Gerzon came up with the idea, but we have now found a very important application for it, which is virtual reality and augmented reality, needing to have a representation of sounds from all around us. So now, of course, we have a much greater abundance of first-order ambisonics microphones, and higher-order ambisonics microphones, things that we didn’t have in the past. And the quality is really excellent.
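For reference, first-order ambisonics (the B-format Gerzon devised) represents the sound field in four channels (W, X, Y, Z) rather than as speaker feeds. A sketch of the classic encoding equations for a mono source at a given direction, using the conventional 1/√2 weighting on the W channel:

```python
import math

def encode_b_format(sample, azimuth, elevation):
    """Encode one mono sample into first-order B-format (W, X, Y, Z).
    Angles in radians; azimuth 0 = front, elevation 0 = horizontal."""
    w = sample * (1 / math.sqrt(2))                       # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return w, x, y, z

# A source directly to the left (azimuth 90 degrees, elevation 0)
# lands entirely in the Y channel.
w, x, y, z = encode_b_format(1.0, math.pi / 2, 0.0)
```

Because the direction lives in the channel relationships rather than in a speaker layout, the same B-format recording can later be decoded to headphones, a speaker ring, or a VR scene that rotates with the listener's head.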
And I would say one of the things that I am very excited about is to see these kinds of professional-grade technologies becoming more available and cheaper for the consumer, so that accessibility is now better than it has ever been. And what does that mean? It means that people who in the past would never have had access to these amazing technologies and microphones, unless, let’s say, they were recording engineers with huge studios, now have that access. Kids in elementary school are doing recordings using ambisonic microphones. They may not fully understand how it works, but it doesn’t matter, because they can experiment and be creative with these technologies and learn about them, so that they can do things they otherwise wouldn’t be able to do. So I’m really excited that the accessibility is there now, and certainly the quality of microphones and headphones and technologies has been improving at a great rate.
Dubber Fantastic. Just before the coronavirus took off we were very involved in a project about sound design for urban environments, and how we could reimagine sound, and how sound is used with the IoT space, with sensors, with… How sound affects the city but also how we can redesign the sound of the city. And since then, of course, cities have shut down, and they sound totally different. Are we going to have the opportunity to rethink how cities should sound, or how they should use sound, as we come out of this, do you think?
Agnieszka Well it’s interesting because there are a lot of people who are doing research on the sounds of cities. In fact, we at NYU have a very large project called SONYC that captures sounds around the city 24 hours a day, seven days a week. The original seed of that project was a mission from one of our former mayors in New York City, who understood that noise is a big problem. Noise is a big problem in a lot of cities. And in order to mitigate noise, we first have to understand it. And so that project was started, it’s an NSF-funded grant now, and we have literally hundreds of microphone sensors scattered around the city, and they pick up the sound.
And so one of my parts related to this project is a project called CityTones, and CityTones aims at capturing sounds of cities… Think of a room tone, where you want to have the sound of an environment, or the nature environment or an urban environment, but where you don’t want to have a lot of point sources. So we’ve been collecting, through CityTones, sounds from all over the world, both nature sounds and urban sounds, and creating a large library that we use as the backdrop of virtual experiences that we want to create.
But what’s really interesting is that now we have the ability to compare and contrast the sounds that we recorded, let’s say, a year ago with what they sound like today. And the difference is striking, especially in urban environments like New York City or Paris and other large cities that normally have an enormous amount of sound: those sounds are gone. It’s so quiet, it’s almost eerie. And you think about what a difference humans make in the noise level of these cities. So I think this is a very interesting time for researchers to be capturing sounds, and not just sound but data from various sensors, but especially sound, to see what the human impact factor is, and how we’ve managed to change the soundscape of our environments.
Dubber And are we able to do that deliberately, in terms of deciding what cities should sound like? Particularly as electric vehicles start to become more prominent, and the actual… Theoretically, the noise floor drops. Just quieter doesn’t seem like a particularly ambitious goal, whereas designing the sound from scratch and thinking “What should the city sound like?” seems like something that might be an interesting project to tackle.
Agnieszka Yeah, I think it’s a very interesting project for somebody who wants to take it on. I do know that quieter is better for a lot of things. For example, there have been a lot of studies showing that children don’t learn as well when they’re in a noisy environment. When their schools are in a noisy environment there’s a big correlation between their test scores and the noise. So I think there’s something to be said for thinking about how loud our cities should be, and how much exposure we have. And this is not just coming from cars and the noises around us; think about how loud it is in New York City when you’re in the subway. It is loud. It’s 90+ dB SPL, and long-term exposure to that can cause significant damage.
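To give a rough sense of what 90+ dB SPL means: occupational guidelines such as the NIOSH recommended exposure limit allow 8 hours at 85 dBA, with the permissible time halving for every 3 dB above that. A small sketch of that rule of thumb; treat it as guidance for intuition, not a safety calculation:

```python
def safe_exposure_hours(level_db):
    """Permissible daily exposure under a NIOSH-style rule:
    8 hours at 85 dBA, halving for every 3 dB above that."""
    return 8.0 / (2 ** ((level_db - 85.0) / 3.0))

# At a subway-platform level of around 94 dBA, the recommended
# daily limit drops from 8 hours to about 1 hour.
limit = safe_exposure_hours(94)
```

By this rule, the 90+ dB levels Agnieszka mentions cut the recommended daily dose to well under two hours, which is why long commutes in that environment add up.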
Dubber As far as NYU is concerned, and the Music Tech programme, what is it that you group together as Music Tech? What is Music Tech at NYU?
Agnieszka That’s a great question. Every institution has a slightly different definition and a slightly different flavour of Music Technology. At NYU, Music Technology is certainly influenced by where we are. The fact that we are in New York City means we are driven by, and we also take advantage of, everything that is around the city, including all the studios, the entertainment industry, and so on. Music Technology at NYU is a programme that’s been around for over 45 years, and, again, its content has changed tremendously over those years. In fact, when it started it was the Music Business and Technology programme, and since then the two programmes separated, and now we have a separate Music Business programme and a separate Music Technology programme.
But at NYU Music Technology, we have several pillars of education that we focus on, and we completely underline to all our students that if you want to be in this field you have to know the fundamentals and what is under the hood, because the industry will change. And I say to all the students who are coming into the programme, “You know what, I guarantee you that in four years, when you graduate from this programme, the industry will look different. There will be another branch of music technology that has appeared, or the industry will have changed in one way or another. And you have to be ready for this change, and you have to be ready to continue learning and to continue evolving even after you graduate.” So what we underline in Music Technology at NYU is that you need a very strong foundation and basis of knowledge. You have to know the fundamentals. You have to know what sound is and how sound behaves. So you have to know the fundamentals of acoustics and signal processing and electronic music. You have to understand the history.
Because we are in the Department of Music and Performing Arts Professions, all Music Technology majors have to take the same classes as all other music majors. So you have to take music theory classes, music history classes, ensembles, and so on, so that you develop your ears as a musician. So that you understand how to hear and what to listen for. But beyond having that foundation, which includes acoustics, signal processing, electronics… All our undergraduate students learn how to build circuits, both analogue and digital, and they learn not just, for example, what an equaliser does but how an equaliser is built. And they build these equalisers so that they have a much better understanding of, and affinity for, what they’re working with. But beyond that, we are focussed on five pillars of research.
One of them is, of course, recording and production, and everything that that entails: recording techniques, reproduction techniques. Right now we’re doing a lot of work on recording techniques that involve immersive sound, capturing and reproducing recordings in an immersive environment. And of course I would include in that broadcast and live sound and mastering and mixing and everything that goes along with recording and production.
Another one is electronic music. And electronic music not just from a compositional aspect, which is of course very important. So creating sounds: our students still work with analogue technology. They spend some time in analogue studios working with analogue synthesisers and analogue tape. So working with analogue technology to create electronic music, but also from the other side of electronic music, which means building the controllers that electronic musicians work with. So building interfaces to control music, to control the performance, to control the experience of either the composer or the participant or the listener.
Another very strong branch of research is immersive sound, which is my main area of research. And this involves everything from basic research on how we hear sounds in three-dimensional space, to applying this and creating 3D environments, to doing sound design for gaming, to creating VR, AR, and mixed reality experiences. For example, I’ve been working a lot with collaborative music-making, where you have musicians who are in different locations and you bring them together in an environment they share, where they can make music together seamlessly, just as if you and I were in the same space. We should be able to make music in that same kind of capacity.
And another strong research area is music information retrieval, or, in general, music informatics, which means extracting data and information from music, or generally speaking from sounds. So, for example, if you go to Spotify and you say “I like this song, play me another song that sounds like this song”. So understanding the underlying principles of music, harmonic, rhythmic, etc., all these underlying principles of what makes one song sound like another song. Or, generally speaking, extracting data and understanding from just the audio data, to identify sounds, or specific bird species, or specific types of car sounds.
And then the last pillar is music cognition. So it’s not about what we hear specifically, but how we hear it, how we perceive it. How is it that music elicits all of this emotion in us, and creates this whole emotional component?
So those are the main pillars of Music Technology at NYU, and it’s a large programme. We have an undergraduate programme, a master’s programme, and a PhD programme, and a total of about 250 students now.
Dubber Right. And, not to put too fine a point on it, but what do they do when they finish?
Agnieszka So some of them go on to work at companies such as Apple, Dolby, Google, and they are signal processing engineers or they work on new products. Some of them go on to be recording and mixing and post-production engineers, and either are absorbed by studios or they start their own ventures, and they’re very successful. We have composers who also become artists and installation artists. We have some students who go on into academia. And especially PhD students, I would say that a lot of them go on into being academics and professors or they are doing research at large companies. So because the breadth is so wide, so is where the students end up after they graduate from NYU. But they’re all very successful.
Dubber And it’s interesting because Music Tech Fest came out of the MIR community. Michela Magas, the founder of MTF, was the scientific director for a roadmap for the European Commission on the future of that field. And so a lot of that community ended up within Music Tech Fest, but it’s really interesting to see how it has all shaped and evolved. Because I guess eight to ten years ago it was very much about recommender systems for playlists and those sorts of things, but now you’re talking about bird identification and other things. What other pathways have come out of that area?
Agnieszka For music information retrieval? I think that… Well, so a good example is how we’re gaining a lot of understanding about urban sounds and urban environments, and specifically what constitutes an urban soundscape and what kinds of sounds we can identify, and this has been very informative. If we want to mitigate noise, we first have to figure out what is causing noise. And sound is one of those things that once it happens, it’s gone, and so it’s not like any other event.
And so in New York City, we’ve been gaining a lot of understanding of what constitutes the New York City soundscape. And right now we are just starting another project that will help us understand not just what sounds we’re hearing but where they’re coming from and how they’re moving, so that we can create trajectories of these soundscapes. Cars and people and even migrations of birds, and which direction they’re moving in, so that we can be better informed at mitigating these noises.
Dubber When you say migrations of birds, do you mean tracking them with situated microphones in the field?
Agnieszka That’s right, exactly, yeah. So one of my colleagues, Juan Bello, is working on a project that specifically focusses on exactly that: tracking bird migrations, where you have distributed networks of microphones and you can identify the sound of the birds, but also keep track of the particular species of birds and how they travel through space.
Dubber Wow. I have to ask, how did you get started in this? What was the first path to the world of audio? Was it mixtapes? Was it listening to the radio?
Agnieszka It’s funny because I sometimes ask myself the question “How did I end up here?”. I’m a classical pianist, and I have been a musician all my life. When I went to university at McGill I started studying piano performance, and this was in the very, very early 90s. And McGill had at the time, and still has, an incredible Sound Recording programme and a Music Technology programme. Back then it was called Computer Applications in Music. And so I became curious about these computer applications in music. Of course back then a lot of it was analogue; it was really just the beginning of digital music technologies. But I became fascinated with it, and I ended up doing a double major in Piano Performance and Computer Applications in Music, and what that meant is I had to learn to program. In fact, my first programming class was assembler, and I learned to build compilers, and we started using tapes and doing electronic music, working with Moog synthesisers and running tape loops around the studio, and I thought that was the coolest thing ever.
And so I decided to continue studying in this space, and I ended up going to NYU for my master’s. And when I got to NYU, my first intention was to study sound recording because I was totally fascinated by it, but when I got to NYU I learned purely by accident about the field of 3D sound. And it was one of those moments that the angels were singing, and I thought “Okay, that’s it. That’s all I want to do”. And I literally lived and breathed, day and night, learning about 3D audio and programming 3D audio. It was the beginning of computers that could handle real-time 3D audio processing, and that’s what I ended up doing. And so I continued to study this at Northwestern, and I just dove deeper and deeper. First working more in the military space, and then now being at NYU and working with a lot of very talented students. And I’m just incredibly passionate about this field, and the more I learn about it the more passionate I become.
Dubber It sounds like… When I think of 3D audio, I think of multiple speaker arrays. I think of 36 speakers at different heights and angles surrounding me. What do I need to listen to immersive audio at home?
Agnieszka So right now I think that the world of immersive sound is evolving at a very rapid rate from different parts. So to answer your question about “What do you need to listen to immersive sound at home?”, of course you can experience immersive sound just by having two channels, right? Because really that’s all you need. What matters is what flows into your ears. But immersive sound isn’t just about what’s going through your ears. So sound is also experienced haptically.
Agnieszka Vibration. When you’re sitting in a movie theatre and the bomb goes off or the train passes by and you have that vibration coming from the sub… So you experience sound with your entire body. What is very exciting about right now, and specifically about home theatre, is that we’re starting to have technologies that allow you to customise your experience when you’re listening to your movie. And I’m thinking of Dolby Atmos, MPEG-H, which support what’s called object-based audio, where sounds are not just mixed, as they have been traditionally, for a specific channel reproduction, which means that…
Okay, in the past, when you mixed for stereo, you expected a person to have two loudspeakers, and they would be listening in stereo. Or if you were mixing something for 5.1, for surround sound, you expected that person to have a five-channel setup wherever they are, and they’d be listening over the five channels. But what’s happening now with object-based audio is we have the possibility of half-baking a mix, right? So we produce the mix, but every sound becomes an object, and becomes an object that has data associated with it. Which means that when you are listening to a movie in your home, let’s say it’s really late at night and you don’t want to bother your neighbours and you don’t really care about those bombs going off very loud, what you really care about is the dialogue, you now have the possibility to customise and adjust the level of your mix on the fly depending on how you want to hear it. And this is very exciting, right? Because it means that you can customise your experience. Let’s say you have a hearing impairment and, again, you want to just focus on the dialogue, you have the ability to do that. But what it also means is that if you have a five-channel setup at home, you have surround sound at home, you can listen to it on five channels. If you have 36 channels, as it sounds like you do, Andrew, you can listen to it over 36 channels, because you now have the ability to do the final mix on the fly. So it’s really a very exciting time for immersive sound.
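The idea Agnieszka describes can be sketched in a few lines of code. This is a toy illustration, not the actual Dolby Atmos or MPEG-H API: each sound is an “object” carrying samples plus metadata (here just a name and a gain), and the final mix is rendered at playback time, so the listener can override, say, the explosion level while keeping the dialogue intact. All names (`AudioObject`, `render`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    """One sound 'object': its audio plus listener-adjustable metadata."""
    name: str
    samples: list   # mono samples for this object
    gain: float = 1.0

def render(objects, overrides=None):
    """Mix objects into a single channel, applying any listener overrides.

    Because mixing happens at playback time, the same objects could just
    as well be rendered to 5 or 36 channels with a panning stage.
    """
    overrides = overrides or {}
    length = max(len(o.samples) for o in objects)
    out = [0.0] * length
    for obj in objects:
        gain = overrides.get(obj.name, obj.gain)
        for i, sample in enumerate(obj.samples):
            out[i] += gain * sample
    return out

dialogue = AudioObject("dialogue", [0.5, 0.5, 0.5])
explosion = AudioObject("explosion", [1.0, 1.0, 1.0])

# Default mix vs. a late-night mix that ducks the explosion but keeps dialogue:
default_mix = render([dialogue, explosion])
night_mix = render([dialogue, explosion], overrides={"explosion": 0.1})
```

The key design point is that levels live in metadata rather than being burned into a channel mix, which is what makes the on-the-fly customisation she mentions possible.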
Dubber So are the same kinds of advances happening in sound that are happening in vision, for instance? Because I know… I have a lot of Skype calls and Zoom calls and those sorts of things, and the picture quality is often a lot better than the sound quality in those sorts of things. Is there a disparity in terms of attention being paid to these things, or is that just my experience of the world?
Agnieszka Yeah, I think it depends on what community you talk to, right? If you’re talking to the vision community, of course, there are amazing graphics advancements happening, and we see this with renderings of avatars, and we see, especially with computer graphics, how far we’ve come. We wouldn’t have been able to do this 20 years ago, or the quality would have been much less than it is today. So even for consumer-grade communication, the quality of the visuals is increasing.
But if you talk to the audio community, I would say the same is true. We’re making tremendous advances in the rendering of audio and in audio communication, where we’re now starting to go beyond just the content that we’re hearing. It’s not just about hearing the dialogue, it’s also about the sound quality that we’re hearing. The fact that you can hear birds…
Dubber And a dog, yeah.
Agnieszka As I’m talking to you right now, just think about that. And we’re separated by thousands and thousands of miles.
Dubber Yeah, it’s amazing. So are the affordances of the recording technology having an impact on what sorts of music are being made?
Agnieszka My philosophy is that technology and art go hand in hand, and one has to drive the other and vice versa. Because the technology is evolving, it is giving new ideas and new forms of expression and creativity to artists, and artists are taking these technologies and running away with them, doing things that they wouldn’t have been able to do before. But also vice versa: because artists are creating new ways of making things, technology is catching up. So it’s this constant evolution moving forward. Technology goes forward, art and creativity go forward, and so on. And so I think we are doing things that we were not able to do before.
And even thinking about just now, the specific situation that we’re faced with, where people don’t get together as much as they used to, right? So now we have to be able to make music together across distances. Like at NYU in our department, all the ensembles, orchestras, jazz ensembles, percussion ensembles, everything had to be taken online. So now we have to be creative about how we make music together, make it sound good, and perhaps create new forms of music that don’t just let us do the things we’ve been able to do before. Let’s think of new ways of making music. Let’s use this in a way that we wouldn’t have been able to use it before. The fact that we cannot get together physically anymore means that we can get together in a remote setting with people we would never normally have made music with before. So now the geographical boundaries are gone.
Dubber But latency must be an issue for sure.
Agnieszka For sure. Latency is probably always the biggest obstacle in making music across distances, and in fact, we’re doing a lot of research on this. One of my main research projects is called the Holodeck. It’s an NSF-funded project, and we’re building the Holodeck just like in Star Trek, if you’re a Star Trek fan: an environment that can become any environment, and it’s also a collaborative environment. People who are across distances can now collaborate using common objects. So we talk about latency a lot.
Luckily within NYU, within our campus, or in the city, we have what we call a ‘triangle’ between Washington Square Village and Park, Brooklyn, and the medical centre. We have an incredibly fast network where the latency is on the order of one millisecond between the three. So we have basically no latency, and we can collaborate across distances. But of course, if we’re collaborating with our other campuses, such as our campus in Shanghai or Abu Dhabi, or with other institutions, we have this obstacle. And there’s a physical barrier, the speed of light. I’m sure somebody will solve that problem someday, but right now, in the best circumstances, we have the speed of light. So latency is always a problem.
But here’s where musicality and composition and creativity can come in. You can compose music that reduces this obstacle, music that can still be enjoyable under latency. Or perhaps there are composers who use the latency and, in fact, make it a feature of their compositions. And this is where that creativity comes in: artists and composers use the technology in ways that sometimes we would never even think it should be used, and make it into a new form of art and a new form of expression.
Dubber If I’m somebody listening to this podcast and I think “These are really, really interesting conversations, and I want to have more of these sorts of conversations”, you have an answer to that. You have a conference coming up.
Agnieszka Yeah, we have the Audio Engineering Society Convention coming up next week, from June 2nd through the 5th. It was supposed to be in Vienna, Austria, but of course we were not able to do it physically this year. So this year it’s all virtual, and anybody can register for the convention and will have access to literally dozens and dozens of sessions and paper sessions, and hear from the experts in the field of audio engineering, in the broadest sense of audio engineering. There will be a lot of sessions about immersive sound and gaming and applications in VR and recording and reproduction, so there are a lot of very exciting sessions going on. To find out more, you go to www.aes.org and you’ll be directed straight to the convention.
Dubber And obviously you’ll be presenting or keynoting.
Agnieszka I will be presenting, yes. I’ll be part of the opening ceremonies. I’m also going to be part of a panel discussion on binaural sound, and asking the question “How far have we come and where else do we have to go?”.
Dubber I have to ask you, because obviously you’re the expert on this and it’s something that I’ve never been entirely 100% convinced of, about the idea of binaural beats and brainwave entrainment. Is that something that you’ve looked into, and is it nonsense?
Agnieszka So I have a number of master’s students who have looked into binaural beats. And it’s really very interesting, because there is evidence that suggests that binaural beats truly do have an effect on how we feel and can change our state. I would say there’s a lot more research that needs to be done, but no, we see some evidence that binaural beats are interesting.
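For context on what a binaural-beat stimulus actually is: two pure tones, a few hertz apart, are played one to each ear, and listeners report perceiving the frequency difference as a slow beat. A minimal sketch, using illustrative values (a 440 Hz carrier and a 6 Hz offset) chosen for this example rather than taken from the interview:

```python
import math

SAMPLE_RATE = 44_100  # CD-quality sample rate, samples per second

def tone(freq_hz, seconds):
    """Generate one pure sine tone as a list of float samples in [-1, 1]."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

left = tone(440.0, 1.0)   # carrier tone in the left ear
right = tone(446.0, 1.0)  # 6 Hz higher in the right ear

# Pair the channels into stereo frames; played over headphones, the two
# tones never mix acoustically, so any perceived 6 Hz beat arises in the
# auditory system rather than in the air.
stereo_frames = list(zip(left, right))
```

The headphone delivery is the defining detail: with loudspeakers the tones would interfere acoustically and produce an ordinary amplitude beat instead.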
Dubber Interesting is a nice place to leave it. Agnieszka, thanks so much for your time today.
Agnieszka Thank you very much, Andrew. I appreciate it.
Dubber That’s Agnieszka Roginska, and that’s the MTF Podcast. Now, if you’re interested in attending the packed four-day programme at AES Virtual Vienna next week from the 2nd of June, head to www.aes.org now and register. The MTF Podcast is out every Friday, so don’t forget to subscribe. You can also rate and review wherever you listen to podcasts, we’d really appreciate it. And of course, you should share this with someone else you think might be interested in this sort of thing, particularly if they’re looking at getting into the whole world of audio careers and sound research. I’m Andrew Dubber, you can find me @Dubber on Twitter, and Music Tech Fest is @MusicTechFest pretty much everywhere. Enjoy the rest of your week, take care, and we’ll talk soon. Cheers.