Special Events
Presenter:
Thomas Dolby, Johns Hopkins University - Baltimore, MD, USA
Awards Presentation
Please join us as the AES presents Special Awards to those who have made outstanding contributions to the Society in such areas as research, scholarship, and publications, as well as other accomplishments that have contributed to the enhancement of our industry.
BOARD OF GOVERNORS AWARD:
Eddy B. Brixen
Edgar Choueiri
Linda Gedemer
Matt Klassen
Andres Mayo
Valeria Palomino
Alberto Pinto
Daniel Rappaport
Agnieszka Roginska
Lawrence Schwedler
Jeff Smith
Nadja Wallaszkovits
FELLOWSHIP AWARD:
Gustavo Borner
Christopher Freitag
Leslie Ann Jones
Hyunkook Lee
Andres Mayo
Bruce Olson
Xiaojun Qiu
Rafa Sardina
Frank Wells
DISTINGUISHED SERVICE MEDAL AWARD:
David Bialik
The Keynote Speaker for the 145th Convention is Thomas Dolby.
Thomas Dolby – musician, technologist, and educator – has a 35-year career of technical innovation. Perhaps most widely known for his seminal song and music video “She Blinded Me with Science,” Dolby blazed the trail for electronic music creators with his recordings and imaginative videos. He is also known for his work as a producer, as a composer of film scores, as a technical consultant on emerging entertainment platforms, and as a filmmaker. Since the fall of 2014, Dolby has held the post of Homewood Professor of the Arts at Johns Hopkins University in Baltimore, MD. Thomas Dolby’s AES Keynote address will focus on next-generation sound technologies, in particular adaptive/non-linear music and audio for games, VR/AR, “hearables,” and other new media platforms. The title of his speech is “The Conscious Sound Byte.”
A big difference between “real” and “electronic” sounds is that electronic sounds have zero awareness of each other. Sound bytes blindly follow orders, firing off (usually) as instructed by a human. Yet musicians playing “real” instruments listen, resonate, and respond to the music, the room, and each other in a matter of microseconds.
In the hands of a master arranger or programmer, this is not a problem. Many of the nuances of real music can be simulated quite effectively as processor speed, bandwidth, and resolution improve. But as entertainment becomes more interactive, with games, augmented reality, and “wearable” technologies, it is increasingly vital that electronic sounds and music learn an awareness of the context in which they are playing.
Soon, all the accumulated craft and skill of a century of post-production will have to operate in real time, controlled not by the programmer but by the users themselves via the choices they make. Is it time for us to reconsider why our sound and music files are so “dumb” and rigid?