Love it or hate it, autotune has been one of the most persistent trends in 21st-century popular music. Since Cher’s 1998 hit ‘Believe’ got the ball rolling and T-Pain proclaimed he was a ‘Rappa Ternt Sanga’ in 2005, ‘pitch quantisation’ has been a key component of charting music.
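For readers curious what ‘pitch quantisation’ means concretely: the core move is snapping a detected frequency to the nearest note of the equal-tempered scale. The sketch below is an illustrative toy (the function name and the A4 = 440 Hz reference are my own assumptions, not Antares’ implementation); real autotune also tracks pitch continuously and controls how fast it retunes.

```python
import math

def quantise_pitch(freq_hz, a4=440.0):
    """Snap a frequency to the nearest equal-tempered semitone.

    A toy illustration of pitch quantisation, not Antares'
    algorithm: real autotune also controls *how fast* the pitch
    is pulled towards the target note.
    """
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    # How many semitones away from A4, rounded to a whole step
    semitones = round(12 * math.log2(freq_hz / a4))
    # Convert the whole number of semitones back into a frequency
    return a4 * 2 ** (semitones / 12)

# A slightly flat A4 (435 Hz) is pulled back up to 440 Hz
print(quantise_pitch(435.0))  # → 440.0
```

Sung a little flat, corrected to dead centre: that snap is the whole aesthetic in miniature.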
Few innovations have been so reviled and so game-changing. Paul Reed Smith, the iconic guitar maker, went so far as to say the technology had ‘completely destroyed Western music.’
In fact, as the 20th century incorporated more and more technology into music production and performance, with artists in pursuit of a new generational aesthetic, it was only a matter of time before technology reached into the human voice as well as instrumentation.
Disdain for the technology has revealed a music culture that has normalised its own use of technology (heavily distorted guitar effects, for example) as authentic and artistic, and favours these vintage aesthetics over the new glitchy, artificial wave that has had such an impact in places finding a popular-music voice, like Nigeria or Colombia. Institutionalised double standards attack the technology’s perceived threat to ‘authenticity’ or ‘integrity’ on grounds of the critics’ own making, something which Pitchfork’s Simon Reynolds has suggested might be part of a more general ‘class reflex’ within music criticism.
And sure, autotune has become a convention, often overused, and often relied on by artists to make anything they say into the mic sound good. But there is nothing fundamentally different between autotune’s conventionality and other previously dominant conventions, and there have always been hack artists trying to make a quick buck riding the wave – this is nothing new.
Surprisingly, the maths behind autotune processing derives from algorithms used in oil prospecting. According to Pitchfork, autotune’s inventor, Andy Hildebrand ‘made his fortune helping the oil giant Exxon find drilling sites. Using fabulously complex algorithms to interpret the data generated by sonar, his company located likely deposits of fuel deep underground.’
After leaving the oil industry, Hildebrand founded Antares Audio Technologies and realised that the same maths he had used to map the geological subsurface could be used to correct a singer’s pitch. A more recent Antares development, ‘Throat EVO’, models the vocal tract as a physical structure that can then be manipulated on screen: enlarged, distorted.
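The link between seismic data and singing is often described, in accounts of Hildebrand’s work, as autocorrelation: sliding a signal against a delayed copy of itself and finding the lag at which it best repeats, which gives the period of the note being sung. A minimal sketch of that idea, assuming a mono signal array; this is a simplified toy, not Antares’ production algorithm.

```python
import numpy as np

def detect_pitch(signal, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate a signal's fundamental frequency by autocorrelation.

    A simplified sketch: slide the signal against itself and find
    the lag (within a plausible vocal range) where it best repeats.
    """
    signal = signal - np.mean(signal)      # remove any DC offset
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]           # keep non-negative lags only
    lo = int(sample_rate / fmax)           # shortest period considered
    hi = int(sample_rate / fmin)           # longest period considered
    lag = lo + np.argmax(corr[lo:hi])      # lag of strongest self-similarity
    return sample_rate / lag               # period in samples -> Hz

# A pure 220 Hz sine comes back within a fraction of a semitone of 220 Hz
sr = 22050
t = np.arange(4096) / sr
print(detect_pitch(np.sin(2 * np.pi * 220.0 * t), sr))
```

Once the period is known, retuning is a matter of resampling the waveform so that the detected pitch lands on the quantised target.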
An artist can now sculpt a digital throat and design a robot voice, down to the precise width of its imaginary vocal cords. This is a departure from the voice’s origin in the body; it now exists somewhere between the human and the robotic. The sounds that can be made are recognisable yet alien: cyborg voices that, used well, can imply states of mind or emotion that could not be glimpsed in an unprocessed voice.
I suppose one anxiety this technology could aggravate is that the voice, unlike the guitar, is a bodily instrument. Someone’s voice is ‘authentic’ and ‘original’ to them in a way that a guitar simply isn’t, and this common-sense belief fuels anxiety about autotune’s degrading impact on the ‘character’ of popular music.
But once we start using words like ‘authentic’ or ‘original’ when dealing with art we are entering dubious territory. Isn’t music the annexation of sound to the body in a way that could be seen as ‘inauthentic’ in the first place? Aren’t all forms of expression negotiated by our bodies, minds, environments and societies? Where in this matrix can we locate ‘authenticity’?
Humanity has always sought augmentation through and for expression. The difference today is that we have created a realm in which augmentation is no longer limited by the constraints of our bodies. Photoshop has meant we can change, down to the most infinitesimal details, how our bodies look. Social media has meant we can curate, down to the most carefully nonchalant caption, a constructed social identity.
Although autotune sounds spontaneous and liberated, anonymous sound engineers often spend days perfecting the vocal processing of artists like Future or Travis Scott for a single track. This heavily manufactured spontaneity reflects the same dynamic that underlies Photoshop and Instagram. We are creating digital spaces around us in which augmentation has infinite possibilities. No longer is expression necessarily negotiated by our bodies.
Autotune is an artistic index of this trend: the popular demand for voices that are familiar yet alien, augmented, cyborg, reflects the glittering artificiality of late-stage capitalism.
Autotune is a sign of, and a part of, a structure we are gradually organising around our individualities, one that enables us to manufacture ourselves, to satisfy our will-to-power and ease our rampant insecurities. Soon we may all be our own designer babies. And in some ways, we already are.