When it comes time for historians to analyse pop music from the first two decades of the 21st century, they will conclude that the biggest stylistic innovation (maybe the only stylistic innovation?) has been the widespread digital manipulation of vocals using the Auto-Tune program.
By that I don’t mean those artists using the program to adjust their vocals slightly to correct pitch, which is widespread but not especially noticeable as a style. I mean the abuse of the program, twisting vocals so far digitally that they become something transhuman.
Auto-Tune had been around for a minute before T-Pain came along. Invented by mathematician Andy Hildebrand, it was released to the public in 1997 and employed to dramatic effect on Cher’s 1998 hit Believe.
What started as a trick used by a couple of artists became a widely used but often maligned trend in mainstream pop, then a creative tool for respected artists, then a sound so ubiquitous that it is hard to imagine pop without it.
Of course, robotic vocal effects themselves are not new. They have been used in various forms since the vocoder, a 1930s Bell Labs invention, first crossed over into music. But one notable thing is that for a long time, robotic voices were used as a specific effect – often a way of communicating something about changing technology: from critical assessments of technology like Laurie Anderson’s O Superman (“hold me in your electronic arms”) or Radiohead’s Amnesiac album, to the novelty funk of Zapp’s 80s hit Computer Love, to the whole robot-as-musician shtick of Kraftwerk or Daft Punk.
The first conspicuous use of Auto-Tune was Cher’s 1998 hit Believe. At the time Auto-Tune was a new technology, and no one else had explored using it to completely warp a singer’s voice. The producers of Cher’s hit actually publicly lied about how they had created the effect in order to keep it a trade secret.
It was most of a decade before an artist would make overt use of Auto-Tune a permanent part of their sound. That artist was T-Pain. And though it didn’t seem likely at the time, he would go down as one of the most influential artists of his generation. What T-Pain did differently was that he didn’t use Auto-Tune to make his vocals pitch-perfect, nor did he use it as a one-off effect like Cher had in the pre-chorus of Believe. He used Auto-Tune to consciously manipulate his vocals into a compressed digital sound, and he did it all the time – in pretty much every line of every song.
The public loved it. T-Pain had an endless stream of hits from his own records, guest appearances and production credits on other tracks. Between 2005 and 2010, T-Pain appeared on almost 40 charting singles.
It was initially in the world of mainstream pop that the Auto-Tune sound took off, with artists like Kesha and Chris Brown joining T-Pain at the top of the charts. But Auto-Tune also spread throughout the hip hop world. Established rappers like Ludacris and Busta Rhymes recruited T-Pain for guest verses, as did Flo Rida for the megahit Low. Lil Wayne, one of the biggest names in rap at the time, quickly went from collaborating with T-Pain to joining him at the top of the charts with his own Auto-Tuned tracks.
Black Eyed Peas, always a group able to shape-shift according to musical trends, were another act to make Auto-Tune mainstream. I Gotta Feeling and its Auto-Tune-enhanced sugary hook felt genuinely inescapable for a while. Their other single from the same year, Boom Boom Pow, was similarly saturated in the effect, and its lyric seemed to capture the idea of Auto-Tune replacing the unadorned human voice as the sound of pop music – “I got that future flow, that digital spit… I’m so 3008, you so two thousand and late.”
Jay-Z memorably dissed T-Pain and the whole style in his song D.O.A. (Death of Auto-Tune), with the classic line “Y’all singing too much, get back to rap, you T-Paining too much”. But it was the executive producer of that Jay-Z album who had first opened the door for Auto-Tune into critically respected art – Kanye West, who got T-Pain to guest on his song Good Life.
It was Kanye too who pioneered the next stage in Auto-Tune’s development as a musical tool. His album 808s & Heartbreak was melancholy and bleak, Kanye’s voice disfigured into a mournful wail. He pushed this technique to the extreme a couple of years later on the wordless moaning coda of his epic track Runaway – one of the most critically adored singles off one of the most critically revered albums of our times.
T-Pain’s Auto-Tuned verses were almost always upbeat, but after Kanye the effect was increasingly used to communicate pathos. The mournful indie music of James Blake and Bon Iver is one example, but I am especially thinking of the emo-trap/mumble-rap of artists like Travis Scott and Future.
The rise of trap music cemented Auto-Tune’s place on the airwaves. Auto-Tune just seemed to mesh with the harsh synthetic hi-hats and snares of a trap beat. Artists like Chief Keef, Young Thug and Lil Uzi Vert, alongside those I just mentioned, helped make the Auto-Tuned trap sound ubiquitous. Migos’ iconic Bad and Boujee, full of lines like “Cookin’ up dope with a Uzi”, confirmed that even gangsta rap – that old authentic sound of the streets, where “realness” is prized – was now just as likely to be Auto-Tuned.
That was 2017, a decade after T-Pain. By this stage, Auto-Tune was everywhere. Disposable mainstream pop is full of it, famous examples being Rebecca Black’s Friday or the novelty electro of LMFAO. But so is the critically respected alternative pop of artists like Grimes and Charli XCX. Mainstream pop-rap superstars like Drake and Post Malone use it extensively, as do hugely successful EDM producers like David Guetta. EDM in fact frequently accompanies Auto-Tuned vocals with their instrumental equivalent – digitally pitch-shifted synths and bass wobbles. Non-Western countries actually seem to use the effect more than the US – the distinctive pop styles of Africa, the Middle East, India and Korea, and Pasifika reggae, all use Auto-Tune liberally.
It can even be found in punk and metal music – from the syrupy sweet vocals of pop-punk bands like All Time Low, to the rap/electro/rock hybrid of Falling In Reverse. Hip niche subgenres like vaporwave and cloud rap make extensive use of the effect, as do viral internet memes like Auto-Tune the News. The favourite album of last year for many indie blogs – by 100 Gecs – is possibly the most outrageously Auto-Tuned thing I’ve ever heard.
In the early days, many people would complain about Auto-Tune, mostly with the somewhat feeble reasoning that it showed someone couldn’t sing well (which would also suggest anyone using a distortion pedal couldn’t play guitar). Similarly, when “mumble-rap” was gaining popularity, many would unfavourably compare it to the lyricism of classic MCs, but these rappers were generally trying to create a vocal texture more than a lyrical masterpiece. Vampire Weekend were the first high-profile white indie band to use the effect, on their 2010 single California English. I remember them remarking at the time that music snobs constantly fault pop music for a lack of innovation, then as soon as something novel appears they criticise that too. Jay-Z recorded the most famous put-down of all, but by 2018 he and his wife Beyonce were recording the heavily Auto-Tuned trap song Apeshit.
These days there hardly seems any point in criticising Auto-Tune. The ship has sailed – to diss it now is like complaining in the 60s about electric guitars, or in the 90s about rappers not being able to sing.
Not long ago, I was listening to a friend’s playlist while we cooked dinner. The singing was Auto-Tuned, which is no real surprise. But it made me reflect on how Auto-Tune is now so commonplace that we mostly don’t even notice it. We don’t think of it as an effect someone is using; it is simply the normal sound of pop vocals. It was that realisation that set me thinking about the all-conquering rise of this computer program.
When, and why, did Auto-Tune become completely normalised? Asking this question made me think back over its lifespan and consider the context.
In 2005, as T-Pain appeared in the pop charts for the first time, there was another technological sensation sweeping through the music world: Myspace, and the birth of “social networking” websites. News Corp paid $580 million for Myspace in 2005, and by the middle of the next year it had surpassed even the big search engines to become the most visited website on the internet.
At the same time, internet dating was being transformed from a fringe activity somewhat looked down on into a completely mainstream way of meeting potential lovers. OkCupid, the website that helped take online dating into the mainstream, launched in 2004. By 2007, dating sites were second only to porn in paid online content. A decade on, a study of American heterosexual couples found that 40% had met online.
Computer gaming was also changing, with the internet allowing the development of massive online role-playing games. The most popular of these, World of Warcraft, launched in 2004 and within five years had over 10 million players.
What do these online developments have in common? For one, they are all ideas that have expanded massively in the years since. But another thing is that they changed how we experience the internet – each of them involves constructing a profile online that others interact with. The process is ontological. With these programs, we don’t just use the internet – we become the internet.
The rest, as they say, is history. Facebook, Twitter and Instagram took over from Myspace, and Tinder has eclipsed OkCupid, but the dominance of social media and online dating only increased. Meanwhile, Google pulled ahead of all competitors to become the dominant search engine. Partly this was because it too creates an online algorithmic profile for each searcher, adapting the flow of information to your own niche interests.
Similar to this was the rise of streaming platforms. YouTube, Netflix and Spotify have come to dominate their media fields, and each creates a profile for users in order to recommend content in line with their tastes.
All these developments were amplified by the launch of the iPhone in 2007 and the smartphones that followed. Once, we had to be sitting at a computer to interact with the internet; now it was with us at all times and in all places. The division between the online realm and the “IRL” physical world became very hard to define.
Over the last fifteen years, our online avatars have grown in stature and complexity. They are how we interact with friends, strangers and lovers. They are how we play, how we shop, how we are entertained and informed. We cultivate and manicure them in order to appear our best to potential mates and employers. For many of us, our physical selves live solitary and uneventful lives while our online avatars are constantly busy and important. For a generation raised with social media and smartphones, it is quite possible to feel socially anxious in person while being confident and extroverted online.
But our online selves are not just something we create and maintain. They shape us. As websites deploy sophisticated algorithms to target content at each user, and as online content makes up more and more of our lives, it is our online avatars that define what news we see, what new music and video we experience, what products are advertised to us, which friends we interact with, and what romantic possibilities are opened to us. As we increasingly use Facebook to find social events, and Google searches or maps to get around in the real world, it is our online avatar and its algorithmically defined preferences that determine the movements of our physical bodies.
In this state, existing partly in the physical world and partly in cyberspace, it makes sense that we increasingly listen to music made with that curious android voice of Auto-Tune. What had once seemed an unnatural affectation has come to seem more and more normal as it increasingly resembles our day-to-day life. And it’s not just the amount of Auto-Tuned music that has increased – it is also the scope of life experience that is represented in this art.
When T-Pain first hit the charts, he was almost always upbeat, just as the hype for early forms of social media was. His songs were like those ostentatious selfies taken to show how exciting your life is – from the nightclub glamour of Low to the conspicuous wealth display of comedy hit I’m On A Boat (“Everybody look at me ‘cos I’m sailing on a boat”). As the Black Eyed Peas’ “I gotta feeling tonight’s gonna be a good night” hook became inescapable, so too did the pouted lips and self-conscious poses of the selfies, and the feeling that social media was helping us to curate our lives, our connections to others, and even our society into something better.
Over the next few years, things started to change. As I’ve already said, it was Kanye’s 808s & Heartbreak album that began the change, and the mumble rap of Future and Travis Scott that cemented it. Auto-Tuned rap was increasingly a melancholy, bleak soundtrack to the alienation of a life always connected yet frequently alone – the mental strain of keeping up appearances online, the addiction to a social media drug that with each hit offers less and less satisfaction.
Like Kanye rapping about his romantic failures, or Future about his addiction to sedatives, our relationship to the once-exciting world of social media has gone sour – the optimism of a Twitter-enhanced Arab Spring revolution faded into the nightmare of Cambridge Analytica’s exploitation of online politics; the online connections of Facebook led only to increased social divisions; the visual beauty of Instagram filters slowly gave way to the realisation that views experienced with the naked eye can never quite match the digitally enhanced version. And yet we’ve gone too far to quit now – too much of our lives is online.
Over the last decade too, that traditionally authentic sound of the street, gangsta rap, has come to be Auto-Tuned more than any other style. But isn’t this symbolic too? The macho aggression and senseless violence depicted in gangsta rap is much more likely to be played out these days in a Twitter mob or online trolling than in a gang war on the streets. It’s not just the psychological violence of the “haters” either – real-life gun massacres are live-streamed on social media, and military bombs are dropped by drone operators sitting at a desk in front of a screen, their targets identified by algorithms tracing mobile phone metadata.
So many of our most visceral emotions and experiences are now mediated by digital technology – love, hate, desire, heartbreak, loneliness. Emotionally, our online personas are in some ways more “real” than our physical selves. Half human, half digital avatar, it makes sense that our art too exists somewhere between these realms – from digitally touched-up photos, to CGI movies, to content designed specifically to fit social media metrics, to text edited by AI algorithms, to the android sound of modern pop songs. Cyborg music for cyborg people, Auto-Tune is the folk music of the 21st century.
See also: an explanation of how Auto-Tune works.
When we talk about Auto-Tune, we’re talking about two different things. There’s the intended use, which is to subtly correct pitch problems (and not just with vocalists; it’s extremely useful for horns and strings). The ubiquity of pitch correction in the studio should be no great mystery; it’s a tremendous time-saver.
But usually when we talk about Auto-Tune, we’re talking about the “Cher Effect,” the sound you get when you set the Retune Speed setting to zero. The Cher Effect is used so often in pop music because it’s richly expressive of our emotional experience of the world: technology-saturated, alienated, unreal. My experience with Auto-Tune as a musician has felt like stepping out the door of a spaceship to explore a whole new sonic planet. Auto-Tune turns the voice into a keyboard synth, and we are only just beginning to understand its creative possibilities.
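The core idea can be sketched in a few lines of Python. This is my own toy model, not Antares’ actual algorithm: real Auto-Tune expresses Retune Speed in milliseconds and tracks pitch frame by frame, so the `speed` fraction here is a stand-in for that knob. The sketch just snaps a detected frequency to the nearest equal-tempered semitone, with zero speed giving the instant snap of the Cher Effect.

```python
import math

A4 = 440.0  # reference tuning, Hz

def nearest_semitone(freq_hz):
    """Snap a frequency to the nearest equal-tempered semitone."""
    n = round(12 * math.log2(freq_hz / A4))  # whole semitones above/below A4
    return A4 * 2 ** (n / 12)

def retune(detected_hz, speed):
    """Pull a detected pitch toward its nearest semitone.
    speed=0.0 snaps instantly (the 'Cher Effect');
    speed close to 1.0 barely corrects at all."""
    target = nearest_semitone(detected_hz)
    return target + speed * (detected_hz - target)
```

Under this model, a slightly flat A (430 Hz) comes out at exactly 440 Hz with speed zero, and only halfway corrected (435 Hz) with a gentler setting of 0.5 – which is roughly why the subtle “intended use” goes unnoticed while the zero setting sounds robotic.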
Music is a transmission medium for emotions. A confident and definite performance comes across. When you have a singer do take after take after take in search of technical perfection, you often end up with the sound of a bored and annoyed singer. Bored and annoyed singers are a drag to listen to, no matter how accurate their pitch is. Auto-Tune makes it impossible to sing wrong notes, so you can always use first takes, when the performance is freshest. You can also take tossed-off improvisation and make it sound studio-perfect. Auto-Tune inspires fearlessness, which inspires playfulness, which produces good feeling for everyone in the room. What more could you ask from a music tool?
Auto-Tune isn’t just a source of pleasure. It can also evoke dread.
For my tastes, the most musical uses of Auto-Tune come from contrasts. The extreme perfection works best when balanced by roughness and rawness elsewhere. In “20 Dollar” by M.I.A., her wordless Cher-effect melismas are balanced by loosely pitched uncorrected singing on the choruses and unpitched rapping in the verses. Also, M.I.A. layers distortion and reverb on top of the melismas to harshen them and remove their bubblegum quality.
When you give Auto-Tune an ambiguous or microtonal pitch, you get the characteristic warble between adjacent scale tones. The warble has a delightful set of qualities of its own. It introduces new rhythms into previously rhythmless sustained notes. If you add a little digital delay, the warble locks satisfyingly into the beat of the song. A quick fillip to a neighboring chord tone that would normally pass unnoticed by singer and listener alike suddenly takes on dramatic musical significance when exaggerated by Auto-Tune. Cultures that favor melismatic vocal techniques naturally find this effect to be fascinating. For example, here’s an Algerian tune called “Lkit li nebghih” by Cheba Djenet. (Hat tip to Jace Clayton for this example.)
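The warble falls out naturally from quantization. In this toy sketch (my simplification, assuming a simple nearest-semitone snap rather than the real frame-by-frame tracker), a sustained note drifting around the boundary between A4 (440 Hz) and Bb4 (about 466.2 Hz) gets snapped alternately to one note, then the other:

```python
import math

def nearest_semitone(freq_hz, ref=440.0):
    """Snap a frequency to the nearest equal-tempered semitone (A4 = 440 Hz)."""
    n = round(12 * math.log2(freq_hz / ref))
    return ref * 2 ** (n / 12)

# The boundary between A4 and Bb4 sits near 452.9 Hz (the quarter-tone).
# A voice hovering around it crosses back and forth as it drifts:
drift = [451, 454, 452, 455, 451, 456]  # detected pitch, Hz
warble = [round(nearest_semitone(f), 1) for f in drift]
# alternates between 440.0 and 466.2 – a new rhythm out of a sustained note
```

A pitch the singer heard as one steady note becomes a rhythmic alternation between two, which is exactly the exaggeration of small fillips described above.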
By flattening the vertical pitch aspect of a singer’s voice, Auto-Tune draws out the horizontal qualities: the vibrato (as opposed to tremolo), the nasality versus the throatiness, the overtones and partials.
Auto-Tuned speech has remarkable qualities of its own, as rappers discovered years ago. Human speech is strongly tonal to begin with. When you automatically tune it to the closest piano-key pitches, you can more easily hear the melodies that were already present.
By quantizing and digitizing information, you make it easier to memorize, store and copy. I find myself humming phrases from the Gregory Brothers’ videos the way I hum Andrew Lloyd Webber. The subtle nuances of the Double Rainbow guy’s speech, with all its pitches on a continuous spectrum, are difficult to remember and imitate, but once he’s Auto-Tuned, it becomes effortless.
It’s easy to make jokes about talentless singers who can become famous using Auto-Tune. Spend some time in the studio, however, and you discover quickly that to sound good through Auto-Tune, the singer has to be good to begin with. The software can’t add emotion, tone, rhythm, or charisma. We shouldn’t have been at all surprised to discover that T-Pain sings like an angel without Auto-Tune.
Even more than T-Pain, Kanye West has come to be the avatar of Auto-Tune in hip-hop. My friend Greg Brown has some thoughts about that.
I’ve been listening to 808s and Heartbreak and Twisted Fantasy. I’m really enjoying them. Far more than I thought I would. I think Auto-Tune here is somehow protective for Kanye when he is expressing emotion in a genre where that is not really smiled on. I haven’t quite put my finger on it, but I think the dehumanizing of the human voice is somehow a foil for the expression of inner turmoil. It’s haunting… The hard part for me to wrap my head around is the fact that Auto-Tune is a filter, a dehumanizer, and it manages to make Kanye both closer and more human.
Maybe Auto-Tune heightens emotion by making the melody totally unambiguous. It gives the sung notes an organ-like clarity and distinctness, and slight pitch nuances get exaggerated into stairsteps and warbles. The filter changes the voice’s upper partials in odd ways that add to the pathos. Also, once we’ve come to expect the filtering, removing it can be a dramatic effect. Kanye’s raw singing voice is so comically bad that when you hear it unfiltered, it’s startling. In this day and age, hearing such a major pop star sing terribly is more remarkable than hearing him sing perfectly.
In his essay “Understanding Kanye: Sweet, Sweet Robot Fantasy, Baby”, Mike Barthel describes Ye as turning himself (figuratively) into a robot.
It wasn’t the raw emotion of humans, but the synthesis of emotional impulses and mechanical restraint, a computer’s inauthentic attempts at automatic expression which nevertheless sprung from a real human need to communicate.
Here we have it, the perfect encapsulation of what it’s like to be a feeling human being in a hypertechnological, hypercapitalist society. Auto-Tune gives that indefinable feeling a literal voice. No wonder it’s so popular.