Will AI Replace Musicians?

The impact of AI on music can’t be overstated. Already we’re seeing AI-generated tracks, remixes, and mashups made at scale with varying degrees of human intervention.

Prompt engineering has become a discipline in its own right on platforms such as ChatGPT and Midjourney, and it’s only a matter of time before something similar happens in the music space.

It’s easy to assume that AI will eventually replace all sorts of music-making. It’s equally easy to assume that AI will never be able to replace human creativity and emotion, because all it does is generate things based on past iterations of creative work.

No one has a crystal ball, but what’s likely to happen is somewhere in the middle: some aspects of the music-making process will be relegated to AI, while others will still require a human touch (if only because music is made for humans to listen to!).

AI taking over music production?

The parts of music production most likely to be taken over by AI are anything to do with organizing and retrieving assets (as with samples in sample libraries) and some sound design.
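
No such AI librarian is standard yet, so purely as an illustrative sketch: the retrieval side often boils down to nearest-neighbour search over audio feature vectors. Everything below (the file names, the tiny three-dimensional vectors, the find_similar helper) is hypothetical; real tools would use learned audio embeddings with hundreds of dimensions.

```python
import numpy as np

# Hypothetical sketch: rank samples in a library by cosine similarity
# to a query vector ("find me something that sounds like this kick").
library = {
    "kick_01.wav":  np.array([0.9, 0.1, 0.2]),
    "snare_03.wav": np.array([0.2, 0.8, 0.3]),
    "hat_12.wav":   np.array([0.1, 0.2, 0.9]),
}

def find_similar(query, library, top_k=2):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Sort sample names by similarity to the query, highest first
    ranked = sorted(library, key=lambda name: cosine(query, library[name]),
                    reverse=True)
    return ranked[:top_k]

print(find_similar(np.array([0.85, 0.15, 0.25]), library))
```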

For example, would you still need to buy or download an 808 kick (there are literally millions of these online that sound more or less alike) when an AI sound designer could take all of those 808 instances and generate one for you, complete with sliders that let you specify its characteristics (e.g. ADSR, saturation, EQ)?
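
As a rough sketch of what those sliders might map to under the hood, here is a minimal procedural 808-style kick in Python: a sine wave with an exponential pitch sweep, an exponential amplitude decay standing in for the ADSR controls, and tanh soft clipping standing in for saturation. The parameter names and defaults are illustrative, not any existing tool’s API.

```python
import numpy as np
from scipy.io import wavfile

def render_808_kick(
    sr=44100,
    length=0.8,        # seconds
    start_freq=120.0,  # where the pitch sweep starts (Hz)
    end_freq=45.0,     # where the pitch sweep settles (Hz)
    pitch_decay=0.08,  # how fast the pitch drops (s)
    amp_decay=0.5,     # amplitude decay time, the "D" of the ADSR slider (s)
    saturation=2.5,    # drive into a tanh soft clipper
):
    t = np.linspace(0, length, int(sr * length), endpoint=False)
    # Exponential pitch envelope sweeping from start_freq down to end_freq
    freq = end_freq + (start_freq - end_freq) * np.exp(-t / pitch_decay)
    # Integrate frequency to get phase, then take the sine
    phase = 2 * np.pi * np.cumsum(freq) / sr
    body = np.sin(phase) * np.exp(-t / amp_decay)
    # Soft clipping as a simple stand-in for a saturation control
    body = np.tanh(saturation * body) / np.tanh(saturation)
    return (body * 32767).astype(np.int16)

wavfile.write("kick_808.wav", 44100, render_808_kick())
```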

Synth patches might also gain an AI component going forward: imagine prompting a soft synth like Serum for a soothing, ethereal pad that plays major or minor 7th chords within a particular scale while you play only root notes.
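
No synth exposes this today, so treat the following as a speculative sketch of the note-mapping logic such a patch might run behind the scenes: expanding a single played root note into a diatonic 7th chord within a chosen scale (C minor here; the function name and scale table are made up for illustration).

```python
# Hypothetical sketch: expand one played root note into a diatonic 7th chord.
C_MINOR = [0, 2, 3, 5, 7, 8, 10]  # scale degrees as semitones above the tonic

def seventh_chord(root_midi, scale=C_MINOR, tonic=48):
    """Stack diatonic thirds (1-3-5-7) on a root note that sits in the scale."""
    degree = scale.index((root_midi - tonic) % 12)  # root's position in scale
    octave_shift = 12 * ((root_midi - tonic) // 12)
    notes = []
    for step in (0, 2, 4, 6):  # root, third, fifth, seventh
        octaves, pos = divmod(degree + step, len(scale))
        notes.append(tonic + octave_shift + scale[pos] + 12 * octaves)
    return notes

print(seventh_chord(48))  # C3 root -> C minor 7: [48, 51, 55, 58]
```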

Mixing is another part of music production that will be influenced by AI. There are already plugins that analyze audio to make it conform to a certain sound or standard (e.g. Gullfoss, Soothe).

While human intervention and taste remain the final arbiters of whether the results get used, it’s not hard to imagine a scenario where an AI-powered plugin EQs and compresses several tracks in a song according to a user specification or preset.
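
There’s no single API for this yet, so as a toy illustration only: the core of “compress this track according to a preset” is a gain computer that attenuates everything above a threshold by a ratio. The preset dictionary and function below are hypothetical, and a real compressor would add attack/release smoothing.

```python
import numpy as np

# Hypothetical preset a user (or an AI assistant) might hand to the plugin
PRESET = {"threshold_db": -18.0, "ratio": 4.0, "makeup_db": 3.0}

def compress(samples, threshold_db, ratio, makeup_db):
    """Static gain computer: attenuate level above the threshold by the ratio.
    (A real compressor would smooth the gain with attack/release envelopes.)"""
    eps = 1e-9  # avoid log(0) on silent samples
    level_db = 20 * np.log10(np.abs(samples) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return samples * 10 ** (gain_db / 20)

track = 0.5 * np.random.randn(44100)  # stand-in for one track of a mix
processed = compress(track, **PRESET)
```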

Waves’ plugin-chain subscription service already does something similar (which it markets as letting you get results comparable to those of the Grammy-winning engineers whose names appear on the presets). iZotope’s Ozone mastering suite likewise includes AI-driven features for analysis, EQ, and compression.

It’s the Operator who pulls the strings

The common thread that runs through all of this AI “creativity”, though, is the person operating the software. They are still the one making the decisions, choosing which aspects of the process to hand over to AI, and discarding irrelevant results as needed.

Furthermore, live shows will still require humans to be the face of the performance. The DJ industry is one example: software can mix tracks flawlessly for hours, but audiences still want to see someone DJing onstage.

Will this change when arena-level AI-generated visuals become the actual performers? It remains to be seen…

Perhaps the future of music and AI isn’t so much a total elimination of the human in the process, but more like an evolution in what it means to be a musician, an artist, and a creative.

This will most likely lead to an increase in the number of people making music, a big chunk of whom will not be trained musicians or have any musical background.

Rather, they will be operators who know how to use new tools that are becoming increasingly powerful and influential in creativity across the board.