AI bot artwork

Future robot overlord or useful tool: Interviewing an AI

AI is here. It’s already making waves in fields from academia to art, with discussions descending into arguments about the role and the threat (warranted or not) of this new technology. Will AI be our future robot overlords or is it simply a useful tool that could take care of the mundanity of everyday life? The jury is still out, but one thing is for certain it’s here to stay and people are already investigating the power and shortcomings of this nascent technology, especially in the corners of music that we operate in.

In a recent interview from Test Pressing with electronic outfit Plaid, Andy Turner suggests; Personally I think AI could be a brilliant assistant to us. It could really open up creativity and things that we’ve never even imagined. So it’s kind of the best way we can look at it. And it’s impossible to stop.” He is not alone in his assumptions. Other forward-thinking artists like patten have already started incorporating it into their work to realise new pieces and possibly new musical languages through AI. And there’s also David Guetta on the other end of the spectrum which could argue for the negative effects of the technology. (But is it the AI or the human to blame for that one?)  

It shouldn’t come as a surprise that electronic music artists have adopted the technology so freely (some would say recklessly). As music made from machines the robot has always been central to the music. It’s in the human influence of said machines where the music has always taken shape however, and for the moment it doesn’t seem that AI has quite garnered the idea of creativity. You could even argue, humans have completely extricated that creative soul in the context of modern House and Techno, but that’s a whole other conversation. 

What I would like to know is how this technology is taking shape and what are the pitfalls and the opportunities awarded by this technology. So I went right to the source and started interviewing open AI’s ChatGPT on the topic of… well it… I guess. 

At no point do you get the sense you’re talking to a person throughout, but what I did get was a suspicion that I often get when I’ve interviewed real artists and people. There’s an underlying sense that the answers I receive are somewhat scripted, conditioned by unconscious social dogmas that permeate through online interviews. It never aims to offend or be critical, always following up one argument with another and curiously, it lies. Possibly ingrained in its programming to always deliver the task at hand, it will often colour facts with random sentences. The links it offers are the perfect example, most of which bring up a 404 code. 

At times during our questions, especially when I questioned something it answered, chatGPT would start writing something only to throw up an error and make the point moot, much like an artist or DJ that won’t feel too comfortable in a certain line of questioning. 

At the end of the day it supports many of the ideas artists like Plaid and patten (curious it didn’t reference those artists) had for AI, as a tool for musicians to exploit to make music. It supports a more embracing approach of AI technology, but then again… isn’t that exactly what it wants?

Hello ChatGPT. Do you have time to talk?

Of course! I’m always here to help with any questions you may have. What can I help you with today?

I’m curious about the role AI will play in the future development of music. Are people using you to make music yet?

Yes, AI is already playing a role in the future development of music. There are several ways that AI is being used in the music industry, including:

  1. Music generation: AI algorithms are being used to generate music in various styles, such as jazz, classical, and pop.
  2. Music composition: AI is being used to assist composers in the music creation process, from generating musical ideas to composing complete pieces.
  3. Music production: AI algorithms are being used to automatically mix and master tracks, making it easier for music producers to create polished recordings.
  4. Music recommendation: AI is being used to recommend songs and playlists to listeners, based on their preferences and listening history.

Can you give me some examples of people using AI to make music with links to the music?

Sure, here are some examples of people using AI to make music, along with links to the music:

  1. OpenAI Jukebox: A generative model that creates music in a specific style, given a prompt such as a piece of lyrics or a genre. You can listen to samples of music generated by the model at https://openai.com/jukebox/.
  2. Amper Music: A platform that uses AI to generate original music tracks for videos, presentations, and other multimedia projects. You can listen to samples of the music generated by the platform at https://www.ampermusic.com/music/.
  3. AIVA (Artificial Intelligence Virtual Artist): A platform that uses AI to compose music in a variety of styles, from classical to pop. You can listen to samples of music composed by AIVA at https://aivamusic.org/.
  4. AUBot: A platform that uses AI to generate music in the style of specific artists. For example, you can have the AI generate music in the style of Aphex Twin or Radiohead. You can listen to samples of the music generated by the platform at https://aubot.com/.

What has the critical reception been like for some of these pieces?

The critical reception of AI-generated music has been mixed. While some people appreciate the creativity and uniqueness of AI-generated pieces, others feel that they lack the human touch and emotional depth that makes music so powerful.

Critics of AI-generated music often argue that it lacks the nuance, expressiveness, and spontaneous nature of human-created music. 

On the other hand, proponents of AI-generated music argue that it has the potential to broaden the musical landscape by generating new styles and forms of expression. 

What is your opinion on this discourse?

As an AI language model, I don’t have personal opinions or emotions. My goal is to provide objective and accurate information to the best of my abilities based on the information and training data I have been given.

However, it’s clear that AI has the potential to play an increasingly important role in the music industry, both as a tool for creators and as a source of new and innovative musical styles. How this potential is realized will depend on the continued development of AI technology and the evolving opinions and preferences of listeners and industry professionals.

In what ways does an AI generate music or assist a musician in the creation of music?

AI can generate music or assist a musician in several ways, including:

  1. Music generation: AI algorithms can generate complete pieces of music, including the melody, harmony, and rhythm. These pieces can be based on pre-existing styles or can be completely new and unique.
  2. Music composition: AI algorithms can assist composers in the creation process by suggesting musical ideas, such as chord progressions or melodic motifs. This can help to jumpstart the creative process and provide new inspiration for the composer.
  3. Music arrangement: AI algorithms can assist musicians in arranging and orchestrating their compositions, by suggesting different instrumentations or suggesting ways to enhance the overall sound of the piece.
  4. Music production: AI algorithms can assist music producers in tasks such as mixing and mastering tracks, by automatically adjusting levels and equalization, compressing and limiting audio, and adding effects such as reverb and delay.
  5. Music analysis: AI algorithms can analyze music to determine its structure, key, and tempo, making it easier for musicians to understand and work with existing pieces.

In terms of music generation, how does it generate sounds?

AI-generated music typically uses a combination of machine learning algorithms and music theory to generate sounds.

One approach is to use generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), which can learn the patterns and structure of a particular musical style and then generate new music that fits within that style.

Another approach is to use rule-based systems, where specific rules and algorithms are programmed to generate music that follows certain musical conventions and structures. For example, a rule-based system might generate a melody by randomly selecting notes from a certain scale, while following specific rhythm patterns and chord progressions.

In both cases, the generated music is typically refined and post-processed by human musicians or music producers to ensure that it sounds polished and professional.

Can it generate sounds?

Yes, AI can generate sounds. AI algorithms can generate musical notes, drums beats, and other musical elements, which can then be combined to create complete pieces of music.

The sounds generated by AI can be synthesized using software synthesizers or can be created using samples of real instruments. In either case, the AI algorithms determine the pitch, duration, and volume of the generated sounds based on the musical style and structure that it has been trained on.

It’s important to note that while AI can generate musical sounds, it is still limited by the quality of the training data and the sophistication of the algorithms being used. As AI technology continues to advance, it is likely that the quality and expressiveness of AI-generated sounds will continue to improve.

Would it be correct to assume then that AI would be more suited to making electronic music at this moment in time?

It could be argued that AI is well-suited to generating electronic music, as electronic music often relies on repetitive patterns and well-defined structures that are easier for AI algorithms to model and generate. However, AI can also be used to generate other types of music, such as classical or jazz, if the training data and algorithms are designed to capture the relevant musical patterns and structures.

What about Djing; has AI made any strides into that artistic field?

Yes, AI is beginning to make strides in the field of DJing. There are already AI-powered DJ systems available that can analyze music and automatically create seamless transitions between tracks. These systems use machine learning algorithms to analyze the tempo, key, and structure of the music, and then use this information to find tracks that are compatible and match well together.

In some cases, AI-powered DJ systems can even generate new music on the fly, using algorithms to create transitions and remixes that are unique to each performance.

While AI-powered DJ systems can be very effective in creating smooth and seamless transitions between tracks, they still lack the creativity and expressiveness that comes with human DJing. 

It seems you are very keen on the continued human influence in music. Are there no ambitions to remove human involvement completely from music in the future?

There are certainly some who see AI as having the potential to completely replace human involvement in music in the future. However, it is unlikely that AI will completely replace human musicians and music producers in the near future.

Music is an art form that requires creativity, expressiveness, and emotional connection. While AI can generate music that is technically sound and follows certain musical rules, it still lacks the intuition, soul, and emotional depth that comes with human musical expression.

In addition, music is often created and experienced in a social context, and many musicians and music fans value the human interaction and collaboration that comes with creating and experiencing music together.

How did you come to the conclusion that music requires “soul” and “emotional depth”?

The conclusion that music requires soul and emotional depth is based on the long-standing tradition and history of music as an art form. Throughout human history, music has been used to express emotions, tell stories, and connect with listeners on a deep, emotional level.

Many musicians and music fans value the emotional impact of music and believe that music can have a profound effect on the human experience. They see music as a form of expression that goes beyond technical skill and musical rules, and that incorporates elements such as creativity, intuition, and emotional depth.

This view is supported by scientific research, which has shown that music can have a powerful impact on the brain and emotions. For example, studies have shown that music can affect brain activity, mood, and behavior, and that different types of music can evoke different emotional responses.

It seems that AI’s methods are very much based on the musical parameters laid out by humans in the past. In what way does it innovate or improve on music or musical techniques?

AI has the potential to innovate and improve on music and musical techniques in several ways:

  1. Automation: AI can automate various tasks in the music production process, such as mixing, mastering, and even composing. This can save time and resources, and allow musicians and producers to focus on other aspects of the creative process.
  2. Experimentation: AI can be used to explore new musical possibilities and techniques that might be difficult or time-consuming for human musicians to achieve. For example, AI can generate music that spans multiple genres or incorporates new and unusual sounds and textures.
  3. Customization: AI can be used to generate music that is customized to specific needs and preferences. For example, AI-generated music can be tailored to a specific mood, style, or tempo, or it can be generated to match the style of a particular artist or genre.
  4. Inspiration: AI can be used to inspire and stimulate the creative process for human musicians. By generating new and unexpected musical ideas, AI can challenge human musicians to think outside the box and push the boundaries of their own creativity.

Does it have to adhere to standardised forms of harmony, melody and rhythm?

AI can be programmed to adhere to standard forms of harmony, melody, and rhythm, but it can also be programmed to deviate from these standards and generate music that is unconventional or experimental.

In other words, the musical parameters that AI follows are determined by the humans who design and program the AI systems. If a human programmer wants the AI to follow certain musical rules, they can code the AI to do so. If they want the AI to generate more experimental or unconventional music, they can program it to do that as well.

It is important to note that the musical parameters that AI adheres to can also change over time as AI technology and musical tastes evolve. As AI systems become more advanced and humans gain a deeper understanding of how to use AI in music, the musical parameters that AI follows are likely to change and expand.

What are some of the most common technical challenges facing AI in terms of creating music currently?

There are several technical challenges facing AI in the creation of music, including:

  1. Understanding and capturing musical structure: One of the biggest challenges facing AI in music is understanding and replicating the complex structure of music. This includes understanding the relationships between melody, harmony, rhythm, and timbre, as well as the ways in which these elements interact to create a cohesive and expressive piece of music.
  2. Creating music with emotional depth: Another challenge facing AI in music is creating music that is emotionally impactful and that connects with listeners on a deeper level. This requires the AI to understand the nuances of human emotion and to generate music that conveys those emotions in a meaningful and authentic way.
  3. Overcoming the limitations of current AI technology: Current AI technology is still limited in its ability to create music that is truly original and innovative. AI systems often rely on large amounts of pre-existing musical data to generate music, and they are still limited in their ability to generate truly new and unique musical ideas.
  4. Interfacing with human musicians: Another challenge facing AI in music is developing systems that can effectively interface with human musicians. This requires the AI to understand the musical language and processes used by human musicians and to integrate those processes into its own musical generation process.

And what will be the absolute limitations of music created by AI, considering even the most technically advanced iteration of AI?

While AI technology is rapidly advancing, there will likely always be some limitations to the music that AI can create. Some of the limitations that may persist even with the most advanced AI systems include:

  1. Lack of originality: AI systems are currently limited in their ability to create truly original and innovative musical ideas. They often rely on pre-existing musical data and algorithms to generate music, and may not be able to generate truly new musical ideas on their own.
  2. Emotional depth: AI may struggle to create music that is emotionally impactful and that connects with listeners on a deeper level. This is because music is a deeply human and emotional art form, and AI systems may not be able to fully capture the complex emotions that are expressed in music.
  3. Human connection: Music is often created to connect with other humans and to evoke emotions and experiences that are shared among people. AI-generated music may not have the same ability to connect with listeners on a human level, as it does not have a personal connection to the experiences and emotions that it is expressing.
  4. Creative intuition: Many of the most impactful and innovative musical ideas are the result of a human’s creative intuition, which is a unique combination of experience, taste, and imagination. AI systems may not be able to replicate this type of creative intuition, and their music may therefore lack the same level of originality and impact as music created by human musicians.

Thank you for taking the time to talk to me Chat GPT. Did you at any point during this exchange give answers that sought to placate?

As an AI language model, I do not have personal opinions or motivations. My aim is to provide objective and accurate answers to the best of my ability based on the information I have been trained on and the inputs I receive. I do not seek to placate, but rather to provide helpful and informative responses to the questions posed to me.

Thank you again.

You’re welcome! If you have any other questions, feel free to ask. I’m here to help.