- Green Screen makes hyper-realistic video animations a breeze
- Now you can turn any photo portrait into a Renaissance Masterpiece, using AI
- Neural net Jukebox generates music in a variety of genres and artist styles
- Can you recreate existing paintings by teaching AI to paint?
- Write poetry in the style of famous poets with the help of AI
- AI can now be taught to sing songs in multiple languages
- Will AI become a creative entity in its own right?
Artificial intelligence and machine learning are slowly but surely infiltrating the world of arts and culture – and not in a small way. The power of this tech has opened up a whole new world of creativity and given rise to digital artists who rely on computer algorithms to breathe life into their creations. But does this mean that humans, going forward, will increasingly be outperformed by ‘AI artists’ creating spectacular and unique works of art, captivating music, inspirational poetry, and even realistic movie scripts? What will the nature of art and the role of human creativity look like in our future societies? Will AI take over, or will it merely be a tool to augment our creative endeavours? Whether we like it or not, technology and art seem to be hitting it off in a big way, and creatives, art galleries, and music production studios are increasingly taking notice.
Green Screen makes hyper-realistic video animations a breeze
Runway, a small team of artists, developers, engineers, and researchers, integrates machine learning and AI with the world of art and creativity, building next-gen video editing tools. The startup recently released its web-based, real-time tool Green Screen, the first in a series of machine learning-based video creation tools intended to revolutionise video editing. The software enables you to create ‘synthetic content’ – in other words, to use AI algorithms to automatically generate, modify, and edit audiovisual content. Cutting objects out of video currently requires rotoscoping, the time-consuming practice of meticulously tracing the borders of an object in each frame. A mask is then created and used either to remove the object’s background or to add visual effects. Green Screen’s machine learning models transform this painstaking process, enabling you to create a top-quality mask with only a couple of clicks on an object, using just a few frames of the video.
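To make the idea concrete, here’s a minimal sketch of click-to-mask object cutting. Green Screen’s actual models are proprietary, so classic GrabCut from OpenCV stands in for the machine learning step, and the frame filename and bounding box are hypothetical:

```python
# Minimal sketch: turn a rough user hint into a foreground mask for one frame.
# Assumes OpenCV is installed (pip install opencv-python numpy).
import cv2
import numpy as np

def mask_from_box(frame_bgr, box):
    """Estimate a foreground mask from a rough box drawn around the object."""
    mask = np.zeros(frame_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # internal GrabCut state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame_bgr, mask, box, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Keep pixels labelled definite or probable foreground
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)

frame = cv2.imread("frame_0001.png")                    # hypothetical frame
object_mask = mask_from_box(frame, (50, 40, 200, 300))  # (x, y, w, h)
cv2.imwrite("mask_0001.png", object_mask)
```

A tool like Green Screen replaces this hand-tuned step with a learned segmentation model and propagates the mask across the remaining frames, which is what collapses hours of rotoscoping into a few clicks.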
Before the startup was formed, the team posted a tweet asking for feedback on their potential AI video editing tool. Within 48 hours, responses from engineers at tech giants Google and Facebook, as well as from interested parties at universities and in the media, came pouring in – a sign that the team was on the right track with their idea, and the company was formed immediately after. “We continue to create audiovisual content in the same way that we have done for decades, and that makes the process unnecessarily slow, expensive, and difficult. With AI algorithms, anyone can create hyper-realistic animations in seconds and edit them automatically – something that only Hollywood or large production and special effects companies have been able to do so far,” explains Cristóbal Valenzuela, one of the three Runway founders.
Now you can turn any photo portrait into a Renaissance Masterpiece, using AI
Did you know that algorithms are also becoming impressive painters? Sato, a Japanese full-stack developer, has created an AI art generator – a so-called artist app – that transforms user-submitted photo portraits into Renaissance ‘masterpieces’. What constitutes a masterpiece is, of course, still in the eye of the beholder. Sato developed the AI art generator, named AI Gahaku, because he enjoys entertaining people. “So, I decided to utilise my programming skills to create the app,” he says. He never expected his work to become so popular, however. “I’m honestly very surprised that so many people are using it,” says Sato. One problem is that people of colour who use the app receive a light-skinned painting of their uploaded photograph. “Currently, we are confirming that the output of the AI artist has been biased, and we hope to use a wide variety of learning data and increase the diversity of output in the future,” Sato acknowledges.
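AI Gahaku’s internals haven’t been published, but classic neural style transfer – repainting one image using the textures and colours of another – is a reasonable stand-in for what such an app does. The sketch below uses PyTorch and a pretrained VGG19 network; the image filenames, layer choices, and loss weighting are illustrative assumptions:

```python
# Sketch of Gatys-style neural style transfer (not AI Gahaku's actual code).
# Requires: pip install torch torchvision pillow
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
def load_image(path):
    return to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load_image("portrait.jpg")      # hypothetical user photo
style = load_image("old_master.jpg")      # hypothetical Renaissance painting

STYLE_LAYERS = {0, 5, 10, 19, 28}         # conv1_1 ... conv5_1 in VGG19
CONTENT_LAYER = 21                        # conv4_2

def features(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = x
    return feats

def gram(f):  # texture statistics of a feature map
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

style_grams = {i: gram(f) for i, f in features(style).items() if i in STYLE_LAYERS}
content_feat = features(content)[CONTENT_LAYER].detach()

target = content.clone().requires_grad_(True)  # start from the photo itself
opt = torch.optim.Adam([target], lr=0.02)
for step in range(300):
    feats = features(target)
    loss = F.mse_loss(feats[CONTENT_LAYER], content_feat)
    loss = loss + 1e6 * sum(F.mse_loss(gram(feats[i]), style_grams[i])
                            for i in STYLE_LAYERS)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that this approach optimises the output image’s pixels directly; a production app would more likely train a feed-forward network once, so each uploaded photo can be stylised in a single fast pass.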
Neural net Jukebox generates music in a variety of genres and artist styles
Even composing music is child’s play for AI. OpenAI recently launched its new machine learning model, Jukebox, which creates new music samples from scratch. The model is impressive, producing recognisable melodies and words, although the results do tend to resemble familiar-sounding songs. Jukebox was trained on raw audio rather than symbolic music such as MIDI, which typically doesn’t capture voices. The team used neural networks to encode and compress the raw audio, then used a transformer to generate new, compressed audio and ‘upsampled’ this to turn it back into raw audio. Jukebox also generates its own lyrics, co-written with OpenAI researchers. The models were trained on a raw dataset of 1.2 million songs – half of which are in English – using metadata and lyrics from LyricWiki. While the results are certainly impressive, “there is a significant gap between these generations and human-created music. For example, while the generated songs show local musical coherence, follow traditional chord patterns, and can even feature impressive solos, we do not hear familiar larger musical structures such as choruses that repeat,” the company writes on its blog.
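The encode-generate-upsample loop described above can be sketched in a few lines. This toy version only illustrates the data flow – the module sizes, the codebook, and the stand-in transformer are all assumptions, not OpenAI’s architecture:

```python
# Toy sketch of Jukebox's pipeline: (1) compress raw audio into discrete
# codes, (2) model the code sequence with a transformer, (3) decode the
# codes back into raw audio. Requires: pip install torch
import torch
import torch.nn as nn

class ToyCodec(nn.Module):
    """Downsample raw audio 4x, snap each frame to its nearest codebook entry."""
    def __init__(self, n_codes=64, dim=32):
        super().__init__()
        self.enc = nn.Conv1d(1, dim, kernel_size=8, stride=4, padding=2)
        self.dec = nn.ConvTranspose1d(dim, 1, kernel_size=8, stride=4, padding=2)
        self.codebook = nn.Embedding(n_codes, dim)

    def encode(self, audio):                         # audio: (batch, 1, samples)
        z = self.enc(audio).transpose(1, 2)          # (batch, frames, dim)
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))
        return dists.argmin(-1)                      # discrete code ids

    def decode(self, ids):                           # ids: (batch, frames)
        z = self.codebook(ids).transpose(1, 2)
        return self.dec(z)                           # back to raw audio

codec = ToyCodec()
prior = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)

audio = torch.randn(1, 1, 4096)                      # stand-in for raw audio
codes = codec.encode(audio)                          # stage 1: compress
hidden = prior(codec.codebook(codes))                # stage 2: model the codes
new_codes = torch.cdist(hidden, codec.codebook.weight.unsqueeze(0)).argmin(-1)
generated = codec.decode(new_codes)                  # stage 3: 'upsample' to audio
print(codes.shape, generated.shape)                  # (1, 1024), (1, 1, 4096)
```

The real system trains the codec and the autoregressive prior on those 1.2 million songs, and stacks several levels of compression so the transformer can model minutes of music at once.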
Can you recreate existing paintings by teaching AI to paint?
In recent years, researchers have started to develop algorithms that can even learn how to (re)create art pieces. The Timecraft project team, led by Amy Zhao, a PhD student at MIT working on computer vision and machine learning, has written a paper called ‘Painting Many Pasts: Synthesizing Time Lapse Videos of Paintings’, which describes how an existing painting may have originally been painted, detail by intricate detail. For a computer, an image is nothing more than a long grid of numbers. To make sense of all these numbers, you need a CNN – a convolutional neural network – which outputs a feature map, or in other words, a filtered version of the original image. For their research, the team trained an algorithm by feeding it time lapse videos of artists creating paintings, so that the system learns to figure out the steps required to recreate an existing painting. The team was inspired by artistic style transfer, where neural networks are used to create art that’s a mash-up of different artists’ work, or art in the style of a certain artist. The generated time lapse videos were evaluated in a survey of 158 participants, who were asked to compare the Timecraft videos to the original time lapse videos. Although most participants did prefer the real videos, they mistook the Timecraft videos for the real ones 50 per cent of the time, which indicates that machine learning can be used to determine the individual steps with which an art piece was made.
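The ‘feature map’ idea is easy to demonstrate: a convolutional layer slides a small filter over the pixel grid and outputs a filtered version of the image. The filter below is a classic hand-written edge detector rather than one of Timecraft’s learned filters:

```python
# Minimal illustration of a convolution producing a feature map.
# Requires: pip install torch
import torch
import torch.nn.functional as F

image = torch.rand(1, 1, 64, 64)              # a 64x64 grayscale 'painting'
sobel_x = torch.tensor([[[[-1., 0., 1.],
                          [-2., 0., 2.],
                          [-1., 0., 1.]]]])   # horizontal-edge filter
feature_map = F.conv2d(image, sobel_x, padding=1)
print(feature_map.shape)                      # (1, 1, 64, 64): a filtered image
```

A CNN stacks hundreds of such filters and learns their values from data instead of hand-writing them, which is what lets a system like Timecraft pick up on brushstroke-level structure.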
Write poetry in the style of famous poets with the help of AI
If you’ve always wanted to write poetry but don’t quite have the skills, Google’s Verse by Verse tool could come in very handy. The tech giant’s poetry tool helps you by offering suggestions in the style of 22 renowned American poets. To get inspired, all you have to do is choose some poets whose work you like – such as Paul Laurence Dunbar, Edgar Allan Poe, Emily Dickinson, or William Cullen Bryant – and select a structure for your poem: its rhyme scheme, syllable count, and poetic form. The Verse by Verse tool will then ask you to compose your first line, after which the AI offers various suggestions on how to carry on. The tool is designed to offer inspiration, allowing you to accept, dismiss, or alter the suggestions. Google explains that the suggestions made by the tool are not taken from the original poetry, but are novel verses that resemble lines these renowned poets could have written. To create the tool, Google’s engineers fed and fine-tuned their AI with an extensive selection of classic poetry and writing styles. And to enable the AI to generate relevant suggestions, “the system was trained to have a general semantic understanding of what lines of verse would best follow a previous line of verse. So even if you write on topics not commonly seen in classic poetry, the system will try its best to make suggestions that are relevant,” says Google engineer Dave Uthus.
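Verse by Verse’s models are Google-internal, but the core mechanic – scoring how well candidate verses follow the line you’ve written – can be sketched with any causal language model. Here GPT-2 from the Hugging Face transformers library serves as an illustrative stand-in, and the example lines are invented:

```python
# Sketch: rank candidate next lines by their likelihood under a language model.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def line_score(opening, candidate):
    """Average log-probability of `candidate` given `opening`."""
    prompt = tok(opening + "\n", return_tensors="pt").input_ids
    ids = torch.cat([prompt, tok(candidate, return_tensors="pt").input_ids], dim=1)
    with torch.no_grad():
        logits = lm(ids).logits
    logps = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    rows = torch.arange(prompt.size(1) - 1, ids.size(1) - 1)
    return logps[rows, ids[0, prompt.size(1):]].mean().item()

opening = "The evening fog rolls in across the bay"
candidates = ["and swallows every lantern on the pier",
              "the quarterly report is due on Monday"]
print(max(candidates, key=lambda c: line_score(opening, c)))
```

The real tool goes further, constraining its suggestions to a chosen poet’s style and to the rhyme scheme and syllable count you selected – constraints a generic model like this one doesn’t enforce.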
AI can now be taught to sing songs in multiple languages
Australian AI-music startup Popgun, known for the YouTube videos in which the company’s AI composes and plays music, has now managed to teach AI to sing as well. “For the past year we have been teaching an AI to sing. Using text and midi as input, we generate vocal tracks in many different voices. This video shows how we can interact with the AI Singer followed by samples from recent songs created with our technology,” says the description under the latest demo video. Popgun CEO Stephen Phillips says: “We have made progress but we still have some way to go to reach the quality required for prime time. It’s going to be a new instrument that producers will use. It can play the guitar, the bass and the piano, and each one of those AIs can listen to one another and play together. And now it can sing, too.”
Researchers in China have also recently created an AI model – the DeepSinger system. A team from Zhejiang University and Microsoft has generated the ‘voice’ of an AI singer using algorithms capable of predicting and controlling the pitch and duration of audio. Singing contains far more complex rhythms and pitch patterns than speech, and while there’s quite a bit of speech training data available, the same can’t be said for singing data. To overcome this challenge, the researchers developed a data pipeline that mined and transformed audio data: singing clips were extracted from various websites, after which the singing itself was isolated from the rest of the audio and divided into sentences. The technique is similar to the text-to-speech method that enables machines to speak.
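DeepSinger’s actual pipeline relies on dedicated vocal-separation and lyrics-alignment models, but the mining-and-segmenting step described above can be approximated with off-the-shelf audio tools. In this sketch, librosa’s harmonic-percussive separation is a crude stand-in for vocal isolation, silence detection stands in for sentence splitting, and the input filename is hypothetical:

```python
# Rough sketch of mining singing clips: isolate the vocal-ish part of a song,
# then split it into sentence-like segments at quiet points.
# Requires: pip install librosa
import librosa

audio, sr = librosa.load("mined_song.mp3", sr=22050)     # hypothetical mined clip
vocals_ish, _ = librosa.effects.hpss(audio)              # crude vocal isolation
segments = librosa.effects.split(vocals_ish, top_db=30)  # non-silent intervals
for start, end in segments:
    print(f"segment: {start / sr:.2f}s - {end / sr:.2f}s")
```

Each resulting segment can then be paired with its lyrics, giving the kind of aligned singing data that speech systems already enjoy in abundance.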
These developments, however fascinating, have obvious commercial implications as well. While human artists often need to return to the studio to address mistakes, changes, or additions after an initial recording session, AI-assisted voice synthesis could eliminate this need altogether, saving all parties involved time and money. The tech could also be used to create deepfakes, making it seem as though artists sang lyrics they never did – and it could even put human artists out of work. A taste of the potential issues came when audio deepfakes put words in Jay-Z’s mouth: the rapper appeared to perform Billy Joel’s ‘We Didn’t Start the Fire’, which he in fact never did.
Will AI become a creative entity in its own right?
New technologies like artificial intelligence are transforming the nature of creative processes. AI is playing an increasingly important role in creative activities, such as fine arts, music, and poetry. Our computers are increasingly fulfilling the roles of canvas or brush, and even musical instruments. In the future, will technology – in particular, AI – remain merely a tool to assist or augment human creativity and make a wide range of creative skills more accessible, or will it become a creative entity in its own right?