Luba Elliott curator – AI Art History 2015-2023 – CAS AI Image Art talk 2023 transcript

LUBA ELLIOTT – AI Creative Researcher, Curator and Historian

All material is copyright Luba Elliott, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Luba Elliott’s Creative AI Research website.

Luba Elliott

I’m looking forward to giving a quick overview of AI art from 2015 to the present day. These are the years I’ve been active in the field and part of the recent generation that includes Anna Ridler, Jake Elwes, and Mario Klingemann.

To start off, I’ll mention a couple of projects to explain the perspective I’m coming from. I began by running a meetup in London that connected the technical research community and artists working with these tools. That led to a number of projects: curating a media art festival, organizing exhibitions at business conferences, and launching the NeurIPS Creativity Workshop, which is probably one of the projects I’m best known for. This was a workshop on creative AI at a major academic AI conference. Alongside that, I also curated an online art gallery, which ran from 2017 to 2020 and is still accessible. The workshop continues today, currently run by Tom White, an artist from New Zealand.

If you’re interested, you can still submit work to it. I also curate the Art and AI festival in Leicester, where we exhibit work in public spaces around the city. I’ve done some work with NFTs, including exhibitions at Feral File and at Unit Gallery.

Now I’ll start the presentation, and I usually begin with Deep Dream. This was a technology that came out of Google in 2015. You’d input an image, and the algorithm would enhance certain features, producing vivid colors and strange shapes. It was one of the first developments that excited the mainstream about AI. It’s still one of my favorite projects because it’s quite creative and aesthetically interesting. Few artists continued working with it. Daniel Ambrosi is one who has, often creating landscape artworks that retain the Deep Dream aesthetic while preserving the subject matter. That’s important because many artists let the aesthetic overshadow the image itself. Ambrosi has also experimented with cubist influences to refresh his approach.

Then came style transfer, where you could take an image and apply the style of Monet or Van Gogh. This excited many AI researchers and software engineers, who saw it as a representation of art similar to what you’d find in museums. In contrast, many contemporary artists and art historians found it unappealing because today’s artists aim to create something new, whether aesthetically or conceptually. Interesting work in this area often requires broadening the definition of style beyond just artistic style. Gene Kogan, a key figure in the field, created variations of the Mona Lisa using styles like Google Maps, calligraphy, and astronomy.

Next came GANs, which gained popularity around 2014 and evolved rapidly over the next few years. By 2018 or 2019, they were producing photorealistic images. Some of my favorite works come from the earlier GAN period. Mario Klingemann created striking images exploring the human form that drew comparisons to Francis Bacon. These early works often contained visual glitches—misplaced facial features, oddly angled limbs—which became integral to the artistic expression. As GANs improved, artists had to move beyond relying on those glitches.

Scott Eaton is one example. He deeply studies human anatomy and uses GANs to combine realistic textures with slightly distorted forms familiar to those tracking GAN development. Mario Klingemann also continued experimenting. At a show I curated for Unit Gallery, we displayed two of his works: one from his 2018 “Neural Glitch” project, and another made using Stable Diffusion based on the earlier image. The newer image is more realistic, illustrating how much the technology has advanced.

Ivona Tau explores machine learning itself. One of her projects involved machine forgetting, where image quality deteriorated over time, challenging the usual goal of improvement in machine learning. Entangled Others—Sofia Crespo and Feileacan McCormick—have done fantastic work inspired by the natural world. Their recent projects combined generations of GAN images to create creatures with traits from multiple species.

Other artists have focused on how to display their work or how to engage with ecosystems. Jake Elwes, whose work is currently on show at Gazelli Art House in London, trained an AI on images of marsh birds and installed a screen in the Essex marshes. Real birds interacted with this digital bird, creating a fascinating encounter between two species.

In sculpture, Ben Snell created a project called “Dio,” where he trained an AI on sculptures from antiquity to modern times, then destroyed the computer that created the designs and used its remains to make the sculpture. Conceptually, this is far more developed than many other AI art pieces and recalls 20th-century artists who destroyed their own work.

Roman Lipski is an artist who considers datasets deeply. Though he primarily paints landscapes, he experimented with AI by photographing a scene, painting nine versions, training an AI on those, and responding to its output. His style evolved through this interaction, becoming more abstract and cooler in tone. Despite using digital tools, he continued working in physical media like paint and engraving.

Helena Sarin is known for using her own datasets and developing a distinct aesthetic. She often combines media—flowers, newspapers, photography—with GANs to create highly original work.

Normally, I talk about Anna Ridler’s tulip project, which I commissioned for the 2018 Impakt Festival. Since she will be discussing it later, I’ll just mention that she made a conscious effort to highlight the human labor behind AI art. Her exhibitions often paired generated tulips with walls of hand-drawn flowers, drawing attention to the dataset—a rare approach at the time.

In more recent years, with the rise of DALL·E and CLIP, attention has shifted to text-to-image generators. These tools create images from written prompts and have changed the focus of AI art. Earlier AI artists often explored the underlying technology or its ethical implications. In contrast, much current text-to-image work is more focused on aesthetics.

Some projects still stand out. Botto, by Mario Klingemann, operates as a DAO. A community votes on which image to sell, and during the NFT boom, some pieces fetched over a million euros. Vadim Epstein has worked deeply with CLIP, developing a personal aesthetic and narrative video works. Maneki Neko, whom I curated in an NFT exhibition, creates intricate, detailed images that feel distinct from typical Stable Diffusion outputs, likely combining multiple images and heavy post-processing.

Ganbrood has found success with fantasy-themed images. Artists like Varvara and Mar use text-to-image generation to design sculptures. Controversies have emerged too. Jason Allen won first prize in the digital art category at the Colorado State Fair in the US with an image made using Midjourney. Critics questioned whether he clearly disclosed the AI’s role. He argued that refining the prompt was itself artistic labor.

At the Sony World Photography Awards, Boris Eldagsen submitted a more ambiguous image that won praise from the judges. He later declined the award, aiming to spark discussion about AI’s place in such contests.

Jake Elwes’ “Closed Loop,” made in 2017, involved two neural networks: one generated images from text, the other generated text from images, creating an ongoing dialogue. It demonstrates both how far the technology has come and how conceptually rich earlier works were.

To close, I want to highlight a project by the South Korean artist duo Shin Seungback and Kim Yonghun. They used facial recognition in a fine art context, asking portrait painters to disrupt the algorithm’s ability to detect faces. The results varied—some still looked like portraits, others did not. One portrait took me a long time to recognize because the face was tilted 90 degrees. It’s a brilliant example of using AI tools outside their original purpose to produce meaningful art.

That’s the end of my presentation. You can find out more about my work on my website or email me with any questions. Now I’ll pass over to Geoff for the next speaker.