LUBA ELLIOTT – AI Creative Researcher, Curator and Historian
All material is copyright Luba Elliott, all rights reserved. The Computer Arts Society (CAS) talk AI & Image Art, 1 June 2023. For the full video, please visit the CAS Talks page.
Visit Luba Elliott’s Creative AI Research website.
So, yes, I’m going to give a quick overview of AI art from 2015 to the present day, because these are the times I’ve been active in the field, and also the period of this recent generation of artists, of which Anna Ridler, Jake Elwes and Mario Klingemann are all part. To start off, I’ll mention a couple of projects to give a sense of the perspective I’m coming from. I started by running a meetup in London that connected the technical research community with artists working with these tools. That led to a number of different projects: curating a media art festival, putting up exhibitions at business conferences, and also the NeurIPS Creativity Workshop, which is probably the project I’m best known for. This was a workshop on creative AI at a really big and important academic AI conference, and alongside it I also curated an art gallery, which still exists online, from 2017 to 2020. The workshop is still being continued, now run by Tom White, an artist from New Zealand.
So if you’re interested, you can still submit and follow that. I also curate the Art and AI festival in Leicester, where we try to put some of this art in public spaces around the city. And I’ve done a little bit with NFTs, as I think many AI artists do now, with exhibitions at Feral File and at Unit Gallery. Now I’ll start my presentation, and I normally start with Deep Dream, a technology that came out of Google in 2015. You had, let’s say, an image; the algorithm looked at it and got excited by certain features, so you got all these crazy colors and crazy shapes. That really was one of the first things that excited the mainstream about AI and its developments.
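To give a rough idea of the mechanism, here is a minimal sketch of the DeepDream idea, assuming PyTorch and a pretrained VGG network; the layer cutoff and step size are illustrative choices, not Google’s original implementation:

```python
# Minimal DeepDream-style sketch: run gradient ascent on the input image
# so that whatever features a chosen layer already "sees" get amplified.
import torch
import torchvision.models as models

# Truncate a pretrained network at an intermediate layer (choice is arbitrary).
layers = models.vgg16(weights="IMAGENET1K_V1").features[:20].eval()

def deep_dream(image, steps=20, lr=0.05):
    """image: a (1, 3, H, W) tensor, roughly ImageNet-normalized."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        activations = layers(image)
        # The "excitement" is the magnitude of the layer's activations;
        # ascending its gradient exaggerates the detected features.
        loss = activations.norm()
        loss.backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()
```

The original also repeated this over multiple image scales with small jitters, which is where the characteristic swirling, hallucinated textures come from.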
To me, it’s still one of my favorite projects, in a way, because I do find it quite creative and I quite like the aesthetic. But very few artists have been working with it, and Daniel Ambrosi is probably the one example who’s stayed with the technology. He normally does lots of landscape artworks where you can see the Deep Dream aesthetic but also the subject matter, which I think is good, because a lot of artists let the aesthetic take over everything in the image, so it was all about the aesthetic; here you can also see the subject matter. Recently, he’s also experimented a bit with Cubist-style treatments of some of this work, to freshen up his approach. Then came something called style transfer, where you had an image and you could switch it into the style of Monet or van Gogh, one of these impressionist artists. Again, a lot of AI researchers and software engineers were really excited about it, because to them, that’s what art should be like, right? It’s something you find in an old museum. And I think a lot of the artists and art historians I met were horrified, because of course artists nowadays are trying to make something new, either in terms of aesthetic or in terms of commentary on society. So it’s tricky to find interesting projects there, I think, unless you broaden the definition of style to something beyond artistic style.
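For reference, here is a compact sketch of the classic optimization-based style transfer of Gatys et al., assuming PyTorch and a pretrained VGG-19; the layer indices and loss weights are illustrative:

```python
# Minimal sketch of neural style transfer: optimize the pixels of an image
# so its content matches one photo while its feature statistics
# (Gram matrices) match a painting.
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feats):
    # Style is captured as correlations between feature channels
    # (assumes batch size 1).
    _, c, h, w = feats.shape
    f = feats.view(c, h * w)
    return f @ f.t() / (c * h * w)

def features(x, indices=(1, 6, 11, 20)):
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in indices:
            out.append(x)
    return out

def style_transfer(content, style, steps=200, style_weight=1e6):
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    c_feats = [f.detach() for f in features(content)]
    s_grams = [gram(f).detach() for f in features(style)]
    for _ in range(steps):
        opt.zero_grad()
        t_feats = features(target)
        loss = F.mse_loss(t_feats[-1], c_feats[-1])   # keep the content
        for tf, sg in zip(t_feats, s_grams):          # adopt the style
            loss = loss + style_weight * F.mse_loss(gram(tf), sg)
        loss.backward()
        opt.step()
    return target.detach()
```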
This is just an example from Gene Kogan, a key artist and educator in the field, doing different variations of the Mona Lisa in Google Maps, calligraphy, and astronomy-based styles. Then came something called the GAN [generative adversarial network], which I think was one of the more popular tools of this generation of artists. It came out in 2014, and then there were lots of different versions of it, because it was a very active research topic, until by, I think, 2018 or ’19, it was making very photorealistic images.
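As a reminder of how these models work, here is a minimal sketch of the adversarial training loop from Goodfellow et al. (2014), assuming PyTorch; the tiny fully connected networks and the 784-pixel flattened images are illustrative stand-ins:

```python
# A generator learns to fool a discriminator; the discriminator
# learns to tell real images from generated ones.
import torch
import torch.nn as nn

z_dim = 100  # size of the random "latent" input to the generator
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: a (batch, 784) tensor of flattened images
    batch = real.size(0)
    fake = G(torch.randn(batch, z_dim))

    # Discriminator: score real images as 1, generated ones as 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(batch, 1)) +
              bce(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator score fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

The glitches the artists exploit come from generators like this being imperfect: with limited data or training, faces come out with misplaced features and bodies at odd angles.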
Some of my favorite works are still from the early periods of the GAN. Mario Klingemann has been doing amazing work looking at the human form, and a lot of these images were compared to Francis Bacon. What I like about them is that they can still show some of the features of the earlier GANs: sometimes there’s the wrong placement of facial features, or, as you can see in the image in the middle, the arm is at an odd angle. So some of the glitches and errors of the technology are made use of in the artistic work, and I really like that. When GANs became very realistic, artists needed to think a lot more about what they would do with their work, because they could no longer rely completely on those glitches. Scott Eaton is an example of an artist who really studies the human form, and he’s able to come up with images that combine realistic textures with slightly odd shapes familiar to everyone who has been following the development of the GAN. Mario Klingemann was also experimenting: this is a work from the show I curated for Unit earlier this year, and there were two images. The one on the left is, I think, from 2018, from his project Neural Glitch, where he used a neural network to generate an image of a face to the standards of that time. For the image on the right, he took the image on the left as a source image and used Stable Diffusion to come up with an image like that.
You can see how different it is: the quality is much more realistic and quite different to the earlier GAN image. Artists like Ivona Tau look at machine learning and what that means, because, of course, in machine learning you train your network and it’s supposed to improve and produce better results. She had a project doing machine forgetting, so the image quality got worse, as you can maybe see in this image. And, of course, there’s also Entangled Others: Sofia Crespo and Feileacan McCormick have been doing some amazing work looking at the natural world and all these insects and sea creatures. This work was done, I think, in the last year or so; the two artists combined various generations of GAN images to make creatures that combine characteristics from different species. Other artists have also been thinking about how they can display this work, or what else they can do to make their work interact more with the ecosystem.
Jake Elwes, who actually has this work on show now at Gazelli Art House in London, trained an AI on images of these marsh birds and then put up a screen in the Essex marshes. You can see in the background all these birds walking around, and they could interact with the digital bird on the screen. I think that’s very interesting because you have these two species next to each other, and how do they deal with this robotic, fake image? Moving on to sculpture, Ben Snell did this lovely project called Dio, where he trained an AI on various designs of sculpture from antiquity to modern times. Then he proceeded to destroy the computer that made all these designs and made the sculpture out of it. Conceptually, I think this is much more evolved than a lot of other AI art, because there are probably some parallels to artists from the 20th century who destroyed their work. I think it’s a really nice piece.
Next up, there’s also a group of artists who think a lot about the data set, among them Roman Lipski. He did this project quite a long time ago now, but I find it interesting because he’s been working with realistic landscapes. He’s been painting a lot, so not really working in the digital realm. He took this photograph, made nine different paintings of it, and worked with some technologists, who trained an AI to generate an image based on those paintings. This is what he got from the AI, and he proceeded to paint different versions in response. You could see how his style evolved as he kept feeding his works into the machine and receiving a response. And I think this is his final image. If you look at the two side by side, you can see how, by working together with an AI system, he was able to really evolve his style, both in how he depicts the subject matter, which became a lot more abstract, and in the color scheme, which became a lot cooler at certain points.
He was experimenting with many more colors. And he stayed working on physical pieces, so you can use some of these tools and continue working physically; you don’t always need to rely on the purely generated image, and you can still do things in paint or engraving and so on. Helena Sarin is another artist well known for working with her own data sets and having her own aesthetic. What I really like about her work is that she frequently combines different mediums: in the image on the right she’s got, I think, flowers, newspapers, and photography as a texture. By combining all these different mediums and GAN tools, she’s able to come out with images that are very much her own in terms of aesthetic. Normally I’d talk about Anna Ridler’s tulip project, which I commissioned for Impakt Festival in 2018, but I know that she will be doing that herself. So I’ll just mention that, as a curator, what I really appreciate about this project is that Anna made a conscious decision to highlight the human labor that goes into this work.
In many of her exhibitions, the generated tulips would be shown together with a wall of these flowers. That really made a difference, I think, in how AI art was perceived and presented, because even when artists tried to figure out how to display their work beyond the digital screen, it wasn’t very common to display the data set. Anna’s a real pioneer in that, and she’ll explain it after my presentation. Moving on to more modern times, when DALL·E and CLIP entered the world: the artistic and audience focus has, of course, shifted to these text-to-image generators, where you write a text prompt and then get an image out of it. As a curator, I’ve always struggled to find interesting artworks here, because it felt to me almost like a completely different medium. It’s so different from a lot of earlier AI art, where artists thought much more deeply about the technology itself and its ethical implications as part of the concept, whereas a lot of text-to-image art feels like it’s just about aesthetics, about the image.
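For anyone who hasn’t tried these generators, the interaction really is this minimal; a sketch using the open-source diffusers library, where the model name and prompt are just illustrative:

```python
# Text in, image out: the whole interface of a text-to-image generator.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The prompt is the artist's entire "input"; everything else is the model.
image = pipe("a marsh bird at dawn, oil painting").images[0]
image.save("output.png")
```

That thinness is partly the curatorial problem: the artist’s hand reduces to the prompt unless they build something more around the system.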
So I’ll just include a few projects that are more interesting to me as a curator. One is Botto, made by Mario Klingemann, which operates, I think, as a DAO [decentralised autonomous organisation], where a community of audience members votes on which of the images is going to be put up for sale. I remember this was initiated during the height of the NFT boom, and I think a few of these images, sold within a few weeks, fetched over a million euros, which was great news for AI art. Vadim Epstein is somebody who’s been working with these tools quite deeply, particularly CLIP, developing his own aesthetic and these narrative videos; his work is great. And Maneki Neko is an artist I curated in one of my NFT exhibitions. What is special about her work is that it feels quite different to the stereotypical aesthetic of Stable Diffusion: it’s quite intricate and very detailed. I think she made this work by combining lots of different images together and doing a lot of post-processing.
But it’s an image that, you can tell, has a lot of unique style. Ganbrood is another artist who’s been very successful with text-to-image generators, making these epic, fantasy-related images. Others, like Varvara & Mar, have applied the potential of text-to-image generators to come up with different designs, print them out and make them into sculpture. And then, of course, there was also a bit of controversy, probably still ongoing, around all these text-to-image generators. Jason Allen got first prize at an art competition at a state fair in the US, and I think people were not sure to what extent he had highlighted the AI involvement, because this was an AI-generated piece made using Stable Diffusion or Midjourney. To anybody who follows AI art made with those tools, it’s very obvious that it’s something made with one of them, because their aesthetic is very familiar. I remember Jason Allen defending himself, saying that he spent a while perfecting the prompt to get this particular result.
But whether this was clear to the judges is uncertain. In another photography contest, I think it was the Sony World Photography Awards earlier this year, Boris Eldagsen submitted this image, which, of course, looks much less like it was made with a text-to-image generator. I think he was also regarded very highly by the judges, but he pulled his piece because he had used an AI, and he wanted to make the point that maybe such images are not suitable for these competitions. And I’m including Jake Elwes here again because he has this work called Closed Loop, made, I think, in 2017. There’s one neural network that generates images from text and another that generates text from the image, so it’s like a conversation between the two.
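The structure is easy to reproduce with today’s models; a sketch pairing an off-the-shelf captioner with an off-the-shelf generator (a modern stand-in, not Elwes’s 2017 implementation, with illustrative model names and seed prompt):

```python
# Two models in conversation: one paints what it reads,
# the other describes what it sees.
import torch
from diffusers import StableDiffusionPipeline
from transformers import BlipProcessor, BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
painter = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5").to(device)
processor = BlipProcessor.from_pretrained(
    "Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").to(device)

text = "a bird standing in a marsh"
for step in range(5):
    image = painter(text).images[0]                    # text -> image
    inputs = processor(image, return_tensors="pt").to(device)
    out = captioner.generate(**inputs, max_new_tokens=30)
    text = processor.decode(out[0], skip_special_tokens=True)  # image -> text
    print(step, text)                                  # watch the drift
```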
In some ways, it helps you realize how much the technology has changed in six or seven years, but on the other hand, this piece is much more interesting and much more conceptually evolved than a lot of what is currently being done, I think, from my perspective. Let me see what I have next; I’ll maybe show just one or two more projects. In the interest of time, I should probably finish on this project, which I really like, by two artists called Shinseungback Kimyonghun [Shin Seung Back and Kim Yong Hun], who are based in South Korea. I think this is a really cool piece because the artists are using facial recognition in a way it was never designed for: they’re using it in a fine art context. They worked with painters who were supposed to paint a portrait together with this facial recognition system, and as soon as the system recognized the face, they had to do something to stop it being recognized as a face.
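The rule the painters worked under is essentially a feedback loop; a toy sketch using OpenCV’s stock face detector, where repaint() is a hypothetical stand-in for the painter’s next move:

```python
# Keep altering the portrait until the machine no longer sees a face.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_detected(canvas_bgr):
    gray = cv2.cvtColor(canvas_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def paint_until_unrecognized(canvas, repaint):
    # repaint(canvas) is hypothetical: the human painter's next alteration.
    while face_detected(canvas):
        canvas = repaint(canvas)
    return canvas  # a portrait the machine no longer recognizes
```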
So you got these portraits, some of which look more like portraits than others. With the image on the right, it took me a very long time to figure out where the portrait was, until I realized it was a face tilted 90 degrees. I think it’s a really cool example of using a technology outside the most obvious generative image tools to still make work that sits within this AI art space. And here I will end; here are my details. You can find out more about my work on the website or email me with any questions. Now I’ll pass over to Geoff for the next speaker.