Anna Ridler Artist – CAS AI Image Art talk 2023 transcript

ANNA RIDLER – Artist

All material is copyright Anna Ridler, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Anna Ridler’s website 

Anna Ridler
Anna Ridler’s Myriad (Tulips)

It was really interesting hearing both of your talks [Geoff and Luba], and it's always a pleasure to speak after Luba because, as she mentioned, we have been working together quite closely for the past five, six, seven years, has it been? And it's been really interesting to see how the space has evolved in that time, especially now with the recent advancements in diffusion models and text-to-image models. I'm most well known for my tulip projects, which I made using GANs and which I will talk about. But I did also want to touch on how the field has changed, because now when you talk about AI art, I think if you go on Twitter or on Discord, the term very much is being used to refer to a very particular type of work, which is this text-to-image work. What I'm showing now is just some very quick examples of DALL-E tulips that I made. I was actually very lucky, and I was one of the first people to get access to DALL-E back when it was released.

And I found it actually really difficult to make work with. It's taken me a long time to get my hands into things like Stable Diffusion and DALL-E and Midjourney to make work with, because of the way they are structured and designed. Particularly with DALL-E, I found it really difficult: you have no access to the code base, you have no access to the data set, there's no way that you can tinker with it, you're very reliant on an API, and everything is closed off. So as an artist who is very interested in the tools and the means of production, I found it very difficult to get into and work with. I mean, that is changing. I still think there are conceptually interesting things that you can do with it. There's really interesting research coming out around how it relates to memory, and you can do some interesting things around language and ontology, but for the most part it's locked away. And even when you're working with Stable Diffusion, I find that you can't look through all of the data. You can fine-tune it, but you're always going to be working off the base of the massive LAION dataset.

But for me, it's taken a long time to get to a place where I think I can do something with it. That being said, there's so much being produced every day with it, and it does feel like magic when you play with it. I remember the first time I typed something in and got these images out; it did feel so incredible. But it does raise a real question about what art is, because not every image is necessarily art. And I think that's a debate that is now going on, because so much is being produced, and there is this question about where the art sits. For me, the art is very much in how the work is then displayed, the message that it contains, and the experience that someone has through interacting with it. So, as Luba mentioned, I'm most well known for the work that I do with data sets. And this isn't something that is just explicitly linked with machine learning. It's something that I've worked with for a much longer period of time.

I've always been interested in archives and libraries and data and information, because for me every piece of data, every piece of information, is a trace of something that once existed. In many ways, I feel like reconstructing that data, or that data set, is a very human thing to do; you work almost like a detective, building up these bits of data to produce an idea or a story, or to use it in my projects. I think there are lots of interesting parallels between encyclopaedias and dictionaries, and the history of those, and some of the issues around machine learning and data sets that I've explored. The project that I showed in the previous slide, which is playing now, was commissioned by the Photographers' Gallery; I essentially created my own ImageNet using Victorian and Edwardian encyclopaedias. One of the things that's also really important for me, and part of my practice, is showcasing the labor that goes into these projects and the way of working, so it's not just about the final output for me.

It's also how I got to that output. And so a lot of the time I will document the process of making, and that documentation will be an equally important part of the final project as the artefact. That's something that I come back to again and again in my work. The project that Luba commissioned me to make back in 2017, which really took off for me, was about tulips, where I created a piece using a GAN. It was an early GAN back then, the first version, which I trained on 10,000 images of tulips that I took myself. I didn't make the tulips myself; I was in the Netherlands at the time, working. And one of the reasons why I was really interested in tulips was that I was making a comparison between tulip mania, the first known speculative bubble, and bitcoin, and also the bubble that was going on around AI at the time. So in the GAN piece, which I'll show in a bit, the GAN is controlled by the price of bitcoin. And it was a really important project for me to do.

And I spent a lot of time building this data set. You have a very different relationship to the data when you're working with it very physically. Carrying all of these tulips was very heavy; stripping them was hard work. And one of the reasons why I stopped at 10,000 tulips wasn't because it's a very nice round number, although it is; it's because tulip season ended. So even though this was a very digital project, it was driven by the rhythms of nature. After I took these, if I can go to the next slide. After I took all of these photographs, I really wanted to display the data as an artwork in and of itself, which led to a separate piece called Myriad, where I've taken the photographs and shown them with some of the labels that I attached to them handwritten underneath. And for me, it was a way to draw attention to all of the human decision making that sits somewhere in the chain of AI, because at the time that this was being shown, back in 2018, there wasn't yet that discussion around bias and how human error can creep into these systems.

On some of the photographs you can see my handwriting, where I've crossed things out. And the piece, when it's shown, is huge: it's around 50 m². It's only been shown in its entirety twice, because you need quite a large space to put it up in. And I think it also gives people a real sense of the scale of data, because 10,000, when you scroll through it on a thumb drive, you don't really understand what that means in the way that you do when your body is physically reacting to it. So it takes a long time to walk past all of these photographs, and you get a sense of how long the process of putting it all together was, and the labor, effort, energy and all of those things that sit behind creating a data set. Another reference point for this project was very much the Dutch still life; the Golden Age Dutch painters were very heavily referenced in how I composed my data set, which is also another reason why I really like doing things myself.

You can't, well, I suppose you can now with tools like Stable Diffusion and things, but at the time you couldn't, just Google and ask for 10,000 images of tulips against a black background. And one of the things that I really liked, the further comparison between these Dutch still lifes and the way that GANs work, is that in these paintings the flowers can all exist at once. They're flowers from spring and summer and autumn and winter that are combined, so these bouquets are botanical impossibilities. But they're combined using all of the fragments the artist has of the flowers he's seen: sketches and memories and things like that. So rather than copying from nature, the paintings are drawing on the experience of the painter. And for me, that's a really nice parallel to how GANs work. They're not merely copying images from the data set and collaging them together, but creating an imagined botanical possibility through the knowledge gained from the data set.

So I think there's that nice parallel that exists there. The other reference that I like to bring out with this project is the history of floral data sets that sit inside machine learning. This is the Iris data set, which is inside scikit-learn. So every time that you're importing that into a piece of code, you're also importing the Iris data set, which was created by Ronald Fisher and has all this different data about irises. So there is this hidden history of floral data sets inside machine learning, which is also something that I quite like about this project. The final piece: I made two versions of it, and this is the 2019 edition that I made after StyleGAN was released. It's a three-screen installation. And as I mentioned, the tulips are controlled by the price of Bitcoin, becoming more stripy and open as the price goes up. The title references the disease that gives tulips their stripes, which also made them the most valuable at the height of tulip mania.
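As a brief aside on the scikit-learn point above, here is a minimal sketch showing that the Iris data set Ronald Fisher worked with still ships inside the library; the printed fields are only illustrative.

```python
# Minimal sketch: the Iris data set is bundled with scikit-learn itself,
# so loading it needs nothing beyond the library.
from sklearn.datasets import load_iris

iris = load_iris()
print(iris.data.shape)       # (150, 4): sepal and petal lengths and widths
print(iris.target_names)     # ['setosa' 'versicolor' 'virginica']
print(iris.DESCR[:300])      # the bundled description, which credits Fisher (1936)
```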

And it's asking questions around value and notions of speculation, collapsing these two different moments in history. What I also really enjoyed about this project, because it is a very complicated project, was that through working with the data set and with the GAN, I was able to explore very different things in each part of it. So the data set piece was much more explicitly about machine learning and about the issues and ethics that maybe sit inside of it, whereas the GAN piece was talking about something not really related to the technology: wider questions around value and notions of speculation. As I said, it's a work that still gets exhibited quite regularly in a variety of different institutions and cultural spaces everywhere, from public settings, it was on buses in a town in Germany, just as a very pretty moving-image piece, through to critical overviews of where photography is going in different museums. And then, because I know we don't have masses of time, I just wanted to end with something that I often end my talks with, where I am often asked about where I see AI art sitting.
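The talk does not go into how the Bitcoin link is wired up, so the sketch below is only a rough illustration of the general idea of an external price feed nudging a generator's latent vector; the CoinGecko endpoint, the stand-in generator and the "openness" mapping are my own assumptions, not a description of how her piece is actually built.

```python
# Hedged sketch: a live price feed modulating one direction of a GAN latent
# vector. The CoinGecko endpoint is real, but the stand-in generator and the
# price-to-"openness" mapping are illustrative assumptions only.
import time
import numpy as np
import requests

def fetch_btc_price_usd() -> float:
    # Public CoinGecko endpoint; returns the current BTC price in USD.
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": "bitcoin", "vs_currencies": "usd"},
        timeout=10,
    )
    return float(resp.json()["bitcoin"]["usd"])

def price_to_openness(price: float, low: float = 10_000, high: float = 70_000) -> float:
    # Map the price onto [0, 1]: 0 = closed, plain tulip; 1 = open, stripy tulip.
    return float(np.clip((price - low) / (high - low), 0.0, 1.0))

def make_latent(z_base: np.ndarray, openness: float) -> np.ndarray:
    # Push the base latent along one assumed "openness" direction.
    direction = np.zeros_like(z_base)
    direction[0] = 1.0
    return z_base + 3.0 * openness * direction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z_base = rng.normal(size=512)        # StyleGAN-sized latent, for example
    generator = lambda z: np.tanh(z)     # stand-in for a trained GAN generator
    for _ in range(3):                   # an installation would loop indefinitely
        openness = price_to_openness(fetch_btc_price_usd())
        frame = generator(make_latent(z_base, openness))
        print(f"openness={openness:.2f}, frame mean={frame.mean():.3f}")
        time.sleep(60)
```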

And I can only answer for myself, because I'm not a curator. But I find that I look back into history to see where it might go, and I find it hard to connect to some of the early algorithmic artists. But I find it very easy to see the parallels between my practice and land and environmental artists, and I often look to them for inspiration, because land and environmental art is so much about planning. It's so much about thinking through all the various different possibilities and then allowing something that you can predict, but never control, to act on that planning. And for me, that's the same as spending all of this time building my data set, all of this time constructing a model, and then pressing go and allowing something to come out of it. And then there is also this question about where the art sits. In land and environmental art, a lot of it is in the documentation. I think for AI artists it's also the documentation.

It's not necessarily the model as it runs, or the insides of what's going on; it's the images, it's the sound, it's the performance that comes out of it. And so, yeah, that's where I wanted to end it: how, even now, I'm still amazed at the possibilities that this technology can offer and how inspirational I find it on a daily basis.

 

Patrick Lichty Artist & Writer – Studio Visits Posthuman Atelier – CAS AI Image Art talk 2023 transcript

PATRICK LICHTY – Conceptual Artist, Writer

Discussion of the project "Studio Visits: In the Posthuman Atelier" before the Computer Arts Society (of Britain).

All material is copyright Patrick Lichty, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Patrick Lichty’s website here.

Patrick Lichty


I am Patrick Lichty, an artist, curator, cultural theorist, and Assistant Professor of Creative Digital Media at Winona State University in the States. I will talk about my project, a curatorial meta-narrative called "Studio Visits: In the Post-Human Atelier." Much of my AI work has yet to be widely shown in the West, as until two years ago I had spent six years in the United Arab Emirates, primarily at Zayed University, the federal university in Abu Dhabi. I have been working in the new media field for about 30 years, dealing with notions of how media shape our reality. So you can see some of my earlier work in this slide: I was part of the activist group RTMark, which became the Yes Men, and of the Second Life performance art group Second Front, and there is some of my tapestry and algorism work.

This slide shows my 2015 solo show in New York, "Random Internet Cats," which comprises generative plotter cat drawings. The following slide shows some of the asemic calligraphy I fed through GAN machine learning. I worked with Playform.IO's machine learning system to create a personal Rorschach by looking for points of commonality in my calligraphy. I called the project Personal Taxonomies, and other works, like my Still Lives, were generated through StyleGAN and Playform.IO. So I've been doing AI for about 7-8 years and new media art for almost three decades. Let's fast-forward to now. In the middle of last year, I decided to move away from the PyTorch GAN machine learning models I had used with my calligraphy work and my paintings at Playform.IO. Switching to VQGAN and CLIP-based diffusion engines, I worked with NightCafe for a while. Then I found MidJourneyAI. At first, I was only partially satisfied with the platform, as I was on the MidJourneyAI Discord server and saw people working with basic ideas. I decided to focus on two concepts. First, I decided to think of what I was doing as concrete prose with code. And secondly, I decided to adopt contestational aesthetics, as my prompts would contain ideas not being used on the MidJourney Discord.

I wanted to find concepts for my prompts that were less representational than the usual visuals of a CLIP-based AI. I did two things. First, I ignored everything typed on the MidJourney Discord, which was almost an aesthetics of negation. And then, I considered the latent space of the LAION-5B database that MidJourneyAI was using as an abstract space. I decided to deal with that conceptual space using abstract architecture. I started querying it with images like Kurt Schwitters' Merzbau, just as a beginning, as well as Joseph Cornell. I did about twelve series called "The Architectures of the Latent Space," illustrated here. They are unusual because they still refer to Schwitters but are much more sculptural and flatter. And so these went on for about twelve series. But this was the beginning of my work in that area; then I started finding what I felt were narratives of absence.

I have considerable differences in abstraction, multiple notions of abstraction, as I want to see what is transcendent in AI realism. For example, I started playing with real objects in a photography studio. This image is a simulated photo of a four-dimensional cube, a tesseract, which isn't supposed to be representational. Still, it was exciting that it emerged and illuminated the space. This told me that I was on a path in which I was starting to confuse the AI's translator and that it was beginning to give results that were in between its sets of parameters, which is interesting. One body of work where my attempts at translator confusion are evident is The Voids (Lacunae), basically brutalism and empty billboards. It is inspired by a post that Joseph DeLappe from Scotland made on Facebook of a blank billboard. And one of the things that I noticed is that these systems try to represent something. They try to fill space. If there's a blank space, the system tries to put something in it.

MidJourneyAI tries to fill visual space with signifiers. One of my challenges was forcing the AI engine to keep that space open. This resulted in experiments with empty art studios and blank billboards. Artists were absent or had no physical form, which was the conceptual trigger. These spaces have multiple conditions and aesthetics, with a lot of variation. The question lingered: "How do I put these images together?" There are numerous ways to deal with them, so I made about 150 or 200 in a series and then created a contact book. And this gets away from the idea of choice in AI art, the anxiety around it, and so on. I have a book that's ready for publication, so that someone can see my entire process and the whole set of images. But in this case, what I thought was very interesting is that I wound up going into a bit of reverie around the fantasy of these artists whose studios I'd been looking into; they weren't in, or they didn't exist in physical form.

Having worked in criticism, curation, and theory, as well as being an artist, I decided to take these concepts and create a meta-structural scaffold: a curatorial narrative based on this concept of a body of 50 artists. Visiting their fictional studios, and thinking about theoretical constructs such as Baudrillard's and Benjamin's ideas of absence and aura, I created a conceptual framework, a catalog for a general audience that preceded the exhibition. There's precedent for this. There's Duchamp and the Boîte-en-valise. I've done work like this before, constructing shows in informal spaces like an iPod. Here is a work from 2009, the iPod en Valise, as the iPod is a 'box' (boîte) for media work. And then I thought, why can't I do the same with a catalog? Why can't I use the formal constraint of the catalog to discuss the sociology of AI and some of the social anxieties, putting this into a robust conceptual framework beyond its traditional rules? Another restriction that I have frequently encountered as a New Media curator and artist is time.

The moment in time when technological art or a form emerges is often ephemeral. Curating shows on handheld art, screen savers, and so on showed me that these might have a three- to six-month period in which the art is fresh. Studio Visits is tied formally to the MidJourneyAI 4 engine, because MidJourneyAI 5 has a different aesthetic. A key concept is where the work situates itself in society and how it's developing in a formal sense. And then, is there time to deploy an exhibition before the idea goes cold? Most times, most institutions, unless you're dealing with a festival, are planning about a year out, possibly two. And, of course, for every essay I'm writing now, there is a disclaimer saying that this was written at such a date, such a year, such a month, and may be obsolete or dated by the time you read it. In the case of something developing as quickly as AI, you have to be aware of the temporal nature of the form itself.

So I decided to deploy the catalog first and let the museum show emerge from it: create the catalog, then exhibit. As I said before, I've been making these contact books, which are reverse-engineered catalogs; I'm almost up to 15 editions. I've only mentioned about six or seven on my Instagram so far. But in general, I'm looking at curation as an artistic scaffold. In this project, the curatorial frame creates a narrative around meta-structural concepts rather than simply presenting the images themselves. It's a narrative dealing with society's anxieties about AI and culture. What happens if we finally eliminate those annoying artists and replace them with AI, as a provocation? So here's the structure of the piece. The overall work is the catalog. There is a curatorial essay, and for each artist a name, a statement, and the studio "photograph." The names derive from the names of colleagues. So I'm reimagining my community through a synthetic lens, via the studio image, as we can see through the narrative that I've presented. I started generating these empty spaces and let myself run through a few hundred.

I chose the 50 most potent synthetic studio images. A description emerges using MidJourneyAI's /describe function. The resulting /describe prompt, plus a brief discussion of the artist and what they do, is fed to GPT-3, which generates a statement. So here's the form of an artist's layout. You have the name. The following layout is the first one I did, for Artificium 334-J452, inspired by George Lucas' THX 1138. And the layout came from this initial image. I took these with a description from MidJourneyAI and put it into GPT-3. The artist's statement is as banal as any graduate school one and reads, "As an artist, my work expresses my innermost thoughts and emotions. I seek to capture the energy and chaos of the world around me, using bold brushstrokes and vibrant colors to convey my message." So these were 50 two-page spreads. The book is 112 pages and fits very much with a catalog format. So the name, as said before, was based on the conceptual frame of the artist I was thinking of, based on the image generated, some of the concerns I saw in the mass media, and loosely upon the names of colleagues, family, et cetera.
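As a rough illustration of the pipeline he describes (a /describe output copied from MidJourney, plus a short framing of the imagined artist, fed to GPT-3 for a statement), something like the sketch below would do it; the prompt wording, the model name and the legacy pre-1.0 openai client call are my assumptions, not his actual code.

```python
# Hedged sketch of the described workflow: a MidJourney /describe output
# (copied by hand from Discord) and a one-line framing of the imagined artist
# become a generated artist's statement. Uses the legacy (pre-1.0) openai
# Python client; the model name and prompt wording are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def artist_statement(describe_output: str, artist_framing: str) -> str:
    prompt = (
        "Write a short, earnest artist's statement in the first person.\n"
        f"The artist: {artist_framing}\n"
        f"Their studio, as described by an image model: {describe_output}\n"
        "Statement:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0.9,
    )
    return resp["choices"][0]["text"].strip()

print(artist_statement(
    describe_output="an empty concrete studio flooded with cold light, unfinished canvases",
    artist_framing="Artificium 334-J452, a synthetic painter preoccupied with absence",
))
```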

In many ways, I was taking a fantasy and re-envisioning my community through a synthetic lens. The images came first when developing the imagined artists across diversity, identity, species, and planet. This reimagining is interesting because I wasn't necessarily thinking only of my own ethnographic sphere. I worked in Arabia, West Asia, and Central Asia and dealt with people from Africa and the subcontinent. So many of these people from my experience figure into this global latent space of imagined artists, not just those of the European area or, more specifically, North America. And then I expanded this to species and planet, as we'll see in a moment. So here we have an alien sound artist. The computer in this studio is almost cyberpunk, and it's a very otherworldly art studio image. I can't quite remember which artist this is, but it has a New England-style look. And then the third one is a Persian painter obsessed with color, Zafran Pavlavi, based on my partner, Negin Ehtesabian, who is currently coming to America from Tehran.

This slide is a rough outline of the structure of the catalog. I take the name and the framework of the artist's practice, and you can see here that this information went into GPT-3, producing statements almost indicative of the usual graduate school art statements. Once again, these elements reflect some of the anxieties in the popular media. I'm using this as a dull mirror from a visual sociology standpoint, based on scholars like Becker. In addition, this is a draft, but it's more a pro forma approach to the conceptual aspect. The project catalog is available on Blurb. It's about $100 and still needs a few little revisions. But this is something that, from a materialist perspective, basically inverts many practices regarding the usual modalities of curating and executing a show or an exhibition. I'm also thinking about the standard mechanisms of artistic presentation within an institutional path. So not only is this dealing with AI, but it's using AI to talk about the sociological space, the institutional space that these works inhabit, and how these works propagate.

Studio Visits deals with institutions, capitalism, and digital production. The issues this project engages with concern how AI exacerbates social anxieties about technology. The deluge of images problematizes any cohesive narrative. Using this meta-narrative, through this conceptual frame, I can focus on some of the social and cultural questions about AI and the future of society, and how it affects it, within a fairly neat package. Design and curatorial fictions provide solutions for cultural spaces. Cultural institutions typically need to catch up with the speed of technology. Bespoke artifacts, problematic as they are, can remain in place long enough for the institution to adopt them. In other words, if you get something together and get it out there, you can have that in place, take it to the institution, and hopefully they can explicate the work.

I've been asked about a sequel. Many people have asked me who these artists are. What does their work look like? You can see excerpts of their work in the studios. But people were asking me to take the conceit one step further, so I'm starting to work on that idea and to show the portraits of the artists and their work. This portrait is of the artist Zafran, whom I talked about earlier. These both continue the fiction and humanize the story, which also problematizes it. And so this is this project and its ongoing development in a nutshell. I invite you to go to my Instagram at @Patlichty_art, and thank you for your time.

In closing, this is another portrait of the artist Vedran Vučić. And once again, this is an entirely constructed fantasy. But, as Picasso said, these are lies that reveal the truth about ourselves.

 

Luba Elliott curator – AI Art History 2015-2023 – CAS AI Image Art talk 2023 transcript

LUBA ELLIOTT – AI Creative Researcher, Curator and Historian

All material is copyright Luba Elliott, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Luba Elliott’s Creative AI Research website.

Luba Elliott

So I'm looking forward to this, and, yes, I'm going to give a quick overview of AI art from 2015 to the present day, because these are the times I've been active in the field, and also of this recent generation, of which Anna Ridler, Jake Elwes and Mario Klingemann are all part. And just to start off with, I'll mention a couple of projects to give the perspective I'm coming from. So I started by running a meetup in London that connected the technical research community and artists working with these tools. And that led to a number of different projects: curating a media art festival, putting up exhibitions at business conferences, and also the NeurIPS Creativity Workshop, which is probably one of the projects I'm best known for. This was a workshop on creative AI at this really big and important academic AI conference. And alongside that, I also curated an art gallery, which still exists online, from 2017 to 2020. And now I think the workshop is still being continued, and it's run by Tom White, who is an artist from New Zealand.

So if you're interested, you can still submit and pay attention to that. And yeah, I also run, I curate, the Art and AI festival in Leicester, where we try and put some of this art in the public spaces around the city. And I've also done a little bit with NFTs, like I think many AI artists do now. So I've exhibited at Feral File and at Unit Gallery. And yeah, so now I'll start my presentation, and I normally start with Deep Dream. So, a technology that came out of Google in 2015, where you had, let's say, an image, and then the algorithm looked at it and got excited by certain features. So you got all the crazy colors and crazy shapes. And that really was one of the first things that I think excited the mainstream about AI and its developments. And to me, it's still one of my favorite projects, in a way, because I do find it quite creative and I quite like the aesthetic. But there are very few artists who've been working with it. And Daniel Ambrosi is probably the one example who's stayed with the technology.

And so he normally does lots of landscape artworks where you can see the Deep Dream aesthetic, but also the subject matter, which I think is good, because a lot of artists sort of let the aesthetic take over everything in the image. So it was all about the aesthetic. But here you can also see the subject matter. And recently, he's also experimented a bit with a kind of cubist treatment of some of this, to freshen up his approach. And, yeah, then came something called style transfer, which is when you had an image and you could switch it into the style of Monet or van Gogh, one of these impressionist artists. And again, a lot of AI researchers and software engineers were really excited about it, because to them, that's what art should be like, right? It's something you find in an old museum. And I think a lot of the artists or art historians I met were horrified, because, of course, artists nowadays are trying to make something new, either in terms of aesthetic or in terms of the comment on society. So, yeah, it's tricky to find interesting projects there, I think, unless you broaden the definition of style to something beyond artistic style.

And this is just an example from Gene Kogan, a key artist and educator in the field, doing some different variations of the Mona Lisa in Google Maps, calligraphy, and an astronomy-based style. And then came something called the GAN, which I think was one of the more popular tools of this generation of artists. It came out in 2014, and then there were lots of different versions of it, because it was a very active research topic, until by, I think it was like 2018 or '19, it began making very photorealistic images. And I think some of my favorite works are still from the early periods of the GAN. So Mario Klingemann has been doing amazing work looking at the human form. And a lot of these images were compared to Francis Bacon. And what I like about them is that sometimes they can still show some of the features of the earlier GANs. So sometimes there's the wrong placement of facial features or, like you can see in the image in the middle, the arm is at an odd angle.

So still some of the glitches and the errors of the technology that are made use of in the artistic work. And I really like that. And when the GANs became very realistic, artists needed to think a lot more about what they would do with their work, because they could no longer rely completely on those glitches. And Scott Eaton is an example of an artist who really studies the human form, and he's able to come up with these images that combine realistic textures with slightly odd shapes that are familiar to everyone who has been following the development of the GAN. And, yeah, Mario Klingemann was also experimenting. So this is a work from the show I curated for Unit earlier this year. And there were two images. The one on the left is, I think, from 2018, from his project called Neural Glitch, where he used a neural network to generate an image of a face to the standards of that time. And for the image on the right, I think he used the image on the left as a source image and then used Stable Diffusion to come up with an image like that.

And you can see how different it is in terms of quality: it's much more realistic and quite different to the earlier GAN image. And, yeah, artists like Ivona Tau look at machine learning and what that means, because, of course, in machine learning, you train your network and it's supposed to improve and produce better results. And she had a project that was doing machine forgetting. So the image quality, I think, got worse, as you can maybe see in this image. And, yeah, of course, also Entangled Others, so Sofia Crespo and Feileacan McCormick, have been doing some amazing work looking at the natural world and all these insects and sea creatures. And, yes, this work was done, I think, in the last year or so, where the two artists combined various generations of GAN images to make these creatures that combine different characteristics from different species. And, yeah, other artists have also been thinking about how they can display this work, or what else they can do to make their work interact more with the ecosystem.

And Jake Elwes, who actually has this work on show now at Gazelli Art House in London, trained an AI on images of these marsh birds and then put up a screen in the Essex marshes. So you can see in the background there were all these birds that were walking around, and they could interact with the screen with this digital bird. And I think that's very interesting because, yeah, you have these two species next to each other, and how do they deal with a robotic, or with this fake, image? Moving on to sculpture, Ben Snell did this lovely project called Dio, where he trained an AI on various designs of sculpture from antiquity to modern times. And then he proceeded to destroy the computer that made all these designs and, yeah, made the sculpture out of it. And I think, conceptually, this is much more evolved than a lot of other AI art, because, of course, there are probably some parallels to artists from the 20th century who were destroying their work. And, yeah, I think it's a really nice piece.

Next up, there’s also a group of artists who really think a lot about kind of the data set and Roman Lipski. He did this project quite a long time ago now, but I find it interesting because he’s been working with sort of like realistic landscapes. He’s been painting a lot, so kind of not really working with the digital realm. And then he took this photograph, made nine different paintings of it, worked with some technologists. He trained an AI to kind of generate an image based on these ones. And, yeah, this is kind of what he got from the AI. And he proceeded to paint different kind of versions in response. And you could kind of see how his style evolved as kind of he kept feeding his works into the machine and receiving a response. And I think this is kind of his final image. So if you look at the two side by side, you can see how, by working together with an AI system, he was able to really evolve his style, both in terms of, I think, how he depicts the subject matter. So it became a lot more abstract and also the color scheme, so it became a lot cooler at certain points.

He was experimenting with many more colors. And, yeah, he still stayed working on a physical piece. So you could still use some of these tools and continue working physically. You don't always need to rely on the purely generated image; you could still do things in paint or engraving and so on. Helena Sarin is another artist who is well known for using her own data sets and having her own aesthetic. And what I really like about her work is that she frequently combines different mediums, like in the image on the right. She's got, I think, flowers, newspapers, and photography as a texture. So combining all these different mediums and GAN tools, she's able to come out with images that are very much her own in terms of aesthetic. And then normally, I talk about Anna Ridler's tulip project, which I commissioned for Impakt Festival in 2018. But I know that she will be doing that. So I guess I'll just mention that, as a curator, what I really appreciate about this project is that Anna made a conscious decision to highlight the human labor that goes into this work.

So in many of her exhibitions, there would be the generated tulips together with a wall of these flowers. And that really made, I think, a difference in how AI art was being perceived and presented, because many artists, even if they would try and figure out how to display their work beyond the digital screen, it wasn't very common to display the data set. And Anna's a real pioneer in that. So, yeah, she'll explain that after my presentation. And, yeah, moving on to more modern times, when DALL-E and CLIP have entered the world. Yeah, of course, I think the artistic and the audience focus have shifted to these text-to-image generators, which is, of course, when you write a text prompt and then get an image out of it. And I think as a curator, I've always struggled to find interesting artworks, because it felt to me that it was almost a completely different medium. It's so different from a lot of earlier AI art, where artists thought a lot more deeply about maybe the technology itself and a lot of the ethical implications in the concept, whereas a lot of the text-to-image generated art feels very much just about aesthetics, the image.

So, yeah, just including a few projects that I think are more interesting to me as a curator, one of which is Botto, which is made by Mario Klingemann and operates, I think, as a DAO [decentralised autonomous organisation], where there's a community of audience members who vote on which of the images is going to be put up for sale. And I remember this was initiated during the height of the NFT boom. And, yeah, I think a few of these images sold within a few weeks and fetched over a million euros, which was great news for AI art. And, yeah, Vadim Epstein is somebody who's been working with these tools quite deeply, particularly CLIP, and developing his own aesthetic and then these narrative videos. Yeah, so his work is great. And Maneki Neko is an artist I curated in one of the NFT exhibitions that I did. And what is special about her work is that I think it feels quite different to the stereotypical aesthetic of Stable Diffusion. So it's quite intricate and, yeah, very detailed. And I think she made this work by maybe combining lots of different images together and doing a lot of post-processing.

But, yeah, it's an image that you can tell has a lot of unique style. And yeah, Ganbrood is another artist who's been very successful with text-to-image generators, and he's been making these epic, fantasy-related images. And others, like Varvara & Mar, have applied the potential of text-to-image generators to come up with different designs, to print them out and make them into a sculpture. And then, of course, there was also a little bit of controversy, which is probably ongoing, related to all these text-to-image generators. Jason Allen got first prize at an art fair in the US. And, yeah, I think people were not sure about the extent to which he had highlighted the AI involvement, because, of course, this was an AI-generated piece made using Stable Diffusion or Midjourney. And I think to anybody who follows AI art made using those tools, it's very obvious that it's something made with one of them, because their aesthetic is very familiar. And, yeah, I remember Jason Allen was defending himself, saying that he spent a while perfecting the prompt to get this particular result.

But, yeah, whether this was clear to the judges is uncertain. And in another photography contest, I think this was the Sony World Photography Awards, earlier this year, Boris Eldagsen submitted this image, which, of course, looks much less like it was made with a text-to-image generator. And I think it was also regarded very highly by the judges. But he pulled his piece because he had used an AI, and he wanted to make a comment that maybe such work is not suitable for these competitions. And, yeah, I'm including Jake Elwes here again, because he has this work called Closed Loop.

That kind of links in; it was made, I think, in 2017. There's one neural network that generates images from text and another one that generates text from the image. So it's like a conversation between the two. And in some ways, it also helps you realize how much the technology has changed in six or seven years. But on the other hand, this piece is much more interesting and much more conceptually evolved than what is currently being done, I think, from my perspective. Let's see what I have next. I'll just maybe show one or two projects. In the interest of time, I should probably finish on this project, which I really like, and which is by two artists called Shinseungback Kimyonghun [Shin Seung Back and Kim Yong Hun], who are based in South Korea. And I think this is a really cool piece, because the artists are using facial recognition in a way that it was never designed for. So they're using it in a fine art context. They worked with artists who were supposed to paint a portrait together with this facial recognition system. And as soon as the system recognized the face, they had to do something to stop it being recognized as a face.
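Their exact setup isn't described here, but the core loop, repeatedly checking whether a detector still finds a face in the work in progress, can be sketched with a stock OpenCV face detector; the webcam source and detector parameters below are my assumptions, not the artists' actual system.

```python
# Hedged sketch of that kind of loop: a stock OpenCV Haar-cascade detector
# watches the painting through a webcam and reports whether it still sees a
# face. Camera index, parameters and the 0.5 s polling are assumptions.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)  # webcam pointed at the work in progress

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print("face detected: keep altering" if len(faces) else "no face: stop")
    cv2.imshow("monitor", frame)
    if cv2.waitKey(500) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```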

And so you got these portraits, some of which look more like portraits than others. But with the image on the right, it took me a very long time to figure out where the portrait was, until I realized that it was a face tilted 90 degrees. So, yeah, I think it's a really cool example of using a technology outside the most obvious generative image tools to still make work that sits within this AI art space. And I think here I will end, and here are my details. You can find out more about my work on the website or email me with any questions. And now I'll pass over to Geoff for the next speaker.

 

[00:16:37.170] – LUBA ELLIOTT – Curator

So I’m looking forward and, yes, I’m going to give a quick overview of AI art from 2015 to kind of the present day because these are the kind of the times I’ve been active in the field and also kind of this recent generation of which Anna Ridler, Jake Elwes, Mario Klingemann are all part of. And just to start off with, I’ll just mention a couple of projects about kind of the perspective I’m coming from. So I started by running a meet up in London that was kind of connecting the technical research community and also artists working with these tools. And that led to a number of different projects. So curating a media art festival, then putting up exhibitions at business conferences, and also the NeurIPS Creativity Workshop, which is probably one of the projects I’m kind of best known for. And yeah, so this was a workshop on creative AI at this really big and important academic AI conference. And alongside that, I also curated an art gallery, which kind of still exists online from 2017 to 2020. And now I think the workshop is still being continued and it’s run by Tom White, who is an artist from New Zealand.

So if you’re interested, you can still kind of submit and pay attention to that. And yeah, I also kind of run the I curate the Art and AI festival in Leicester, where we try and put some of this art in the public spaces around the city. And I’ve also done a little bit of kind of NFTs like I think many AI artists do now. So exhibitioner faro file and at Unit Gallery. And yeah, so now I’ll start kind of my presentation and I normally started with Deep Dream. So a technology that came out of Google in 2015, where you had, let’s say, an image and then kind of the algorithm looked at it and it got excited by certain features. So you got all the crazy colors and crazy shapes. And that really was one of the first things that I think excited the mainstream about kind of AI and its developments. And to me, it’s still one of my kind of favorite projects, in a way, because I do find it quite creative and I quite like the aesthetic. But there are very few artists who’ve been working with it. And Daniel Ambrosi is probably the one example who’s kind of stayed with the technology.

And so he normally does lots of landscape artworks where you can kind of see the deep, dream aesthetic, but also the subject matter, which I think is good because a lot of artists sort of let the aesthetic take away everything from kind of the image. So it was all about the aesthetic. But here you can also see the subject matter. And recently, he’s also sort of experimented a bit with kind of cubist mentiling some of this, to kind of freshen up his approach. And, yeah, then came something called style transfer, which is when you had an image and you could switch it into the style of Monet or van Gogh, one of these impressionist artists. And again, a lot of AI researchers and software engineers were really excited about it because to them, that’s what art should be like, right? It’s something you find in an old museum. And I think a lot of the artists or art historians I met were horrified because, of course, artists nowadays are trying to kind of make something new, either in terms of aesthetic or in terms of the comment on society. So, yeah, it’s tricky to find interesting projects there, I think, unless you kind of broaden the definition of style to something beyond artistic style.

And this is just an example of Gene Kogan, kind of a key artist and educator in the field doing some kind of different variations of Mona Lisa in Google Maps, Calligraphy, and kind of astronomy based style. And then came something called Began, which I think was kind of one of the more popular tools of this generation of artists. And it came out in 2014. And then there were lots of kind of different versions of it because it was a very active research topic until by, I think it was like 2018 or 19, it began making very kind of photorealistic images. And I think some of my favorite works are still from the early periods of the Gan. So Mario Klingemann has been doing amazing work, kind of looking at the human form. And a lot of these images were sort of compared to Francis Bacon. And what I like about them is that sometimes they can still show some of the features of the earlier gowns. So sometimes there’s kind of the wrong placement of facial features or, like you can see in the image in the middle, the arm is at an odd angle.

So still kind of some of the glitches and the errors of the technology that are sort of made use of in the artistic work. And I really like that. And when the gowns became very realistic, artists needed to kind of think a lot more about what they would do with their work because they could no longer rely completely on those kind of glitches. And Scott Eaton is an example of an artist who kind of really studies the human form, and he’s able to come up with these kind of images that sort of combine the realistic textures with maybe slightly odd shapes that are familiar to everyone who has been following the development of the Gan. And, yeah, Mario Klingemann was also kind of experimenting. So this is a work from the show I curated for Unit earlier this year. And there were two images. So the one on the left is from I think it’s kind of from 2018 from his project called Neural Glitch, where he kind of used a neural network to kind of generate an image of a face to the standards of that time. And the image on the right was when he used kind of I think the image on the left is a source image and then kind of used stable diffusion to come up with something with an image like that.

And you can see kind of how different it is in terms of the quality is kind of much more realistic and quite different to the earlier Gan image. And, yeah, artists like Ivona Tau kind of look at machine learning and what that means, because, of course, in machine learning, you kind of train in your network and it’s supposed to kind of improve and produce better results. And she had a project that was doing machine forgetting. So it was kind of the image I think quality got worse as you could maybe kind of see in this image. And, yeah, of course, also entangled others so that Sofia Crespo and Feileacan McCormick have been doing some amazing work looking at the natural world and all these kind of insects and sea creatures. And, yes, this work was done, I think, in the last year or so where the two artists combined various kind of generations of GAN images to make these kind of creatures that combine different characteristics from different species. And, yeah, other artists have also been thinking about kind of how they can also maybe display this work or what else can they do to kind of make their work interact more with the ecosystem.

And Jake Elwes, who actually has this work on show now at Gazelli Art House in London, so he trained an AI on images of these marsh birds and then he put up a screen in the Essex marshes. So you can see in the background there were all these kind of birds that were kind of walking around and they could interact with the screen with this digital bird. And I think that’s sort of very interesting because, yeah, you have these, like, two species kind of next to each other, and how do they kind of deal with a robotic or with this fake image? Moving on to kind of sculpture, Ben Snell did this lovely project called Dio, where he trained an AI in various designs of kind of sculpture from antiquity to modern times. And then he proceeded to destroy the computer who made all these designs and, yeah, made the sculpture out of it. And I think, conceptually, this is kind of much more evolved than a lot of other kind of AI artists because, of course, there are probably some parallels to artists from the 20th century who are destroying their work. And, yeah, I think it’s a really nice piece.

Next up, there’s also a group of artists who really think a lot about kind of the data set and Roman Lipski. He did this project quite a long time ago now, but I find it interesting because he’s been working with sort of like realistic landscapes. He’s been painting a lot, so kind of not really working with the digital realm. And then he took this photograph, made nine different paintings of it, worked with some technologists. He trained an AI to kind of generate an image based on these ones. And, yeah, this is kind of what he got from the AI. And he proceeded to paint different kind of versions in response. And you could kind of see how his style evolved as kind of he kept feeding his works into the machine and receiving a response. And I think this is kind of his final image. So if you look at the two side by side, you can see how, by working together with an AI system, he was able to really evolve his style, both in terms of, I think, how he depicts the subject matter. So it became a lot more abstract and also the color scheme, so it became a lot cooler at certain points.

He was kind of experimenting with many more colors. And, yeah, he still kind of stayed working in a physical piece. So you could still use some of these tools and continue kind of working physically. So you don’t always need to kind of rely on the purely generated image. You could still do kind of things in paint or engraving and so on. Helena Sarin is another artist who is kind of well known for using kind of data sets and having her own aesthetic. And what I really like about her work is that she frequently combines different mediums, like, in the image on the right. She’s got, I think, flowers, newspapers, and photography as a texture. So combining all these different mediums and kind of Gantt tools, she’s able to come out with these images that are very much kind of her own in terms of aesthetic. And then normally, I talk about Anna Ridler’s Tulip project, which I commissioned for Impact Festival in 2018. But I know that she will be doing that. So I guess I’ll just mention that as a curator, what I really appreciate about this project is that Anna made a conscious decision to sort of highlight the human labor that goes into this work.

So in many of her exhibitions, there would be the generated tulips together with a wall of these flowers. And that sort of really made, I think, a difference in how AI art was being perceived and being presented, because many artists, even if they would try and figure out how to kind of display their work beyond the digital screen, it wasn’t kind of very common to display the data set. And Anna’s a real pioneer in that. So, yeah, she’ll explain that after my presentation and yeah, moving on to more modern times when Dali and Clip have entered the world. Yeah, of course, I think the artistic and the audience focus have shifted to kind of these text to image generators, which is, of course, when you write a text prompt and then get an image out of it. And I think as a curator, I’ve always struggled to find interesting artworks because it felt to me that it was almost kind of a completely different medium because it’s so different from a lot of earlier AI art, which was where artists kind of thought a lot more deeply about maybe the technology itself and a lot of the kind of ethical implications on the concept, whereas a lot of the text to image generated art feels very kind of just about aesthetics, the image.

So, yeah, just including a few projects that I think kind of are more interesting to me as a curator, one of which is Botto, which is made by Mario Klingemann and operates, I think, as a DAO [digital autonomous organisation], where there’s a community of kind of audience members who vote on which of the images is going to be put up for sale. And I remember this was kind of initiated during the height of the NFT boom. And, yeah, I think a few of these images sold within a few weeks fetched over a million euros, which was kind of great news for AI art. And, yeah, Vadim Epstein is somebody who’s been working with, I think, these tools quite deeply, particularly Clip, and sort of developing his own aesthetic and then these narrative videos. Yeah, so his work is great. And Maneki Neko is an artist I curated in one of my kind of NFT exhibitions that I did. And what is special about her work is that I think this feels quite different to the stereotypical aesthetic of stable diffusion. So it’s quite intricate and, yeah, sort of very detailed. And I think she made this work by maybe combining lots of different images together and doing kind of a lot of paste processing.

But, yeah, it’s an image that you can tell has a lot of kind of unique style. And yeah, Ganbrood is. Another artist who’s been kind of very successful with text to image generators, and he’s been making these kind of epic, fantasy related images. And others, like Varvara and Mar, has kind of applied the potential of text image generators to kind of come up with different designs, to print them out and make them into a sculpture. And then, of course, there was also a little bit of kind of controversy, which is probably, like, ongoing, related to all these text image generators. And Jason Allen got first prize at an art fair in the US. And, yeah, I think people were not sure about to the extent that he highlighted the AI involvement, because, of course, this was kind of an AI generated piece using Stable Diffusion on a journey. And I think to anybody who follows AI art made using those tools, it’s very obvious that it’s something kind of made with one of them because their aesthetic is kind of very familiar. And, yeah, I remember Jason Allen was kind of defending himself, saying that he spent a while kind of perfecting the prompt to kind of get this particular result.

But, yeah, whether this was kind of clear to the judges is uncertain. And in another photography context, I think this was like the Sony Photo Awards. Earlier this year, Boris Eldagstan submitted this image, which, of course, looks much less like it was made with a text to image generator. And I think he was also kind of regarded very highly by the judges. But he pulled his piece because he used an AI, and he thought that he wanted to make a comment that maybe it’s not suitable for these competitions. And, yeah, including Jake Elwes here, again, because he has this work called Closed Loop.

That was made, I think, in 2017. There’s one neural network that generates images from text and another one that generates text from the image. So it’s like a conversation between the two. And in some ways, it also helps you realize how much the technology has changed in, like, six or seven years. But on the other hand, this piece is much more interesting and much more conceptually evolved than what is currently being done, I think, from my perspective. Let’s see what I have next. And I’ll just maybe show one or two projects. In the interest of time, I should probably finish on this project, which I really like, and is by two artists working together as Shinseungback Kimyonghun, who are based in South Korea. And I think this is a really cool piece because the artists are using facial recognition in a way that it was never designed for. So they’re using it in a fine art context. They worked with artists who were supposed to paint a portrait together with this facial recognition system. And as soon as the system recognized the face, they had to do something to stop it being recognized as a face.

And so you got these portraits, some of which look more like portraits than others. But with the image on the right, it took me a very long time to figure out where the portrait was, until I realized that it was a face tilted 90 degrees. So, yeah, I think it’s a really cool example of using a technology outside the most obvious generative image tools to still make work that sits within this AI art space. And I think here I will end, and here are my details. You can find out more about my work on the website or email me with any questions. And now I’ll pass over to Geoff for the next speaker.

Mark Webster artist – Hypertype – CAS AI Image Art talk 2023 transcript – NLP

MARK WEBSTER – Artist

All material is copyright Mark Webster, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Mark Webster’s Area Four website.

https://areafour.xyz/

Mark Webster Hypertype London exhibition

Thank you for inviting me. It’s wonderful. Okay, just maybe a quick introduction. So, my name is Mark Webster. I’m an artist based out in Brittany, a lovely part of France. I’ve been working with generative systems, well, computational generative art, since 2005. And it’s only recently, though, that I’ve started to work on a personal body of work. One of those projects is Hypertype, which is the project I’d like to talk to you about this evening. So, in a nutshell, Hypertype is a generative art piece that uses emotion and sentiment analysis as its main content. I’m pulling in data using a specific technology, and that data is used as content to organise typographic form, letters and symbols. So it’s first and foremost a textual piece. And secondly, it’s trying to draw attention to a specific field in AI, a specific technology, which is called Natural Language Processing, or Understanding. So I’m going to talk a little bit about this. I’m going to talk a little about the ideas that led me to develop Hypertype. What you’re actually seeing on the screen at the moment is two images from the program that were exhibited in London last year in November.

So just to talk a little bit about where this all came from. A few years back now, I came across this technology. It’s by IBM, an API called Natural Language Understanding. What this technology does is it enables you to give it a text and it will analyse this text in various ways. There was one particular part of the API that interested me, which was the emotion and sentiment analysis part. So what it does is you give it a text and it basically spits out keywords. These keywords are relevant words within the text, and it will assign a sentiment score, so something that is positive, negative, or neutral, to these keywords. It also assigns an emotion score based on five basic emotions. These five basic emotions are joy, sadness, fear, disgust, and anger. So, yeah, I came across this technology quite a few years ago and I became kind of fascinated by it. I wanted to learn a little bit more. So to do that, I did a lot of reading and I also wrote a few programs to try and explore what I could do with this technology, just to try and understand it without thinking about doing anything artistic to begin with.
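As a rough illustration of the request he describes, here is a minimal sketch using IBM’s Python SDK for Natural Language Understanding, asking for keywords with sentiment and emotion attached to each one. Mark worked in Java, so this is only an assumed setup; the API key, service URL and version date are placeholders.

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, KeywordsOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials and URL; a real call needs an IBM Cloud NLU instance.
authenticator = IAMAuthenticator("your-api-key")
nlu = NaturalLanguageUnderstandingV1(version="2022-04-07", authenticator=authenticator)
nlu.set_service_url("https://api.eu-de.natural-language-understanding.watson.cloud.ibm.com")

# Any text will do; here we read an article from disk.
text = open("article.txt").read()

# Ask for keywords, with a sentiment score and five emotion scores per keyword.
result = nlu.analyze(
    text=text,
    features=Features(keywords=KeywordsOptions(sentiment=True, emotion=True, limit=10)),
).get_result()
```

The response comes back as JSON, which is what the next slide shows.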

So what you’re seeing on screen here is, on the left, basically just a screenshot of some data. This is a JSON file; this is typically the kind of data that you get out of the API. Here I’m just underlining the keyword. This is one keyword, which is a person: Douglas Hofstadter, a very well known philosopher in the world of AI. And as you can see, there are five emotions, with a score that goes from zero to one, that are given to Douglas. And there’s a sentiment score; it’s neutral. On the right, what you’re seeing is probably something that you’ve seen before. This is a different technology, but it’s very much linked with textual analysis. What you’re seeing on the right is facial recognition technology. And in fact, you are seeing the photo of the face of a certain Paul Ekman. Now, Paul Ekman is quite famous because he was, along with his team, one of the people who brought up this theory that emotions are universal, that they can be measured and they can be detected. And this was a theory that was used in part to develop facial emotion recognition, but also textual emotion analysis.
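To make the structure he is describing concrete, here is an illustrative sketch of that keyword data and how one might read it back. The values are invented and the field names are assumptions modelled on the Watson NLU keyword results, not a copy of the slide itself.

```python
# Illustrative shape only: one keyword, a sentiment score, five emotion scores (0 to 1).
result = {
    "keywords": [
        {
            "text": "Douglas Hofstadter",
            "relevance": 0.91,
            "sentiment": {"score": 0.0},  # roughly neutral
            "emotion": {"joy": 0.08, "sadness": 0.21, "fear": 0.12,
                        "disgust": 0.03, "anger": 0.05},
        }
    ]
}

# Walk the keywords and report the strongest emotion for each.
for kw in result["keywords"]:
    emotions = kw["emotion"]
    dominant = max(emotions, key=emotions.get)
    print(f"{kw['text']}: sentiment {kw['sentiment']['score']:+.2f}, "
          f"dominant emotion {dominant} ({emotions[dominant]:.2f})")
```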

So, as I said, as I learned about this technology, I wrote a lot of programs. What I have here is a shoot screen of an application that I wrote in Java, basically enabling me to kind of visualise the data. Very simple. Actually. What you’re seeing is an article on Paul Ekman’s research, which is quite funny. On the right, you can see that there are a list of keywords. And for each keyword, there is a sentiment score. And on the left, there’s a little graphic, and there’s a general sentiment score. And then I can go in through the list of each keyword, and I can see kind of information about the various five emotions. So there’s a score for each emotion joy, sadness, disgust, anger, and fear. So I built quite a few programs. I also made a little Twitter bot that enabled me to kind of… because you can get so much data from this technology, it was really important for me to kind of get an understanding of how it not just works, but what it was giving back. So I wrote a little Twitter bot that basically would take an article from The Guardian every morning, and then it would analyse this and it would publish the results on Twitter.
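A bot along those lines might be wired together roughly as below. This is a speculative sketch, not the artist’s code: the Guardian Open Platform and Twitter credentials are placeholders, and analyse() is a hypothetical wrapper standing in for the NLU call sketched earlier.

```python
import requests
import tweepy

GUARDIAN_KEY = "your-guardian-api-key"  # placeholder

# Fetch the newest Guardian article; show-fields=bodyText returns the plain article text.
resp = requests.get(
    "https://content.guardianapis.com/search",
    params={"api-key": GUARDIAN_KEY, "order-by": "newest",
            "page-size": 1, "show-fields": "bodyText"},
).json()
article = resp["response"]["results"][0]

# analyse() is a hypothetical helper wrapping the NLU request shown above.
result = analyse(article["fields"]["bodyText"])
kw = result["keywords"][0]
status = (f"{article['webTitle']} | top keyword '{kw['text']}': "
          f"sentiment {kw['sentiment']['score']:+.2f}, "
          f"joy {kw['emotion']['joy']:.2f}, sadness {kw['emotion']['sadness']:.2f}")

# Post the daily summary; keys are placeholders, and a scheduler such as cron runs it each morning.
client = tweepy.Client(consumer_key="...", consumer_secret="...",
                       access_token="...", access_token_secret="...")
client.create_tweet(text=status[:280])
```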

And this just enabled me to observe things. But at the end of the day, at some point, there was the burning question: well, how does this actual technology work? How does a computer go about labelling text with an emotion? And so I went off and I did a lot of reading about this, not just general articles in the press; I tried to learn about it technically, and I came across a lot of academic articles. Now, I’m not a computational linguist, and very quickly I came across things that were very, very technical. But something else very different happened, and it was quite interesting. While I was trying to learn about the technical side, about how this computer was labelling text with emotion, another question came about, which was: well, what is an emotion? What is a human emotion? And that was really interesting because at the time, I was reading a lot of Oliver Sacks. You may have heard of Oliver Sacks. He’s written a lot of books. He was a neurologist, and although he never really touched upon emotion, his books kind of opened up a door. And I started to read and learn about other people who were working in the field of neurobiology.

And there were a few people: there was Antonio Damasio and there was Robert Sapolsky, two people who are very well known in the field of neurobiology and who touch on questions not just of emotion but also of consciousness. A lot of their texts can be quite technical, yet they were very, very interesting. And then there was another person that came along, and I’ll show the book cover, a certain Dr. Lisa Feldman Barrett, also based out in the States, who’s doing wonderful research and has written a number of books, one of them here called How Emotions Are Made. Now, it was really with Lisa Feldman Barrett’s book that something clicked, because in it she started to talk about Paul Ekman, and in a way she really pulled his whole theory apart. Now, Dr. Barrett is doing a lot of research in this field, so she’s working in a lab and she’s doing contemporary research. What she says in the book really debunks the whole theory of Paul Ekman. That is to say that human emotions are not universal, that they are not innate, that we’re not born with them. And there’s this very interesting quote that I put here: emotions are not reactions to the world, rather they are constructions. So her whole theory drives towards this idea that, in fact, emotions are concepts, and they’re things that we do on the fly in terms of our context, in terms of our experience, and in terms of our cultures.

And so this was really an eye opener for me. And it also, in a way, made me think, well, again, how does a computer label words and try to infer any kind of emotional content from them? From what I was reading of these people, Lisa Feldman Barrett, Antonio Damasio, Robert Sapolsky, they added the whole body side to things. We tend to think that everything is in the brain, but emotions, or experiences, are very much a bodily experience. And so at the end of the day, I basically came to the conclusion that a computer is not going to really infer any kind of emotional content from text. So from that point of view, I thought, well, it’s an interesting technology and it would be nice to do something artistic with it. So how am I going to go about this? That was the next stage. I did all this research and I basically came to the conclusion that the data is there.

There’s lots of data. How do I use it? Let’s use it as a sculptor might use clay or a painter will use paint. Let’s use that data to do something artistic with. And at the time, I actually didn’t do anything visual. I did a sound piece to begin with, published back in 2020, called The Drone Machine. This particular project was pulling in data from the IBM emotion analysis and using that to drive generative sound oscillators. I basically built a custom-made digital synthesiser that was bringing in this data and generating sound. I can share the project perhaps later on with a link. It was published and went out on the radio. It’s 30 minutes long, so I’m not going to play it here. This was the first project I did. What was interesting is that I chose sound because I found that sound was a medium that probably spoke to me more in terms of emotion. But the next stage was indeed to do something visual. And this is where Hypertype came along. And again, the idea was not at all to do a visualisation.
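To give a sense of what “data driving oscillators” could mean in practice, here is a minimal sketch that turns one set of emotion scores into a short additive-synthesis drone. It is a guess at one possible mapping, not The Drone Machine itself: the scores, the base frequency and the score-to-pitch mapping are all invented for illustration.

```python
import numpy as np
import wave

# Hypothetical emotion scores (0 to 1) for one keyword, shaped like the NLU output.
emotions = {"joy": 0.12, "sadness": 0.58, "fear": 0.31, "disgust": 0.05, "anger": 0.22}

SAMPLE_RATE = 44100
DURATION = 10  # seconds
t = np.linspace(0, DURATION, SAMPLE_RATE * DURATION, endpoint=False)

# One sine oscillator per emotion: the score scales its amplitude and detunes its pitch.
base_freq = 110.0  # a low A as the drone's root
signal = np.zeros_like(t)
for i, (name, score) in enumerate(emotions.items()):
    freq = base_freq * (i + 1) + score * 20.0
    signal += score * np.sin(2 * np.pi * freq * t)

# Normalise and write a 16-bit mono WAV file.
signal /= np.abs(signal).max()
pcm = (signal * 32767).astype(np.int16)
with wave.open("drone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(pcm.tobytes())
```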

Again, the data for me just didn’t, to use that word, compute. For me, the data was purely a material I could play with. So here what you’re seeing on the screen are the very first initial prototypes, visual prototypes for Hypertype, in which I was just pulling in the data and putting it on the screen. There were two basic constraints I gave myself for this project. The first one was that it had to be textual, okay? That was it. It had to be based on text, so everything is a font. And secondly, the content, that’s what I said to myself, should come from the articles that I read, all the research I’d done about the technology. So those were the two constraints. And from there I basically started to develop. I’ll probably pass through a few slides because I’m sure we’re running out of time, but here I’m just showing you a number of iterations of the development of the project. So I brought in colour. Of course, colour was an interesting parameter to play with, because you could probably think that with emotion you want to assign a certain colour to an emotion.

But that, again, I completely dismissed. In fact, a lot of the colour was generated randomly from various colour palettes that I wrote. Here’s a close up of one of the pieces. Here are some more refined iterations. For those people who work with generative programs, you get a lot of results. And I think as an artist, what is really, really difficult to do when you’re working with a generative piece is to try and find the right balance between something that has visual coherency across the whole series, yet has enough variety. These two images are obviously very different, one of them being quite minimal, the other being a little bit more charged, yet you notice that they have a certain visual coherency. So as a generative artist, you create a lot of these. I usually print out a lot. I usually create websites where I just list all the images so I can see them. And then eventually, yes, there was the exhibition. For the exhibition, there were five pieces that I curated, so five pieces I chose from the program. And then the program also went live for people to mint in an NFT environment. So these were the last pieces, so maybe I’ll stop there.

 

AI & Image Art CAS Talk 1 June 2023 – video & transcripts online

AI & Image Art CAS Talk 1st June 2023

The talk included Geoff Davis (host and Introduction), Luba Elliot (curator) with a history of AI Art, and the artists Anna Ridler, Mark Webster and Patrick Lichty.

Transcripts are below the video. With thanks to CAS and Sean Clark.

AI and Text talk is also online; please see the AI & Text Transcript page, which has the video link, or visit the Computer Arts Society Talks page

The AI & Image Art CAS talk video:

TRANSCRIPTS 

Geoff Davis – Introduction – AI Researcher at UAL CCI London, Artist

Luba Elliott – Curator, Creative AI Researcher, Historian

Anna Ridler – Artist

Mark Webster – Artist

Patrick Lichty – Artist, Writer

A Mountain of Pent Up Tears – children’s stories generated texts

Children’s stories: this is part of the classic literature series on this blog, experiments in hybrid fiction.

I turned out so fast, I barely looked back. (I turned to look, only to find the figure had gone.) What has gone before was the dark room.

In the far left corner lay a pile of books. The ones on the page, all full of life’s lessons; the black ones, I thought of the colorless, the dark books, my soul had never seen red in my life. These weren’t books of old stories. These were children’s stories told by old schoolboys, and children’s books have been around ever since:

This was written in the 1840s when young, red dwarfs lived in the forest. They wore their clothes brown and brown and purple. At the close of the story, the author makes the child a dwarf, the woods were too far for them to go, so they are made brown. His book is told like many children’s stories.

“This is where the forest is.”

“The forest, where do I go then?”

“In the woods. In the woods, as you said.”

“I, too, am brown.”

“Yes, I am. I am in the forest. It is the way of the forest.”

“And me?”

“Yes, too, me too.”

“And you too?”

“Yes, too.”

The forest is the only place in which these children can know what things are true. They see the forest as their only home. If they go away, then they see them as an enemy who must be conquered. But these children have all travelled over, and gone past. They’ve seen it all over again.

They know what the forest is; how it works, how things go. They can see everything. And for some reason they know that they are all in the same boat, that they can never cross over to other paths. So they know, even in that boat, what the forest means, and why it is there.

What’s behind the green, silver-tinted door. Where’s the white door? That’s not the forest, is it? The white door has never been there. What was behind that door, the door to the land?

They do not know why they were born into the woods. They think that the forest is God’s gift for them. But what’s there to be gifted for? They are the only ones, after all.

When asked how the sun comes out, and when asked why the sun comes out.

Answer: There is no one here who knows.

And they need not have seen the sun.

 


GD: the first 68 words of the generated text (above) were next used as the prompt to generate the text (below). The 68 words are shown in italics.
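For readers who want to try the same prompt-continuation step, here is a minimal sketch using an off-the-shelf GPT-2 model via the Hugging Face transformers pipeline. The model actually used for these stories is not named here, so GPT-2 and the sampling settings are assumptions purely for illustration.

```python
from transformers import pipeline

# GPT-2 stands in for whichever model generated the stories above.
generator = pipeline("text-generation", model="gpt2")

# The opening of the first generated text, reused as the prompt.
prompt = ("I turned out so fast, I barely looked back. (I turned to look, "
          "only to find the figure had gone.) What has gone before was the dark room.")

# Sample a continuation of roughly the same length as the one shown below.
result = generator(prompt, max_new_tokens=250, do_sample=True,
                   temperature=0.9, top_p=0.95)
print(result[0]["generated_text"])
```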


I turned out so fast, I barely looked back. (I turned to look, only to find the figure had gone.) What has gone before was the dark room.

In the far left corner lay a pile of books. The ones on the page, all full of life’s lessons; the black ones, I thought of the colorless, the dark books, my soul had never seen red in my life.  

There, I thought, was the door leading to what was yet to come. But in the darkness, an unseen figure, one I couldn’t yet recognize, looked from the left to the right, slowly. As it followed, my eyes scanned the walls, but I couldn’t find the same room. I reached out to touch the door, but the figure stayed, a blank shadow across the room, it never left the door. It seemed to vanish into nothingness.

Then came an accelerating sound, as the figure dropped, and it moved slowly back into the room.

“What have you done?” I cried, the words begins to rise up like a mountain of pent up tears. There, I found, was the door. It turned, and my body seemed to lift it, before the figure itself left to my side.

A smile appeared on my face as I looked at the room, the books filled a giant library with life and lessons, the black ones I had hoped would be my own in my own time, the colorless ones, the ones I’ve already read, the ones I’ve been looking for. They were to tell me what I would need to know, how to survive this night, to know the unknown. They were a path, a map.

In the darkest hall, the room I saw the figure enter, a pale shadow in the light. The place was dark, it was cold. He took me in, and  I felt he was going to kill me, he was taking me inside. The sound woke me, and I opened my eyes to find another shadow, a figure standing behind me. I didn’t know what he was, but I was in my own home.

That man walked towards me, eyes glow green, his eyes were gloomy, the healers he wore, his hair was matted with grey, it was grey that faded in between them. He was big, at least, if a little tall. At least I had grown the right size for this thing.  I don’t think his arms were big, so that is probably just his arms, or perhaps his face was too short for his eyes. His fingers were lank, the seated fingers were black, and  in the dimmest corner of his head was a hood, with no lights.

It was very dark, there was an open door beyond it. He held my hand like a…well, for I was small. I could tell the size of his hand. He had long grey hair in his hands. He held my hand, and I saw red, it was black, as if it was a red colour.

I thought, and in that moment, my skin was soft and shiny. In between my knees. So, I moved over and looked, in my hands, and there was a dark green hand, with red hands, I knew this was a shadow, but that doesn’t mean anything to him.

The first hand, the first hand was big, its face was black, it was large, for I was small. The black was green, its color was darker than red. I touched it, and it was cold.

The second hand was big, but the hands were dark, and had big fingers, the first hand was long and dark green, it was like a purse, I didn’t know how it was, but it was cold, in that moment, I froze.

I could only watch.

 


You can guess the original story from the image (below).

Little Red Riding Hood and Wolf

 

Photo credit https://www.publicdomainpictures.net/