Anna Ridler Artist – CAS AI Image Art talk 2023 transcript

ANNA RIDLER – Artist

All material is copyright Anna Ridler, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Anna Ridler’s website.

Anna Ridler
Anna Ridler’s Myriad (Tulips)

It was really interesting hearing both of your talks [Geoff and Luba], and it’s always a pleasure to speak after Luba because, as she mentioned, we have been working together quite closely for the past five, six, seven years, has it been? It’s been really interesting to see how the space has evolved in that time, especially now with the recent advancements in diffusion models and text-to-image models. I’m best known for my tulip projects, which I made using GANs and which I will talk about, but I also want to touch on how the field has changed. Because now, when you talk about AI art, if you go on Twitter or Discord, the term is very much being used to refer to one particular type of work: this text-to-image work. What I’m showing now are just some very quick examples of DALL-E tulips that I made. I was actually very lucky and was one of the first people to get access to DALL-E back when it was released.

And I found it really difficult to make work with. It has taken me a long time to get my hands into things like Stable Diffusion, DALL-E and Midjourney, because of the way they are structured and designed. Particularly with DALL-E: you have no access to the code base, you have no access to the data set, there’s no way you can tinker with it, and you’re very reliant on an API; everything is closed off. So as an artist who is very interested in the tools and the means of production, I found it very difficult to get into and work with. That is changing, and I still think there are conceptually interesting things you can do with it. There’s really interesting research coming out around how it relates to memory, and you can do some interesting things around language and ontology, but for the most part it’s locked away. Even when you’re working with Stable Diffusion, you can’t look through all of the data; you can fine-tune it, but you’re always going to be working off the base of the massive LAION data set.

But for me, it’s taken a long time to get to a place where I think I can do something with it. That being said, there’s so much being produced with it every day, and it does feel like magic when you play with it. I remember the first time I typed something in and got these images out; it felt so incredible. But it raises a real question about what art is, because not every image is necessarily art. That’s a debate that’s going on now: so much is being produced, and there is this question about where the art sits. For me, the art is very much in how the work is then displayed, the message it contains, and the experience that someone has through interacting with it. So, as Luba mentioned, I’m best known for the work that I do with data sets, and this isn’t something that’s linked only with machine learning; it’s something I’ve worked with for a much longer period of time.

I’ve always been interested in archives and libraries and data and information, because for me every piece of data, every piece of information, is a trace of something that once existed. In many ways, reconstructing that data or that data set is a very human thing to do: you work almost like a detective, building up these bits of data to produce an idea or a story, or to use in my projects. I think there are lots of interesting parallels between encyclopaedias and dictionaries and their history, and some of the issues around machine learning and data sets that I’ve explored. The project that I showed in the previous slide, which is playing now, was commissioned by The Photographers’ Gallery; for it, I essentially created my own ImageNet using Victorian and Edwardian encyclopaedias. One of the things that’s also really important for me, and part of my practice, is showcasing the labor that goes into these projects and the way of working, so it’s not just about the final output.

It’s also how I got to that output. A lot of the time I will document the process of making, and that documentation will be an equally important part of the final project as the artefact. That’s something I come back to again and again in my work, including the project that Luba commissioned me to make back in 2017, which really took off for me: a project about tulips, where I created a piece using a GAN. It was an early GAN back then, which I trained on 10,000 images of tulips that I took myself. I didn’t grow the tulips myself; I was in the Netherlands at the time, working. One of the reasons I was so interested in tulips was that I was making a comparison between Tulip Mania, the first known speculative bubble, and Bitcoin, as well as the bubble that was going on around AI at the time. So in the GAN piece, which I’ll show in a bit, the GAN is controlled by the price of Bitcoin. It was a really important project for me to do.

And I spent a lot of time building this data set, and you have a very different relationship to the data when you’re working with it very physically. Carrying all of these tulips was heavy work; so was stripping them. One of the reasons I stopped at 10,000 tulips wasn’t because it’s a very nice round number, although it is; it’s because tulip season ended. So even though this was a very digital project, it was driven by the rhythms of nature. After I took all of these photographs (if I can go to the next slide), I really wanted to display the data as an artwork in and of itself, which led to a separate piece called Myriad, where I’ve taken the photographs and shown them with some of the labels that I attached to them handwritten underneath. For me, it was a way to draw attention to all of the human decision-making that sits somewhere in the chain of AI, because at the time this was being shown, back in 2018, there wasn’t yet that discussion around bias and how human error can creep into these systems.

When this piece is shown, you can see on some of the prints my handwriting where I’ve crossed things out. The piece is huge, around 50 m², and it’s only been shown in its entirety twice, because you need quite a large space to put it up in. I think it also gives people a real sense of the scale of data, because with 10,000 images on a thumb drive that you scroll through, you don’t really understand what that number means in the way that you do when your body is physically reacting to it. It takes a long time to walk past all of these photographs, and you get a sense of how long the process of putting it all together was, and of the labor, effort and energy that sit behind creating a data set. Another reference point for this project was the Dutch still life: the Golden Age Dutch painters are very heavily referenced in how I composed my data set, which is also another reason why I really like doing things myself.

Well, I suppose you can now, with tools like Stable Diffusion, but at the time you couldn’t just go to Google and ask for 10,000 images of tulips against a black background. And a further comparison that I liked between these Dutch still lifes and the way that GANs work: in these paintings, the flowers can all exist at once. They’re flowers from spring and summer and autumn and winter combined, so these bouquets are botanical impossibilities. But they’re combined using all of the fragments the artist has of the flowers he has seen: sketches and memories and things like that. So rather than copying from nature, the paintings draw on the experience of the painter. For me, that’s a really nice parallel to how GANs work. They’re not merely copying images from the data set and collaging them together, but creating an imagined botanical possibility through the knowledge gained from the data set.

So I think there’s a nice parallel that exists there. The other reference that I like to bring out with this project is the history of floral data sets that sit inside machine learning. This is the Iris data set, which ships inside scikit-learn, so every time you import that library into a piece of code, you’re also importing the Iris data set. It was created by Ronald Fisher and holds all this different data about irises. So there is this hidden history of floral data sets inside machine learning, which is also something I quite like about this project. On to the final piece: I later made two versions of it, and this is the 2019 edition, made after StyleGAN was released. It’s a three-screen installation, and, as I mentioned, the tulips are controlled by the price of Bitcoin, becoming more stripy and open as the price goes up. The title references the disease that gives tulips their stripes, which also made them the most valuable at the height of Tulip Mania.
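As an aside on the scikit-learn point above: Fisher’s Iris data set really does ship inside the library itself, so it travels with every installation. A minimal sketch in Python:

```python
# Fisher's 1936 Iris data set is bundled inside scikit-learn.
from sklearn.datasets import load_iris

iris = load_iris()
print(iris.data.shape)      # (150, 4): sepal/petal length and width
print(iris.target_names)    # ['setosa' 'versicolor' 'virginica']
print(iris.DESCR[:300])     # the bundled description credits R.A. Fisher
```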

And it’s asking questions around value and notions of value and speculation, collapsing these two different moments in history. What I also really enjoyed about this project, because it is a very complicated project, was that through working with the data set and with the GAN, I was able to explore very different things in each part of it. The data set piece was much more explicitly about machine learning and about the issues and ethics that maybe sit inside of it, whereas the GAN piece was much more about something not really related to the technology: wider questions around value and notions of speculation. As I said, it’s a work that still gets exhibited quite regularly in a variety of institutions and cultural spaces, everywhere from buses in a town in Germany, where it ran just as a very pretty moving-image piece, through to critical overviews in different museums of where photography is going. And because I know we don’t have masses of time, I just wanted to end with something I often end my talks with, as I’m often asked where I see AI art sitting.
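Ridler does not describe her implementation in the talk, but as a purely speculative sketch, one way a live price could steer a trained GAN is to map it onto an interpolation between two latent vectors. The latent endpoints, the dimension and the price range below are all invented for illustration:

```python
# Speculative sketch, not Ridler's actual code: a market price steers a GAN
# by interpolating between two latent endpoints ("closed" vs "striped, open"
# tulips). Endpoints, dimension and price range are invented placeholders.
import numpy as np

LATENT_DIM = 512                            # e.g. StyleGAN's usual latent size
rng = np.random.default_rng(seed=0)
z_closed = rng.standard_normal(LATENT_DIM)  # hypothetical "closed tulip" latent
z_open = rng.standard_normal(LATENT_DIM)    # hypothetical "open, striped" latent

def latent_for_price(price_usd, low=5_000.0, high=50_000.0):
    """Map a BTC price onto a point between the two latent endpoints."""
    t = min(max((price_usd - low) / (high - low), 0.0), 1.0)
    return (1.0 - t) * z_closed + t * z_open

z = latent_for_price(32_000.0)
# frame = generator(z)  # a trained generator would render this latent as a tulip
```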

I can only answer for myself, because I’m not a curator. I find that I look back into history to see where it might go, and while I find it hard to connect to some of the early algorithmic artists, I find it very easy to see the parallels between my practice and land and environmental artists. I often look to them for inspiration, because land and environmental art is so much about planning. It’s about thinking through all the various possibilities and then allowing something that you can predict, but can never control, to act on that planning. For me, that’s the same as spending all of this time building my data set and constructing a model, then pressing go and allowing something to come out of it. And then there is this question of where the art sits. In land and environmental art, a lot of it is in the documentation, and I think for AI artists it’s also the documentation.

It’s not necessarily the model as it runs, or the insides of what’s going on; it’s the images, the sound, the performance that comes out of it. So, yeah, that’s where I wanted to end: how, even now, I’m still amazed at the possibilities that this technology can offer and how inspirational I find it on a daily basis.

 

Patrick Lichty Artist & Writer – Studio Visits Posthuman Atelier – CAS AI Image Art talk 2023 transcript

PATRICK LICHTY – Conceptual Artist, Writer

Discussion of the project “Studio Visits: In the Posthuman Atelier” before the Computer Arts Society (of Britain).

All material is copyright Patrick Lichty, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Patrick Lichty’s website here.

Patrick Lichty

I am Patrick Lichty, an artist, curator, cultural theorist, and Assistant Professor of Creative Digital Media at Winona State University in the States. I will talk about my project, a curatorial meta-narrative called “Studio Visits: In the Post-Human Atelier.” Much of my AI work has yet to be widely shown in the West, as until two years ago I had spent six years in the United Arab Emirates, primarily at Zayed University, the federal university in Abu Dhabi. I have been working in the new media field for about 30 years, dealing with notions of how media shape our reality. You can see some of my earlier work in this slide: I was part of the activist group RTMark, which became the Yes Men, and the Second Life performance art group Second Front, and there is also some of my tapestry and algorist work.

This slide shows my 2015 solo show in New York, “Random Internet Cats,” which comprises generative plotter cat drawings. The following slide shows some of the asemic calligraphy I fed through GAN machine learning. I worked with Playform.io’s machine learning system to create a personal Rorschach by looking for points of commonality in my calligraphy. I called the project Personal Taxonomies; other works, like my Still Lives, were generated through StyleGAN and Playform.io. So I’ve been doing AI for about 7-8 years and new media art for almost three decades. Let’s fast-forward to now. In the middle of last year, I decided to move away from the PyTorch GAN machine learning models I had used with my calligraphy work and my paintings at Playform.io. Switching to VQGAN and CLIP-based diffusion engines, I worked with NightCafe for a while. Then I found MidJourney. At first, I was only partially satisfied with the platform, as I was on the MidJourney Discord server and saw people working with basic ideas. I decided to focus on two concepts. First, I began to think of what I was doing as concrete prose with code. Secondly, I took up contestational aesthetics, as my prompts would contain ideas not being used on the MidJourney Discord.

I wanted to find concepts for my prompts that were less representational than the usual visuals of a CLIP-based AI. I did two things. First, I ignored everything typed on the MidJourney Discord, which was almost an aesthetics of negation. Then, I considered the latent space of the LAION-5B dataset that MidJourney was using as an abstract space, and decided how to deal with that conceptual space using abstract architecture. I started querying it with images like Kurt Schwitters’ Merzbau, just as a beginning, as well as Joseph Cornell. This produced about twelve series, called “The Architectures of the Latent Space,” illustrated here. They are unusual because they still refer to Schwitters but are much more sculptural and flatter. This was the beginning of my work in that area; then I started finding what I felt were narratives of absence.

I work with multiple notions of abstraction, as I want to see what is transcendent in AI realism. For example, I started playing with real objects in a photography studio. This image is a simulated photo of a four-dimensional cube, a tesseract, which isn’t supposed to be representational. Still, it was exciting that it emerged and illuminated the space. This told me I was on a path where I was starting to confuse the AI’s translator, and that it was beginning to give results in between its sets of parameters, which is interesting. One body of work where my attempts at translator confusion are evident is The Voids (Lacunae): basically brutalism and empty billboards. It was inspired by a post that Joseph DeLappe from Scotland made on Facebook of a blank billboard. And one of the things I noticed is that these systems try to represent something. They try to fill space: if there is a blank space, the system tries to put something in it.

MidJourney tries to fill visual space with signifiers, so one of my challenges was forcing the AI engine to keep that space open. This resulted in experiments with empty art studios and blank billboards. The artists were absent or had no physical form, which was the conceptual trigger. These spaces have multiple conditions and aesthetics, with a lot of variation. The question lingered: “How do I put these images together?” There are numerous ways to deal with them, so I made about 150 or 200 in a series and then created a contact book. This gets away from the idea of choice anxiety in AI art, and so on. I have a book that’s ready for publication, so that someone can see my entire process and the whole set of images. But in this case, what I thought was very interesting is that I wound up going into a bit of reverie around the fantasy of these artists whose studios I had been looking into, and they weren’t in, or they didn’t exist in physical form.

Having worked in criticism, curation, and theory, as well as being an artist, I decided to take these concepts and create a meta-structural scaffold: a curatorial narrative built on this conceit of a body of 50 artists. When I visited their fictional studios, thinking about theoretical constructs such as Baudrillard’s and Benjamin’s ideas of absence and aura, I created a conceptual framework, a catalog for a general audience that preceded the exhibition. There’s precedent for this: there’s Duchamp and the Boîte-en-Valise. I’ve done work like this before, constructing shows in informal spaces like an iPod. Here is a work from 2009, the iPod en Valise, as the iPod is a ‘box’ (boîte) for media work. And then I thought, why can’t I do the same with a catalog? Why can’t I use the formal constraint of the catalog to discuss the sociology of AI and some of the social anxieties, putting this into a robust conceptual framework beyond its traditional rules? Another restriction I have frequently encountered as a new media curator and artist is time.

The moment in time when a technological art or form emerges is often ephemeral. Curating shows on handheld art, screen savers, and so on showed me that these forms might have a three-to-six-month window in which the art is fresh. Studio Visits is tied formally to the MidJourney v4 engine, because MidJourney v5 has a different aesthetic. A key concern is where the work situates itself in society and how it is developing in a formal sense. And then, is there time to deploy an exhibition before the idea goes cold? Most institutions, unless you’re dealing with a festival, plan about a year out, possibly two. And, of course, every essay I write now carries a disclaimer saying it was written at such a date, such a year, such a month, and may be obsolete or dated by the time you read it. With something developing as quickly as AI, you have to be aware of the temporal nature of the form itself.

So I decided to deploy the catalog first, as the museum show would emerge from it: create the catalog, then exhibit. As I said before, I’ve been making these contact books, which are reverse-engineered catalogs; I’m almost up to 15 editions, though I’ve only mentioned about six or seven on my Instagram so far. In general, I’m looking at curation as an artistic scaffold. In this project, a curatorial frame structure creates a narrative around meta-structural conceptual concerns rather than representing the images themselves. It’s a narrative dealing with society’s anxieties about AI and culture: what happens if we finally eliminate those annoying artists and replace them with AI? That is the provocation. So here’s the structure of the piece. The overall work is the catalog. There is a curatorial essay, and, for each artist, a name, a statement, and the studio “photograph.” The names derive from the names of colleagues, so I’m reimagining my community through a synthetic lens, via the studio image, as we can see through the narrative I’ve presented. I started generating these empty spaces and let myself run through a few hundred.

I chose the 50 most potent synthetic studio images. A description emerges using MidJourney’s /describe function. The resulting /describe prompt, plus a brief discussion of the artist and what they do, is fed to GPT-3, which generates a statement. So here’s the form of an artist’s layout. You have the name. The following layout is the first one I did, for Artificium 334-J452, inspired by George Lucas’ THX 1138. The layout came from this initial image. I took the description from MidJourney and put it into GPT-3. The artist’s statement is as banal as any graduate-school one and reads, “As an artist, my work expresses my innermost thoughts and emotions. I seek to capture the energy and chaos of the world around me, using bold brushstrokes and vibrant colors to convey my message.” These were 50 two-page spreads; the book is 112 pages and fits very much with a catalog format. The name, as said before, was based on the conceptual frame of the artist I was thinking of, on the image generated, on some of the concerns I saw in the mass media, and loosely upon the names of colleagues, family, et cetera.
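As a rough sketch of that hand-off, written against the GPT-3-era OpenAI completions API (the pre-1.0 openai Python package); the describe text, the framing sentence and the model choice are placeholders, not Lichty’s actual inputs:

```python
# Sketch of the /describe -> GPT-3 pipeline described above, using the
# legacy OpenAI SDK (openai<1.0). All prompt text here is invented.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

describe_text = ("a cluttered artist's studio, volumetric light, "
                 "unfinished canvases, nobody present")  # stand-in for /describe output
framing = "Artificium 334-J452 is a painter obsessed with energy and chaos."

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(framing + "\nStudio description: " + describe_text +
            "\nWrite a short, earnest artist's statement in the first person:"),
    max_tokens=120,
    temperature=0.9,
)
print(response.choices[0].text.strip())
```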

In many ways, I was taking a fantasy and re-envisioning my community through a synthetic lens. These images came first when developing the imagined artists across diversity, identity, species, and planet. This reimagining is interesting because I wasn’t necessarily thinking only of my own ethnographic sphere. I worked in Arabia, West Asia, and Central Asia and dealt with people from Africa and the subcontinent, so many of the people of my experience figure into this global latent space of imagined artists, not just those of Europe or, more specifically, North America. And then I expanded this to species and planet, as we’ll see in a moment. So here we have an alien sound artist; the computer in this studio is almost cyberpunk, and it is a very otherworldly art-studio image. I can’t recall which artist this next one is, but it has a New England-style look. And the third is a Persian painter obsessed with color, Zafran Pavlavi, based on my partner, Negin Ehtesabian, who is currently coming to America from Tehran.

This slide is a rough outline of the structure of the catalog. I take the name and the framework of the artist’s practice, and you can see here that this information went into GPT-3, producing statements almost indicative of the usual graduate-school art statements. Once again, these elements reflect some of the anxieties in the popular media. I’m using this as a dull mirror from a visual-sociology standpoint, based on scholars like Becker. This is a draft, but most of it is just a pro forma approach to the conceptual aspect. The project catalog is available on Blurb; it’s about $100 and still needs a few small revisions. But this is something that, from a materialist perspective, basically inverts many practices regarding the usual modalities of exhibition curation and the execution of a show. I’m also thinking about the standard mechanisms of artistic presentation within an institutional path. So not only does this deal with AI, it uses AI to talk about the sociological and institutional space these works inhabit and how they propagate.

Studio Visits deals with institutions, capitalism, and digital production. The issues this project engages with concern how AI exacerbates social anxieties about technology; the deluge of images problematizes any cohesive narrative. Using this meta-narrative, through this conceptual frame, I can focus on some of the social and cultural questions about AI and the future of society within a fairly neat package. Design and curatorial fictions provide solutions for cultural spaces, which typically cannot keep up with the speed of technology. Bespoke artifacts, which are problematic, can remain in place long enough for the institution to adopt them. In other words, if you get something together and get it out there, you can have that in place, take it to the institution, and hopefully they can explicate the work.

I’ve been asked about a sequel. Many people have asked me who these artists are and what their work looks like. You can see excerpts of their work in the studios, but people were asking me to take the conceit one step further, so I’m starting to work on that idea and show the portraits of the artists and their work. This portrait is of the artist Zafran, whom I talked about earlier. These continue the fiction and then humanize the story, which also problematizes it. And so this is the project and its ongoing development in a nutshell. I invite you to go to my Instagram at @Patlichty_art, and thank you for your time.

In closing, this is another portrait of the artist Vedran VUC1C.  And once again, this is an entirely constructed fantasy.  But once again, as Picasso said, these are lies that reveal the truth about ourselves.

 

Luba Elliott curator – AI Art History 2015-2023 – CAS AI Image Art talk 2023 transcript

LUBA ELLIOTT – AI Creative Researcher, Curator and Historian

All material is copyright Luba Elliott, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Luba Elliott’s Creative AI Research website.

Luba Elliott

So, looking forward: yes, I’m going to give a quick overview of AI art from 2015 to the present day, because those are the times I’ve been active in the field, and of this recent generation of which Anna Ridler, Jake Elwes and Mario Klingemann are all part. Just to start off, I’ll mention a couple of projects to show the perspective I’m coming from. I started by running a meetup in London that connected the technical research community with artists working with these tools. That led to a number of different projects: curating a media art festival, putting up exhibitions at business conferences, and also the NeurIPS Creativity Workshop, which is probably one of the projects I’m best known for. This was a workshop on creative AI at a really big and important academic AI conference, and alongside it I curated an art gallery, which still exists online, from 2017 to 2020. The workshop is still being continued; it’s now run by Tom White, an artist from New Zealand.

So if you’re interested, you can still submit and pay attention to that. I also curate the Art and AI festival in Leicester, where we try to put some of this art in the public spaces around the city. And I’ve done a little bit with NFTs, as I think many AI artists do now: exhibitions at Feral File and at Unit Gallery. So now I’ll start my presentation, and I normally start with Deep Dream, a technology that came out of Google in 2015. You had, let’s say, an image, and the algorithm looked at it and got excited by certain features, so you got all the crazy colors and crazy shapes. That really was one of the first things that excited the mainstream about AI and its developments. To me, it’s still one of my favorite projects, in a way, because I do find it quite creative and I quite like the aesthetic. But very few artists have kept working with it, and Daniel Ambrosi is probably the one example who’s stayed with the technology.

He normally does lots of landscape artworks where you can see the Deep Dream aesthetic but also the subject matter, which I think is good, because a lot of artists let the aesthetic take over everything in the image, so it was all about the aesthetic; here you can also see the subject matter. Recently, he’s also experimented a bit with Cubist-style tiling in some of this work, to freshen up his approach. Then came something called style transfer, where you had an image and could switch it into the style of Monet or van Gogh, one of these Impressionist artists. Again, a lot of AI researchers and software engineers were really excited about it because, to them, that’s what art should be like, right? Something you find in an old museum. And I think a lot of the artists and art historians I met were horrified because, of course, artists nowadays are trying to make something new, either in terms of aesthetic or in terms of commentary on society. So it’s tricky to find interesting projects there, I think, unless you broaden the definition of style to something beyond artistic style.

This is just an example from Gene Kogan, a key artist and educator in the field, doing different variations of the Mona Lisa in Google Maps, calligraphy, and astronomy-based styles. Then came the GAN, which I think was one of the more popular tools of this generation of artists. It came out in 2014, and there were lots of different versions of it, because it was a very active research topic, until by, I think, 2018 or ’19 it was making very photorealistic images. Some of my favorite works are still from the early periods of the GAN. Mario Klingemann has been doing amazing work looking at the human form, and a lot of these images were compared to Francis Bacon. What I like about them is that they can still show some of the features of the earlier GANs: sometimes there’s the wrong placement of facial features or, as you can see in the image in the middle, an arm at an odd angle.

So some of the glitches and errors of the technology are still made use of in the artistic work, and I really like that. When GANs became very realistic, artists needed to think a lot more about what they would do with their work, because they could no longer rely completely on those glitches. Scott Eaton is an example of an artist who really studies the human form, and he’s able to come up with images that combine realistic textures with slightly odd shapes familiar to everyone who has been following the development of the GAN. Mario Klingemann was also experimenting. This is a work from the show I curated for Unit earlier this year, with two images. The one on the left is, I think, from 2018, from his project Neural Glitch, where he used a neural network to generate an image of a face to the standards of that time. For the image on the right, he used the image on the left as a source image and then used Stable Diffusion to come up with something new from it.

And you can see how different it is: the quality is much more realistic and quite different from the earlier GAN image. Artists like Ivona Tau look at machine learning and what it means, because, of course, in machine learning you train your network and it’s supposed to improve and produce better results. She had a project doing machine forgetting, where the image quality got worse, as you can maybe see in this image. Then, of course, there’s also Entangled Others, the studio of Sofia Crespo and Feileacan McCormick, who have been doing some amazing work looking at the natural world and all these insects and sea creatures. This work was done, I think, in the last year or so; the two artists combined various generations of GAN images to make creatures that combine characteristics from different species. Other artists have also been thinking about how they can display this work and what else they can do to make it interact more with the ecosystem.

Jake Elwes, who actually has this work on show now at Gazelli Art House in London, trained an AI on images of these marsh birds and then put up a screen in the Essex marshes. You can see in the background all these birds walking around, and they could interact with the screen, with this digital bird. I think that’s very interesting because you have these two species next to each other: how do they deal with a robotic, fake image? Moving on to sculpture, Ben Snell did this lovely project called Dio, where he trained an AI on various designs of sculpture from antiquity to modern times. He then destroyed the computer that made all these designs and made the sculpture out of its remains. I think, conceptually, this is much more evolved than a lot of other AI art, and there are probably some parallels to artists from the 20th century who destroyed their work. I think it’s a really nice piece.

Next up, there’s also a group of artists who think a lot about the data set, among them Roman Lipski. He did this project quite a long time ago now, but I find it interesting because he’s been working with realistic landscapes. He’s been painting a lot, so not really working in the digital realm. He took this photograph, made nine different paintings of it, and worked with some technologists, who trained an AI to generate an image based on these. This is what he got from the AI, and he proceeded to paint different versions in response. You could see how his style evolved as he kept feeding his works into the machine and receiving a response. And I think this is his final image. If you look at the two side by side, you can see how, by working together with an AI system, he was able to really evolve his style, both in how he depicts the subject matter, which became a lot more abstract, and in the color scheme, which became a lot cooler at certain points.

He was experimenting with many more colors. And he still kept working on physical pieces, so you can use some of these tools and continue working physically; you don’t always need to rely on the purely generated image. You can still do things in paint or engraving and so on. Helena Sarin is another artist well known for using her own data sets and having her own aesthetic. What I really like about her work is that she frequently combines different mediums: in the image on the right she’s got, I think, flowers, newspapers, and photography as a texture. By combining all these different mediums with GAN tools, she’s able to come out with images that are very much her own in terms of aesthetic. Normally I’d talk about Anna Ridler’s tulips project, which I commissioned for Impakt Festival in 2018, but I know she will be doing that herself. So I’ll just mention that, as a curator, what I really appreciate about this project is that Anna made a conscious decision to highlight the human labor that goes into this work.

In many of her exhibitions, the generated tulips would be shown together with a wall of these flowers. That really made a difference, I think, in how AI art was being perceived and presented, because even when artists tried to figure out how to display their work beyond the digital screen, it wasn’t very common to display the data set. Anna’s a real pioneer in that, and she’ll explain it after my presentation. Moving on to more modern times, when DALL-E and CLIP entered the world: of course, the artistic and audience focus has shifted to these text-to-image generators, where you write a text prompt and get an image out of it. As a curator, I’ve always struggled to find interesting artworks there, because it felt to me almost like a completely different medium. It’s so different from a lot of earlier AI art, where artists thought much more deeply about the technology itself and the ethical implications in the concept, whereas a lot of text-to-image generated art feels very much just about aesthetics, about the image.

So I’m just including a few projects that are more interesting to me as a curator, one of which is Botto, made by Mario Klingemann, which operates, I think, as a DAO [decentralised autonomous organisation], where a community of audience members votes on which of the images is going to be put up for sale. I remember this was initiated during the height of the NFT boom, and I think the first few of these images, sold within a few weeks, fetched over a million euros, which was great news for AI art. Vadim Epstein is somebody who’s been working with these tools quite deeply, particularly CLIP, developing his own aesthetic and these narrative videos. His work is great. And Maneki Neko is an artist I curated in one of my NFT exhibitions. What is special about her work is that it feels quite different from the stereotypical aesthetic of Stable Diffusion: it’s quite intricate and very detailed. I think she made this work by combining lots of different images together and doing a lot of post-processing.

But it’s an image that you can tell has a lot of unique style. Ganbrood is another artist who’s been very successful with text-to-image generators, making these epic, fantasy-related images. Others, like Varvara and Mar, have applied the potential of text-to-image generators to come up with different designs, print them out, and make them into sculpture. And then, of course, there was also a bit of controversy, which is probably still ongoing, around all these text-to-image generators. Jason Allen got first prize at a state fair art competition in the US, and I think people were not sure about the extent to which he had highlighted the AI involvement, because this was an AI-generated piece made using Midjourney. To anybody who follows AI art made with those tools, it’s very obvious that it was made with one of them, because their aesthetic is very familiar. I remember Jason Allen defending himself, saying that he had spent a long time perfecting the prompt to get this particular result.

But whether this was clear to the judges is uncertain. In another photography context, I think it was the Sony World Photography Awards earlier this year, Boris Eldagsen submitted this image, which, of course, looks much less like it was made with a text-to-image generator. His image was regarded very highly by the judges, but he pulled the piece because he had used an AI; he wanted to make the point that maybe such work is not suitable for these competitions. And I’m including Jake Elwes here again, because he has this work called Closed Loop.

It was made, I think, in 2017. There’s one neural network that generates images from text and another that generates text from the image, so it’s like a conversation between the two. In some ways, it helps you realize how much the technology has changed in six or seven years; on the other hand, this piece is much more interesting and much more conceptually evolved than what is currently being done, I think, from my perspective. Let’s see what I have next. I’ll just maybe show one or two more projects. In the interest of time, I should probably finish on this project, which I really like, by two artists called Shinseungback Kimyonghun [Shin Seung Back and Kim Yong Hun], who are based in South Korea. I think this is a really cool piece because the artists are using facial recognition in a way it was never designed for: they’re using it in a fine art context. They worked with artists who were supposed to paint a portrait together with this facial recognition system, and as soon as the system recognized the face, they had to do something to stop it being recognized as a face.

So you got these portraits, some of which look more like portraits than others. With the image on the right, it took me a very long time to figure out where the portrait was, until I realized it was a face tilted 90 degrees. So I think it’s a really cool example of using a technology outside the most obvious generative image tools to still make work within this AI art space. And here I will end; here are my details. You can find out more about my work on the website or email me with any questions. And now I’ll pass over to Geoff for the next speaker.

 

[00:16:37.170] – LUBA ELLIOTT – Curator

So I’m looking forward and, yes, I’m going to give a quick overview of AI art from 2015 to kind of the present day because these are the kind of the times I’ve been active in the field and also kind of this recent generation of which Anna Ridler, Jake Elwes, Mario Klingemann are all part of. And just to start off with, I’ll just mention a couple of projects about kind of the perspective I’m coming from. So I started by running a meet up in London that was kind of connecting the technical research community and also artists working with these tools. And that led to a number of different projects. So curating a media art festival, then putting up exhibitions at business conferences, and also the NeurIPS Creativity Workshop, which is probably one of the projects I’m kind of best known for. And yeah, so this was a workshop on creative AI at this really big and important academic AI conference. And alongside that, I also curated an art gallery, which kind of still exists online from 2017 to 2020. And now I think the workshop is still being continued and it’s run by Tom White, who is an artist from New Zealand.

So if you’re interested, you can still kind of submit and pay attention to that. And yeah, I also kind of run the I curate the Art and AI festival in Leicester, where we try and put some of this art in the public spaces around the city. And I’ve also done a little bit of kind of NFTs like I think many AI artists do now. So exhibitioner faro file and at Unit Gallery. And yeah, so now I’ll start kind of my presentation and I normally started with Deep Dream. So a technology that came out of Google in 2015, where you had, let’s say, an image and then kind of the algorithm looked at it and it got excited by certain features. So you got all the crazy colors and crazy shapes. And that really was one of the first things that I think excited the mainstream about kind of AI and its developments. And to me, it’s still one of my kind of favorite projects, in a way, because I do find it quite creative and I quite like the aesthetic. But there are very few artists who’ve been working with it. And Daniel Ambrosi is probably the one example who’s kind of stayed with the technology.

And so he normally does lots of landscape artworks where you can kind of see the deep, dream aesthetic, but also the subject matter, which I think is good because a lot of artists sort of let the aesthetic take away everything from kind of the image. So it was all about the aesthetic. But here you can also see the subject matter. And recently, he’s also sort of experimented a bit with kind of cubist mentiling some of this, to kind of freshen up his approach. And, yeah, then came something called style transfer, which is when you had an image and you could switch it into the style of Monet or van Gogh, one of these impressionist artists. And again, a lot of AI researchers and software engineers were really excited about it because to them, that’s what art should be like, right? It’s something you find in an old museum. And I think a lot of the artists or art historians I met were horrified because, of course, artists nowadays are trying to kind of make something new, either in terms of aesthetic or in terms of the comment on society. So, yeah, it’s tricky to find interesting projects there, I think, unless you kind of broaden the definition of style to something beyond artistic style.

And this is just an example of Gene Kogan, kind of a key artist and educator in the field doing some kind of different variations of Mona Lisa in Google Maps, Calligraphy, and kind of astronomy based style. And then came something called Began, which I think was kind of one of the more popular tools of this generation of artists. And it came out in 2014. And then there were lots of kind of different versions of it because it was a very active research topic until by, I think it was like 2018 or 19, it began making very kind of photorealistic images. And I think some of my favorite works are still from the early periods of the Gan. So Mario Klingemann has been doing amazing work, kind of looking at the human form. And a lot of these images were sort of compared to Francis Bacon. And what I like about them is that sometimes they can still show some of the features of the earlier gowns. So sometimes there’s kind of the wrong placement of facial features or, like you can see in the image in the middle, the arm is at an odd angle.

So still kind of some of the glitches and the errors of the technology that are sort of made use of in the artistic work. And I really like that. And when the gowns became very realistic, artists needed to kind of think a lot more about what they would do with their work because they could no longer rely completely on those kind of glitches. And Scott Eaton is an example of an artist who kind of really studies the human form, and he’s able to come up with these kind of images that sort of combine the realistic textures with maybe slightly odd shapes that are familiar to everyone who has been following the development of the Gan. And, yeah, Mario Klingemann was also kind of experimenting. So this is a work from the show I curated for Unit earlier this year. And there were two images. So the one on the left is from I think it’s kind of from 2018 from his project called Neural Glitch, where he kind of used a neural network to kind of generate an image of a face to the standards of that time. And the image on the right was when he used kind of I think the image on the left is a source image and then kind of used stable diffusion to come up with something with an image like that.

And you can see kind of how different it is in terms of the quality is kind of much more realistic and quite different to the earlier Gan image. And, yeah, artists like Ivona Tau kind of look at machine learning and what that means, because, of course, in machine learning, you kind of train in your network and it’s supposed to kind of improve and produce better results. And she had a project that was doing machine forgetting. So it was kind of the image I think quality got worse as you could maybe kind of see in this image. And, yeah, of course, also entangled others so that Sofia Crespo and Feileacan McCormick have been doing some amazing work looking at the natural world and all these kind of insects and sea creatures. And, yes, this work was done, I think, in the last year or so where the two artists combined various kind of generations of GAN images to make these kind of creatures that combine different characteristics from different species. And, yeah, other artists have also been thinking about kind of how they can also maybe display this work or what else can they do to kind of make their work interact more with the ecosystem.

And Jake Elwes, who actually has this work on show now at Gazelli Art House in London, so he trained an AI on images of these marsh birds and then he put up a screen in the Essex marshes. So you can see in the background there were all these kind of birds that were kind of walking around and they could interact with the screen with this digital bird. And I think that’s sort of very interesting because, yeah, you have these, like, two species kind of next to each other, and how do they kind of deal with a robotic or with this fake image? Moving on to kind of sculpture, Ben Snell did this lovely project called Dio, where he trained an AI in various designs of kind of sculpture from antiquity to modern times. And then he proceeded to destroy the computer who made all these designs and, yeah, made the sculpture out of it. And I think, conceptually, this is kind of much more evolved than a lot of other kind of AI artists because, of course, there are probably some parallels to artists from the 20th century who are destroying their work. And, yeah, I think it’s a really nice piece.

Next up, there’s also a group of artists who really think a lot about kind of the data set and Roman Lipski. He did this project quite a long time ago now, but I find it interesting because he’s been working with sort of like realistic landscapes. He’s been painting a lot, so kind of not really working with the digital realm. And then he took this photograph, made nine different paintings of it, worked with some technologists. He trained an AI to kind of generate an image based on these ones. And, yeah, this is kind of what he got from the AI. And he proceeded to paint different kind of versions in response. And you could kind of see how his style evolved as kind of he kept feeding his works into the machine and receiving a response. And I think this is kind of his final image. So if you look at the two side by side, you can see how, by working together with an AI system, he was able to really evolve his style, both in terms of, I think, how he depicts the subject matter. So it became a lot more abstract and also the color scheme, so it became a lot cooler at certain points.

He was kind of experimenting with many more colors. And, yeah, he still kind of stayed working in a physical piece. So you could still use some of these tools and continue kind of working physically. So you don’t always need to kind of rely on the purely generated image. You could still do kind of things in paint or engraving and so on. Helena Sarin is another artist who is kind of well known for using kind of data sets and having her own aesthetic. And what I really like about her work is that she frequently combines different mediums, like, in the image on the right. She’s got, I think, flowers, newspapers, and photography as a texture. So combining all these different mediums and kind of Gantt tools, she’s able to come out with these images that are very much kind of her own in terms of aesthetic. And then normally, I talk about Anna Ridler’s Tulip project, which I commissioned for Impact Festival in 2018. But I know that she will be doing that. So I guess I’ll just mention that as a curator, what I really appreciate about this project is that Anna made a conscious decision to sort of highlight the human labor that goes into this work.

So in many of her exhibitions, there would be the generated tulips together with a wall of these flowers. And that sort of really made, I think, a difference in how AI art was being perceived and being presented, because many artists, even if they would try and figure out how to kind of display their work beyond the digital screen, it wasn’t kind of very common to display the data set. And Anna’s a real pioneer in that. So, yeah, she’ll explain that after my presentation and yeah, moving on to more modern times when Dali and Clip have entered the world. Yeah, of course, I think the artistic and the audience focus have shifted to kind of these text to image generators, which is, of course, when you write a text prompt and then get an image out of it. And I think as a curator, I’ve always struggled to find interesting artworks because it felt to me that it was almost kind of a completely different medium because it’s so different from a lot of earlier AI art, which was where artists kind of thought a lot more deeply about maybe the technology itself and a lot of the kind of ethical implications on the concept, whereas a lot of the text to image generated art feels very kind of just about aesthetics, the image.

So, yeah, just including a few projects that I think kind of are more interesting to me as a curator, one of which is Botto, which is made by Mario Klingemann and operates, I think, as a DAO [digital autonomous organisation], where there’s a community of kind of audience members who vote on which of the images is going to be put up for sale. And I remember this was kind of initiated during the height of the NFT boom. And, yeah, I think a few of these images sold within a few weeks fetched over a million euros, which was kind of great news for AI art. And, yeah, Vadim Epstein is somebody who’s been working with, I think, these tools quite deeply, particularly Clip, and sort of developing his own aesthetic and then these narrative videos. Yeah, so his work is great. And Maneki Neko is an artist I curated in one of my kind of NFT exhibitions that I did. And what is special about her work is that I think this feels quite different to the stereotypical aesthetic of stable diffusion. So it’s quite intricate and, yeah, sort of very detailed. And I think she made this work by maybe combining lots of different images together and doing kind of a lot of paste processing.

But, yeah, it’s an image that you can tell has a lot of kind of unique style. And yeah, Ganbrood is. Another artist who’s been kind of very successful with text to image generators, and he’s been making these kind of epic, fantasy related images. And others, like Varvara and Mar, has kind of applied the potential of text image generators to kind of come up with different designs, to print them out and make them into a sculpture. And then, of course, there was also a little bit of kind of controversy, which is probably, like, ongoing, related to all these text image generators. And Jason Allen got first prize at an art fair in the US. And, yeah, I think people were not sure about to the extent that he highlighted the AI involvement, because, of course, this was kind of an AI generated piece using Stable Diffusion on a journey. And I think to anybody who follows AI art made using those tools, it’s very obvious that it’s something kind of made with one of them because their aesthetic is kind of very familiar. And, yeah, I remember Jason Allen was kind of defending himself, saying that he spent a while kind of perfecting the prompt to kind of get this particular result.

But, yeah, whether this was kind of clear to the judges is uncertain. And in another photography context, I think this was like the Sony Photo Awards. Earlier this year, Boris Eldagstan submitted this image, which, of course, looks much less like it was made with a text to image generator. And I think he was also kind of regarded very highly by the judges. But he pulled his piece because he used an AI, and he thought that he wanted to make a comment that maybe it’s not suitable for these competitions. And, yeah, including Jake Elwes here, again, because he has this work called Closed Loop.

It was made, I think, in 2017. There’s one neural network that generates images from text and another one that generates text from the image, so it’s like a conversation between the two. In some ways it helps you realise how much the technology has changed in six or seven years; on the other hand, this piece is much more interesting and conceptually evolved than what is currently being done, I think, from my perspective. Let’s see what I have next. I’ll just maybe show one or two projects. In the interest of time, I should probably finish on this project, which I really like, by two artists, Shinseungback Kimyonghun, who are based in South Korea. I think this is a really cool piece because the artists are using facial recognition in a way it was never designed for. They’re using it in a fine art context. They worked with artists who were supposed to paint a portrait together with this facial recognition system, and as soon as the system recognised the face, they had to do something to stop it being recognised as a face.

And so you got these portraits, some of which look more like portraits than others. For the image on the right, it took me a very long time to figure out where the portrait was, until I realised it was a face tilted 90 degrees. So, yeah, I think it’s a really cool example of using a technology outside the most obvious generative image tools to make work that is still within this AI art space. And I think here I will end, and here are my details. You can find out more about my work on the website or email me with any questions. And now I’ll pass over to Geoff for the next speaker.

Mark Webster artist – Hypertype – CAS AI Image Art talk 2023 transcript – NLP

MARK WEBSTER – Artist

All material is copyright Mark Webster, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Mark Webster’s Area Four website.

https://areafour.xyz/

Mark Webster Hypertype London exhibition

Thank you for inviting me. It’s wonderful. Okay, just maybe a quick introduction. So, my name is Mark Webster. I’m an artist based out in Brittany, a lovely part of France. I’ve been working with generative systems, well, computational generative art, since 2005. And it’s only recently, though, that I’ve started to work on a personal body of work. One of those projects is Hypertype, which is the project I’d like to talk to you about this evening. So, in a nutshell, Hypertype is a generative art piece that uses emotion and sentiment analysis as its main content. I’m pulling in data using a specific technology, and that data is used as content to organise typographic form, as letters and symbols. So it’s first and foremost a textual piece. And secondly, it’s trying to draw attention to a specific field in AI, a specific technology, called Natural Language Processing, or Understanding. So I’m going to talk a little bit about this, and a little about the ideas that came to develop Hypertype. What you’re actually seeing on the screen at the moment is two images from the program that were exhibited in London last November.

So just to talk a little bit about where this all came from. A few years back now, I came across this technology. It’s an API by IBM called Natural Language Understanding. Basically, what this technology does is it enables you to give it a text and it will analyse this text in various ways. And there was one particular part of the API that interested me: the emotion and sentiment analysis. What it does is you give it a text and it basically spits out keywords, relevant words within the text, and it will assign a sentiment score, something that is positive, negative, or neutral, to these keywords. It also assigns emotion scores based on five basic emotions: joy, sadness, fear, disgust, and anger. So, yeah, I came across this technology quite a few years ago and I became kind of fascinated by it. I wanted to learn a little bit more. So to do that, I did a lot of reading and I also wrote a few programs to try and explore what I could do with this technology, just to try and understand it, without thinking about doing anything artistic to begin with.
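
[Edit: to make the mechanics concrete, here is a minimal sketch of that kind of per-keyword query, assuming the ibm-watson Python SDK; the version string, API key and service URL are placeholders, and the exact options Mark used are not known.]

    # Sketch: ask Watson Natural Language Understanding for keywords,
    # each with a sentiment label and five emotion scores.
    from ibm_watson import NaturalLanguageUnderstandingV1
    from ibm_watson.natural_language_understanding_v1 import Features, KeywordsOptions
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    nlu = NaturalLanguageUnderstandingV1(
        version='2022-04-07',
        authenticator=IAMAuthenticator('YOUR_API_KEY'))
    nlu.set_service_url('YOUR_SERVICE_URL')

    result = nlu.analyze(
        text='Douglas Hofstadter wrote about minds and machines.',
        features=Features(keywords=KeywordsOptions(sentiment=True, emotion=True))
    ).get_result()

    for kw in result['keywords']:
        print(kw['text'], kw['sentiment']['label'], kw['emotion'])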

So what you’re seeing on screen here is, on the left, basically just a screenshot of some data. This is a JSON file; this is typically the kind of data that you get out of the API. Here I’m just underlining the keyword. This is one keyword, which is a person: Douglas Hofstadter, a very well-known philosopher in the world of AI. And as you can see, there are five emotions, with a score that goes from zero to one, that are given to Douglas, and there’s a sentiment score, which is neutral. On the right, what you’re seeing is probably something that you’ve seen before. This is a different technology, but it’s very much linked with textual analysis. What you’re seeing on the right is facial recognition technology, and in fact you are also seeing a photo of the face of a certain Paul Ekman. Now, Paul Ekman is quite famous because he was, along with his team, one of the people who brought up this theory that emotions are universal, that they can be measured and they can be detected. And this was a theory that was used in part to develop facial recognition and emotion recognition, but also textual analysis.
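
[Edit: the per-keyword output has roughly this shape; an illustrative reconstruction, not the exact file from the slide.]

    {
      "keywords": [
        {
          "text": "Douglas Hofstadter",
          "sentiment": { "label": "neutral", "score": 0.0 },
          "emotion": {
            "joy": 0.12, "sadness": 0.08, "fear": 0.05,
            "disgust": 0.03, "anger": 0.04
          }
        }
      ]
    }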

So, as I said, as I learned about this technology, I wrote a lot of programs. What I have here is a screenshot of an application that I wrote in Java, basically enabling me to visualise the data. Very simple, actually. What you’re seeing is an article on Paul Ekman’s research, which is quite funny. On the right, you can see there is a list of keywords, and for each keyword there is a sentiment score. On the left, there’s a little graphic with a general sentiment score. And then I can go through the list of each keyword and see information about the five emotions, so there’s a score for each: joy, sadness, disgust, anger, and fear. So I built quite a few programs. I also made a little Twitter bot, because you can get so much data from this technology, and it was really important for me to get an understanding not just of how it works, but of what it was giving back. The bot would take an article from The Guardian every morning, analyse it, and publish the results on Twitter.
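
[Edit: a daily bot like the one described might be structured as below; a sketch, where analyse_text() is a hypothetical wrapper around the Watson call shown earlier and the tweepy credentials are placeholders.]

    # Sketch: fetch the morning's Guardian story, analyse it, tweet the scores.
    import feedparser
    import tweepy

    feed = feedparser.parse('https://www.theguardian.com/uk/rss')
    entry = feed.entries[0]                   # this morning's top story
    scores = analyse_text(entry.summary)      # hypothetical wrapper around NLU

    client = tweepy.Client(
        consumer_key='...', consumer_secret='...',
        access_token='...', access_token_secret='...')
    client.create_tweet(text=f'{entry.title}: {scores}'[:280])  # tweet limit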

And this just enabled me to observe things. But at the end of the day, at some point there was the burning question: well, how does this technology actually work? How does a computer go about labelling text with an emotion? So I went off and did a lot of reading about this, not just general articles in the press; I tried to learn about it technically, and I came across a lot of academic articles. Now, I’m not a computational linguist, and very quickly I came across things that were very, very technical. But something else, very different, happened, and it was quite interesting. While I was trying to learn about the technical side, about how the computer was labelling text with emotion, another question came about, which was: well, what is an emotion? What is a human emotion? And that was really interesting, because at the time I was reading a lot of Oliver Sacks. You may have heard of Oliver Sacks; he’s written a lot of books. He was a neurologist, and although he never really touched upon emotion, his books opened up a door. And I started to read and learn about other people who were working in the field of neurobiology.

And there were a few people: there was Antonio Damasio and there was Robert Sapolsky, two people who are very well known in the field of neurobiology, touching on questions not just of emotion but also of consciousness. A lot of their texts can be quite technical, yet they were very, very interesting. And then there was another person that came along (I’ll show the book cover), a certain Dr. Lisa Feldman Barrett, also based in the States, who’s doing wonderful research and has written a number of books, one of them, shown here, called How Emotions Are Made. Now, it was really with Lisa Feldman Barrett’s book that something clicked, because in it she started to talk about Paul Ekman, and in a way she really pulled his whole theory apart. Dr. Barrett is doing a lot of research in this field; she’s working in a lab, doing contemporary research. What she says in the book really debunks the whole theory of Paul Ekman. That is to say that human emotions are not universal, that they are not innate, that we’re not born with them. And there’s this very interesting quote that I put here: emotions are not reactions to the world, but rather they are constructions. So her whole theory drives towards this idea that, in fact, emotions are concepts, things that we do on the fly in terms of our context, our experience, and also our cultures.

And so this was really an eye-opener for me. It also, in a way, made me think: well, again, how does a computer label words and try to infer any kind of emotional content from them? What I was reading from these people, Lisa Feldman Barrett, Antonio Damasio, Robert Sapolsky, added the whole bodily side to things. We tend to think that everything is in the brain, but emotions, or experiences, are very much a bodily experience. And so at the end of the day, I basically came to the conclusion that a computer is not going to really infer any kind of emotional content from text. So from that point of view, I thought: well, it’s an interesting technology, and it would be nice to do something artistic with it. How am I going to go about this? That was the next stage. I did all this research and I basically came to the conclusion that the data is there.

There’s lots of data. How do I use it? Let’s use it as a sculptor might use clay or a painter will use paint; let’s use that data to do something artistic. And at the time, I actually didn’t do anything visual. I did a sound piece to begin with, which was published back in 2020, called The Drone Machine. This particular project was pulling in data from the IBM emotion analysis and using it to drive generative sound oscillators. I basically built a custom-made digital synthesiser that was bringing in this data and generating sound. I can share the project perhaps later on with a link. It was published and went out on the radio. It’s 30 minutes long, so I’m not going to play it here. This was the first project I did. What was interesting is that I chose sound because I found that sound was a medium that probably spoke to me more in terms of emotion. But the next stage was indeed to do something visual, and this is where Hypertype came along. And again, the idea was not at all to do a visualisation.
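
[Edit: the data-to-sound idea can be sketched very simply; the mapping below is illustrative, not the one used in The Drone Machine.]

    # Sketch: an emotion score in [0, 1] sets an oscillator's pitch.
    import numpy as np

    def emotion_to_tone(score, seconds=2.0, rate=44100):
        freq = 110 + score * 440      # map [0, 1] onto 110-550 Hz
        t = np.linspace(0, seconds, int(rate * seconds), endpoint=False)
        return np.sin(2 * np.pi * freq * t)   # mono sine wave in [-1, 1]

    drone = emotion_to_tone(0.73)     # e.g. a 'joy' score from the API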

Again, the data for me just didn’t, to use that word, compute as something to visualise. For me, the data was purely a material I could play with. So here what you’re seeing on the screen are the very first initial visual prototypes for Hypertype, in which I was just pulling in the data and putting it on the screen. There were two basic constraints I gave myself for this project. The first one was that it had to be textual, okay? That was it. It had to be based on text, so everything is a font. And secondly, the content, I said to myself, should come from the articles I’d read, all the research I’d done about the technology. So those were the two constraints, and from there I basically started to develop. I’ll probably pass through a few slides because I’m sure we’re running out of time, but here I’m just showing you a number of iterations of the development of the project. So I brought in colour. Of course, colour was an interesting parameter to play with, because you could probably think that with emotion you would want to assign a certain colour to an emotion.

But that, again, I completely dismissed. In fact, a lot of the colour was generated randomly from various colour palettes that I wrote. Here’s a close-up of one of the pieces. Here are some more refined iterations. For those people who work with generative programs: you get a lot of results. And I think as an artist, what is really, really difficult when you’re working with a generative piece is to find the right balance between something that has visual coherency across the whole series, yet has enough variety. These two images, obviously, are very different, one of them being quite minimal, the other a little more charged, yet you notice that they have a certain visual coherency. So as a generative artist, you create a lot of these. I usually print out a lot, and I usually create websites where I just list all the images so I can see them. And then eventually, yes, there was the exhibition. For the exhibition, there were five pieces that I curated, five pieces I chose from the program. And then the program also went live for people to mint in an NFT environment. So these were the last pieces, so maybe I’ll stop there.
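
[Edit: the two constraints can be sketched like this; the layout rules, sizes and palette here are invented, not the artist's.]

    # Sketch: everything is type; emotion scores drive scale and repetition,
    # while colour is drawn at random from a palette, as described above.
    import random
    from PIL import Image, ImageDraw

    PALETTE = ['#1b1b1b', '#c0392b', '#2980b9', '#f1c40f']   # made-up palette

    def compose(keywords, size=(800, 800)):
        img = Image.new('RGB', size, 'white')
        draw = ImageDraw.Draw(img)
        for word, score in keywords:          # (keyword, emotion score) pairs
            x = random.randrange(size[0])
            y = random.randrange(size[1])
            # repeat the glyphs in proportion to the score; colour is random
            draw.text((x, y), word * max(1, round(score * 5)),
                      fill=random.choice(PALETTE))
        return img

    compose([('Hofstadter', 0.12), ('emotion', 0.81)]).save('sketch.png')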

 

AI & Image Art CAS Talk 1 June 2023 – video & transcripts online

AI & Image Art CAS Talk 1st June 2023

The talk included Geoff Davis (host and Introduction), Luba Elliot (curator) with a history of AI Art, and the artists Anna Ridler, Mark Webster and Patrick Lichty.

Transcripts are below the video. With thanks to CAS and Sean Clark.

AI and Text talk is also online, please see the AI & Text Transcript page which has the video link, or visit the Computer Arts Society Talks page

The AI & Image Art CAS talk video:

TRANSCRIPTS 

Geoff Davis – Introduction – AI Researcher at UAL CCI London, Artist

Luba Elliott – Curator, Creative AI Researcher, Historian

Anna Ridler – Artist

Mark Webster – Artist

Patrick Lichty – Artist, Writer

AI and Image Art Talk – Geoff Davis – 1 June 2023 – Computer Arts Society CAS

AI DOOM BANDWAGON

See below for the transcript of the whole evening.

Introduction from Geoff Davis followed by the four speakers from the book,

    • Luba Elliott
    • Anna Ridler
    • Patrick Lichty
    • Mark Webster

This first part is EXTRAS to the Talk, which were not in the live talk.

To get to the actual talk, scroll or search down for Live Talk.

I edited these out to reduce time, as the slot was not that long. I mean, this is a lot to cover.

DOOM BANDWAGON

Social media has huge and unpredictable social and political effects, but regulation only started twenty years after it appeared. MySpace, the precursor to Facebook, was founded in 2003. The UK’s Online Safety Bill arrived in 2023, but is not law yet.

My AI research is in interaction bias, and I have a new paper out soon. There are many pressing problems with AI, rather than possible dystopian scenarios.

It’s obvious that there will be economic changes from more automation, but the so-called existential threats from AI are threats that we already have now, only more so: more fake news, more surveillance, robots helping dictatorships run amok with even more appalling weapons. These headline terrors make great news, unlike pragmatic and current problems such as racial and gender bias, accuracy, and so on. AI bias can have a subtle and insidious effect, without anyone noticing.

ARTIFICIAL POSTURE – AI FILM MOVIE MAKING

Already AI video generation is becoming mainstream, turning film-making from an exciting team process, where people interact, learn new skills and have amazing experiences, into a deskbound, headache-producing solo computer graphics marathon.

THE MEDIA OF AI

With this very positive response, and the fastest-ever take-up of the new AI art and text tools, would this media fear of AI be so intense if there had been no Terminator films, no pandemic, and a war wasn’t raging in the middle of Europe? And if middle-class journalistic jobs weren’t threatened, as I mentioned in my last talk?

HUMAN ENFEEBLEMENT

The fear of enfeeblement, such as if computers replaced human creativity, is like fearing that mechanical devices would reduce human power or effectiveness. Now, great strength is mainly celebrated in sport rather than the workplace.

The art world has parallels with professional sport, with A-grade and B-list artists, and a huge number of amateurs who do it for fun. There are stellar art celebrities with huge amounts of money at the top and little or no money below, causing competitiveness and status anxiety. Both sport and art are group activities.

Perhaps it is the origin and separation of AI Art from the art group or art world that is causing unease, and AI art’s ambiguous position. Some call for AI art to be a new art category, with its own exhibitions, others think AI will be just another tool.

OVERPRODUCTION

With a saturation of AI images and words, humans might give up trying to become professional authors or artists, instead creating ambient works for their own personal or social amusement.

This is apparent in Amazon’s huge book self-publishing market, which has been around for many years, and has not quite destroyed traditional publishing, which still continues. More indie publishers exist now than before this change. The mainstream art generators have community forums and awards.

Serious artists are usually driven individuals who would do art anyway. So it is unlikely that AI will affect the art world too much. AI will be absorbed as another art-historical trend, or influence.

With AI being in the news, and technology prevalent, artists will experiment and produce reflective art.

Our panel of speakers will illuminate these topics.

ART ROBOTS JOKE

Ai-Da The Art Robot has praised Sasha Stiles, the author of the AI book “Technelegy”, saying “Sasha’s beautiful poetry evokes the experience of an intimate social gathering, with views on life that make me feel I’m there.” This is not actually an android talking by the way. It would have said, “I’d like to go to that party but someone has bolted my feet to the floor!”

CLASSIC AI ART

Classic AI art was covered in the recent AI History talk with innovative work from Paul Brown, Ernest Edmonds and Steve Bell amongst others. Their work was more rooted in artistic considerations of human-computer interactions and the physical characteristics of the hardware of the time. Systems art was an inspiration. This talk is online, please visit the Computer Arts Society website.

ORIGINS

During the early days, making computer art was not respected, and some saw it as reactionary and irrelevant. The 1960s and 1970s were full of intense political strife, and passionate art movements which included Fluxus, Situationism, Performance Art, Land Art, Psychedelic Art, and many other anti-establishment approaches.

Computer art emerged from military-industrial-academic research labs, complete with their unknown, baffling and expensive mainframe computers, which had been developed only twenty years previously during the Second World War to create the atomic bomb.

“Nearly every computer artist tells a similar story, a tale in which their computer art is accepted on its merits, only to be rejected once the curators discovered it was generated on a computer. Computer artists were regularly rebuked and insulted by gallery directors. Such was the stigma attached to computers that artists, such as Paul Brown, have used the expression “kiss of death” to describe the act of using computers in art.”
When The Machine Made Art, Grant D. Taylor 2014.  https://www.bloomsburycollections.com/book/when-the-machine-made-art-the-troubled-history-of-computer-art/notes

Many professional artists don’t like to be labelled ‘computer artists’, even if their art installations, video or sound design are completely dependent on computers. The tools are absorbed, as in music production, where everything is now recorded using DAWs or digital audio workstations, which now include AI tools.

WHAT HAPPENS WHEN AI DOES ALL THE JOBS

Plus starvation when all jobs disappear. This could cause the adoption of Universal Basic Income (UBI) systems, which have been promoted for decades. Then everyone could be an artist.

(Of course this was a desired society ages ago. Now it is feared, as no-one expects much help from the State, apart from subsistence-level hand-outs to stop riots.)

COMPUTER PERFORMANCE

In AI art and text, the ‘uncanny valley’ effect was often mentioned as a flaw of all human-AI interactions. This was because the outputs of the generators had an unreal tone, perceived as spooky or uncanny. This was not actually due to the accuracy of human perception, as claimed, but to the low performance of the generators, which were still being developed.

MUSIC SOFTWARE TOOLS – DEATH BY DAW

In the music world, every few years there were new music styles, but once the digital audio workstation or DAW appeared, they blended together into today’s hybrid pop music. Genres such as Techno and Jungle or Drum and Bass predated the general use of a digital audio workstation, and used a mix of analogue and digital equipment.  Digital audio workstations have led to a homogenisation of music using preset styles and a disconnect from musical society. Sound familiar?

Critics of Digital Audio Workstations (DAWs) have raised several concerns and highlighted potential negative effects associated with their use. Here are some of the criticisms:

Over-reliance on presets: DAWs often come with a vast library of pre-made sounds, loops, and effects. Critics argue that this can lead to an over-reliance on these presets, limiting the creativity and originality of the music produced. It may discourage musicians from exploring unique sounds and experimenting with different techniques.

Homogenization of music: With the widespread availability of DAWs, it becomes easier for anyone to create music. Critics argue that this has led to a saturation of generic and formulaic music. The ease of use and access to pre-made elements can result in a lack of innovation and artistic diversity.

Loss of human touch: DAWs offer precise editing tools and the ability to fix imperfections in recordings. However, critics argue that this level of control can lead to an overemphasis on perfection, resulting in sterile and overly polished music. The natural variations and imperfections that can add character to a performance may be eliminated, leading to a loss of the human touch.

Disconnect from the traditional recording process: Critics contend that the ease and convenience of DAWs have contributed to a detachment from the traditional process of recording music. The speed and efficiency of digital production can undermine the organic and collaborative nature of music creation, potentially affecting the dynamics between musicians and the overall quality of the music.

Potential for overproduction: DAWs offer an extensive range of plugins, effects, and editing capabilities, which can lead to excessive tinkering during the production process. Critics argue that this overproduction can result in music that sounds overworked, cluttered, or lacking authenticity. It may prioritize technical perfection over the emotional impact of the music.

High learning curve and complexity: While DAWs provide powerful tools, critics argue that their complexity can be overwhelming for newcomers. Learning to use these software programs effectively requires time, patience, and technical skills. This learning curve can discourage some individuals from fully harnessing the potential of DAWs or even deter them from pursuing music production altogether.

It’s important to note that these criticisms represent perspectives from certain individuals or communities, and opinions may vary. DAWs also have numerous advantages and have revolutionized music production by making it more accessible and democratized.

Music technology provides some insights. The effect of computer workstations on music production was to create more homogenised, preset and socially disconnected music. Electronic music genres with dedicated audiences, like Techno and Drum and Bass, predated the use of digital audio workstations or DAWs.

Since art and music are socially situated activities, the simplistic use of AI art generators is outside of the normal methodology.

So it will be interesting to see what happens with the use of AI art generators, since art is not only about the image.

LABELS

In 2018 the French collective Obvious hit the headlines with “Edmond de Belamy”, an AI print on canvas, which sold for a very high price at Christie’s. This was a success because they positioned AI art firmly in classic art history, as a new AI twist on an old style, thus becoming marketable. The label became the art.

LIVE AI & ART IMAGE TALK:

WHAT WAS PRESENTED

1st June 2023 (transcript)

AI SAFETY STATEMENT

“Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war.”

It was on the newsstands yesterday; the headline was, basically, we’re all going to die. Now, the ethics of AI is a large and complex area; I could read some of this stuff out. The speakers might have something to say about it through their artwork, which people do these days, especially the newer artists working in a more social way, or they might not, of course, since it’s not an obligation for artists to have ethical attitudes unless they’re applying for grants. So there’s always that: an artist is supposed to be free of these constraints. Now, this statement was from the Center for AI Safety, who got a bunch of AI experts and presidents of various things to make this statement. They didn’t put much detail in because they wanted it to be simple and to the point. Now, this is a really good idea, obviously, but these fears have been around for a while. This book, ‘Superintelligence’ by Nick Bostrom, is from 2014, nearly ten years ago. It’s a good one to read if you want to get up to date with what’s being talked about now. But nobody took much notice of it, until now.

I mean, there were arguments about bias and so on. So there has been a debate, but it’s been more inside the industry than outside; now it’s all gone mainstream and public. There’s no structure for any sort of oversight organisation at the moment, and governments aren’t very good at this sort of thing. There’s only now some regulation coming in for social media, and MySpace started in 2003, so that’s 20 years of social media with hardly any regulation at all. And I think the Online Safety Bill, which is the British government’s attempt, is still not law. It’s still in Parliament, so we don’t expect anything very soon on that front. Now then, a quick one.

ARTIFICIAL POSTURE

Perhaps the biggest immediate threat from AI will be postural, as it promotes even more desk-bound computer or smartphone work, leading to an increase in back and neck aches, eye strain, headaches, and repetitive strain injury. David Byrne, who was in the band Talking Heads and writes on music and culture, has commented that all recent technologies, from recorded music (especially in his case, obviously) to AI now, eliminate humans and make people more isolated. This is a good point.

So the health aspect, in physical and mental health, is quite important with the growth of these isolating technologies, rather than the Terminator myth we have, which says that AI is going to turn into killer robots. I know that isn’t all that the Center for AI Safety are talking about, but it’s definitely an aspect.

THE JOY OF AI

Most of the responses to it have been very positive. I mean, I’m an AI researcher, and I’ve been researching how creatives use AI. I’ve done a paper and I’ve got a new paper on the way now. And the reaction we get from creatives using generators, text generators or image generators, for the first time is joy. In the words of their comments, they say: joy, it’s amazing. And when you do a sentiment analysis on all of the comments, joy comes up as the highest-scoring term. We had these results from the studies: we had 82 writers using AI, most of them for the first time. This was a couple of years ago, and they were amazed. They couldn’t believe how much fun it was.

And the same applies to the image generators that are around now. People just really like using them, which is why the take-up has been so great, so rapid. ChatGPT is certainly the fastest-ever app to get to 100 million users, much faster than Facebook and so on, because it spread by word of mouth as well; it wasn’t promoted or advertised when it came out. Generative and AI art: now I’m moving on more to the art aspects, rather than ethics or professional use of these systems and how people feel about them. Generative art is not the same as AI art. They often get mixed up. People in the industry, in the art business, know about this, but it’s worth mentioning. Generative art uses computer programs to produce repetitive variations, usually to make abstract images or sequences. You have people like Zancan now, who does more artistic, figurative images, but they’re still created abstractly from algorithms, whereas AI programs attempt to formulate output that mimics human responses, as in games, robot painting systems (I get onto that later), and modern image and text generators.

I mean, I did all this in Micro Arts Group years ago. We had a lot of art generators, but also one program that was a simple kind of expert system using language to produce stories. So I did both aspects, and that was a while ago. Another thing: Tyler Hobbs, who’s a very famous generative artist, prolific, is moving into galleries now. He’s had a show at Unit London recently. He doesn’t do AI; he just produces all this generative art. And some of the AI artists, most of them in fact, produce figurative art. Organisations like Botto, which is democratic, produce figurative art. So even though they’re AI and related to these sorts of things, they still produce what looks like traditional art.

AI ART ROBOTS – OLD and NEW

Okay, now the next slide. This is a fun one. We’ve got Harold Cohen, who gets a mention again; this is in the 70s. He had an AI system, an expert system he’d coded to be like himself, and it would draw using a turtle, which is a kind of small robot that drew on a piece of giant paper on the floor. So you get an outline, and then he’d go and fill them in, as you can see in this picture.

And I think later versions had colour as well, but the device drew lines. Now, the other one on the right is Ai-Da, the art robot, who has been around since 2019. It’s got a chrome hand with a brush-painting mechanism, which you can see in this picture. This art robot is ignored by the fine art and academic communities, but it’s a well-known popular art figure. So I thought I’d mention it because it gets in the press; it’s like a publicly known AI artist. It’s presented as an android, but it’s not really. It’s just a drawing machine that’s kind of dressed up, does a bit of speaking, but I’m not convinced by that. But last month at the Design Museum in London, which is a really big institutional place, it was on for a three-day show in a really big auditorium. Huge crowds arrived to watch the robot Ai-Da do its thing. It produced, I think, only one painting a day; it’s quite a slow process with this big, funny arm. She also gave a talk at the House of Lords in Britain on AI and art, which is kind of interesting. And she was arrested in Egypt when they went there for a tour, which I thought was hilarious; apparently they thought she was some kind of spying device, so they just stuck her in a cell for a while. So she gets to places that AI art doesn’t normally get to, including the public consciousness. So I think she’s worth a mention, even though she’s a bit weird in a lot of ways. And there’s a big debate about whether you should have these anthropomorphic robots, especially ones pretending to be women and pretending to be attractive artists. There’s something a bit funny about it.

AI ART TOOLS

But anyway, moving swiftly on, we’ve got AI art tools now. I use Photoshop a lot; I’ve got the Adobe suite, but obviously there are alternatives to Adobe now. There are all sorts of new plugins in Photoshop, so it’s going even more mainstream now, into design and into general production. Generative tools and generative editing tools are in these paint programs. I mean, we were talking about Quantel [a CAS talk in Leicester]; that was a kind of simple paint program that did a huge amount of new work when it came out, photo editing and so on. So things have moved on quite a bit. The other thing to mention about all these tools is that these art generators are more or less free for most people, or you can pay a few pounds and get much more use of them. But I remember when music and art software cost hundreds of pounds for one program. So another reason for the rapid take-up is that they’re so cheap or free that everybody will have a play with them. A friend of mine was saying that at their school they did AI art using, I think, Midjourney, as a competition for the children, where they’d write prompts, the teacher would produce these art images from their prompts, and then they’d print them out. It’s all good fun, like a workshop.

Now, I’ll quickly go into these mainstream image generators, because this is the thing that’s made it such a publicly known area. Image generation development accelerated from last year. The field has gone from half-working systems, which we’ve been looking at for a few years now, to something like human-level production, while being very cheap or free.

CHOICE ANXIETY

AI computer art from a year or two ago was experimental: all those blobs and psychedelic effects that we’ve been looking at for a while came from transitional systems. So between nothing and now there was this period of a few years where a lot of quite interesting work was being done, because artists are experimental. They prefer it like that anyway.

Now, these generators enable people with artistic urges, or requirements for presentation graphics, say, but no training, to produce images of whatever takes their fancy. They’re very useful for presentations and illustrations, as I’ve used them in these slides. The difficulty for an artist is the tremendous overproduction of images. And this is a feature of AI: it can make one thing, but it can make a million things. So how do you decide? That creates choice anxiety and befuddlement; it’s just too much. The way to avoid choice anxiety is not to be a perfectionist and to live with uncertainty, which is difficult for an artist if you’re working on something and then presented with a huge number of possibilities, maybe the opposite of what you want. See this slide: I did something called Micro Arts Group years ago.

MAINSTREAM IMAGE GENERATORS

Now, if you type Micro Arts into Google, you get lots of people making tiny things, making sculptures inside the eye of a needle. This type of activity should definitely give you bad eyes. But I did a generation on Micro Arts and it came up with a mixture of monitors with things going on inside them. So you’ve got group activities, smallness, and computer monitors. So that’s quite a good one. The one in the middle is me having a chat with Tyler Hobbs, because I did have a chat with him and didn’t take a selfie, so I just made an image for fun. And there it is, me looking at myself, which is very odd; these generators come up with strange things sometimes. The one on the right is some abstract stuff I’m working on now, which is more like drawn, repetitive, generative graphics. It’s a really interesting area, using them to create what you would have used a plotter or a drawing device to make before; now they can just do it for you. And of course, they’re only making the output, not the process. So they’re lots of fun, really.

OUTSIDER or NAIVE ART

They do open up the possibility for more diverse art production. There’s this thing, outsider or naive art, which is obviously not in the mainstream gallery system, but there’s an awful lot of it around, an awful lot of amateur artists who would like to use these kinds of tools and maybe explore aspects of what they’re working on. It also creates a community, and people communicate with each other, so there’s a big online scene now for this type of work. It’s this origin and separation of AI art from more traditional artists and the art world; it’s kind of a separate area. The speakers later will talk about this a lot, but there is a separation of this mainstream image generator work from the traditional gallery system. So this puts it in a slightly ambiguous position. Some call for AI art to be a separate art category with separate exhibitions for AI art, and other people think it will just become integrated, in the way that it already has with tools in Photoshop and so on. But this is a whole area that we’re talking about, so we’ll be discussing it through the artists and through the discussion at the end. It’s all very new, and nobody really knows how it’s going to work out.

AI VIDEO

Also, video can be generated now. Having a mobile phone opened up a whole area of phone filming, making films using only a phone. Whereas a few years ago (and I’ve done a bit of filmmaking, ages ago) this needed a huge team and cost a fortune; you had to have all this expensive equipment. But now you can win the Turner Prize with an iPhone film, and you can produce really interesting stuff that people want to see on a phone. So AI video would just add features to that, and maybe you’ll get animations coming out of it and so on. All this is starting to happen now, but it’s still fairly early. Okay.

FOUND ART

The other thing I was going to mention, which we don’t really have time for, is found art. A generator is working within a huge data space of images of all sorts (photographic, art, medical, anything you can think of), so it will produce from that data set. You could use that to create a kind of found art.

Rather than know what you want with a descriptive prompt, you could go off investigating the data space.

CAS AI and Text Talk 26 April 2023 Transcript

AI and Text Talk – Geoff Davis – 26 April 2023 – Computer Arts Society CAS

See below for the transcript of the whole evening, the

    • introduction from Geoff Davis,
    • the four speakers from the book,
    • Tivon Rice,
    • Ray LC,
    • Maria Cecilia Reyes and
    • Shu Wan.
  • Mentioned in Geoff’s introduction as successful contemporary gallery text-art artists:
    • Sasha Stiles – Technelegy
    • Mark Webster – Hypertype 

Many references are at the end.

Click here for the full Talk video on Youtube.

AI Creative Writing Anthology

The invited speakers in the Talk are in this new book and spoke about their work with AI, art and education.

AI Creative Writing Anthology Geoff Davis 2023

Geoff Davis is the editor of the AI Creative Writing Anthology (Leopard Print London) which has 20 stimulating entries and a lot of extra material.

Each author describes their process and feelings about using the generators, so it provides an insight into using AI text. The book is on Amazon and most other sites and has a large free sample.

Please visit: http://leopardprintpublishing.com/2023/03/25/ai-creative-writing-anthology-20-authors-share-how-to-use-ai/

Computer Arts Society AI and Text Talk

This is a longer version of the Computer Arts Society (part of the BCS) AI and Text talk of 26 April 2023. This talk and my next one, on 1 June 2023, about AI and Images, will be collected into a short document in June 2023.

Text-based AI 

26th April 2023

INTRODUCTION

Hello everyone. Thanks to everyone at the Computer Arts Society,  and Kerry and Maria of the Community Team at the British Computer Society for hosting this talk.

Thanks to Sean Clark for organising the talks series and looking after the Zoom.

This evening we will start with a short introduction, explaining how and why these new AI text generators appeared and came to be so controversial and exciting.

I will show some current AI text-art from Sasha Stiles and Mark Webster, along with a quick look at my own Story Generator.

I will provide an overview of the uses of text to writers and creatives, the technology, ethics, and artificial general intelligence.

Then we will go on to the four Speakers.

Please note that there is a longer version of this talk available, please see the references at the end of the talk. Some areas like technology and ethics are expanded.

I will not be dealing with text used to generate images, in systems such as Stable Diffusion and Midjourney, as there is an upcoming AI and Image talk on 1st June. Please subscribe for that if you have not already done so.

 The Speakers

Tivon Rice from DXARTS at the University of Washington

Models of Environmental Literacy

Ray LC  from City University of Hong Kong

Immortals poems, which are generated reformulations, and

Designing for Narrative Influence with the Drizzle project –Machine Learning and Twitter communications

Maria Cecilia Reyes from Universidad del Norte in Colombia

Using AI as a co-writer for fiction and poetry and

its possible applications in immersive and interactive storytelling

Shu Wan from the University at Buffalo

Aspects of Generated Text in Education for Lecturer and Student.

So we are covering quite a range here tonight.

This will be followed by audience questions and discussion.

Introduction

A quick note on terminology: ‘AI’ will be used as shorthand for the computer generators. Current systems are not ‘Artificial’ (who decides what is natural) or ‘Intelligent’ (a divisive term with no settled definition; also dualist). But everyone uses the term.

I assume that most people here have used ChatGPT or other similar systems. Yet in my 2021 AI research into how professional writers use generators, which provided a text generator and editor, only 8 out of 82 writers had used them before – 10%.

At that time the public systems like Talk to Transformer and GPT-2 were not widely available. And even if heard of, were looked upon with bafflement and some suspicion. This immediately changed to positive regard after using them for creative tasks (Geoff Davis research ref.).

I will first run through the various ways in which generators are useful to writers and creatives, before we get into more detail.

  1. Idea Generation: generators can be a powerful brainstorming partner
  2. Collaborative Writing: generators can suggest sentences, paragraphs, or entire stories based on your prompt input. This is useful if you have writer’s block, or for ambient literature and art (also known as personal content creation)
  3. Style Transfer and Text Transformation: adapt your writing to the style of a famous author, or turn your prose into poetry
  4. Editing and Proofreading: AI-driven tools can assist you in polishing your work.
  5. Visual Storytelling: text-to-image AI models can generate illustrations, concept art, or even comic panels based on your written descriptions.
  6. Cross-disciplinary Collaboration: By facilitating communication between diverse artistic fields, AI can inspire innovative, interdisciplinary projects and collaborations.
  7. Personalized Storytelling: AI can help create tailored content for specific audiences or individuals, this is good for diversity
  8. Adaptive Storytelling: AI can be used to create dynamic narratives that evolve and change based on user input, choices, or actions. This can lead to non-linear storytelling, and interactive narratives. This is often used in games.
  9. Sentiment Analysis for Creative Feedback: AI can analyze the emotional tone of a text, providing valuable feedback on the effectiveness of your writing.
  10. Creativity as Data: AI can analyze large volumes of creative works to identify patterns, trends, and insights that can inform your own artistic practice
  11. Ethical Considerations: By engaging in thoughtful and responsible AI practices, you can contribute to the development of a more inclusive, diverse, and ethical creative landscape.

Style transfer is still popular, and much easier to do in the modern generators ChatGPT, GPT-4 and OpenAssistant.

In the AI Creative Writing Anthology book, I have two examples, using a famous optimistic poem about technology, “All Watched Over by Machines of Loving Grace”  by Richard Brautigan published in 1967. I simply asked the generator to rewrite the poem in the style of Scottish writer Irvine Welsh, and cyberpunk inventor William Gibson. These came out really well. If you want to take a look, the Irvine Welsh one is on the front page of my website. Previously to do this you’d have to fine-tune the generator by loading new training data.
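
[Edit: with the modern chat models, style transfer is a single prompt; a minimal sketch assuming the openai Python package (v1 interface), with the model name and poem file as placeholders.]

    # Sketch: prompt-based style transfer, no fine-tuning needed.
    from openai import OpenAI

    client = OpenAI()   # reads OPENAI_API_KEY from the environment

    poem = open('all_watched_over.txt').read()   # the Brautigan poem
    response = client.chat.completions.create(
        model='gpt-4',
        messages=[{'role': 'user',
                   'content': 'Rewrite this poem in the style of '
                              'Irvine Welsh:\n\n' + poem}])
    print(response.choices[0].message.content)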

Demos

Geoff Davis

This is a procedural story generator or PSG – simple but effective.

Claimed by digital art curator Georg Bak to be a ‘precursor of ChatGPT’.

MA4 Story Generator 
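
[Edit: a toy procedural story generator in this spirit, templates filled with random choices; illustrative only, not the MA4 program itself.]

    # Sketch: a minimal procedural story generator (PSG).
    import random

    actors = ['a robot', 'the archivist', 'a tulip dealer']
    places = ['the data centre', 'a walled garden', 'the gallery']
    events = ['found a lost archive', 'painted the walls', 'fell asleep']

    def story(sentences=3):
        return ' '.join(
            f'In {random.choice(places)}, {random.choice(actors)} '
            f'{random.choice(events)}.' for _ in range(sentences))

    print(story())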

Mark Webster – Hypertype generative series, using IBM Watson Sentiment Analysis.

Selected several texts and papers on AI and emotion, then analysed them with IBM Watson Sentiment Analyser, to produce a set of texts for use in a generated art series – 300 different images.

Sasha Stiles is very well known; she founded the “theverseVerse” online poetry site, wrote the Technelegy book, makes videos, and much more. She sold artworks at Christie’s recently.

All references are at the end of the document.

I wanted to show these artists because it shows it is possible to have a successful career doing text art; it’s not just a hobbyist thing.

[Edit: two interesting writers on AI and text are Janelle Shane who wrote You Look Like a Thing and I Love You, and does the AI Weirdness blog; and Gwern Branwen who runs gwern.net, please search for them.]

Now I will discuss some of the background.

A Brief History Of Modern Generators

Earlier AI systems used a rule-based approach to create so-called expert systems, which attempted to encode all the actions in a domain, and then reproduce it accurately. This works well for limited domains like those of robots in factories, which have to work accurately with no errors, or help systems with a series of set actions depending on input events. Expert systems did not scale and were not transferable to other domains.

For instance, BRUTUS (BRUTUS ref.) was a ‘Storytelling Machine’, a fiction-writing expert system from 1999, which was a tremendous research effort between academic researchers and IBM, AT&T, and Apple Computer, but it wrote hardly anything of any general use and was never developed further.

All of this historical work paved the way for a new approach to AI called machine learning, using so-called neural nets modelled on the brain. These did not attempt to manually describe each step in a program of actions.

The famous artist Harold Cohen programmed a system called AARON in the 1970s to generate drawings in his own style. This is an example of a personal generative expert system. He joked that he’d be the first artist in history to have a posthumous exhibition of new work.

The first neural net was the Perceptron: the underlying artificial neuron was proposed in 1943 (by McCulloch and Pitts), but a perceptron was not built until 1958. It was a hardware system for image identification.

In the last ten years, once computer speed and power increased along with a huge increase in the amount of data for training available from the internet, Natural Language Processing and Machine Learning could really increase in usefulness.

Advances in machine learning from data sets continued with Recurrent Neural Nets or RNNs (RNN ref.) and other architectures, and received a significant boost with the invention of the Transformer architecture at Google Brain in 2017.

GPT stands for Generative Pre-Trained Transformer, which means the system is pre-trained on text data and then generates more text based on the statistical likelihood of the next text token. The more text data and the more training (nodes and layers of the neural net) the more realistic the text will be. (GPT ref.)
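
[Edit: the core loop can be illustrated in a few lines; the vocabulary and probabilities below are made up, whereas a real LLM computes them with a transformer conditioned on everything generated so far.]

    # Sketch: generation as repeated sampling of the next token.
    import random

    vocab   = ['the', 'tulip', 'draws', 'itself', '.']
    weights = [0.30, 0.25, 0.20, 0.15, 0.10]   # stand-in likelihoods

    tokens = ['the']
    for _ in range(6):
        tokens.append(random.choices(vocab, weights=weights)[0])
    print(' '.join(tokens))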

These systems are generally known as Large Language Models or LLMs (LLM ref.).

A quick list of the main areas of AI and text is:

  1. Natural Language Processing (NLP)
  2. Machine Learning (ML)
  3. Deep Learning (DL, also Deep Neural Nets DNN): A subfield of ML

This is all used to make

  1. Chatbots and Virtual Assistants – ChatGPT, Google Assistant, Alexa, Siri etc.
  2. Text Generation and Summarization
  3. Translation and Language Understanding
  4. Search
  5. Transcription from speech
  6. In our area, Creative Support Tools CST and Copy Writing Assistants
  7. Many new tools that arrive daily

This is a technical area we won’t go into in this talk. I have provided references, see below.

Because of the adaptable and unpredictable nature of the new generators, they still produce errors and can fail, or give false answers. The generators are created to give answers, and will always give an answer if asked, so if the data is lacking they will just make something up. These errors are known as hallucinations. The wrong answers are presented convincingly and can fool naive users, which limits the generators’ usefulness in critical areas.

The latest GPT-4 system from OpenAI is far better at world knowledge than earlier versions, even ChatGPT, and gives much more accurate answers.  It’s worth mentioning that Open Source systems such as OpenAssistant (OA ref.) are now comparable with commercial systems.

A world model (to prevent nonsensical answers) is thought to be created internally, on the fly, in order to more efficiently process an answer from the vast amount of data in the neural net. Researchers are now seriously claiming actual intelligence for the latest systems, rather than only production of statistically likely simulations (AGI Sparks ref).

General Intelligence

To quote Sebastien Bubeck of Microsoft,“the new wave of AI systems, ChatGPT and its more powerful successors, exhibit extraordinary capabilities across a broad swath of domains. In light of this, we [can] discuss whether artificial INTELLIGENCE has arrived.” (Capitals in original.)  This is known as AGI or Artificial General Intelligence, and within Microsoft, Bubeck’s team is devoted to the “Physics of Artificial General Intelligence” (AGI Sparks ref).

It’s worth noting that consciousness is not the same as intelligence.

Intelligence is a goal-orientated ability, such as calculating numbers, playing chess, controlling a robot or car, writing some text, making an image. No mental awareness is required.

In AI research, achievements are denied as the goalposts keep on moving in the definition of intelligence. Now, people say text generators have no intelligence as they are only statistically selecting the next words to output, despite their superhuman abilities in text production.

I won’t even mention the Turing Test.

By the way, non-Western cultures have a different approach to robots and AI. Japan and Asia do not have this constant Western fear of impending destruction (which comes from intense state competition, i.e. war, across history, I guess).

[Edit, 1 May 2023: Geoffrey Hinton, the top AI researcher at Google, has resigned saying the AI is getting too powerful. Several researchers and observers including Elon Musk, who cofounded OpenAI, have called for a 6 month ban on updating the AI generators, as AI risks are not fully understood. But the main companies and open source groups are in a race to develop bigger and better AI models, so speed of development will increase.]

Ethics

The ethical dimension applies to AI generally, from text to search, control systems and everything else, and is a very hot topic, with many in the AI industry predicting existential disaster, with humans becoming obsolete in a few years. Fear of the robot overlord is also common, based largely on fear of a superhuman intelligence having human emotions and drives such as power mania, need to dominate, and status anxiety (very common in academia and the arts).  This scenario is shown in the Terminator movie series.

Actually, no-one knows what might happen, but so far there has been an explosion of creative use in image generators and text uses in literary experiments. Since AI is already here, one would already expect disaster to be encroaching, but there is no sign of this.  I prefer to take a more pragmatic approach to AI as a tool, perhaps a very significant one as Bill Gates says: “AI is the most important tech advance in decades.”

The Alignment problem is whether AI values are aligned with human values.

Fake news, propaganda and so on were around a long time before AI arrived. Detecting AI text is hard, and automatic detectors like GPT-Zero are easy to trick so other methods are arriving to separate falsehoods from truth. But since both sides of any political debate claim the truth, this is obviously more of a human problem than a technical one.

Crimes using AI voice mimicry such as virtual kidnapping scams, as well as deepfake videos, are on the rise.

The issue of gender and race bias is also a big topic, although more recent generators such as ChatGPT have guardrails (software protections) in place to reduce or remove this sort of bias. Some commentators object to a left wing libertarian bias in the current generators, such that a new ‘right wing AI’ is being developed, and Elon Musk has also joined in the race with his new company X.AI.

I am currently researching the topic of AI emotional tone at UAL, with a new academic paper due soon. Links for all this are in the references.

Data Dignity and Copyright

The basic creation of large language models is controversial as the text data is taken from the internet. In the case of GPT-4 researchers say quite seriously that the entire internet has been used to train the model. But no-one has ever been paid. The concept of Data Dignity has appeared, which is that all the originators of this text should be paid, by a forensic process examining where it all came from. This is unlikely as it is very hard to separate out data sources even at the level of training the model.

Reddit and Stack Overflow are going to charge for data, which was previously freely available. These companies receive less revenue if AIs take over computing support, discussion and code help roles, using all their original data to do so (Lanier ref.).

In the field of text to image, some copyright court cases are in progress regarding art images used to train text to image generators, as individual artist styles are easily identifiable (Loizos ref.).

Objections

Some artists object to AI in principle, for high energy usage, or because it might reduce authorship or creativity. However, in my recent study, only 8% of 82 professional writers felt they did not own the generated text. Many creatives already use code for generative art, or use programmed filters in tools such as Photoshop.

There is a huge grey area as AI is already part of the infrastructure, used in editors, search, predictions, translation, learning and so on. Most see AI as advantageous in terms of productivity increase, extension of creative skills, new art forms, and practical uses in medical and legal support in poor countries with limited professional resources for ordinary people.

There is also some debate about the amount of energy used to train the systems, but these costs are low compared to general power use. The computing industry uses a huge amount of energy for general activities such as entertainment, administration, etc., so a little used for next generation technology is a drop in the ocean. However newer models are designed to use less energy.

Class and Progress

Most mainstream articles about AI start with an alarming 'end of civilisation' quote, often from someone like Elon Musk. In the case of computing, the new technology has been changing working-class jobs for decades, but that has been promoted by the media as thrilling progress. Now that typical middle-class jobs like journalist, academic or lawyer might be affected, existential dread is promoted instead. More academic studies of potential threats from AI include the influential book Superintelligence by Nick Bostrom (Bostrom ref.).

Generating Computer Code

Warning: generators can produce insecure code. Always tell them to produce secure code, and have the result checked if you cannot judge it yourself.

Visit

https://www.malwarebytes.com/blog/news/2023/04/chatgpt-creates-not-so-secure-code-study-finds

https://arxiv.org/abs/2304.09655

Using generators for code is also a big area now, with GPT-4 scoring 10/10 on programming employment tests at expert level. Although they still need guidance and reviewing, generators produce clear code, add comments, and can explain their steps, which makes them useful to beginners and experts alike. They speed up code production from hours to minutes.
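For example, here is a minimal sketch of asking for secure code through the OpenAI Python library, using the 2023-era ChatCompletion interface; the system message, model name and prompt are illustrative only, not a recommended configuration.

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

# Following the warning above: ask explicitly for secure code.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a careful programmer. Write secure code: "
                    "validate all inputs, use parameterised SQL queries, "
                    "and never hard-code secrets."},
        {"role": "user",
         "content": "Write a Python function that saves a username to SQLite."},
    ],
)
print(response["choices"][0]["message"]["content"])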

In the AI Creative Writing Anthology, which I recently edited, Brian Reffin Smith, winner of many electronic art prizes and a member of the Computer Arts Society, has an imaginary chat between ELIZA, the first computer therapist, and Karl Marx's statements in the Communist Manifesto. He then asked ChatGPT to produce a new version of the ELIZA code with many extensions for art assessment, which is now available to view on Facebook (Brian Reffin Smith ref.).

I have used GPT-4 to code modern versions of my old generative art, originally written in BASIC.

AI code assistants like Copilot, from Microsoft's GitHub and based on GPT-3 technology, can be used with Python and other languages. Reportedly, up to half of new Python code is already produced with the Copilot assistant (Copilot ref.), if this is true of course.
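As a sketch of the workflow (a typical suggestion of the kind such assistants make, not an actual Copilot transcript): the programmer types a comment and a function signature, and the assistant proposes the body.

# Programmer types the comment and the signature...
def word_frequencies(text: str) -> dict:
    """Return a dictionary mapping each word to its count."""
    # ...and the assistant suggests a body like this:
    counts: dict = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_frequencies("the cat sat on the mat"))  # {'the': 2, 'cat': 1, ...}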

Before we go on to the speakers, I will run through some other areas.

Ambient Art – Personalized Content Creation

There is a field of ambient art and literature where people generate for fun, as a pastime, with no intention to create a final product. Interesting outputs such as generated images, memes or poems and lyrics might be deleted after creation, kept private or shared on social media (Ambient Art ref.).

Games

AI can also be used in game environments to create dialogue, descriptions, control characters, generate scenes, and control plot lines (Games ref.).

Creativity of Older Generative Systems

Many commentators have noted that the older generators, such as GPT-2 or the open-source GPT-J 6B, are more useful for stimulating creativity, as the outputs are more randomised, which was the point of old experimental techniques such as the Surrealists' 'Exquisite Corpse' or William Burroughs' 'cut-ups'. The older generators are also easier to fine-tune with one's own work.
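As an illustration, here is a minimal sketch of fine-tuning GPT-2 on a plain-text file of one's own writing, using the Hugging Face transformers library; the file name and training settings are assumptions for the example, not a tested recipe.

# pip install transformers torch
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # or "gpt2-medium"
model = GPT2LMHeadModel.from_pretrained("gpt2")

# TextDataset chops the file into fixed-length training blocks.
dataset = TextDataset(tokenizer=tokenizer,
                      file_path="my_writing.txt",  # hypothetical corpus file
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-finetuned",
                         num_train_epochs=3,
                         per_device_train_batch_size=2)

Trainer(model=model, args=args, data_collator=collator,
        train_dataset=dataset).train()
model.save_pretrained("gpt2-finetuned")
tokenizer.save_pretrained("gpt2-finetuned")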

However, with newer systems from ChatGPT onwards, you can just ask the model to rewrite text in the style of James Joyce, or Irvine Welsh, etc., or put in a sample of your own prose for it to copy the style from.

The newer models are usually controlled with something called prompt programming to get a specific output.
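A minimal sketch of the idea: the 'program' is just carefully worded text. The helper function and wording below are illustrative; the resulting string would be sent to ChatGPT or a similar model.

def style_prompt(text: str, author: str) -> str:
    # The instruction wording alone steers the model; no fine-tuning needed.
    return (f"Rewrite the following passage in the style of {author}, "
            f"keeping the meaning but changing the voice:\n\n{text}")

# Example: build a style-transfer prompt for a line of prose.
print(style_prompt("The rain fell on the grey town all afternoon.",
                   "James Joyce"))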

Creativity Assistance & Creativity Support Tools

It is worth noting that nowadays many artists use AI for part of their work process rather than all of it, perhaps for creative ideas or copywriting. The generators can be seen as expert assistants ready to do any task, a bit like a ghostwriter or an art team.

There are many attempts to create a novel-writing AI system, but we do not examine those here; some are listed in the references.

I have researched and developed creativity support tools (CSTs, as they are known), including a zooming storyboard tool called Story Software, still available in various versions.

I also have a hybrid generator and editor, Story Live, which I made with Fabrice Bellard using his open-source Text Synth. It was made for research purposes but is now freely available, and also has a Stable Diffusion text-to-image generator and translation (Story Software ref.).

Speakers

Tivon Rice

RAY LC

Maria Cecilia Reyes

Shu Wan

Geoff Davis

So I think we're kind of ready for the speakers now. Then there's a question and answer discussion period after the speakers. So hello, Tivon.

Tivon Rice

I'm Tivon Rice, joining you from Seattle, and I really appreciate the opportunity to share what I've been working on lately. I am an artist and an educator teaching at the University of Washington's Department of Digital Arts and Experimental Media. Quick overview: for the past six or seven years, I've been working with computer-generated texts like this one we see on the screen, generally to accompany my visual and time-based projects. And my work with machine learning, or AI, or whatever you want to call it, tends to oscillate between working with these systems in my studio in a purely creative mode, and then switching and critically examining how these emerging systems function in society, or imagining how they will function in future society. And I explore these ideas through my teaching, through workshops, conferences, and the like. So I want to start by showing you part of a trailer for an experimental film that combines digital animation, photogrammetry, and a number of AI-generated narratives. These are specifically fine-tuned GPT-2 models, as Geoff described, from an earlier era.

But let’s take a glimpse of this video and then I’ll further describe the project. So here we go. We’ll watch about a minute and a half of this.

VIDEO (spoken word and animations)

Models for environmental literacy.

Video Speaker 1

What is an island? Is it a graveyard? In what sense is it an island? Does an island denote a sense of spatial absence?

Video Speaker 2

I’m afraid of the absence. It seems to be forming on me. Its dark forms disturb me. The whole place silent. The stones are blur.

Video Speaker 3

All of this anxiety and worry goes hand in hand with an ongoing process of change. A change that, although not explicitly recognized by this program, has become obvious to all.

Video Speaker 2

This, for me, is the essential paradox of our stories. In finding our way within the world, we humans invent or modify or redefine reality in ever larger and more incomprehensible ways. So our words are not only a vehicle for revealing the world to the reader, but also a vehicle for revealing the reader.

VIDEO sample ends.

Tivon Rice

Okay, so the rest of the trailer, and actually the entire 35-minute film, are linked on my website and demo pages. So I'll leave that to you.

But I'd like to give some background to the research project that spawned this, about three years ago, when I was living in Europe. Reflecting on the time I was living in Holland and presenting my recent projects at exhibitions, conferences, and workshops, I found that many of the conversations that arose around AI and other digital technologies asked how we could apply these ways of seeing and understanding large systems to the topic of the environment. And I think these questions are, of course, a product of the times.

But I also came to understand how the Dutch landscape, which may be one of the most highly engineered landscapes in the world, also provokes this kind of critical attention about the environments. So I was asked to participate in a number of, like, artistic research projects that sought to apply emerging forms of image production, mapping, world building and storytelling to the topic of the environment. We were sort of focused on very specific environments, like the barrier islands on the North Sea coast of the Netherlands.

Or, more broadly: how can we begin to imagine ecology and the environment from new perspectives, possibly even non-human, non-human-centered perspectives? So this project, Models for Environmental Literacy, brings together three areas of research, one being field trips and workshops in which groups of artists, ecologists, authors, and musicians visit these sites at the boundaries of human involvement and environmental change.

The second area would be natural language processing research. Earlier, I was working with a tool called Neural Storyteller, a very early RNN NLP model, but I was seeking to update these systems: how can they be made more accessible to broader creative communities, like my students and those participating in my workshops? So I focused on the idea of fine-tuning GPT-2 models.

The final area of work would be to develop a series of films, of which we saw the intro right there. How could these films creatively reflect on the research as a whole, combining digital images collected at these sites, these virtual environments, with narratives generated through fine-tuned GPT-2 models? I've already touched on the field work that went into this. Because this work preceded ChatGPT or even GPT-3, I used the medium version of GPT-2 as a starting point. And this model is okay, maybe the glitches are kind of charming, but it's also small enough that it can be fine-tuned on a conventional computer that has a GPU. So I was able to fine-tune three new language models of my own. And these are the voices for the film.

A number of different ecologically concerned AIs: one trained on eco-philosophy, one on ecofiction, and one on very recent scientific reports about the current environmental crisis. And this is a good starting point for describing the position that I take when co-authoring with AI. I identify at least three absolutely critical areas of knowledge that need to be built around working creatively with these systems.

First of all, we should know as much as possible about the language model's data set. In this project, because I collected and trained, or fine-tuned, my own data sets, which we see sort of brainstormed here, I believe it's a lot more transparent and deliberate than working with black-box models like GPT-3. But even so, moving forward and working with these large language models, I think it's important to think about the very large data sets of arbitrary text scraped from the web, as Geoff described. How do we understand the tone and content, the truthfulness and the bias, inherent in language that evolved on the Internet and that was ultimately used to train ChatGPT?

The second body of knowledge or skill set that I place importance on involves inference. How do we prompt these models to create text that is meaningful or suggestive or provocative to us? If we ask boring or superficial or unstructured questions, we're going to get the same in return: this kind of autocomplete behavior that outputs an endless, banal sameness. So how can we deliberately craft prompts that are more likely to produce interesting outputs? With these films, I thought about the logic of prompting and how GPT-2's outputs could be used successively as the following prompt; it's kind of like a feedback loop. So the logic in the dialogue of the first chapter is actually circular. One model is asked a very simple question: tell me what you see. And the response, the output, becomes the prompt, the input, for the next model. So in this dialogue, the entire paragraph from the scientist becomes the prompt for the philosopher, and the philosopher's output is then the prompt for the author, and so on. And visually, the first chapter is focused on this strange Dutch island called the IJsseloog, the Eye of the IJssel river. It's this perfectly circular, man-made island designed to isolate toxic sediment generated during the engineering of the farmlands in the north of the Netherlands.
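A minimal sketch of this circular prompting logic, using the Hugging Face text-generation pipeline; stock GPT-2 stands in here for the three fine-tuned voices, so the model names are placeholders, not the models used in the film.

from transformers import pipeline

# Stock GPT-2 stands in for the fine-tuned scientist/philosopher/author models
# (each is loaded separately here for clarity).
voices = {name: pipeline("text-generation", model="gpt2")
          for name in ("scientist", "philosopher", "author")}

prompt = "Tell me what you see."
for name in ("scientist", "philosopher", "author"):
    # Each voice's entire output becomes the next voice's prompt.
    out = voices[name](prompt, max_new_tokens=60,
                       do_sample=True)[0]["generated_text"]
    print(f"[{name}] {out}\n")
    prompt = out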

So the film's animation studies this uncanny, circular landscape and the surrounding waters in stark black and white. The second chapter is called Whisper Poems, and the logic of that dialogue is also circular, but only two or three words from the previous output are considered for the following input. So, like a whisper poem or a game of telephone, there's some continuity, but also a greater chance of misunderstanding between the different models as the story evolves. That chapter is focused on the small islands surrounding Helsinki in the northern Baltic Sea. Sort of paradoxically, while the Baltic rises at about 1.5 mm a year, the islands in that area also rise, twice as fast, at about 3 mm a year, as a result of postglacial rebound. So again, these small bits of land present a sort of paradoxical image in terms of human time frames, an anthropocentric perception of time or history, a sort of geo-temporal situation. So this film's animation studies the isolation and invisibility of a number of these small, rocky islands and the surrounding waters. The final chapter is called Echo Chambers, and the logic of this dialogue is a feedback loop in which one model's output becomes its own input over and over again.

So thus, each character has a much longer monologue without being in dialogue or being interrupted by the other voices. And each text is prompted with a very simple question: where are we? This chapter is visually focused on a field trip we took studying toxic algae growth in Zeeland, in the southwest Netherlands. These blooms are caused by excessive chemical use in agriculture and the over-engineering of the Netherlands delta works after the 1953 flood. So the film's animation studies the strange color and dense texture of the algae surrounding a number of islands in this estuary, which is essentially a dead sea. I said earlier that there are three areas of knowledge that I feel need to be built around working collaboratively with NLP systems like these: understanding the underlying data sets, carefully crafting our prompts during inference, and the final idea, I believe, deliberate co-authorship of the outputs. So it's important to be honest and say that I do step in and give the final form to these texts. And this decision to edit or gently reorganize the output from the machine learning system is my response to all these questions surrounding the artistic agency of machines.

Should we initiate these systems and then step back and take whatever they say to be the final raw output, to demonstrate that the technology works? Or are there moments where we can reinsert ourselves into the process and make decisions about how the output resonates with our own poetics? So, for me, in cultivating this kind of collaborative relationship with AI, what I've found is that as I observe a machine learning system develop some kind of understanding about language, my own understanding of language begins to shift as well. And I can see, from a sort of third-person perspective, how language evokes images and narrative decisions in my work. I'll leave it there. I look forward to hearing from the other presenters. Once again, thank you for having me, and I hope we can chat about some of these ideas. Thank you.

Geoff Davis

Thanks a lot, Tivon. That's a really good piece. It's good to see you explain it, because I've only seen some of the text before, so that's excellent. We're having a discussion at the end.

So Ray, if you want to start.

Ray LC

Hi everybody. How are you? Fascinating work by Tivon.

What I do: I'm an assistant professor of Creative Media at City University of Hong Kong, and I've been there since 2021. I have a neuroscience background, I got a PhD in neuroscience, but basically since about 2017 I've been working in creative media, arts, and design, this area. So I've also been using GPT-2, and because it's got these quirky ways of expressing itself, it's kind of helpful for generating poetry, which is what I'm going to show you today.

I run the Studio for Narrative Spaces, which involves basically looking at humans' interaction with machines and with AI as our starting point for investigation. But we work with a basically interdisciplinary, pretty much anti-disciplinary crowd of people, like roboticists and performers. We work a lot with dancers, et cetera. So this is our page if you're interested in taking a look. [See references.]

The project that I will share today is mostly this work called Imitations of Immortality. It's also a book, by the way, published by Floating Projects Press. So it looks like this classic poetry book written by two people, basically me and GPT-2. Although I also want to be frank: I curate GPT-2 as well, just like what Tivon was doing. This web page is basically a web version of the book.

So what's the workflow here? The inspiration is that some of you might know William Wordsworth; well, you probably all know Wordsworth's collection Intimations of Immortality. So what I wanted to do was create a bunch of poems modeled on famous poems. A lot of them are British poems, actually. So, for example, there's Dylan Thomas's Do Not Go Gentle into That Good Night. There's Allen Ginsberg's Howl, for example. So what I did first was make a kind of variation on a classic poem. Today I don't have time to show everything, so I'll try to be interactive a little bit. I'm going to show you my version of Elizabeth Bishop's poem, which some of you know, called One Art.

“The art of losing isn’t hard to master;

so many things seem filled with the intent

to be lost that their loss is no disaster."

So it's a villanelle form, by the way; it has a particular rhyming scheme. So what I wanted to look at was what happens if I try to write a variation on this poem. And then I'm going to ask GPT-2 in this case to write a variation of this poem, with my help, of course, because I'm curating the text by giving it the Elizabeth Bishop voice. Basically, I gave it a bunch of Elizabeth Bishop poems to try to find that voice, and then also gave it a starting primer. That's how the GPT stuff worked back in those days. So I give it the first stanza from One Art and have it try to generate the rest. So in the interest of being in-depth rather than in-breadth, I will show you my version of the poem. By the way, this is a kind of interactive website, so I know things are starting to disappear because they're getting forgotten. I'm sorry about that.

[Shows poem.]

Okay, so anyway, it says: the science of forgetting is not inscrutable. So much information pokes at our brain for attention that to forget it is forgivable. So you can see this variation of the poem tries to be kind of a micro version of writing a variation. I try to follow all the structure, et cetera, even the places where Bishop does parentheses with exclamation marks. For example, here: the joking voice, a gesture I love. Or write it. That's kind of the one main point of that poem. So I try to reproduce those things in my way of writing it as well. Okay, so you can go and watch this, including the ending. I give that: blah, blah, blah is unforgivable. So I try to vary it that way, too. Of course, the GPT version will not be that way. I'll just show you, because this is what this talk is about. Here's what the GPT version comes up with. And just keep in mind it is curated, but I try to do it so that each stanza is what GPT wrote.

I choose the stanza, but I'm not going to rewrite this thing for GPT, right? Because that would be pretty unfair. But I did have to insert some line breaks here or there to make it work out. So again, this is the original poem: the art of losing isn't hard to master. This right here that I'm showing you now, hopefully you can see, is the GPT version, right?

[Shows pages from book.]

So it starts with the original stanza, and then it says: the music is part of the song, though it be sung. The other hand is silent till the other side finds its master. The man that gives knows all. So as you can see, it kind of sort of has a voice, but it's not really following Elizabeth Bishop, it's not following that logic, but it keeps some of these interesting things. Like, for example, in the middle there, you can see: thus close the truth. Parentheses, grab it and run, exclamation mark, parentheses. How come no one has seen such a site for years? The stranger the disaster, the farther a word. It's learning some form of this poem and kind of regurgitating it in a funny, quirky kind of way, funny to us.

And actually, if you see on the bottom there, it uses this kind of disaster thing as well. So it's learning to say things from the poem in this kind of fresh new style. So this is what the book of poetry is about. In the book itself, well, you can actually pick up a copy if you're in Hong Kong, but we're trying to make it available more widely in the future. Actually, I have it with me. I'm sorry, this is kind of random, but I do have it with me, just to show the form of the poems a little bit. Once you have the book in front of you, it's a nice kind of thing. And you can't really see it because I blurred out my screen, but the way I designed the book is that I basically don't say who each poem is by. I put my poem and the poem by GPT-2 on opposite pages, so people are forced to just read them without knowing the authorship. Now, this was also published as a paper at RTech, an ACM conference.

And in that paper, basically, the short answer is that I gave particular stanzas of the poems to people who were reading them for the first time, in a kind of survey. What I found from that study is that, first of all, if I just give them stanzas from my poems and from the GPT-2 one, more or less the whole corresponding stanza, they're not able to distinguish which one is human-made versus which one is machine-made, right? If you ask them, one of these is machine-generated, which one? It's like 50-50. They can't tell which ones are mine and which ones were the machine's. But if you ask them which ones are more expressive, without telling them that a stanza was by me or by the machine, you just ask them to judge how expressive the stanzas are, and then you go back and figure out which poems have a score of one versus a score of seven, then you actually see a significant difference, where the poems I wrote are more expressive.

Without knowing the identity of the authors, they found that the poems I wrote were more expressive. So what that tells us, I believe, is that readers have a kind of unconscious knowledge of who's writing the poem, right? Consciously, they cannot tell you who wrote it, but there's something about the poem that still strikes them differently. There's still something about the way I wrote the poem that was more expressive to them in certain ways. Anyway, that was my take-away from this. I had a great time with this project.

I know that Geoff asked me to show you some comics and things like that, so you can take a look at this: comics for climate change. Actually, I wanted to echo Tivon's work, because I also work with climate change, and we used Twitter to get our feeds, our data, to generate climate action text as well. So those of you who are interested can check out that work, which is in Drizzle, and also in this Tamagotchi game that we created that also speaks in machine-learning-generated language. This game in particular you can also play online, so it's like a Tamagotchi game that you can have fun with and play with.

Anyway, so thanks so much and looking forward to discussing more.

Geoff Davis

Thanks a lot, Ray. That's a really interesting talk. Very good to see the work. Apart from the poems, the Drizzle and the Tamagotchi are really interesting. So maybe we'll do something else some other time. But thanks a lot. I'm going to put on the next speaker, Maria Cecilia Reyes, from Colombia.

Maria Cecilia Reyes

Thank you so much for the invitation. I want to share my personal journey collaborating with artificial intelligence systems in co-generating worlds, fictional worlds, universes, with words and images, but also my thoughts about this collaboration. And I realized that there is something that is not artificial in this relationship with artificial intelligence. My journey started not so long ago, in 2018. At my work at the National Research Council of Italy, I was doing research on conversational agents in the field of education. At that time we were developing a tutor bot to help students engaged in massive open online courses. However, working with these technologies and experts in the field, I couldn't help thinking about creative applications for conversational agents, especially vocal interfaces.

So for my creative life, I wanted to use a conversational agent to help people edit a film made from pieces of YouTube videos. I also started to work on an immersive space, a mother's womb, in which we could have a conversation with that mother by being physically immersed in the womb.

So we started to work on a mock-up; projects and ideas get started, but then they have their own time to evolve. At the time I also had the opportunity to advise a couple of master's theses that were using chatbots. One of those was for helping elders through a device called Kibi, an intelligent device that alerted caregivers about the physical state of the elders, and also had a vocal interface for the elders themselves, to remind them when to take pills, give them advice, or remind them to call someone. And another thesis was the development of a chatbot for a museum, so visitors could have a preview experience before going into the Museum of the Electric Technique in Turin, Italy. But then, in 2019, I received an email message from a friend. I've been writing poems since I was a little kid, and I don't share my poems very often, but I do share them with personal friends. And this friend wrote me this: I put the first two lines of your poem, 'to use the voice to communicate love, it's not about putting words together', into this new website, Talk to Transformer, which is GPT-2.

And the results were surprisingly good. You can do it again and again, with different results each time. I had some good laughs thinking up the beginning lines of some surreal stories and seeing what the AI comes up with. But with your poem, the results were more artistic and not pure comedy. So I was very intrigued by this message.

I started to use Talk to Transformer at the time, giving it prompts and first lines of my real poems to see how the machine would react and what it would do with them. So yeah, as a hidden poet, I started to show my poems to the machine, to any machine I came across. One of those was Talk to Transformer, and then Story Generator in 2020. Then when NightCafe came out, I also started to give lines of poems to NightCafe to see what kind of images it would create, and then MidJourney more recently. The only poem that I saved from all these iterations was the one published in the AI Creative Writing Anthology, thanks to Geoff. I kept two images prompted by two of my poems.

[Shows two images.]

So why these two images? I don't know why, in those nights when you stay awake and just experiment, sometimes some of that work has a very short life but leaves deeper thoughts. One of the images I saved was this image of Genoa. It was a time in which I wrote a very angry text about the city. As an immigrant, I was going through many personal battles. By giving this prompt, this idea of Genoa in flames, the machine created a representation that stood out to me and captured the feeling I was going through at that time.

The other image is even more deep and personal. The first one was NightCafe; this one is MidJourney instead. It's a poem about a moment in which two people who never got married while they were alive get together in heaven after passing away. This image really developed a big emotional connection with me. But in those moments, and I think in most of the reactions I get from my colleagues and friends experimenting with AI, I feel it's sometimes very similar to what happens with horoscopes.

Sometimes there is a confirmation bias: you have an idea of what you would like to get from the machine, and then from any of the phrases or images you receive, we as humans draw our own conclusions and adapt that information to our lives. So what is that kind of prediction that AI chatbots can offer to ordinary people who want to bring that information into their own lives? This is a question about where we stand when we ask the machine something.

I have a PhD in digital humanities and I have been working on interactive digital narratives, especially immersive interactive films. That's my field of study. So as a future research interest, I'm very interested in imagining AI-generated immersive and interactive narratives. I imagine being immersed in the movie, in the film, in the narrative, and leaving the creative control to the machine: to create the visual environments in which we are immersed, but also to create different outcomes of the story while we are interacting with that story in real time. So the machine, and this is just to provoke some ideas, could work from our cognitive data.

There are conscious choices we can make during the journey, the narrative: we can maybe talk to the machine, enter some text, or just make some choices, and then the machine will create new story plots or continuations for the story. But also, what if we interact with the machine and with the fictional universe through our biological data, our breathing or our heart information, and from there the artificial intelligence takes over and makes the story more stressful, or gives us a break, or takes us into a more fantastic world?

But I'm concerned about two main aspects. One is coherence; that was another issue I faced experimenting with some GPT-2 tools. Sometimes I wanted to just let the machine take over, but the coherence was not fully there. The other, especially when we talk about interactive narratives, is dramatic progression. We want to make sure, as creators, as storytellers, that our spectator or user is going somewhere, is going to experience a climax, a moment of euphoria in which everything makes sense and the narrative experience gives you that reward of a climax. One very famous interactive drama is Facade by Michael Mateas and Andrew Stern, from 2005. It is a conversation with a couple in their apartment, and it uses an AI system. As users we interact with them through text, and then the story develops in different ways.

There are some plot points that are fixed and somehow guide the story so it doesn't go everywhere. Another interesting project is Nothing for Dinner, a computer-based interactive drama from 2015, by the Interactive Drama Tension (IDtension) group from Switzerland, which has been working on AI and interactive drama for some time now.

Just as a final thought: I think we humans are the ones that generate meaning, that make sense of what the machines are proposing, and we are the ones that feel the effects of the material produced by the arbitrariness of the machine. That's why I think there is nothing artificial in this relationship between us as creatives and the artificial intelligence.

Thank you so much for your time.

Geoff Davis

Thanks, Maria, that's very interesting.

Now we're going to move on to our last speaker, Shu Wan, who's from the University at Buffalo.

Shu Wan

Hi, how are you guys? I'm Shu Wan. As you can see, the title of my piece today is Chatbots During Scholarship: How Do I Teach GPT in a History Course? As Geoff said, I'm a professor of history at the University at Buffalo in New York, USA. So I'm very happy to present my pedagogical research on how to protect students from misusing ChatGPT, and to help students use it, but not in violation of academic integrity. That's the big issue.

So now, to American higher education. Last December 2022, I was assigned to teach an Asian history class in the January 2023 term. At that time ChatGPT had just arrived and became a big issue; everybody talked about it. There was the controversial question of introducing or allowing the use of ChatGPT in a classroom. But I decided to introduce it to my students in class by designing ChatGPT sessions and adding them to my syllabus. So, on the first day of my class, on January 4, I demonstrated the magic of ChatGPT to my students.

After showing this slide [IMAGE] and playing a video of Canadian philosopher Jordan Peterson's comments on ChatGPT, I entered an example prompt about the film The Last Emperor into the interface, which output a brief introduction of the film as follows.

You can see it here on the slide. And then I told students to use this high-tech tool ethically, so that it helps them when they encounter difficulty, for example.

However, I told students it's not a good idea to replace your brain with a machine when doing a review essay. The reason I mention the film essays is because in my class students were assigned to write a review of a film related to Asian history. And then finally, at the end of class, I told students it certainly violates academic integrity to use ChatGPT to complete your writing assignment. So please don't do that.

And so in the following class demonstration, I instructed students to complete the Human-Machine Collaboration Writing Assignment, which consists of two sections: first, play with the text generation interface to produce machine-made text; and then manually, I mean by hand, compose a reflection on the influence of AI text generation technology on academic integrity.

The reason I chose to use open-source tools instead of ChatGPT is my intense concern about students' privacy. I still remember the first time I logged into ChatGPT, I felt very uncomfortable because it wanted me to provide my phone number.

I would say that when I designed these assignments and this class practice, I thought: okay, I need to take students' privacy seriously. I don't want a situation where, because students need to complete assignments, they have to in a certain sense sell their privacy and information to ChatGPT or any other big company.

I appreciate you giving me the opportunity to participate in the writing projects in the AI Anthology. And from the project I learned how to utilize generative text tools.

In my class, at the end of the project, students were required to combine the output of text generators with their own reflections on the project.

As you can see on this slide, students' answers are very promising. Moreover, this platform permits students to read their classmates' comments and advise each other about how to deal with the issue.

So I want to say to instructors of history, and to scholars and professors of English literature, etc., who may be worried about students' abuse of text generation technology: it is my contention that there is nothing to fear but fear itself. Face the threat.

Instead of avoiding any talk about it, we should avoid censorship as well. With the emergence of technology like GPTZero, instructors may detect misuse of GPT, whether GPT-4 or, in the future, GPT-5.

The competition between the technology enabling academic misconduct and the technology preventing it is still ongoing. Along with the proliferation of detectors like GPTZero, new detectors will be created constantly. So this issue brought by the technology could be solved by advancements in the technology itself.

It's more important to instruct students to maintain their academic integrity rather than just say, okay, you cannot use ChatGPT in class for completing assignments. Well, we know students will do that anyway. You can't just assume those students are so honest that they won't do it. No, some will.

I want to talk about my prospects, or my thoughts, for the future of ChatGPT as it's used in a classroom. You can see the priorities I demonstrate here.

I just want to say: okay, let's try this in the class. That's fancy stuff; we can try that. But in the future, in the coming summer, I will teach another undergraduate class, about Chinese history. At this moment, I'm designing that class.

I'm thinking about creating what I call a human-computer co-authorship assignment. I will allow, not require, but encourage students to try to work with ChatGPT, or whatever text generation technology, to complete the assignment.

But for this assignment, I will require students to provide an acknowledgment statement in which they tell me which kind of tool, which kind of algorithm, they used in the assignment; which part of the assignment was composed by themselves, the human; which part was co-authored by them and the computer working together; and which part was composed just by the computer.

So I want to say, I want students to acknowledge the authorship of the computer if they utilize ChatGPT or GPT-4 or even newer generations of text generation technology in the future.

So those are my thoughts about the pedagogical use of text generation technology now and in the near future. Thank you.

Geoff Davis

Thank you, Shu. That's a really interesting approach, because often lecturers just ban it for students. I know, I've got teenage children, and the schools ban it completely. But I think with younger children, they have to learn things rather than just use a generator, whereas in higher education the students are more mature and can understand that they have to co-author things. I read a statistic somewhere that lecturers use the generators more than the students do, for producing coursework. So it's kind of unfair in some ways, but I think the students have to be mature enough to work sensibly with it. So there is a problem there, definitely.

I think your approach is very good to get people to co-author and then acknowledge. So yeah, that’s very good.

Well, I think we’ve come to the end of the speaker section now, so we can go into a discussion section now where if anybody has any questions.

DISCUSSION

Sean Clark (Zoom)

“David’s iPad” has a question.

“David’s iPad”

I'm finding it really deeply fascinating to play around with ChatGPT as a poetry-writing tool. And I'm working on a lot of prompt engineering with regard to the poetry, such as: change this into the style of so-and-so. Where I've gotten to is that the state of the art right now is that a lot of sonnets have been written by ChatGPT, and it seems to be quite good at haiku and things like that generally.

What I'm interested in is what sort of prompts the panel is interested in exploring, in particular given that there's no longer a need to actually build the language model, in the way of that very interesting work from the University of Washington [Tivon Rice]; it's a more organic relationship with ChatGPT. And I just wonder if there are any interesting lines of inquiry the panel feels are worth pursuing, especially in the area of poetry.

Tivon Rice

Can I jump in and just give my two cents? I think that for prompt engineering, a couple of things I've found to be really useful are to ask ChatGPT to give multiple versions of something: give me ten of this thing. Of course, you're going to provide your own context for it, but then I typically end my prompts with 'in the style of' as well. And so instead of fine-tuning at that point, you can steer it towards whatever: paranoid fiction, or towards surrealism, or towards absurdity, these sorts of things. This becomes a replacement for actually fine-tuning.

I would also argue that if you have the opportunity: ChatGPT is very accessible and very easy to play around with, but if you sign up for OpenAI's API, you can pretty much access the exact same models, different versions of GPT-3, and now even GPT-4, through their API. Not necessarily through a command-line interface, but through their playground. And it's going to behave much like ChatGPT, but you get access to things like temperature, which modulates the weirdness or the normalcy of the language. You can penalize it for repeating itself, and these sorts of things. Even though you're not fine-tuning a model, they can get you closer to having that kind of authorship and directorial control. It's sort of like the next level of prompt engineering.

So whether or not you go in that direction, just keep in mind that all of that is available to you when you use OpenAI's playground rather than the basic ChatGPT.
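A sketch of the parameters mentioned here, as they appear in the 2023-era OpenAI Python library; the model name, prompt and values are illustrative only.

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Write six lines of paranoid fiction about a lighthouse."}],
    temperature=1.2,        # higher = weirder, lower = more normal language
    frequency_penalty=0.8,  # penalise the model for repeating itself
    presence_penalty=0.4,   # nudge it towards introducing new topics
)
print(response["choices"][0]["message"]["content"])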

“David’s iPad”

But, I mean, the bottom line is I'm kind of onto something here, because there's computers, there's coding, there's structure, there's poetry, there's definitely something there, isn't there? I feel it in my poem.

Geoff Davis

Yeah. No, I agree. I think that, as I mentioned earlier with Maria, there's a lot of people individually using this equipment at home, doing this thing called personal content generation or ambient literature, where nobody's really thinking they're a poet and will be published; they're just doing it because it's so absorbing and so interesting. And the same applies to image generation. One thing I'd say: I've used the older hand-built systems, and I used to load in all my own writing. I write fiction, have been published, not recently, but have been. So I put a lot of my own fiction in and used that as the training data, and it's amazing how it can follow the style. Now, that can also be done in the new systems just by putting in a big section of your own writing and then telling GPT-4 or ChatGPT to use that as a style: copy the style from a text you've given it, and then it will carry on with new material in your personal style, rather than saying in the style of James Joyce or in the style of Welsh or somebody famous, which it will know.

Just put your own work in, and it's a way of doing the fine-tuning without having to do any coding, which you used to have to do in the old days. So the bigger models now are flexible in this way, and I think that gives greater access to people who don't do any coding whatsoever, which is obviously the vast majority of the population. Maybe not the vast majority of the population here, but out there most poets don't know how to code at all or use an interface like an API. They wouldn't know what to do, but they can understand the idea of putting their work in and having it copied. So, yeah, this whole area is very interesting, I think.

Any more questions?

Sean Clark

Okay, well, I think that's been a really interesting evening. I'm still trying to make sense myself of these AI tools, and I find the best way to try and make sense is to try and use them. But I think even with the experts here, I'm sure you'll agree that where you'll be in a few years' time is probably going to be very different to where you are now, because the technology changes so quickly. So I think it's important for all of us, certainly those interested in using technology in a creative way, that we do try to understand this technology and we do get some experience in using it. So thanks very much for giving us some pointers. I also thought, Geoff, you should mention that book again, actually.

Geoff Davis

Oh, the book again. It's called the AI Creative Writing Anthology. It's the first of its type, I think. It's from Leopard Print Publishing in London, which is an indie press. But if you put 'AI Creative Writing Anthology' into Amazon or wherever you get your ebooks from, it will turn up, and there's quite a big sample you can read for free.

The book has got 20 authors and artists in, including the four people tonight. I’ve put a few things in and I’ve written the introduction and I put some pieces at the end about the background and so on. It’s got lots of references, lots of interesting work, but there’s also a lot of other material.

One of the most interesting things in it is that I asked each writer a whole set of questions about how they felt about using the generators, how it affected their work, and so on. And all of that is published with each story.

I think it's probably the first time you get creative work plus the writer's explanation of how they did it and how they felt about it. And this provides really interesting insights into using generators. And that's all in this anthology. So it's not just a collection of stories; it actually has all this kind of meta-level discussion of how they did it. So it's worth having a look at even just for that.

Certainly it hasn't been out very long, and the paperback isn't out yet, so it might have a few slight changes. But I think the most interesting educational side of it, apart from the entertainment and the art value, is the fact that you can get an insight into how people thought about using the generators, because this is such a new area. Now, the writers, the people here in the group, all know this because they are in the book, with their comments as well as their story or piece.

Sean Clark

Well, if you could make sure we've got a link, and likewise your speakers, if you collect links and contact information from them, I'll put all of that information on the CAS website.

Geoff Davis

Yeah, that's great. If you go to the geoffdavis.org website, which is my name, GeoffDavis.org, and look at the most recent blog, it's got this talk in it, plus the references are in there. So everything should be in there, I'm just checking, and then we'll get it up on the CAS website as well. And if anybody wants to contact me directly for further help, then that's fine, obviously. So, yeah, excellent.

Sean Clark

Thank you very much. And next time, I was going to say next month, but it's slightly more than a month: it's 1 June. You're going to be hosting a session about the use of image-based tools in creativity. So that'll be an interesting counterpoint to the use of text tools.

Geoff Davis

That's right. And we've got some interesting people there, including Anna Ridler, who uses machine learning, Mark Webster, whom I mentioned tonight, Patrick Lichty, who's an AI conceptualist, and it's introduced by Luba Elliott, who's an AI curator. So she's going to do a brief history of the field.

And the video will be online as well at some point.

Sean Clark

Yeah, towards the end of the week I'll pop that up on the CAS channel, and I'll try and mail out to all the attendees. So, Geoff, if you can collect that information over the next couple of days, I'll make sure it goes out in the email.

Geoff Davis

Excellent. Okay, goodbye. Bye, everyone. Bye.

References (in order of appearance; extra references are at the end)

Geoff Davis is the editor of the AI Creative Writing Anthology (Leopard Print London, 2023) which has 20 stimulating entries and a lot of extra material. It includes the speakers. This is on Amazon and most other sites and has a large free sample.

Please visit:

AI Creative Writing Anthology: 20 Authors share how to use computer tools


People

Geoff Davis

Blog and research – geoffdavis.org

AI Creative Writing Art Anthology 2nd Ed

Editor – AI Creative Writing Anthology (Leopard Print London, 2023)

MA4 Story Generator – Geoff Davis 

MA4 Story Generator is on the Micro Arts Group website

Story Software creative storyboards

Notes Storyboard v2.2 – text and images

STORY SOFTWARE CREATIVE APPS

Story Lite – text only

Micro Arts Group – generative art, magazine, exhibitions, community

Summary

Speakers

Tivon Rice

https://dxarts.washington.edu/people/tivon-rice

Ray LC

https://recfro.github.io/

https://raylc.org/influence/index.html

Maria Cecilia Reyes

https://www.xehreyes.net/

Shu Wan

https://arts-sciences.buffalo.edu/history/graduate/GraduateHistoryAssociation/GradStudentProfiles/ShuWan.html

Artists

Mark Webster

https://areafour.xyz/

Show in London 2022

https://verse.works/persons/mark-webster

Sasha Stiles

https://www.sashastiles.com/

Digital poetry site theVERSEverse

theVERSEverse is a literary gallery where poems are works of art.

Bloggers

Janelle Shane

You Look Like a Thing and I Love You (book)

https://AIWeirdness.com

Gwern Branwen

https://gwern.net


REFERENCES

Inside the Mind of BRUTUS, a Storytelling Machine (2002)

https://www.researchgate.net/publication/2538922_Artificial_Intelligence_and_Literary_Creativity_Inside_the_Mind_of_BRUTUS_a_Storytelling_Machine

RNNs: The Unreasonable Effectiveness of Recurrent Neural Networks

http://karpathy.github.io/2015/05/21/rnn-effectiveness/

Large Language Models

https://en.wikipedia.org/wiki/Large_language_model

GPT

https://en.wikipedia.org/wiki/Generative_pre-trained_transformer

Sparks of AGI: early experiments with GPT-4 (2023)

"The new wave of AI systems, ChatGPT and its more powerful successors, exhibit extraordinary capabilities across a broad swath of domains. In light of this, we discuss whether artificial intelligence has arrived."

Video recorded at MIT on March 22nd, 2023

Sebastien Bubeck lecture with many demonstrations

Apr 6, 2023

Paper available here: https://arxiv.org/abs/2303.12712

OpenAssistant

https://open-assistant.io/dashboard

Liberal Bias and the Right Wing AI

David Rozado

https://davidrozado.substack.com/p/rightwinggpt

Elon Musk X.AI

https://www.thequint.com/tech-and-auto/tech-news/elon-musk-new-company-name-xai-twitter-artificial-intelligence-latest-news

Geoff Davis – Emotional LLMs

Please find this in the blog. The paper is in review.

https://geoffdavis.org/blog

Jaron Lanier – There is No AI

https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai

Connie Loizos – We all contribute to AI — should we get paid for that?

Nick Bostrom

Superintelligence (Oxford University Press, 2014)

James Bridle

Books – New Dark Age; Ways of Being.

Brian Reffin Smith

Me to ChatGPT:

Me: Write a BASIC program that is at least as good as ELIZA, and which can talk about art as well as general topics

ChatGPT: Sure, here’s a BASIC program that uses natural language processing techniques to simulate conversation and can talk about art as well as general topics. It’s not as advanced as ELIZA, but it should still be able to hold a basic conversation.

10 REM ARTIFICIAL ART CRITIC

20 PRINT “HELLO, I AM AN ARTIFICIAL ART CRITIC.”

30 PRINT “WHAT WOULD YOU LIKE TO TALK ABOUT TODAY?”

Etc…

To see the code please visit his Facebook page and scroll to 12 March 2023.

https://www.facebook.com/profile.php?id=100007534596150

Copilot

https://techcommunity.microsoft.com/t5/educator-developer-blog/get-started-with-github-copilot-with-vscode-and-python-extension/ba-p/3736564

How do AI art generators work, and should artists fear them?

https://www.euronews.com/next/2022/12/30/dalle-2-stable-diffusion-midjourney-how-do-ai-art-generators-work-and-should-artists-fear-

Ambient Art

https://research.ambientlit.com/

Games: AI in Video Games: Toward a More Intelligent Game

Extras

Sudowrite AI writing Assistant

https://www.sudowrite.com/

David Byrne comment

Eliminating the Human

We are beset by—and immersed in—apps and devices that are quietly reducing the amount of meaningful interaction we have with each other.

MIT Technology Review, April 2023

https://getpocket.com/explore/item/eliminating-the-human

AI News summary

Now that my new book, the AI Creative Writing Anthology (Goodreads link), is out, I will add any interesting news in blog posts.

BBC News – Friend or foe: Can computer coders trust ChatGPT?
https://www.bbc.co.uk/news/business-65086798

OpenAI may have to halt ChatGPT releases following FTC complaint

A nonprofit claims OpenAI is breaking the law with a ‘biased, deceptive’ AI model.

https://www.engadget.com/openai-may-have-to-halt-chatgpt-releases-following-ftc-complaint-172824646.html