Anna Ridler Artist – CAS AI Image Art talk 2023 transcript

ANNA RIDLER – Artist

All material is copyright Anna Ridler, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Anna Ridler’s website 

Anna Ridler
Anna Ridler’s Myriad (Tulips)

It was really interesting hearing both of your talks [Geoff and Luba], and especially it’s always a pleasure to speak after Luba because, as she mentioned, we have been working together quite closely for the past five, six, seven years – has it been? And it’s been really interesting to see how the space has evolved in that time, especially now with the recent advancements in these diffusion models and the text-to-image models. I’m most well known for my tulip projects, which I will talk about and which I made using GANs. I also want to touch on how the field has changed, because now when you talk about AI art – if you go on Twitter or if you go on Discord – the term very much is being used to refer to a very particular type of work, which is this text-to-image work. What I’m showing now is just some very quick examples of DALL·E tulips that I made. I was actually very lucky: I was one of the first people to get access to DALL·E back when it was released.

And I found it actually really difficult to make work with. It’s taken me a long time to get my hands into things like Stable Diffusion and DALL·E and Midjourney to make work with, because of the way they are structured and designed. Particularly with DALL·E, I found it really difficult: you have no access to the code base, you have no access to the data set, there’s no way that you can tinker with it, and you’re very reliant on an API – everything is closed off. So as an artist who’s very interested in the tools and the means of production, I found it very difficult to get into and work with. That is changing. I still think there are conceptually interesting things that you can do with it. There’s really interesting research coming out around how it relates to memory, and you can do some interesting things around language and ontology, but for the most part it’s locked away. Even when you’re working with Stable Diffusion, I find that you can’t look through all of the data. You can fine-tune it, but you’re always going to be working off the base of the massive LAION data set.

But for me, it’s taken a long time to get to a place where I think I can do something with it. That being said, there’s so much being produced every day with it. It does feel like magic when you play with it. I remember the first time I typed something in and got these images out – it did feel so incredible. But it does raise this real question about what art is, because not every image is necessarily art. And I think that’s a debate that is now going on: because so much is being produced, there is this question about where the art sits. For me, the art is very much in how it’s then displayed, or the message that it contains, and the experience that someone has through interacting with it. So, as Luba mentioned, I’m most well known for the work that I do with data sets. And this isn’t something that is just explicitly linked with machine learning; it’s something that I’ve worked with for a much longer period of time.

I’ve always been interested in archives and libraries and data and information, because for me every piece of data, every piece of information, is a trace of something that once existed. In many ways, I feel like reconstructing that data, or that data set, is a very human thing to do – working almost like a detective, building up these bits of data to produce an idea or a story, or to use in my projects. I think there are lots of interesting parallels between encyclopaedias and dictionaries and the history of those, and some of the issues around machine learning and data sets that I’ve explored. The project that I showed in the previous slide, which is playing now, was commissioned by The Photographers’ Gallery; for it I essentially created my own ImageNet using Victorian and Edwardian encyclopaedias. One of the things that’s also really important for me, and part of my practice, is showcasing the labor that goes into these projects and the way of working, so for me it’s not just the final output.

It’s also how I got to that output. So a lot of the time I will document the process of making, and that documentation will be an equally important part of the final project as the artefact. That’s something I come back to again and again in my work, including the project that Luba commissioned me to make back in 2017. It was a project that really took off for me, actually, which is about tulips, where I created a piece using a GAN. It was a simple GAN back then, the first one, which I trained on 10,000 images of tulips that I took myself. I didn’t grow the tulips myself – I was in the Netherlands at the time, working. One of the reasons why I was really interested in tulips was that I was making a comparison between tulip mania – this was the first known speculative bubble – and bitcoin, and also the bubble that was going on around AI at the time. So in the GAN piece, which I shall show in a bit, the GAN is controlled by the price of bitcoin. And it was a really important project for me to do.
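(As a purely illustrative aside: driving a GAN’s output from a live bitcoin price can be sketched in a few lines of Python. This is not Ridler’s implementation – the generator, the latent vectors and the price mapping below are all hypothetical stand-ins.)

```python
import torch
import requests

def btc_price_usd() -> float:
    # Public CoinGecko endpoint; any price feed would do.
    r = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": "bitcoin", "vs_currencies": "usd"},
        timeout=10,
    )
    return r.json()["bitcoin"]["usd"]

# Hypothetical: G is a pretrained tulip generator mapping a latent vector to a frame.
z_closed = torch.randn(1, 512)  # latent giving closed, plain tulips (stand-in)
z_open   = torch.randn(1, 512)  # latent giving open, stripy tulips (stand-in)

t = min(btc_price_usd() / 60_000.0, 1.0)  # squash the price into [0, 1]; cap is arbitrary
z = (1 - t) * z_closed + t * z_open       # higher price -> closer to the "open" latent
# frame = G(z)                            # render the current state of the piece
```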

And I spent a lot of time building this data set. You have a very different relationship to the data when you’re working with it very physically. Carrying all of these tulips was heavy; stripping them was heavy work. And one of the reasons why I stopped at 10,000 tulips wasn’t because it’s a very nice round number – although it is – it’s because tulip season ended. So even though this was a very digital project, it was driven by the rhythms of nature. After I took all of these photographs – if I can go to the next slide – I really wanted to display the data as an artwork in and of itself, which led to a separate piece called Myriad, where I’ve taken the photographs and shown them with some of the labels that I attached to them handwritten underneath. For me, it was a way to draw attention to all of the human decision-making that sits somewhere in the chain of AI, because at the time this was being shown, back in 2018, there wasn’t yet that discussion around bias and how human error can creep into these systems.

On some of the photographs you can see my handwriting, where I’ve crossed things out. And the piece, when it’s shown, is huge – it’s around 50 m². It’s only been shown in its entirety twice, because you need quite a large space to put it up in. I think it also gives people a real sense of the scale of data, because 10,000, when you scroll through it on a thumb drive, you don’t really understand what that means in the way that you do when your body is physically reacting to it. It takes a long time to walk past all of these photographs, and you get a sense of how long the process of putting it all together was, and the labor, effort and energy that sit behind creating a data set. Another reference point for this project was very much the Dutch still life – the Golden Age Dutch painters were very heavily referenced in how I composed my data set, which is also another reason why I really like doing things myself.

Well, I suppose you can now, with tools like Stable Diffusion. But at the time, you couldn’t just Google and ask for 10,000 images of tulips against a black background. And the further comparison that I really liked between these Dutch still lifes and the way that GANs work is that the flowers in these paintings could never all exist at once in nature. They’re flowers from spring and summer and autumn and winter that are combined, so these bouquets are botanical impossibilities. But they’re combined using all of the fragments that the artist has of the flowers that he’s seen – sketches and memories and things like that. So rather than copying from nature, the paintings are drawing on the experience of the painter. And for me, that’s a really nice parallel to how GANs work. They’re not merely copying images from the data set and collaging them together, but creating an imagined botanical possibility through the knowledge gained from the data set.

So I think there’s that nice parallel that exists there. The other reference that I like to bring out with this project is the history of floral data sets that sit inside machine learning. This is the Iris data set, which is inside scikit-learn. So every time you import that library into a piece of code, you’re also carrying the Iris data set with you. It was a data set created by Ronald Fisher, which has all this different data about irises. So there is this hidden history of floral data sets inside machine learning, which is also something that I quite like in this project. The final piece – I later made two versions of it; this is the 2019 edition that I made after StyleGAN was released – is a three-screen installation. And as I mentioned, the tulips are controlled by the price of bitcoin, becoming more stripy and open as the price goes up. The title references the disease that gives tulips their stripes, which also made them the most valuable at the height of tulip mania.
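(To ground the scikit-learn aside: Fisher’s 1936 Iris data set really does ship with the library, so it travels with every install. A minimal example:)

```python
from sklearn.datasets import load_iris

iris = load_iris()
print(iris.data.shape)     # (150, 4): sepal and petal lengths and widths
print(iris.target_names)   # ['setosa' 'versicolor' 'virginica']
print(iris.DESCR[:300])    # the bundled description credits R.A. Fisher, 1936
```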

And it’s asking questions around notions of value and speculation, collapsing these two different moments in history. What I also really enjoyed about this project – because it is a very complicated project – was that through working with the data set and with the GAN, I was able to explore very different things in each part of it. The data set piece was much more explicitly about machine learning and about the issues and ethics that maybe sit inside of it, whereas the GAN piece was much more about something not really related to the technology: wider questions around value and notions of speculation. As I said, it’s a work that still gets exhibited quite regularly, in a variety of different institutions and cultural spaces everywhere – from public settings (it was on buses in a town in Germany, just as a very pretty moving-image piece) through to critical overviews, in different museums, of where photography is going. And then, because I know we don’t have masses of time, I just wanted to end with something that I often end my talks with, where I am often asked about where I see AI art sitting.

And I can only answer for myself, because I’m not a curator. I find that I look back into history to see where it might go, and I find it hard to connect to some of the early algorithmic artists. But I find it very easy to see the parallels between my practice and land and environmental artists, and I often look to them for inspiration, because land and environmental art is so much about planning. It’s so much about thinking through all the various different possibilities and then allowing something that you can predict, but never control, to act on that planning. For me, that’s the same as spending all of this time building my data set and all of this time constructing a model, then pressing go and allowing something to come out of it. And then there is this question about where the art sits. In land and environmental art, a lot of it is in the documentation. I think for AI artists it’s also the documentation.

It’s not necessarily the model as it runs, or the insides of what’s going on – it’s the images, it’s the sound, it’s the performance that comes out of it. And so, yeah, that’s where I wanted to end it: how, even now, I’m still amazed at the possibilities that this technology can offer and how inspirational I find it on a daily basis.

 

Patrick Lichty Artist & Writer – Studio Visits Posthuman Atelier – CAS AI Image Art talk 2023 transcript

PATRICK LICHTY – Conceptual Artist, Writer

Discussion of the project “Studio Visits: In the Posthuman Atelier” before the Computer Arts Society (of Britain).

All material is copyright Patrick Lichty, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Patrick Lichty’s website here.

Patrick Lichty


I am Patrick Lichty, an artist, curator, cultural theorist, and Assistant Professor of Creative Digital Media at Winona State University in the States.  I will talk about my project, a curatorial meta-narrative called “Studio Visits: In the Posthuman Atelier.”  Much of my AI work has yet to be widely shown in the West, as until two years ago, I had spent six years in the United Arab Emirates, primarily at Zayed University, the federal university in Abu Dhabi.  I have been working in the new media field for about 30 years, dealing with notions of how media shape our reality.  You can see some of my earlier work in this slide: I was part of the activist group RTMark, which became the Yes Men, and the Second Life performance art group Second Front, and the slide also shows some of my tapestry and algorism work.

This slide shows my 2015 solo show in New York, “Random Internet Cats,” which comprises generative plotter cat drawings.  The following slide shows some of the asemic calligraphy I fed through GAN machine learning.  I worked with Playform.IO’s machine learning system to create a personal Rorschach by looking for points of commonality in my calligraphy.  I called the project Personal Taxonomies; other works, like my Still Lives, were generated through StyleGAN and Playform.IO.  So I’ve been doing AI for about 7-8 years and new media art for almost three decades.  Let’s fast-forward to now.  In the middle of last year, I decided to move away from the PyTorch GAN machine learning models I had used with my calligraphy work and my paintings at Playform.IO.  Switching to VQGAN and CLIP-based diffusion engines, I worked with NightCafe for a while.  Then I found MidJourneyAI.  At first, I was only partially satisfied with the platform, as I was on the MidJourneyAI Discord server and saw people working with basic ideas.  I decided to focus on two concepts.  First, I decided to think of what I was doing as concrete prose with code.  And secondly, I decided to take up contestational aesthetics, as my prompts would contain ideas not being used on the MidJourney Discord.

I wanted to find concepts for my prompts that were less representational than the usual visuals of a CLIP-based AI.  I did two things.  First, I ignored everything typed on the MidJourney Discord, which was almost an aesthetics of negation.  And then, I considered the latent space of the LAION-5B database that MidJourneyAI was using as an abstract space, and decided how to deal with that conceptual space using abstract architecture.  I started querying it with images like Kurt Schwitters’ Merzbau, just as a beginning, as well as Joseph Cornell.  I did about twelve series called “The Architectures of the Latent Space,” illustrated here.  They are unusual because they still refer to Schwitters but are much more sculptural and flatter.  This was the beginning of my work in that area; then I started finding what I felt were narratives of absence.

I have considerable differences in abstraction – multiple notions of abstraction – as I want to see what is transcendent in AI realism.  For example, I started playing with real objects in a photography studio.  This image is a simulated photo of a four-dimensional cube, a tesseract, which isn’t supposed to be representational.  Still, it was exciting that it emerged and illuminated the space.  This told me I was on a path where I was starting to confuse the AI’s translator, and that it was beginning to give results that were in between its sets of parameters, which is interesting.  One body of work where my attempts at translator confusion are evident is The Voids (Lacunae): basically brutalism and empty billboards.  It is inspired by a post that Joseph DeLappe from Scotland made on Facebook of a blank billboard.  One of the things that I noticed is that these systems try to represent something.  They try to fill space.  If there’s a blank space, the system tries to put something in it.

MidJourneyAI tries to fill visual space with signifiers.  One of my challenges was forcing the AI engine to keep that space open.  This resulted in experiments with empty art studios and blank billboards.  Artists were absent or had no physical form, which was the conceptual trigger.  These spaces have multiple conditions and aesthetics, with a lot of variation.  The question lingered, “How do I put these images together?”  There are numerous ways to deal with them, so I made about 150 or 200 in a series and then created a contact book.  This gets away from the idea of choice in AI art, anxiety, and so on.  I have a book that’s ready for publication, so that someone can see my entire process and the whole set of images.  But in this case, what I thought was very interesting is that I wound up going into a bit of reverie around the fantasy of these artists whose studios I’d been looking into – they weren’t in, or they didn’t exist in physical form.

Having worked in criticism, curation, and theory, as well as being an artist, I decided to take these concepts and create a meta-structural scaffold for a curatorial narrative based on a body of 50 artists.  When I visited their fictional studios, thinking about theoretical constructs such as Baudrillard’s and Benjamin’s ideas of absence and aura, I created a conceptual framework: a catalog for a general audience that preceded the exhibition.  There’s precedent for this.  There’s Duchamp and the Boîte-en-Valise.  I’ve done work like this before, constructing shows in informal spaces like an iPod.  Here is a work from 2009, the iPod en Valise, as the iPod is a ‘box’ (boîte) for media work.  And then I thought, why can’t I do the same with a catalog?  Why can’t I use the formal constraint of the catalog to discuss the sociology of AI and some of the social anxieties, putting this into a robust conceptual framework beyond its traditional rules?  Another restriction that I have frequently encountered as a new media curator and artist is time.

The moment in time when technological art or a form emerges is often ephemeral.  Curating shows on handheld art, screen savers, etc., showed me these might have a three-to-six-month period in which the art is fresh.  Studio Visits is tied formally to the MidJourneyAI 4 engine, because MidJourneyAI 5 has a different aesthetic.  A key concept is where the work situates itself in society and how it’s developing in a formal sense.  And then, is there time to deploy an exhibition before the idea goes cold?  Most institutions, unless you’re dealing with a festival, are planning about a year out, possibly two.  And, of course, every essay I’m writing now carries a disclaimer saying that it was written on such a date, in such a year and month, and may be obsolete or dated by the time you read it – in the case of something developing as quickly as AI, this is the idea of being aware of the temporal nature of the form itself.

So I decided to deploy the catalog first, as the museum show would emerge from it: create the catalog, then exhibit.  As I said before, I’ve been making these contact books, which are reverse-engineered catalogs; I’m almost up to 15 editions.  I’ve only mentioned about six or seven on my Instagram so far.  But in general, I’m looking at curation as an artistic scaffold.  In this project, the curatorial frame creates a narrative around meta-structural conceptual concerns rather than representing the images themselves.  It’s a narrative dealing with society’s anxieties about AI and culture.  What happens if we finally eliminate those annoying artists and replace them with AI, as a provocation?  So here’s the structure of the piece.  The overall work is the catalog.  There is a curatorial essay, and then for each artist a name, a statement, and the studio “photograph.”  The names derive from the names of colleagues.  So I’m reimagining my community through a synthetic lens – the studio image, as we can see through the narrative that I’ve presented.  I started generating these empty spaces and let myself run through a few hundred.

I chose the 50 most potent synthetic studio images.  A description emerges using MidJourneyAI’s /describe function.  The resulting /describe prompt, plus a brief discussion of the artist and what they do, is fed to GPT-3, which generates a statement.  So here’s the form of an artist’s layout.  You have the name.  The following layout is the first one I did, for Artificium 334-J452, inspired by George Lucas’ THX 1138.  The layout came from this initial image.  I took these, with a description from MidJourneyAI, and put it into GPT-3.  The artist’s statement is as banal as any graduate school one and reads, “As an artist, my work expresses my innermost thoughts and emotions.  I seek to capture the energy and chaos of the world around me, using bold brushstrokes and vibrant colors to convey my message.”  So these were 50 two-page spreads.  The book is 112 pages and fits very much with a catalog format.  The name, as said before, was based on the conceptual frame of the artist I was thinking of, on the image generated, on some of the concerns I saw in the mass media, and loosely upon the names of colleagues, family, et cetera.
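(A minimal sketch of that statement-generation step, using the GPT-3-era OpenAI completions API that was current at the time.  The prompt wording and the /describe text below are hypothetical stand-ins, not Lichty’s actual inputs:)

```python
import openai  # GPT-3-era SDK (openai<1.0), matching the project's timeframe

openai.api_key = "YOUR_API_KEY"  # placeholder

# Stand-in for MidJourney's /describe output for one synthetic studio image.
describe_text = "a sunlit studio strewn with canvases, bold brushstrokes, vibrant colors"
artist_name = "Artificium 334-J452"

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 completion model of that era
    prompt=(
        f"Write a short, earnest artist's statement for {artist_name}, "
        f"whose studio looks like this: {describe_text}."
    ),
    max_tokens=120,
    temperature=0.8,
)
print(response.choices[0].text.strip())
```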

In many ways, I was taking a fantasy and re-envisioning my community through a synthetic lens.  The images came first, and I developed the imagined artists across diversity, identity, species, and planet.  This reimagining is interesting because I wasn’t necessarily thinking of my own ethnographic sphere.  I worked in Arabia, West Asia, and Central Asia and dealt with people from Africa and the subcontinent.  So many of the people of my experience figure into this global latent space of imagined artists, not just those of the European area or, more specifically, North America.  And then I expanded this to species and planet, as we’ll see in a moment.  So here we have an alien sound artist.  The computer in this studio is almost cyberpunk, and it is a very otherworldly art studio image.  I can’t remember which artist this is, but it has a New England-style look.  And the third one is a Persian painter obsessed with color, Zafran Pavlavi, based on my partner, Negin Ehtesabian, who is currently coming to America from Tehran.

This slide is a rough outline of the structure of the catalog.  I take the name and the framework of the artist’s practice, and you can see here that this information went into GPT-3, producing statements almost indicative of the usual graduate school art statements.  Once again, these elements reflect some of the anxieties in the popular media.  I’m using this as a dull mirror from a visual sociology standpoint, based on scholars like Becker.  This is a draft, but much of it is a pro forma approach to the conceptual aspect.  The project catalog is available on Blurb.  It’s about $100 and still needs a few little revisions.  But this is something that, from a materialist perspective, basically inverts many practices regarding the usual modalities of curating and executing a show or an exhibition.  I’m also thinking about the standard mechanisms of artistic presentation within an institutional path.  So not only is this dealing with AI, but it’s using AI to talk about the sociological space, the institutional space these works inhabit, and how these works propagate.

Studio Visits deals with institutions, capitalism, and digital production.  The issues this project engages concerning AI speak to exacerbated social anxieties about technology.  The deluge of images problematizes any cohesive narrative.  Using this meta-narrative, through this conceptual frame, I can focus on some of the social and cultural questions about AI and the future of society, within a fairly neat package.  Design and curatorial fictions provide solutions for cultural spaces.  Cultural institutions typically struggle to keep up with the speed of technology.  But bespoke artifacts, which are problematic, can remain in place long enough for the institution to adopt them.  In other words, if you get something together and get it out there, you can have that in place, take it to the institution, and hopefully they can explicate the work.

I’ve been asked about a sequel.  I’ve had many people ask me who these artists are.  What’s their work look like?  You can see excerpts of their work in the studios.  But people were asking me to take the conceit one step further, and I’m starting to work on that idea and show the portraits of the artists and their work.  This portrait is of the artist Zafran, who I talked about earlier.  These both continue the fiction and then humanize the story, which also problematizes it.  And so this is this project and its ongoing development in a nutshell.  I invite you to go to my Instagram at @Patlichty_art, and thank you for your time.

In closing, this is another portrait of the artist Vedran VUC1C.  And once again, this is an entirely constructed fantasy.  But once again, as Picasso said, these are lies that reveal the truth about ourselves.

 

Luba Elliott curator – AI Art History 2015-2023 – CAS AI Image Art talk 2023 transcript

LUBA ELLIOTT – AI Creative Researcher, Curator and Historian

All material is copyright Luba Elliott, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Luba Elliott’s Creative AI Research website.

Luba Elliott

I’m looking forward to giving a quick overview of AI art from 2015 to the present day. These are the years I’ve been active in the field and part of the recent generation that includes Anna Ridler, Jake Elwes, and Mario Klingemann.

To start off, I’ll mention a couple of projects to explain the perspective I’m coming from. I began by running a meetup in London that connected the technical research community and artists working with these tools. That led to a number of projects: curating a media art festival, organizing exhibitions at business conferences, and launching the NeurIPS Creativity Workshop, which is probably one of the projects I’m best known for. This was a workshop on creative AI at a major academic AI conference. Alongside that, I also curated an art gallery that still exists online from 2017 to 2020. The workshop now continues, currently run by Tom White, an artist from New Zealand.

If you’re interested, you can still submit work to it. I also curate the Art and AI festival in Leicester, where we exhibit work in public spaces around the city. I’ve done some work with NFTs, including exhibitions at Feral File and at Unit Gallery.

Now I’ll start the presentation, and I usually begin with Deep Dream. This was a technology that came out of Google in 2015. You’d input an image, and the algorithm would enhance certain features, producing vivid colors and strange shapes. It was one of the first developments that excited the mainstream about AI. It’s still one of my favorite projects because it’s quite creative and aesthetically interesting. Few artists continued working with it. Daniel Ambrosi is one who has, often creating landscape artworks that retain the Deep Dream aesthetic while preserving the subject matter. That’s important because many artists let the aesthetic overshadow the image itself. Ambrosi has also experimented with cubist influences to refresh his approach.
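(Deep Dream’s core trick is compact: push an input image by gradient ascent so that a chosen layer of a pretrained network fires harder, exaggerating whatever patterns that layer detects. A minimal PyTorch approximation – not Google’s original Caffe implementation, which added jitter and multi-scale “octaves”:)

```python
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()  # GoogLeNet, as in the original
acts = {}
model.inception4c.register_forward_hook(lambda m, i, o: acts.update(out=o))

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(20):
    opt.zero_grad()
    model(img)
    loss = -acts["out"].norm()  # ascend: make the chosen layer's activations grow
    loss.backward()
    opt.step()
# img now shows exaggerated versions of the patterns inception4c responds to
```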

Then came style transfer, where you could take an image and apply the style of Monet or Van Gogh. This excited many AI researchers and software engineers, who saw it as a representation of art similar to what you’d find in museums. In contrast, many contemporary artists and art historians found it unappealing because today’s artists aim to create something new, whether aesthetically or conceptually. Interesting work in this area often requires broadening the definition of style beyond just artistic style. Gene Kogan, a key figure in the field, created variations of the Mona Lisa using styles like Google Maps, calligraphy, and astronomy.

Next came GANs, which gained popularity around 2014 and evolved rapidly over the next few years. By 2018 or 2019, they were producing photorealistic images. Some of my favorite works come from the earlier GAN period. Mario Klingemann created striking images exploring the human form that drew comparisons to Francis Bacon. These early works often contained visual glitches—misplaced facial features, oddly angled limbs—which became integral to the artistic expression. As GANs improved, artists had to move beyond relying on those glitches.

Scott Eaton is one example. He deeply studies human anatomy and uses GANs to combine realistic textures with slightly distorted forms familiar to those tracking GAN development. Mario Klingemann also continued experimenting. At a show I curated for Unit Gallery, we displayed two of his works: one from his 2018 “Neural Glitch” project, and another made using Stable Diffusion based on the earlier image. The newer image is more realistic, illustrating how much the technology has advanced.

Ivona Tau explores machine learning itself. One of her projects involved machine forgetting, where image quality deteriorated over time, challenging the usual goal of improvement in machine learning. Entangled Others—Sofia Crespo and Feileacan McCormick—have done fantastic work inspired by the natural world. Their recent projects combined generations of GAN images to create creatures with traits from multiple species.

Other artists have focused on how to display their work or how to engage with ecosystems. Jake Elwes, whose work is currently on show at Gazelli Art House in London, trained an AI on images of marsh birds and installed a screen in the Essex marshes. Real birds interacted with this digital bird, creating a fascinating encounter between two species.

In sculpture, Ben Snell created a project called “Dio,” where he trained an AI on sculptures from antiquity to modern times, then destroyed the computer that created the designs and used its remains to make the sculpture. Conceptually, this is far more developed than many other AI art pieces and recalls 20th-century artists who destroyed their own work.

Roman Lipski is an artist who considers datasets deeply. Though he primarily paints landscapes, he experimented with AI by photographing a scene, painting nine versions, training an AI on those, and responding to its output. His style evolved through this interaction, becoming more abstract and cooler in tone. Despite using digital tools, he continued working in physical media like paint and engraving.

Helena Sarin is known for using her own datasets and developing a distinct aesthetic. She often combines media—flowers, newspapers, photography—with GANs to create highly original work.

Normally, I talk about Anna Ridler’s tulip project, which I commissioned for the 2018 Impact Festival. Since she will be discussing it later, I’ll just mention that she made a conscious effort to highlight the human labor behind AI art. Her exhibitions often paired generated tulips with walls of hand-drawn flowers, drawing attention to the dataset—a rare approach at the time.

In more recent years, with the rise of DALL·E and CLIP, attention has shifted to text-to-image generators. These tools create images from written prompts and have changed the focus of AI art. Earlier AI artists often explored the underlying technology or its ethical implications. In contrast, much current text-to-image work is more focused on aesthetics.

Some projects still stand out. Botto, by Mario Klingemann, operates as a DAO. A community votes on which image to sell, and during the NFT boom, some pieces fetched over a million euros. Vadim Epstein has worked deeply with CLIP, developing a personal aesthetic and narrative video works. Maneki Neko, whom I curated in an NFT exhibition, creates intricate, detailed images that feel distinct from typical Stable Diffusion outputs, likely combining multiple images and heavy post-processing.

Ganbrood has found success with fantasy-themed images. Artists like Varvara and Mar use text-to-image generation to design sculptures. Controversies have emerged too. Jason Allen won first prize at a US art fair with an image made using Midjourney and Stable Diffusion. Critics questioned whether he clearly disclosed the AI’s role. He argued that refining the prompt was itself artistic labor.

At the Sony Photo Awards, Boris Eldagsen submitted a more ambiguous image that won praise from judges. He later withdrew it, aiming to spark discussion about AI’s place in such contests.

Jake Elwes’ “Closed Loop,” made in 2017, involved two neural networks: one generated images from text, the other generated text from images, creating an ongoing dialogue. It demonstrates both how far the technology has come and how conceptually rich earlier works were.
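(The structure of “Closed Loop” is easy to sketch with today’s open models: a captioner and a text-to-image generator feeding each other in a loop. This is a modern approximation with BLIP and Stable Diffusion, not the 2017 models Elwes actually used:)

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import BlipForConditionalGeneration, BlipProcessor

sd = StableDiffusionPipeline.from_pretrained(          # text -> image
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
proc = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained(   # image -> text
    "Salesforce/blip-image-captioning-base"
).to("cuda")

text = "a marsh bird at dawn"  # illustrative seed phrase
for step in range(10):         # let the two networks talk to each other
    image = sd(text).images[0]                            # one net paints the text
    inputs = proc(image, return_tensors="pt").to("cuda")
    out = blip.generate(**inputs, max_new_tokens=30)
    text = proc.decode(out[0], skip_special_tokens=True)  # the other describes it
    print(step, text)
```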

To close, I want to highlight a project by South Korean artists Shin Seung Back and Kim Yong Hun. They used facial recognition in a fine art context, asking portrait painters to disrupt the algorithm’s ability to detect faces. The results varied—some still looked like portraits, others did not. One portrait took me a long time to recognize because the face was tilted 90 degrees. It’s a brilliant example of using AI tools outside their original purpose to produce meaningful art.

That’s the end of my presentation. You can find out more about my work on my website or email me with any questions. Now I’ll pass over to Geoff for the next speaker.

 

Mark Webster artist – Hypertype – CAS AI Image Art talk 2023 transcript – NLP

MARK WEBSTER – Artist

All material is copyright Mark Webster, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Mark Webster’s Area Four website.

https://areafour.xyz/

Mark Webster Hypertype London exhibition

Thank you for inviting me. It’s wonderful. Okay, just maybe a quick introduction. So, my name is Mark Webster. I’m an artist based out in Brittany – a lovely part of France. I’ve been working with generative systems – well, computational generative art – since 2005. It’s only recently, though, that I’ve started to work on a personal body of work. One of those projects is Hypertype, which is the project I’d like to talk to you about this evening. So, in a nutshell, Hypertype is a generative art piece that uses emotion and sentiment analysis as its main content. I’m pulling in data using a specific technology, and that data is used as content to organise typographic form – letters and symbols. So it’s first and foremost a textual piece. And secondly, it’s trying to draw attention to a specific field in AI, a specific technology, which is called Natural Language Processing, or Understanding. So I’m going to talk a little bit about this, and a little about the ideas that led me to develop Hypertype. What you’re actually seeing on the screen at the moment are two images from the program that were exhibited in London last year in November.

So just to talk a little bit about where this all came from. A few years back now, I came across this technology by IBM: an API called Natural Language Understanding. Basically, what this technology does is enable you to give it a text, and it will analyse this text in various ways. There was one particular part of the API that interested me: the emotion and sentiment analysis part. What it does is, you give it a text and it basically spits out keywords. These keywords are relevant words within the text, and it will assign each a sentiment score – something that is positive, negative, or neutral. It also assigns an emotion score, based on five basic emotions: joy, sadness, fear, disgust, and anger. So, yeah, I came across this technology quite a few years ago and became kind of fascinated by it. I wanted to learn a little bit more. To do that, I did a lot of reading, and I also wrote a few programs to try and explore what I could do with this technology – just to try and understand it, without thinking about doing anything artistic to begin with.
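(For readers who want to see the shape of this, a minimal call with IBM’s ibm-watson Python SDK looks roughly like the sketch below; the key and service URL are placeholders:)

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, KeywordsOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version="2022-04-07",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),  # placeholder credentials
)
nlu.set_service_url("YOUR_SERVICE_URL")              # placeholder URL

result = nlu.analyze(
    text="Douglas Hofstadter wrote about strange loops and minds.",
    features=Features(
        keywords=KeywordsOptions(sentiment=True, emotion=True, limit=10)
    ),
).get_result()

# Each keyword comes back with a sentiment label and five emotion scores
# in [0, 1]: joy, sadness, fear, disgust, anger.
for kw in result["keywords"]:
    print(kw["text"], kw["sentiment"]["label"], kw["emotion"])
```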

What you’re seeing on screen here, on the left, is basically just a screenshot of some data. This is a JSON file – typically the kind of data that you get out of the API. Here I’m just underlining one keyword, which is a person: Douglas Hofstadter, a very well known philosopher in the world of AI. As you can see, there are five emotions, each with a score that goes from zero to one, assigned to Douglas, and there’s a sentiment score – it’s neutral. On the right is probably something that you’ve seen before. This is a different technology, but it’s very much linked with textual analysis: facial recognition. And in fact, you are also seeing a photo of the face of a certain Paul Ekman. Now, Paul Ekman is quite famous because he, along with his team, was one of the people who brought up this theory that emotions are universal, that they can be measured and they can be detected. And this was a theory that was used in part to develop facial recognition and emotion recognition, but also textual analysis.

So, as I said, as I learned about this technology, I wrote a lot of programs. What I have here is a screenshot of an application that I wrote in Java, basically enabling me to visualise the data. Very simple, actually. What you’re seeing is an article on Paul Ekman’s research, which is quite funny. On the right, you can see a list of keywords, and for each keyword there is a sentiment score. On the left, there’s a little graphic and a general sentiment score. Then I can go through the list of keywords and see information about the five emotions – there’s a score for each emotion: joy, sadness, disgust, anger, and fear. So I built quite a few programs. I also made a little Twitter bot, because you can get so much data from this technology that it was really important for me to get an understanding of not just how it works, but what it was giving back. The bot would take an article from The Guardian every morning, analyse it, and publish the results on Twitter.
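(A hedged sketch of such a daily bot, assuming The Guardian’s public content API and the tweepy library; the keys are placeholders and this is not Webster’s actual code:)

```python
import requests
import tweepy

GUARDIAN_KEY = "YOUR_GUARDIAN_KEY"  # placeholder content-API key

def todays_article_text() -> str:
    # Fetch one recent article body from The Guardian's content API.
    r = requests.get(
        "https://content.guardianapis.com/search",
        params={"api-key": GUARDIAN_KEY, "show-fields": "bodyText", "page-size": 1},
        timeout=10,
    )
    return r.json()["response"]["results"][0]["fields"]["bodyText"]

def summarise(text: str) -> str:
    # Webster's bot ran the NLU emotion analysis here (see the earlier sketch);
    # as a stand-in we just post the opening of the article.
    return ("Today's Guardian emotion sample: " + text)[:280]

client = tweepy.Client(  # tweepy v2 client; placeholder keys
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)
client.create_tweet(text=summarise(todays_article_text()))
```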

And this just enabled me to observe things. But at the end of the day, at some point, there was the burning question: how does this technology actually work? How does a computer go about labelling text with an emotion? So I went off and did a lot of reading about this – not just general articles in the press; I tried to learn about it technically, and I came across a lot of academic articles. Now, I’m not a computational linguist, and very quickly I came across things that were very, very technical. But something else, very different, happened, and it was quite interesting. While I was trying to learn about the technical side – how this computer was labelling text with emotion – another question came up: well, what is an emotion? What is a human emotion? And that was really interesting, because at the time I was reading a lot of Oliver Sacks. You may have heard of Oliver Sacks; he’s written a lot of books. He was a neurologist, and although he never really touched upon emotion, his books kind of opened up a door. I started to read and learn about other people working in the field of neurobiology.

There were a few people: there was Antonio Damasio and there was Robert Sapolsky, two people who are very well known in the field of neurobiology, touching on questions not just of emotion but also of consciousness. A lot of their texts can be quite technical, yet they were very, very interesting. And then there was another person that came along – I’ll show the book cover – a certain Dr. Lisa Feldman Barrett, also based out in the States, who is doing wonderful research and has written a number of books, one of them, shown here, called How Emotions Are Made. Now, it was really with Lisa Feldman Barrett’s book that something kind of clicked, because in it she started to talk about Paul Ekman, and in a way she really pulled his whole theory apart. Dr. Barrett is doing a lot of research in this field – she’s working in a lab, doing contemporary research. What she says in the book really debunks the whole theory of Paul Ekman. That is to say, human emotions are not universal; they are not innate; we’re not born with them. And there’s this very interesting quote that I put here: emotions are not reactions to the world; rather, they are constructions. So her whole theory drives towards this idea that, in fact, emotions are concepts – things that we do on the fly, in terms of our context, in terms of our experience, and in terms of our cultures.

And so this was really an eye-opener for me. It also, in a way, made me think again: how does a computer label words and try to infer any kind of emotional content from them? From what I was reading of these people – Lisa Feldman Barrett, Antonio Damasio, Robert Sapolsky – they added the whole bodily side to things. We tend to think that everything is in the brain, but emotions, or experiences, are very much a bodily experience. So at the end of the day, I basically came to the conclusion that a computer is not really going to infer any kind of emotional content from text. From that point of view, I thought, well, it’s an interesting technology and it would be nice to do something artistic with it. So how was I going to go about this? That was the next stage. I did all this research and I basically came to the conclusion that the data is there.

There’s lots of data. How do I use it? Let’s use it as a sculptor might use clay or a painter will use paint – let’s use that data to do something artistic with. At the time, I actually didn’t do anything visual. I did a sound piece to begin with, which was published back in 2020, called The Drone Machine. This particular project was pulling in data from IBM’s emotion analysis and using it to drive generative sound oscillators. I basically built a custom-made digital synthesiser that was bringing in this data and generating sound. I can share the project perhaps later on with a link. It was published and went out on the radio. It’s 30 minutes long, so I’m not going to play it here. This was the first project I did. What was interesting is that I chose sound because I found that sound was a medium that probably spoke to me more in terms of emotion. But the next stage was indeed to do something visual. And this is where Hypertype came along. And again, the idea was not at all to do a visualisation.

Again, the data for me just didn’t ‘compute’, to use that word. For me, the data was purely a material I could play with. So here, what you’re seeing on the screen are the very first initial prototypes – visual prototypes for Hypertype – in which I was just pulling in the data and putting it on the screen. There were two basic constraints I gave myself for this project. The first one was that it had to be textual, okay? That was it. It had to be based on text, so everything is a font. And secondly, the content should come – that’s what I said to myself – from the articles that I’d read, all the research I’d done about the technology. So those were the two constraints. And from there I basically started to develop. I’ll probably pass through a few slides because I’m sure we’re running out of time, but here I’m just showing you a number of iterations of the development of the project. So I brought in colour. Of course, colour was an interesting parameter to play with, because you could probably think that with emotion you would want to assign a certain colour to an emotion.

But that, again, I completely dismissed. In fact, a lot of the colour was generated randomly from various colour palettes that I wrote. Here’s a close-up of one of the pieces, and here are some more refined iterations. For those people who work with generative programs, you get a lot of results. And I think what is really, really difficult to do as an artist, when you’re working with a generative piece, is to find the right balance between something that has visual coherency across the whole series yet has enough variety. These two images are obviously very different – one of them quite minimal, the other a little bit more charged – yet you notice that they have a certain visual coherency. So as a generative artist, you create a lot of these. I usually print out a lot. I usually create websites where I just list all the images so I can see them. And then eventually, yes, there was the exhibition. For the exhibition, there were five pieces that I curated – five pieces I chose from the program. And then the program also went live for people to mint in an NFT environment. So these were the last pieces, and maybe I’ll stop there.

 

AI & Image Art CAS Talk 1 June 2023 – video & transcripts online

AI & Image Art CAS Talk 1st June 2023

The talk included Geoff Davis (host and Introduction), Luba Elliott (curator) with a history of AI Art, and the artists Anna Ridler, Mark Webster and Patrick Lichty.

Transcripts are below the video. With thanks to CAS and Sean Clark.

AI and Text talk is also online, please see the AI & Text Transcript page which has the video link, or visit the Computer Arts Society Talks page

The AI & Image Art CAS talk video:

TRANSCRIPTS 

Geoff Davis – Introduction – AI Researcher at UAL CCI London, Artist

Luba Elliott – Curator, Creative AI Researcher, Historian

Anna Ridler – Artist

Mark Webster – Artist

Patrick Lichty – Artist, Writer

AI and Image Art Talk – Geoff Davis – 1 June 2023 – Computer Arts Society CAS

AI DOOM BANDWAGON

See below for the transcript of the whole evening.

Introduction from Geoff Davis followed by the four speakers from the book,

    • Luba Elliott
    • Anna Ridler
    • Patrick Lichty
    • Mark Webster

This first part is EXTRAS to the Talk, which were not in the live talk.

To get to the actual talk, scroll or search down for Live Talk.

I edited these out to reduce time, as the slot was not that long. I mean, this is a lot to cover.

DOOM BANDWAGON

Social media has huge and unpredictable social and political effects, but regulation only started twenty years after it appeared. MySpace, the precursor to Facebook, was founded in 2003. The UK’s Online Safety Bill arrived in 2023, but is not law yet.

My AI research is in interaction bias, and I have a new paper out soon. There are many pressing problems with AI, rather than possible dystopian scenarios.

It’s obvious that there will be economic changes from more automation, but the so-called existential threats from AI are threats that we already have now, only more so: more fake news, more surveillance, robots helping dictatorships run amok with even more appalling weapons. These headline terrors make great news, unlike pragmatic and current problems such as racial and gender bias, accuracy, and so on. AI bias can have a subtle and insidious effect, without anyone noticing.

ARTIFICIAL POSTURE – AI FILM MOVIE MAKING

Already AI video generation is becoming mainstream, turning film-making from an exciting team process, where people interact, learn new skills and have amazing experiences, into a deskbound, headache-producing solo computer graphics marathon.

THE MEDIA OF AI

With this very positive response, and the fastest ever take up of the new AI art and text tools, would this media fear of AI be so intense, if there had been no Terminator films, no pandemic, and a war wasn’t raging in the middle of Europe? And if middle-class journalistic jobs weren’t threatened, as I mentioned in my last talk?

HUMAN ENFEEBLEMENT

The fear of enfeeblement, such as if computers replaced human creativity, is like fearing that mechanical devices would reduce human power or effectiveness. Now, great strength is mainly celebrated in sport rather than the workplace.

The art world has parallels with professional sport, with A-grade and B-list artists, and a huge number of amateurs who do it for fun. There are stellar art celebrities with huge amounts of money at the top and little or no money below, causing competitiveness and status anxiety. Both sport and art are group activities.

Perhaps it is the origin and separation of AI Art from the art group or art world that is causing unease, and AI art’s ambiguous position. Some call for AI art to be a new art category, with its own exhibitions, others think AI will be just another tool.

OVERPRODUCTION

With a saturation of AI images and words, humans might give up trying to become professional authors or artists, instead creating ambient works for their own personal or social amusement.

This is apparent in Amazon’s huge book self-publishing market, which has been around for many years, and has not quite destroyed traditional publishing, which still continues. More indie publishers exist now than before this change. The mainstream art generators have community forums and awards.

Serious artists are usually driven individuals who would do art anyway. So it is unlikely that AI will affect the art world too much. AI will be absorbed as another art-historical trend, or influence.

With AI being in the news, and technology prevalent, artists will experiment and produce reflective art.

Our panel of speakers will illuminate these topics.

ART ROBOTS JOKE

Ai-Da The Art Robot has praised Sasha Stiles, the author of the AI book “Technelegy”, saying “Sasha’s beautiful poetry evokes the experience of an intimate social gathering, with views on life that make me feel I’m there.” This is not actually an android talking by the way. It would have said, “I’d like to go to that party but someone has bolted my feet to the floor!”

CLASSIC AI ART

Classic AI art was covered in the recent AI History talk with innovative work from Paul Brown, Ernest Edmonds and Steve Bell amongst others. Their work was more rooted in artistic considerations of human-computer interactions and the physical characteristics of the hardware of the time. Systems art was an inspiration. This talk is online, please visit the Computer Arts Society website.

ORIGINS

During the early days, making computer art was not respected, and some saw it as reactionary and irrelevant. The 1960s and 1970s were full of intense political strife, and passionate art movements which included Fluxus, Situationism, Performance Art, Land Art, Psychedelic Art, and many other anti-establishment approaches.

Computer art emerged from military-industrial-academic research labs, complete with their unknown, baffling and expensive mainframe computers, which had been developed only twenty years previously during the Second World War to create the atomic bomb.

“Nearly every computer artist tells a similar story, a tale in which their computer art is accepted on its merits, only to be rejected once the curators discovered it was generated on a computer. Computer artists were regularly rebuked and insulted by gallery directors. Such was the stigma attached to computers that artists, such as Paul Brown, have used the expression “kiss of death” to describe the act of using computers in art.”
When The Machine Made Art, Grant D. Taylor 2014.  https://www.bloomsburycollections.com/book/when-the-machine-made-art-the-troubled-history-of-computer-art/notes

Many professional artists don’t like to be labelled ‘computer artists’, even if their art installations, video or sound design are completely dependent on computers. The tools are absorbed, as in music production, where everything is now recorded using DAWs or digital audio workstations, which now include AI tools.

WHAT HAPPENS WHEN AI DOES ALL THE JOBS

Then there is starvation when all jobs disappear. This could cause the adoption of Universal Basic Income (UBI) systems, which have been promoted for decades. Then everyone can be an artist.

(Of course this was a desired society back ages ago. Now it is feared, as no-one expects much help from the State, apart from subsistence-level hand-outs to stop riots.)

COMPUTER PERFORMANCE

In AI art and text, the ‘uncanny valley’ effect was often mentioned as a flaw of all human – AI interactions. This was because the outputs of the generators had an unreal tone perceived as spooky or uncanny. This was not actually due to the accuracy of human perception as claimed, but due to the low performance of the generators, which were still being developed.

MUSIC SOFTWARE TOOLS – DEATH BY DAW

In the music world, every few years there were new music styles, but once the digital audio workstation or DAW appeared, they blended together into today’s hybrid pop music. Genres such as Techno and Jungle or Drum and Bass predated the general use of a digital audio workstation, and used a mix of analogue and digital equipment.  Digital audio workstations have led to a homogenisation of music using preset styles and a disconnect from musical society. Sound familiar?

Critics of Digital Audio Workstations (DAWs) have raised several concerns and highlighted potential negative effects associated with their use. Here are some of the criticisms:

Over-reliance on presets: DAWs often come with a vast library of pre-made sounds, loops, and effects. Critics argue that this can lead to an over-reliance on these presets, limiting the creativity and originality of the music produced. It may discourage musicians from exploring unique sounds and experimenting with different techniques.

Homogenization of music: With the widespread availability of DAWs, it becomes easier for anyone to create music. Critics argue that this has led to a saturation of generic and formulaic music. The ease of use and access to pre-made elements can result in a lack of innovation and artistic diversity.

Loss of human touch: DAWs offer precise editing tools and the ability to fix imperfections in recordings. However, critics argue that this level of control can lead to an overemphasis on perfection, resulting in sterile and overly polished music. The natural variations and imperfections that can add character to a performance may be eliminated, leading to a loss of the human touch.

Disconnect from the traditional recording process: Critics contend that the ease and convenience of DAWs have contributed to a detachment from the traditional process of recording music. The speed and efficiency of digital production can undermine the organic and collaborative nature of music creation, potentially affecting the dynamics between musicians and the overall quality of the music.

Potential for overproduction: DAWs offer an extensive range of plugins, effects, and editing capabilities, which can lead to excessive tinkering during the production process. Critics argue that this overproduction can result in music that sounds overworked, cluttered, or lacking authenticity. It may prioritize technical perfection over the emotional impact of the music.

High learning curve and complexity: While DAWs provide powerful tools, critics argue that their complexity can be overwhelming for newcomers. Learning to use these software programs effectively requires time, patience, and technical skills. This learning curve can discourage some individuals from fully harnessing the potential of DAWs or even deter them from pursuing music production altogether.

It’s important to note that these criticisms represent perspectives from certain individuals or communities, and opinions may vary. DAWs also have numerous advantages and have revolutionized music production by making it more accessible and democratized.

Music technology provides some insight: the effect of computer workstations on music production was more homogenised, preset-driven and socially disconnected music, while the electronic genres with dedicated audiences, like Techno and Drum and Bass, predated the DAW.

Since art and music are socially situated activities, the simplistic use of AI art generators sits outside the normal way art is made and shared.

So it will be interesting to see what happens with the use of AI art generators, since art is not only about the image.

LABELS

In 2018 the French collective Obvious hit the headlines with “Edmond de Belamy”, an AI print on canvas, which sold at Christie’s for a very high price ($432,500). They succeeded by positioning AI art firmly within classic art history, as a new AI twist on an old style, and so made it marketable. The label became the art.

LIVE AI & ART IMAGE TALK:

WHAT WAS PRESENTED

1st June 2023 (transcript)

AI SAFETY STATEMENT

“Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war.”

It was on the newsstands yesterday; the headline was ‘we’re all going to die’, basically. Now, the ethics of AI is a large and complex area. The speakers might have something to say about it through their artwork, which people do these days, especially the newer artists working in a more social way, or they might not, of course, since it’s not an obligation for artists to have ethical attitudes unless they’re applying for grants. There’s always that idea that an artist is supposed to be free of these constraints. Now, this statement was from the Center for AI Safety, who got a bunch of AI experts and presidents of various things together to make it. They didn’t put much detail in because they wanted it to be simple and to the point. This is a really good idea, obviously, but these fears have been around for a while. This book, ‘Superintelligence’ by Nick Bostrom, 2014, is nearly ten years old and is a good one to read if you want to get up to date with what’s being talked about now. But nobody took much notice of it until now.

I mean, there were arguments about bias and so on, so there has been a debate, but it was more inside the industry than outside; now it has all gone mainstream. There’s no structure for any sort of oversight organisation at the moment, and governments aren’t very good at this sort of thing. Only now is some regulation coming in for social media, and MySpace started in 2003, so that’s 20 years of social media with hardly any regulation at all. The Online Safety Bill, which is the British government’s attempt, is still going through Parliament, so don’t expect anything very soon on that front. Now then, a quick one.

ARTIFICIAL POSTURE

Perhaps the biggest immediate threat from AI will be postural, as it promotes even more desk-bound computer or smartphone work, leading to an increase in back and neck aches, eye strain, headaches and repetitive strain injury. David Byrne, of the band Talking Heads, who writes on music and culture, has commented that all recent technologies, from recorded music (especially in his case, obviously) to AI, eliminate humans and make people more isolated. This is a good point.

So the health aspect, physical and mental, is quite important with the growth of these isolating technologies, rather than the Terminator myth we have, a constructed story saying that AI is going to turn into killer robots. I know that isn’t all the Center for AI Safety are talking about, but it’s definitely an aspect.

THE JOY OF AI

Most of the responses to it have been very positive. I mean, I’m an AI researcher, and I’ve been researching how creatives use AI; I’ve done one paper and have a new one underway. The reaction we get from creatives using text or image generators for the first time is joy. In the words of their own comments: ‘Joy! It’s amazing.’ And when you run a sentiment analysis over all the comments, joy comes up as the highest-scoring emotion. We had these results from the studies: 82 writers using AI, most of them for the first time, a couple of years ago, and they were amazed. They couldn’t believe how much fun it was.
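As a rough illustration of that kind of analysis, here is a minimal Python sketch of lexicon-based emotion scoring over a set of comments. The tiny lexicon and the sample comments are invented for illustration; an actual study would use an established emotion lexicon or a trained classifier.

from collections import Counter

# Minimal emotion lexicon -- invented for illustration only. A real study
# would use an established resource such as the NRC emotion lexicon.
EMOTION_LEXICON = {
    "joy": {"joy", "amazing", "fun", "wonderful", "love", "delighted"},
    "fear": {"scary", "worried", "afraid", "threatening"},
    "surprise": {"unbelievable", "unexpected", "wow"},
}

def score_emotions(comments):
    """Count how often each emotion's vocabulary appears across the comments."""
    counts = Counter({emotion: 0 for emotion in EMOTION_LEXICON})
    for comment in comments:
        words = {w.strip(".,!?'\"").lower() for w in comment.split()}
        for emotion, vocab in EMOTION_LEXICON.items():
            counts[emotion] += len(words & vocab)
    return counts

# Hypothetical comments standing in for the survey responses.
comments = [
    "Joy! It's amazing.",
    "So much fun, I love it.",
    "A bit scary how good it is, but wonderful.",
]

print(score_emotions(comments).most_common())
# -> joy comes out as the highest-scoring emotion

Counting shared vocabulary like this is the simplest form of sentiment analysis; the point is only to show how ‘joy’ can emerge as the top-scoring term from free-text comments.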

And the same applies to the image generators that are around now. People just really like using them, which is why the take-up has been so rapid. ChatGPT is, I think, the fastest app ever to reach 100 million users, much faster than Facebook and so on, because it spread by word of mouth; it wasn’t promoted or advertised when it came out. Generative and AI art: now I’m moving on to the art aspects rather than the ethics or professional use of these systems and how people feel about them. Generative art is not the same as AI art. They often get mixed up. People in the industry and the art business know about this, but it’s worth mentioning. Generative art uses computer programs to produce repetitive variations, usually to make abstract images or sequences. You have people like Zancan now who make more artistic, figurative images, but they’re still created abstractly from algorithms, whereas AI programs attempt to formulate output that mimics human responses, as in games, robot painting systems (I get onto those later) and the modern image and text generators.

I mean, I did all this in Micro Arts Group years ago. We had a lot of art generators, but also one program that was a simple kind of expert system using language to produce stories, so I covered both aspects, and that was a while ago. Another thing: Tyler Hobbs, a very famous and prolific generative artist now moving into galleries, recently had a show at Unit London. He doesn’t do AI; he produces generative art. And most of the AI artists, in fact, produce figurative art. Organisations like Botto, which is governed democratically by its community, produce figurative art. So even though they’re AI and related to these sorts of systems, they still produce what looks like traditional art.
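To make that distinction concrete, the sketch below (a minimal example of my own, not code from the talk) shows generative art in the sense defined above: a short Python program producing endless repetitive variations from one algorithm and a random seed, with no learned model anywhere.

import random

def generate_variation(seed, width=32, height=8, palette=" .:*#"):
    """Render one ASCII variation from a seed: a random walk over a
    character palette. Same rule every time, endlessly varied output."""
    rng = random.Random(seed)
    level = rng.randrange(len(palette))
    rows = []
    for _ in range(height):
        row = []
        for _ in range(width):
            # Drift up or down the palette, clamped to its range.
            level = max(0, min(len(palette) - 1, level + rng.choice((-1, 0, 1))))
            row.append(palette[level])
        rows.append("".join(row))
    return "\n".join(rows)

# Each seed yields a different but stylistically related variation.
for seed in range(3):
    print(f"--- variation {seed} ---")
    print(generate_variation(seed))

Every output is different but recognisably from the same family, which is exactly the ‘repetitive variations’ character of generative art; an AI generator, by contrast, is trying to mimic human-made imagery.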

AI ART ROBOTS – OLD and NEW

Okay, now the next slide. This is a fun one. We’ve got Harold Cohen, who gets a mention again; this is in the 70s. He had an AI system (AARON), an expert system he’d coded to work like himself, and it would draw using a turtle, a kind of small robot that drew on a giant piece of paper on the floor. It produced outlines, and then he’d go and fill them in by hand; you can see both the outlines and the filled-in areas in this picture.

And I think later versions had colour as well, but the device drew lines. The other one, on the right, is Ai-Da, the art robot, which has been around since 2019. It’s got a chrome hand with a brush-painting mechanism, which you can see in this picture. This art robot is ignored by the fine art and academic communities, but it’s a well-known popular art figure, so I thought I’d mention it, because it gets in the press; it’s like a publicly known AI artist. It’s presented as an android, but it’s not really; it’s a drawing machine that’s dressed up and does a bit of speaking, though I’m not convinced by that. Last month the Design Museum in London, which is a really big institutional place, put it on for a three-day show in a really big auditorium. Huge crowds arrived to watch the robot Ai-Da at work; it produced, I think, only one painting a day, as it’s quite a slow process with that big, funny arm. Ai-Da also gave a talk at the House of Lords in Britain on AI and art, which is kind of interesting, and she was arrested in Egypt on a tour there, which I thought was hilarious; apparently they thought she was some kind of spying device, so they stuck her in a cell for a while. So she gets to places that AI art doesn’t normally reach, including the public consciousness. I think she’s worth a mention, even though she’s a bit weird in a lot of ways, and there’s a big debate about whether you should have these anthropomorphic robots, especially ones pretending to be women and pretending to be attractive artists. There’s something a bit funny about it.

AI ART TOOLS

But anyway, moving swiftly on, we’ve got AI art tools now. I use Photoshop a lot; I’ve got the Adobe suite, though obviously there are alternatives to Adobe now. There are all sorts of new plugins in Photoshop, so this is going even more mainstream, into design and general production: generative tools and generative editing tools are now in these paint programs. We were talking about Quantel [at a CAS talk in Leicester]; that was a fairly simple paint program that enabled a huge amount of new work when it came out, photo editing and so on, and things have moved on quite a bit since. The other thing to mention is that these art generators are more or less free for most people, or you can pay a few pounds and get much more use out of them. I remember when music and art software cost hundreds of pounds for a single program. So another reason for the rapid take-up is that they’re so cheap or free that everybody will have a play with them. A friend of mine was saying that at their school they ran AI art, using MidJourney I think, as a competition for the children: they’d write prompts, the teacher would produce the images from their prompts, and then they’d print them out. It’s all good fun, like a workshop.

Now, I’ll quickly go into these mainstream image generators, because this is what has made it such a publicly known area. Image generation development accelerated rapidly last year. The field has gone from the half-working systems we’d been looking at for a few years to something close to human-level production, and the tools are very cheap or free.

CHOICE ANXIETY

AI computer art from a year or two ago was experimental: all those blobs and psychedelic effects we’d been looking at came from transitional systems. Between the early days and now there was a period of a few years in which a lot of quite interesting work was done, because artists are experimental; they prefer it like that anyway.

Now, these generators enable people with artistic urges, or with requirements for presentation graphics, say, but no training, to produce images of whatever takes their fancy. They’re very useful for presentations and illustrations, as I’ve used them in these slides. The difficulty for an artist is the tremendous overproduction of images. This is a feature of AI: it can make one thing, but it can also make a million things, so how do you decide? That creates choice anxiety and befuddlement; it’s just too much. The way to avoid choice anxiety is not to be a perfectionist and to live with uncertainty, which is difficult for an artist if you’re working on something and are then presented with a huge number of possibilities, maybe the opposite of what you want. You can see this in what I’ve got here: I did something called Micro Arts Group years ago.

MAINSTREAM IMAGE GENERATORS

Now, if you type Micro Arts into Google, you get lots of people making tiny things, sculptures inside the eye of a needle; that type of activity should definitely give you bad eyes. But I ran a generation on ‘Micro Arts’ and it came up with a mixture of monitors with things going on inside them, so you’ve got group activities, smallness and computer monitors. That’s quite a good one. The one in the middle is me having a chat with Tyler Hobbs; I did have a chat with him but didn’t take a selfie, so I just made an image for fun. And there it is, me looking at myself, which is very odd, what these generators come up with sometimes. The one on the right is some abstract stuff I’m working on now, more like repetitive drawn generative graphics. It’s a really interesting area, using the generators to create what you would previously have made with a plotter or a drawing device; now they just do it for you. Of course, they’re only making the output, not the process. So they’re lots of fun, really.

OUTSIDER or NAIVE ART

They do open up the possibility of more diverse art production. There’s this thing, outsider or naive art, which is obviously not in the mainstream gallery system, but there’s an awful lot of it around: an awful lot of amateur artists who would like to use these kinds of tools and maybe explore aspects of what they’re working on. It also creates a community, and people communicate with each other, so there’s a big online scene now for this type of work. This is the origin of the separation of AI art from more traditional artists and the art world; it’s kind of a separate area. The speakers later will talk about this a lot, but there is a separation of this mainstream image-generator work from the traditional gallery system, which puts it in a slightly ambiguous position. Some call for AI art to be a separate category with separate exhibitions; others think it will simply become integrated, in the way that it already has with tools in Photoshop and so on. This is a whole area we’ll be discussing through the artists and in the discussion at the end. It’s all very new, and nobody really knows how it’s going to work out.

AI VIDEO

Also, video can now be generated. Having a mobile phone opened up a whole area of phone filming, making films using only a phone, whereas a few years ago (and I’ve done a bit of filmmaking, ages ago) that needed a huge team and cost a fortune in expensive equipment. Now you can win the Turner Prize with an iPhone film, and you can produce really interesting stuff that people want to see on a phone. AI video will add features to that, and maybe you’ll get animations coming out of it and so on. All this is starting to happen now, but it’s still fairly early. Okay.

FOUND ART

The other thing I was going to mention, which we don’t really have time for, is found art. A generator works within a huge data space of images of all sorts (photographic, art, medical, anything you can think of) and produces from that data set, so you could use it to create a kind of found art.

Rather than knowing what you want and writing a descriptive prompt, you could go off investigating the data space.
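As a sketch of what investigating the data space might look like in practice, here is a minimal example (my illustration, not something shown in the talk) that assumes the open-source Hugging Face diffusers library, PyTorch, a GPU and a Stable Diffusion checkpoint. Instead of describing an image, it wanders between two random points in the model’s latent space with an empty prompt and keeps whatever turns up.

# Assumes diffusers, PyTorch, a CUDA GPU and the Stable Diffusion v1.5
# checkpoint -- all assumptions on my part, not named in the talk.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Latent shape for the default 512x512 output (the VAE downsamples by 8).
shape = (1, pipe.unet.config.in_channels, 64, 64)
a = torch.randn(shape, dtype=torch.float16, device="cuda")
b = torch.randn(shape, dtype=torch.float16, device="cuda")

# Walk from one random latent to another with an empty prompt, keeping
# whatever "found" imagery appears along the way.
for i, t in enumerate(torch.linspace(0.0, 1.0, steps=5).tolist()):
    latents = (1 - t) * a + t * b
    image = pipe(prompt="", latents=latents).images[0]
    image.save(f"found_{i}.png")

Spherical interpolation (slerp) is usually preferred to a straight line when blending Gaussian latents, but the straight line is enough to show the idea: you browse the space rather than commission an image.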

AI News summary

Now that my new book AI Creative Writing Anthology (Goodreads link) is out, I will add any interesting news in blog posts.

AI Creative Writing Anthology, Geoff Davis, 2023

BBC News – Friend or foe: Can computer coders trust ChatGPT?
https://www.bbc.co.uk/news/business-65086798

OpenAI may have to halt ChatGPT releases following FTC complaint

A nonprofit claims OpenAI is breaking the law with a ‘biased, deceptive’ AI model.

https://www.engadget.com/openai-may-have-to-halt-chatgpt-releases-following-ftc-complaint-172824646.html


AI writing apps and software – top 10 to top 50

(Updated frequently.) A long list of writing software that uses AI, quoting each product’s tagline (more than 50 now).

Geoff Davis, AI writing and art: “It’s not that bad… quite useful in fact.”

AI text processing writing software, also including Notes Story Board (zooming), Granthika, Scrivener.

Includes NLP and text generation techniques.

October 2021, updated occasionally – last updated Nov 2022.

After The Deadline

https://www.afterthedeadline.com/

“We use artificial intelligence and natural language processing technology to find your writing errors and offer smart suggestions. Our technology is available under the GNU General Public License. ”

AI Writer

http://ai-writer.com/

“Generate unique text with the ai article writer”

Anyword

“Generate effective copy for ads, emails, landing pages, and content. ”

Articoolo

http://articoolo.com/

“Create unique textual content in a flash”

Autocrit

“The best self-editing platform available for a writer. ”

AX Semantics from Gartner

“Increase Your Online Sales With Better Automated Content Writing. Our easy-to-use Natural Language Generation software helps you and your team

Conversion AI

https://www.conversion.ai/

“Your AI copywriting assistant. Now Jarvis can help you write blog articles, social media posts, sales letters, and even books. ”

Copyshark

https://www.copyshark.ai/

“AI powered software that generates ad copy, product descriptions, sales copy, blog paragraphs, video scripts & more.”

Copysmith

https://app.copysmith.ai/

“Supercharge Your Content Brainstorming with AI. ”

Broca

https://www.usebroca.com/

“Create content for every stage of your marketing funnel. Start your first campaign free. Broca generates content for ads, blogs, email, social media, and more using AI. ”

Essaybot

https://www.essaybot.com/

“EssayBot is your personal AI writing tool. With your essay title, EssayBot suggests most relevant contents. It paraphrases for you to erase plagiarism concerns”

Explain Paper

“Upload a paper, highlight confusing text, get an explanation. A better way to read academic papers.”

https://www.explainpaper.com/

FloWrite

https://www.flowrite.com/

“Flowrite turns words into ready-to-send emails, messages, and posts in your personal style”

Galactica (from Meta – now offline)

https://galactica.org/
“Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and the inconsequential.”

This was taken offline after three days because it produced too much false information. That may be tolerable in ordinary language generation, where errors are easily spotted and edited, but not in science, where readers don’t know the topics and can’t tell randomly generated science waffle from actual scientific results.

Gavagai Explorer

“Optimize customer perception, boost operational excellence,
manage brand reputation, and detect potential crisis instantly. ”

GenerateAI

https://www.scalenut.com/generate

“With fast, easy, and effective content generation, artificial intelligence is here to take away writing blues”

GoCopy

https://gocopy.io/

“Make writing easier. Team up with our AI-powered writing assistant”

Grammarly Business

https://www.grammarly.com/

“Professional Communication For Your Team

With Grammarly Business, every member of your team can compose credible, mistake-free writing that makes your business look good.”

Granthika (not AI)

https://granthika.co/

“The writing super-app built by writers.”

Headlime

https://headlime.com/

“Writing copy is time-consuming and difficult. Headlime’s artificial intelligence can take your thoughts and turn them into words, saving you tons of time so you can focus on what matters: your business”

Hemingway App 

https://hemingwayapp.com/

“Makes your writing bold and clear.”

Hypotenuse AI

https://www.hypotenuse.ai/

“AI Generated Product Descriptions. Automatically generate copywriting for your e-commerce website in seconds. ”

iA Writer

See Merlot.

“A focused environment where you can write freely. Now with lasers.”

(iA is Information Architecture.)

Ink For All

“Explore over 40 AI writing tools for short form content, ads, email, product, startups and more. ”

https://inkforall.com/writing-tools

Jarvis AI

“Artificial intelligence makes it fast & easy to create content for your blog, social media, website, and more!”

Jarvis has been renamed Jasper because Jarvis was the name of Tony Stark’s AI assistant in the Marvel film Iron Man, and Marvel sent them a cease-and-desist. So hello, Jasper…

Jasper AI

“Artificial intelligence makes it fast & easy to create content for your blog, social media, website, and more!”

Lightkey

https://www.lightkey.io/

“Write Smarter, with Confidence. Take your typing to the next level using Lightkey’s AI-powered text predictions in applications you use every day.”

manuscript.ai

“The World’s fastest editing tool became 10x faster after our latest update.”

Merlot

From Linus Lee. See iA Writer.

Merlot is a web-based writing app that supports Markdown; Lee built it to replace iA Writer.

https://merlot.vercel.app/

Muse Creative Content Assistant (Muse CCA)

https://www.musecca.com/Welcome

“Create More. Work Less. 3 Easy Steps! ”

Neuroflash

“Your AI copywriting tool for more conversions with less work.”

NLPCloud

“Text understanding/generation (NLP), ready for production, at a fair price.
Fine-tune and deploy your own AI models. No DevOps required.” (Also has text to image using Stable Diffusion.)

NovelAI

“Driven by AI, construct unique stories, thrilling tales, seductive romances, or just fool around. Anything goes!”

Peppertype AI

https://www.peppertype.ai/

“Your Virtual Content Assistant. Generate better content copies in seconds with the power of Artificial Intelligence”

ProWritingAid (not AI)

“For the smarter writer.”

https://prowritingaid.com/

Sapling AI

https://sapling.ai/

“AI writing assistant for customer-facing teams”

Scrivener (not AI)

https://www.literatureandlatte.com/scrivener/

“Scrivener is the go-to app for writers of all kinds, used every day by best-selling novelists, screenwriters and non-fiction writers”

Story Software (includes Notes Story Board (not AI), Story Live AI (GPT-J), Story Lite) – from Geoff Davis (this blog author)

STORY SOFTWARE CREATIVE APPS

“The first and best 5 star story board text & images zooming notes app”

Story Live

“AI text generator and editor, with GPT-J Text Synth by Fabrice Bellard.”

https://storylive.com

Sudowrite

“Bust writer’s block and be more creative with our magical writing AI.”

https://www.sudowrite.com/

TextCortex

https://textcortex.com/

“Text Cortex uses its advanced AI to generate Product Descriptions, App Reviews, App Descriptions and many other marketing texts.”

Verse By Verse (poetry)

“Google’s New AI Helps You Write Poetry Like Poe”

https://sites.research.google/versebyverse/

Virtual Ghost Writer

“Writer’s block? Never stare at a blank screen again!”

https://virtualghostwriter.com/

Word AI

https://wordai.com/

“Automatically create human quality content with WordAi. WordAi uses artificial intelligence to understand text and is able to automatically rewrite your article with the same readability as a human writer”

Writesonic

https://writesonic.com/

“With Writesonic’s AI-powered writing tools, you can generate high-performing Ads, Blogs, Landing Pages, Product Descriptions, Ideas and more in seconds.”

Wordtune

https://www.wordtune.com/

“Your thoughts in words. Say exactly what you mean through clear, compelling and authentic writing.”

Writer (Qordoba)

“AI writing, content intelligence, and AI writing assistants—these are the waystations for what will soon become just simply writing. Writing with the full breadth and depth of your objectives, audience, messaging, and brand at your fingertips.”

WritingAssistant

https://www.writing-assistant.com/

“The most powerful writing improvement software in the world. Powered by advanced artificial intelligence (AI) technology, WritingAssistant can assess and enhance your writing”