Patrick Lichty Artist & Writer – Studio Visits Posthuman Atelier – CAS AI Image Art talk 2023 transcript

PATRICK LICHTY – Conceptual Artist, Writer

Discussion of the project “Studio Visits: In the Posthuman Atelier” before the Computer Arts Society (Britain).

All material is copyright Patrick Lichty, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Patrick Lichty’s website here.

Patrick Lichty


I am Patrick Lichty, an artist, curator, cultural theorist, and Assistant Professor of Creative Digital Media at Winona State University in the States.  I will talk about my project, a curatorial meta-narrative called “Studio Visits: In the Post-Human Atelier.”  Much of my AI work has yet to be widely shown in the West, as until two years ago I had spent six years in the United Arab Emirates, primarily at Zayed University, the federal university in Abu Dhabi.  I have been working in the new media field for about 30 years, dealing with notions of how media shape our reality.  You can see some of my earlier work in this slide: I was part of the activist group RTMark, which became the Yes Men, and the Second Life performance art group Second Front, along with some of my tapestry and algorism work.

This slide shows my 2015 solo show in New York, “Random Internet Cats,” which comprises generative plotter cat drawings.  The following slide shows some of the asemic calligraphy I fed through GAN machine learning.  I worked with Playform.io’s machine learning system to create a personal Rorschach by looking for points of commonality in my calligraphy.  I called that project Personal Taxonomies; other works, like my Still Lives, were generated through StyleGAN and Playform.io.  So I’ve been doing AI for about 7-8 years and new media art for almost three decades.  Let’s fast-forward to now.  In the middle of last year, I decided to move away from the PyTorch GAN machine learning models I had used with my calligraphy work and my paintings at Playform.io.  Switching to VQGAN and CLIP-based diffusion engines, I worked with NightCafe for a while.  Then I found MidJourneyAI.  At first, I was only partially satisfied with the platform, as I was on the MidJourneyAI Discord server and saw people working with basic ideas.  I decided to focus on two concepts.  First, I chose to think of what I was doing as concrete prose written in code.  Second, I adopted contestational aesthetics, as my prompts would contain ideas not being used on the MidJourney Discord.

I wanted concepts for my prompts that were less representational than the usual visuals of a CLIP-based AI.  I did two things.  First, I ignored everything typed on the MidJourney Discord, which was almost an aesthetics of negation.  Second, I considered the latent space of the LAION-5B dataset that MidJourneyAI was using as an abstract space, and decided to approach that conceptual space through abstract architecture.  I began querying it with images like Kurt Schwitters’ Merzbau, just as a beginning, as well as Joseph Cornell.  This produced about twelve series called “The Architectures of the Latent Space,” illustrated here.  They are unusual because they still refer to Schwitters but are much more sculptural and flatter.  This was the beginning of my work in that area; then I started finding what I felt were narratives of absence.

I work with multiple notions of abstraction, as I want to see what is transcendent in AI realism.  For example, I started playing with real objects in a photography studio.  This image is a simulated photo of a four-dimensional cube, a tesseract, which isn’t supposed to be representational.  Still, it was exciting that it emerged and illuminated the space.  This told me I was on a path where I was starting to confuse the AI’s translator, and that it was beginning to give results in between its sets of parameters, which is interesting.  One body of work where my attempts at translator confusion are evident is The Voids (Lacunae): basically brutalism and empty billboards.  It was inspired by a post that Joseph DeLappe, in Scotland, made on Facebook of a blank billboard.  One of the things I noticed is that these systems try to represent something.  They try to fill space.  If there’s a blank space, the system tries to put something in it.

MidJourneyAI tries to fill visual space with signifiers.  One of my challenges was forcing the AI engine to keep that space open.  This resulted in experiments with empty art studios and blank billboards.  Artists were absent or had no physical form, which was the conceptual trigger.  These spaces have multiple conditions and aesthetics, with a lot of variation.  The question lingered: “How do I put these images together?”  There are numerous ways to deal with them, so I made about 150 or 200 in a series and then created a contact book.  This gets away from the idea of choice in AI art, the anxiety of selection, and so on.  I have a book that’s ready for publication, so someone can see my entire process and the whole set of images.  What I found very interesting is that I wound up in a bit of reverie around the fantasy of these artists whose studios I had been looking into: they weren’t in, or they didn’t exist in physical form.

Having worked in criticism, curation, and theory, as well as being an artist, I decided to take these concepts and build a meta-structural scaffold for a curatorial narrative based on a body of 50 artists.  When I visited their fictional studios, thinking about theoretical constructs such as Baudrillard’s and Benjamin’s ideas of absence and aura, I created a conceptual framework: a catalog for a general audience that preceded the exhibition.  There’s precedent for this, such as Duchamp and the Boîte-en-valise.  I’ve done work like this before, constructing shows in informal spaces like an iPod.  Here is a work from 2009, the iPod en Valise, as the iPod is a ‘box’ (boîte) for media work.  And then I thought, why can’t I do the same with a catalog?  Why can’t I use the formal constraint of the catalog to discuss the sociology of AI and some of its social anxieties, putting this into a robust conceptual framework beyond its traditional rules?  Another restriction that I have frequently encountered as a new media curator and artist is time.

The moment when a technological art form emerges is often ephemeral.  Curating shows of handheld art, screen savers, and the like showed me that such work might have a three-to-six-month period in which it is fresh.  Studio Visits is tied formally to the MidJourneyAI 4 engine, because MidJourneyAI 5 has a different aesthetic.  A key concern is where the work situates itself in society and how it’s developing in a formal sense.  And then, is there time to deploy an exhibition before the idea goes cold?  Most institutions, unless you’re dealing with a festival, plan about a year out, possibly two.  And of course, every essay I write now carries a disclaimer saying that it was written at such a date, such a year, such a month, and may be obsolete or dated by the time you read it; in the case of something developing as quickly as AI, you have to be aware of the temporal nature of the form itself.

So I decided to deploy the catalog first, as the museum show would emerge from it: create the catalog, then exhibit.  As I said before, I’ve been making these contact books, which are reverse-engineered catalogs; I’m almost up to 15 editions.  I’ve only mentioned about six or seven on my Instagram so far.  In general, I’m looking at curation as an artistic scaffold.  In this project, the curatorial frame structure creates a narrative around meta-structural concepts rather than merely presenting the images themselves.  It’s a narrative dealing with society’s anxieties about AI and culture.  What happens if we finally eliminate those annoying artists and replace them with AI, as a provocation?  So here’s the structure of the piece.  The overall work is the catalog.  There is a curatorial essay, and for each artist a name, a statement, and the studio “photograph.”  The names derive from the names of colleagues.  So I’m reimagining my community through a synthetic lens, via the studio image, as we can see through the narrative that I’ve presented.  I started generating these empty spaces and let myself run through a few hundred.

I chose the 50 most potent synthetic studio images.  A description emerges using MidJourneyAI’s /describe function.  The resulting /describe prompt, with a brief discussion of the artist and what they do, is fed to GPT-3, which generates a statement.  So here’s the form of an artist’s layout.  You have the name.  The following layout is the first one I did, for Artificium 334-J452, inspired by George Lucas’ THX 1138.  And the layout came from this initial image.  I took these with a description from MidJourneyAI and put it into GPT-3.  The artist’s statement is as banal as any graduate school one and reads, “As an artist, my work expresses my innermost thoughts and emotions.  I seek to capture the energy and chaos of the world around me, using bold brushstrokes and vibrant colors to convey my message.”  So these were 50 two-page spreads.  The book is 112 pages and fits very much with a catalog format.  The name, as said before, was based on the conceptual frame of the artist I was thinking of, on the image generated, on some of the concerns I saw in the mass media, and loosely on the names of colleagues, family, et cetera.
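The pipeline just described, studio image to /describe caption to GPT-3 statement, can be sketched in a few lines.  This is a hypothetical reconstruction, not Lichty’s actual code: the helper names and prompt wording are assumptions, and the GPT-3 call is left as a stub rather than guessing at a real API signature.

```python
# Sketch of the image -> /describe -> GPT-3 statement pipeline.
# Hypothetical reconstruction; function names and wording are assumptions.

def compose_statement_prompt(describe_caption: str, artist_name: str,
                             concerns: list[str]) -> str:
    """Build the text prompt that would be sent to GPT-3."""
    return (
        f"Write a short first-person artist's statement for {artist_name}. "
        f"Their studio looks like this: {describe_caption}. "
        f"Their work engages with: {', '.join(concerns)}."
    )

def generate_statement(prompt: str) -> str:
    """Stub for the GPT-3 call; a real version would call a completions API."""
    return f"[GPT-3 output for a prompt of {len(prompt)} characters]"

# One of the 50 spreads: caption from /describe, plus the artist framing.
caption = "an empty brutalist art studio, dramatic light, no people"
prompt = compose_statement_prompt(caption, "Artificium 334-J452",
                                  ["absence", "aura"])
statement = generate_statement(prompt)
```

Each of the 50 two-page spreads would then pair the studio “photograph” with the statement generated from its own caption.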

In many ways, I was taking a fantasy and re-envisioning my community through a synthetic lens.  The images came first; I then developed the imagined artists across diversity, identity, species, and planet.  This reimagining is interesting because I wasn’t necessarily thinking of my own ethnographic sphere.  I worked in Arabia, West Asia, and Central Asia and dealt with people from Africa and the subcontinent.  Many of the people from my experience figure into this global latent space of imagined artists, not just those of Europe or, more specifically, North America.  I then expanded this to species and planet, as we’ll see in a moment.  So here we have an alien sound artist.  The computer in this studio is almost cyberpunk, and it is a very otherworldly art studio image.  I can’t remember which artist this is, but it has a New England-style look.  And the third one is a Persian painter obsessed with color, Zafran Pavlavi, based on my partner, Negin Ehtesabian, who is currently coming to America from Tehran.

This slide is a rough outline of the structure of the catalog.  I take the name and the framework of the artist’s practice, and you can see here that this information went into GPT-3, yielding statements almost indicative of the usual graduate school art statements.  Once again, these elements reflect some of the anxieties in the popular media.  I’m using this as a dull mirror from a visual sociology standpoint, based on scholars like Becker.  This is a draft, but it is more a pro forma approach to the conceptual aspect.  The project catalog is available on Blurb.  It’s about $100 and still needs a few small revisions.  From a materialist perspective, it basically inverts many practices regarding the usual modalities of curating and executing an exhibition.  I’m also thinking about the standard mechanisms of artistic presentation within an institutional path.  So not only is this dealing with AI, but it’s using AI to talk about the sociological space, the institutional space these works inhabit, and how these works propagate.

Studio Visits deals with institutions, capitalism, and digital production.  The issues this project engages show how AI exacerbates social anxieties about technology.  The deluge of images problematizes any cohesive narrative.  Using this meta-narrative, through this conceptual frame, I can focus on some of the social and cultural questions about AI and the future of society within a fairly neat package.  Design and curatorial fictions provide solutions for cultural spaces.  Cultural institutions typically struggle to keep up with the speed of technology.  Bespoke artifacts, problematic as they are, can remain in place long enough for the institution to adopt them.  In other words, if you get something together and get it out there, you can have that in place, take it to the institution, and hopefully they can explicate the work.

I’ve been asked about a sequel.  Many people ask me who these artists are.  What does their work look like?  You can see excerpts of their work in the studios.  But people were asking me to take the conceit one step further, so I’m starting to work on that idea and show the portraits of the artists and their work.  This portrait is of the artist Zafran, who I talked about earlier.  These portraits both continue the fiction and humanize the story, which also problematizes it.  So this is the project and its ongoing development in a nutshell.  I invite you to go to my Instagram at @Patlichty_art, and thank you for your time.

In closing, this is another portrait of the artist Vedran VUC1C.  And once again, this is an entirely constructed fantasy.  But once again, as Picasso said, these are lies that reveal the truth about ourselves.

 

Luba Elliott curator – AI Art History 2015-2023 – CAS AI Image Art talk 2023 transcript

LUBA ELLIOTT – AI Creative Researcher, Curator and Historian

All material is copyright Luba Elliott, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Luba Elliott’s Creative AI Research website.

Luba Elliott

I’m looking forward to giving a quick overview of AI art from 2015 to the present day. These are the years I’ve been active in the field and part of the recent generation that includes Anna Ridler, Jake Elwes, and Mario Klingemann.

To start off, I’ll mention a couple of projects to explain the perspective I’m coming from. I began by running a meetup in London that connected the technical research community and artists working with these tools. That led to a number of projects: curating a media art festival, organizing exhibitions at business conferences, and launching the NeurIPS Creativity Workshop, which is probably one of the projects I’m best known for. This was a workshop on creative AI at a major academic AI conference. Alongside that, I also curated an art gallery that still exists online from 2017 to 2020. The workshop now continues, currently run by Tom White, an artist from New Zealand.

If you’re interested, you can still submit work to it. I also curate the Art and AI festival in Leicester, where we exhibit work in public spaces around the city. I’ve done some work with NFTs, including exhibitions at Feral File and at Unit Gallery.

Now I’ll start the presentation, and I usually begin with Deep Dream. This was a technology that came out of Google in 2015. You’d input an image, and the algorithm would enhance certain features, producing vivid colors and strange shapes. It was one of the first developments that excited the mainstream about AI. It’s still one of my favorite projects because it’s quite creative and aesthetically interesting. Few artists continued working with it. Daniel Ambrosi is one who has, often creating landscape artworks that retain the Deep Dream aesthetic while preserving the subject matter. That’s important because many artists let the aesthetic overshadow the image itself. Ambrosi has also experimented with cubist influences to refresh his approach.

Then came style transfer, where you could take an image and apply the style of Monet or Van Gogh. This excited many AI researchers and software engineers, who saw it as a representation of art similar to what you’d find in museums. In contrast, many contemporary artists and art historians found it unappealing because today’s artists aim to create something new, whether aesthetically or conceptually. Interesting work in this area often requires broadening the definition of style beyond just artistic style. Gene Kogan, a key figure in the field, created variations of the Mona Lisa using styles like Google Maps, calligraphy, and astronomy.

Next came GANs, which gained popularity around 2014 and evolved rapidly over the next few years. By 2018 or 2019, they were producing photorealistic images. Some of my favorite works come from the earlier GAN period. Mario Klingemann created striking images exploring the human form that drew comparisons to Francis Bacon. These early works often contained visual glitches—misplaced facial features, oddly angled limbs—which became integral to the artistic expression. As GANs improved, artists had to move beyond relying on those glitches.

Scott Eaton is one example. He deeply studies human anatomy and uses GANs to combine realistic textures with slightly distorted forms familiar to those tracking GAN development. Mario Klingemann also continued experimenting. At a show I curated for Unit Gallery, we displayed two of his works: one from his 2018 “Neural Glitch” project, and another made using Stable Diffusion based on the earlier image. The newer image is more realistic, illustrating how much the technology has advanced.

Ivona Tau explores machine learning itself. One of her projects involved machine forgetting, where image quality deteriorated over time, challenging the usual goal of improvement in machine learning. Entangled Others—Sofia Crespo and Feileacan McCormick—have done fantastic work inspired by the natural world. Their recent projects combined generations of GAN images to create creatures with traits from multiple species.

Other artists have focused on how to display their work or how to engage with ecosystems. Jake Elwes, whose work is currently on show at Gazelli Art House in London, trained an AI on images of marsh birds and installed a screen in the Essex marshes. Real birds interacted with this digital bird, creating a fascinating encounter between two species.

In sculpture, Ben Snell created a project called “Dio,” where he trained an AI on sculptures from antiquity to modern times, then destroyed the computer that created the designs and used its remains to make the sculpture. Conceptually, this is far more developed than many other AI art pieces and recalls 20th-century artists who destroyed their own work.

Roman Lipski is an artist who considers datasets deeply. Though he primarily paints landscapes, he experimented with AI by photographing a scene, painting nine versions, training an AI on those, and responding to its output. His style evolved through this interaction, becoming more abstract and cooler in tone. Despite using digital tools, he continued working in physical media like paint and engraving.

Helena Sarin is known for using her own datasets and developing a distinct aesthetic. She often combines media—flowers, newspapers, photography—with GANs to create highly original work.

Normally, I talk about Anna Ridler’s tulip project, which I commissioned for the 2018 Impact Festival. Since she will be discussing it later, I’ll just mention that she made a conscious effort to highlight the human labor behind AI art. Her exhibitions often paired generated tulips with walls of hand-drawn flowers, drawing attention to the dataset—a rare approach at the time.

In more recent years, with the rise of DALL·E and CLIP, attention has shifted to text-to-image generators. These tools create images from written prompts and have changed the focus of AI art. Earlier AI artists often explored the underlying technology or its ethical implications. In contrast, much current text-to-image work is more focused on aesthetics.

Some projects still stand out. Botto, by Mario Klingemann, operates as a DAO. A community votes on which image to sell, and during the NFT boom, some pieces fetched over a million euros. Vadim Epstein has worked deeply with CLIP, developing a personal aesthetic and narrative video works. Maneki Neko, whom I curated in an NFT exhibition, creates intricate, detailed images that feel distinct from typical Stable Diffusion outputs, likely combining multiple images and heavy post-processing.

Ganbrood has found success with fantasy-themed images. Artists like Varvara and Mar use text-to-image generation to design sculptures. Controversies have emerged too. Jason Allen won first prize at a US art fair with an image made using Midjourney and Stable Diffusion. Critics questioned whether he clearly disclosed the AI’s role. He argued that refining the prompt was itself artistic labor.

At the Sony Photo Awards, Boris Eldagsen submitted a more ambiguous image that won praise from judges. He later withdrew it, aiming to spark discussion about AI’s place in such contests.

Jake Elwes’ “Closed Loop,” made in 2017, involved two neural networks: one generated images from text, the other generated text from images, creating an ongoing dialogue. It demonstrates both how far the technology has come and how conceptually rich earlier works were.

To close, I want to highlight a project by South Korean artists Shin Seung Back and Kim Yong Hun. They used facial recognition in a fine art context, asking portrait painters to disrupt the algorithm’s ability to detect faces. The results varied—some still looked like portraits, others did not. One portrait took me a long time to recognize because the face was tilted 90 degrees. It’s a brilliant example of using AI tools outside their original purpose to produce meaningful art.

That’s the end of my presentation. You can find out more about my work on my website or email me with any questions. Now I’ll pass over to Geoff for the next speaker.

 

Mark Webster artist – Hypertype – CAS AI Image Art talk 2023 transcript – NLP

MARK WEBSTER – Artist

All material is copyright Mark Webster, all rights reserved. The Computer Arts Society CAS talk AI & Image Art, 1 June 2023. For the full video please visit the CAS Talks page

Visit Mark Webster’s Area Four website.

https://areafour.xyz/

Mark Webster Hypertype London exhibition

Thank you for inviting me. It’s wonderful. Okay, just maybe a quick introduction. My name is Mark Webster. I’m an artist based out in Brittany, a lovely part of France. I’ve been working with generative systems, well, computational generative art, since 2005. It’s only recently, though, that I’ve started to work on a personal body of work. One of those projects is Hypertype, which is the project I’d like to talk to you about this evening. In a nutshell, Hypertype is a generative art piece that uses emotion and sentiment analysis as its main content. I’m pulling in data using a specific technology, and that data is used as content to organise typographic form: letters and symbols. So it’s first and foremost a textual piece. And secondly, it’s trying to draw attention to a specific field in AI, a specific technology, which is called Natural Language Processing, or Understanding. I’m going to talk a little bit about this, and about the ideas that led to Hypertype. What you’re actually seeing on the screen at the moment is two images from the program that were exhibited in London last year in November.

So just to talk a little bit about where this all came from. A few years back now, I came across this technology by IBM: an API called Natural Language Understanding. What this technology does is enable you to give it a text, and it will analyse this text in various ways. The one particular part of the API that interested me was the emotional and sentiment analysis. You give it a text, and it basically spits out keywords. These keywords are relevant words within the text, and it will assign each a sentiment score: positive, negative, or neutral. It also assigns an emotion score based on five basic emotions: joy, sadness, fear, disgust, and anger. So, yeah, I came across this technology quite a few years ago, and I became kind of fascinated by it. I wanted to learn a little bit more. To do that, I did a lot of reading, and I also wrote a few programs to try and explore what I could do with this technology, just to try and understand it without thinking about doing anything artistic to begin with.

What you’re seeing on screen here, on the left, is basically just a screenshot of some data. This is a JSON file, typically the kind of data that you get out of the API. Here I’m just underlining the keyword. This is one keyword, which is a person: Douglas Hofstadter, a very well-known philosopher in the world of AI. As you can see, there are five emotions, each with a score that goes from zero to one, assigned to Douglas, and there’s a sentiment score, which is neutral. On the right, you’re seeing something you’ve probably seen before. This is a different technology, but it’s very much linked with textual analysis: facial recognition technology. In fact, you are also seeing a photo of the face of a certain Paul Ekman. Now, Paul Ekman is quite famous because he, along with his team, was one of the people who brought up this theory that emotions are universal, that they can be measured and detected. This theory was used in part to develop facial recognition and emotion recognition, but also textual analysis.
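The JSON shape described here, a keyword carrying a sentiment label and five 0-1 emotion scores, can be illustrated with a small sketch. The field names below approximate the Watson Natural Language Understanding response rather than reproducing it verbatim; treat the exact keys as assumptions.

```python
import json

# Illustrative sample of the kind of keyword record described above;
# field names approximate the IBM NLU response and are assumptions.
sample = """
{
  "keywords": [
    {
      "text": "Douglas Hofstadter",
      "sentiment": {"label": "neutral"},
      "emotion": {"joy": 0.12, "sadness": 0.35, "fear": 0.08,
                  "disgust": 0.05, "anger": 0.10}
    }
  ]
}
"""

data = json.loads(sample)

def dominant_emotion(keyword: dict) -> str:
    """Return the emotion with the highest 0-1 score for one keyword."""
    return max(keyword["emotion"], key=keyword["emotion"].get)

first = data["keywords"][0]
```

For this sample record, `dominant_emotion(first)` picks "sadness", the highest of the five scores, which is the kind of per-keyword reading Webster describes making from the raw JSON.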

As I said, as I learned about this technology, I wrote a lot of programs. What I have here is a screenshot of an application that I wrote in Java, basically enabling me to visualise the data. Very simple, actually. What you’re seeing is an article on Paul Ekman’s research, which is quite funny. On the right, you can see a list of keywords, and for each keyword there is a sentiment score. On the left, there’s a little graphic and a general sentiment score. I can go through the list of each keyword and see information about the five emotions: a score for each of joy, sadness, disgust, anger, and fear. So I built quite a few programs. I also made a little Twitter bot, because you can get so much data from this technology, and it was really important for me to get an understanding not just of how it works, but of what it was giving back. The bot would take an article from The Guardian every morning, analyse it, and publish the results on Twitter.

This just enabled me to observe things. But at the end of the day, at some point, there was the burning question: how does this technology actually work? How does a computer go about labelling text with an emotion? So I went off and did a lot of reading about this, not just general articles in the press; I tried to learn about it technically, and I came across a lot of academic articles. Now, I’m not a computational linguist, and very quickly I came across things that were very, very technical. But something else, very different, happened, and it was quite interesting. While I was trying to learn about the technical side, about how the computer was labelling text with emotion, another question came about: well, what is an emotion? What is a human emotion? That was really interesting because at the time I was reading a lot of Oliver Sacks. You may have heard of him; he’s written a lot of books. He was a neurologist, and although he never really touched upon emotion, his books opened up a door. I started to read and learn about other people working in the field of neurobiology.

There were a few people: Antonio Damasio and Robert Sapolsky, two people who are very well known in the field of neurobiology, touching on questions not just of emotion but also of consciousness. A lot of their texts can be quite technical, yet they were very, very interesting. And then there was another person who came along, whose book cover I’ll show: a certain Dr. Lisa Feldman Barrett, also based out in the States, who is doing wonderful research and has written a number of books, one of them called How Emotions Are Made. It was really with Lisa Feldman Barrett’s book that something clicked, because in it she started to talk about Paul Ekman, and in a way she really pulled his whole theory apart. Dr. Barrett is doing a lot of research in this field; she’s working in a lab and doing contemporary research. What she says in the book really debunks the whole theory of Paul Ekman: that is to say, human emotions are not universal, they are not innate, we’re not born with them. And there’s this very interesting quote that I put here: emotions are not reactions to the world, but rather they are constructions. Her whole theory drives towards this idea that, in fact, emotions are concepts, things that we do on the fly in terms of our context, our experience, and also our cultures.

This was really an eye-opener for me. It also made me think, again: how does a computer label words and try to infer any kind of emotional content from them? From what I was reading of these people, Lisa Feldman Barrett, Antonio Damasio, Robert Sapolsky, they added the whole bodily side to things. We tend to think that everything is in the brain, but emotions, or experiences, are very much a bodily experience. So at the end of the day, I basically came to the conclusion that a computer is not really going to infer any kind of emotional content from text. From that point of view, I thought, well, it’s an interesting technology and it would be nice to do something artistic with it. So how am I going to go about this? That was the next stage. I did all this research and basically came to the conclusion that the data is there.

There’s lots of data. How do I use it? Let’s use it as a sculptor might use clay or a painter will use paint. Let’s use that data to do something artistic with. At the time, I actually didn’t do anything visual. I did a sound piece to begin with, published back in 2020, called The Drone Machine. This particular project was pulling in data from the IBM emotion analysis and using it to drive generative sound oscillators. I basically built a custom-made digital synthesiser that was bringing in this data and generating sound. I can share the project perhaps later on with a link. It was published and went out on the radio. It’s 30 minutes long, so I’m not going to play it here. This was the first project I did. What was interesting is that I chose sound because I found that sound was a medium that probably spoke to me more in terms of emotion. But the next stage was indeed to do something visual. And this is where Hypertype came along. And again, the idea was not at all to do a visualisation.
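Driving oscillators from 0-1 emotion scores, as described here, comes down to a simple parameter mapping. The following is a generic sketch of that idea, not Webster’s synthesiser code: the frequency range and the linear mapping are assumptions.

```python
# Generic sketch: map a 0-1 emotion score onto an oscillator frequency.
# The range (110-880 Hz) and linear mapping are assumptions, not the
# actual Drone Machine implementation.

def score_to_freq(score: float, lo: float = 110.0, hi: float = 880.0) -> float:
    """Linearly map an emotion score in [0, 1] to a frequency in Hz."""
    score = min(max(score, 0.0), 1.0)  # clamp out-of-range scores
    return lo + score * (hi - lo)

# One frequency per basic emotion, given a record of five 0-1 scores.
emotions = {"joy": 0.9, "sadness": 0.2, "fear": 0.1,
            "disgust": 0.0, "anger": 0.4}
freqs = {name: score_to_freq(s) for name, s in emotions.items()}
```

A synthesiser voice per emotion could then read these frequencies each time a new analysed keyword arrives, which is one plausible way data of this shape could drive a drone.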

Again, the data for me just didn’t ‘compute’, to use that word. For me, the data was purely a material I could play with. What you’re seeing on the screen here are the very first initial visual prototypes for Hypertype, in which I was just pulling in the data and putting it on the screen. There were two basic constraints I gave myself for this project. The first was that it had to be textual: it had to be based on text, so everything is a font. And secondly, the content, I said to myself, should come from the articles that I read, all the research I’d done about the technology. Those were the two constraints, and from there I started to develop. I’ll probably pass through a few slides because I’m sure we’re running out of time, but here I’m just showing you a number of iterations of the development of the project. So I brought in colour. Of course, colour was an interesting parameter to play with, because you could probably think that with emotion you would want to assign a certain colour to an emotion.

But that, again, I completely dismissed. In fact, a lot of the colour was generated randomly from various colour palettes that I wrote. Here’s a close-up of one of the pieces, and here are some more refined iterations. For those who work with generative programs, you get a lot of results. I think what is really difficult to do as an artist working on a generative piece is to find the right balance between something that has visual coherency across the whole series yet has enough variety. These two images are obviously very different, one being quite minimal, the other a little more charged, yet you notice they have a certain visual coherency. As a generative artist, you create a lot of these. I usually print out a lot, and I usually create websites where I list all the images so I can see them. And then eventually, yes, there was the exhibition. For the exhibition, there were five pieces that I curated, five pieces I chose from the program. The program also went live for people to mint in an NFT environment. So those were the last pieces, and maybe I’ll stop there.

 

Micro Arts exhibition

Micro Arts was founded by me in the 1980s, to make, distribute and promote computer art. There is a website about it: visit Micro Arts Group.

Geoff Davis Micro Arts exhibition 2021

There was a big exhibition organised by the Computer Arts Archive at the LCB Depot, Leicester UK, in June 2021. This was Micro Arts’ first exhibition since 1985!

BCS Moorgate London – exhibition July – November 2022.

Visit the Micro Arts website above for videos, images, the ebook, etc.

Please email me for more details geoffdavis5 AT gmail DOT com