Mark Webster artist – Hypertype – CAS AI Image Art talk 2023 transcript – NLP

MARK WEBSTER – Artist

All material is copyright Mark Webster, all rights reserved. From the Computer Arts Society (CAS) talk AI & Image Art, 1 June 2023. For the full video, please visit the CAS Talks page.

Visit Mark Webster’s Area Four website.

https://areafour.xyz/

Mark Webster, Hypertype, London exhibition

Thank you for inviting me. It’s wonderful. Okay, just maybe a quick introduction. My name is Mark Webster. I’m an artist based in Brittany, a lovely part of France. I’ve been working with generative systems, with computational generative art, since 2005, though it’s only recently that I’ve started to work on a personal body of work. One of those projects is Hypertype, which is the project I’d like to talk to you about this evening. In a nutshell, Hypertype is a generative art piece that uses emotion and sentiment analysis as its main content. I’m pulling in data using a specific technology, and that data is used as content to organise typographic form: letters and symbols. So it’s first and foremost a textual piece. And secondly, it’s trying to draw attention to a specific field in AI, a specific technology, called Natural Language Processing, or Understanding. I’m going to talk a little about this, and a little about the ideas that led to the development of Hypertype. What you’re actually seeing on the screen at the moment are two images from the program that were exhibited in London last November.

So, just to talk a little about where this all came from. A few years back now, I came across this technology: an API by IBM called Natural Language Understanding. What this technology does is let you give it a text, and it will analyse this text in various ways. The one particular part of the API that interested me was the emotion and sentiment analysis. You give it a text and it basically spits out keywords, relevant words within the text, and it assigns each one a sentiment score: something that is positive, negative, or neutral. It also assigns an emotion score based on five basic emotions: joy, sadness, fear, disgust, and anger. So, yeah, I came across this technology quite a few years ago and became kind of fascinated by it. I wanted to learn a little bit more. To do that, I did a lot of reading, and I also wrote a few programs to try and explore what I could do with this technology, just to understand it, without thinking about doing anything artistic to begin with.
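To give a sense of what calling the service involves, here is a minimal sketch using IBM’s Python SDK. The talk doesn’t show any code, so the API key, service URL, and sample text are placeholders:

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, KeywordsOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials: substitute your own API key and service URL.
authenticator = IAMAuthenticator("YOUR_API_KEY")
nlu = NaturalLanguageUnderstandingV1(version="2022-04-07", authenticator=authenticator)
nlu.set_service_url("YOUR_SERVICE_URL")

# Ask for keywords, each annotated with a sentiment label and the five emotion scores.
result = nlu.analyze(
    text="Douglas Hofstadter wrote about minds and machines.",  # sample text
    features=Features(keywords=KeywordsOptions(sentiment=True, emotion=True, limit=10)),
).get_result()

for kw in result["keywords"]:
    print(kw["text"], kw["sentiment"]["label"], kw["emotion"])
```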

What you’re seeing on screen here, on the left, is a screenshot of some data. This is a JSON file, typically the kind of data that you get out of the API. Here I’m just underlining one keyword, which is a person: Douglas Hofstadter, a very well known philosopher in the world of AI. As you can see, there are five emotions, each with a score from zero to one, that are given to Douglas, and there’s a sentiment score, which is neutral. On the right is probably something that you’ve seen before. It’s a different technology, but it’s very much linked with textual analysis: facial recognition. And in fact, the face in the photo is that of a certain Paul Ekman. Now, Paul Ekman is quite famous because he, along with his team, was one of the people who brought up the theory that emotions are universal, that they can be measured and detected. And this theory was used in part to develop facial emotion recognition, but also textual emotion analysis.
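The JSON for a single keyword looks roughly like this. The scores below are illustrative placeholders, not the values from the slide:

```json
{
  "text": "Douglas Hofstadter",
  "sentiment": { "score": 0.0, "label": "neutral" },
  "emotion": {
    "joy": 0.41,
    "sadness": 0.12,
    "fear": 0.05,
    "disgust": 0.03,
    "anger": 0.02
  }
}
```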

So, as I said, as I learned about this technology, I wrote a lot of programs. What I have here is a screenshot of an application that I wrote in Java, basically enabling me to visualise the data. Very simple, actually. What you’re seeing is an article on Paul Ekman’s research, which is quite funny. On the right, you can see there is a list of keywords, and for each keyword there is a sentiment score. On the left, there’s a little graphic and a general sentiment score. Then I can go through the list of keywords and see information about the five emotions: a score for each of joy, sadness, disgust, anger, and fear. So I built quite a few programs. I also made a little Twitter bot, because you can get so much data from this technology, and it was really important for me to get an understanding not just of how it works, but of what it was giving back. The bot would take an article from The Guardian every morning, analyse it, and publish the results on Twitter.
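The bot’s daily pipeline might look something like the sketch below. This is an assumption, not the actual bot: the Guardian API key and Twitter credentials are placeholders, and analyse_text is a stub standing in for the NLU call shown earlier.

```python
import requests
import tweepy

def analyse_text(text):
    # Stub: in the real pipeline this would call the NLU API shown earlier
    # and return its JSON result.
    return {"keywords": [{"text": "example", "sentiment": {"label": "neutral"},
                          "emotion": {"joy": 0.4, "sadness": 0.1, "fear": 0.1,
                                      "disgust": 0.1, "anger": 0.1}}]}

# Fetch one article body from the Guardian content API (key is a placeholder).
resp = requests.get(
    "https://content.guardianapis.com/search",
    params={"api-key": "YOUR_GUARDIAN_KEY", "show-fields": "bodyText", "page-size": 1},
)
body = resp.json()["response"]["results"][0]["fields"]["bodyText"]

# Summarise the top keyword's sentiment and emotion scores in one tweet.
top = analyse_text(body)["keywords"][0]
summary = f"{top['text']}: {top['sentiment']['label']} | " + ", ".join(
    f"{k} {v:.2f}" for k, v in top["emotion"].items()
)

# Placeholder credentials; posting requires user-context OAuth keys.
client = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)
client.create_tweet(text=summary[:280])
```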

And this just enabled me to observe things. But at the end of the day, at some point, there was the burning question: how does this technology actually work? How does a computer go about labelling text with an emotion? So I went off and did a lot of reading about this, not just general articles in the press; I tried to learn about it technically, and I came across a lot of academic articles. Now, I’m not a computational linguist, and very quickly I came across things that were very, very technical. But something else, very different, happened, and it was quite interesting. While I was trying to learn about the technical side, about how a computer labels text with emotion, another question came about: well, what is an emotion? What is a human emotion? And that was really interesting, because at the time I was reading a lot of Oliver Sacks. You may have heard of Oliver Sacks; he’s written a lot of books. He was a neurologist, and although he never really touched upon emotion, his books kind of opened up a door. And I started to read and learn about other people who were working in the field of neurobiology.

There were a few people: Antonio Damasio and Robert Sapolsky, two people who are very well known in the field of neurobiology, touching on questions not just of emotion but also of consciousness. A lot of their texts can be quite technical, yet they were very, very interesting. And then there was another person who came along, whose book cover I’ll show: a certain Dr. Lisa Feldman Barrett, also based in the States, who is doing wonderful research and has written a number of books, one of them called How Emotions Are Made. Now, it was really with Lisa Feldman Barrett’s book that something clicked, because in it she started to talk about Paul Ekman, and in a way she really pulled his whole theory apart. Dr. Barrett is doing a lot of research in this field; she’s working in a lab and doing contemporary research. What she says in the book really debunks the whole theory of Paul Ekman. That is to say, human emotions are not universal; they are not innate; we’re not born with them. And there’s this very interesting quote that I put here: “emotions are not reactions to the world, but rather they are constructions”. Her whole theory drives towards the idea that emotions are, in fact, concepts, things that we do on the fly in terms of our context, in terms of our experience, and in terms of also our cultures.

And so this was really an eye opener for me. It also, in a way, made me think again: how does a computer label words and try to infer any kind of emotional content from them? From what I was reading of these people, Lisa Feldman Barrett, Antonio Damasio, Robert Sapolsky, they added the whole bodily side to things. We tend to think that everything is in the brain, but emotions, or experiences, are very much a bodily experience. So at the end of the day, I basically came to the conclusion that a computer is not really going to infer any kind of emotional content from text. From that point of view, I thought, well, it’s an interesting technology, and it would be nice to do something artistic with it. So how am I going to go about this? That was the next stage. I did all this research, and I basically came to the conclusion that the data is there.

There’s lots of data. How do I use it? Let’s use it as a sculptor might use clay or a painter will use paint. Let’s use that data to do something artistic with. At the time, I actually didn’t do anything visual; I did a sound piece to begin with, published back in 2020, called The Drone Machine. This particular project was pulling in emotion data from the IBM analysis and using it to drive generative sound oscillators. I basically built a custom-made digital synthesiser that was bringing in this data and generating sound. I can share the project perhaps later on with a link. It was published and went out on the radio. It’s 30 minutes long, so I’m not going to play it here. This was the first project I did. What was interesting is that I chose sound, because I found that sound was a medium that probably spoke to me more in terms of emotion. But the next stage was indeed to do something visual, and this is where Hypertype came along. And again, the idea was not at all to do a visualisation.
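As an idea of how emotion data might drive oscillators, here is a minimal sketch, not the actual Drone Machine code: the mapping from scores to amplitudes and frequencies is entirely an assumption, and the scores themselves are placeholders.

```python
import wave
import numpy as np

# Hypothetical emotion scores for one keyword, each in [0, 1].
scores = {"joy": 0.41, "sadness": 0.12, "fear": 0.05, "disgust": 0.03, "anger": 0.02}

sample_rate, seconds = 44100, 5
t = np.linspace(0, seconds, sample_rate * seconds, endpoint=False)

# One sine oscillator per emotion: the score sets its amplitude,
# and its index in an arbitrary 110-550 Hz band sets its pitch.
mix = np.zeros_like(t)
for i, (emotion, score) in enumerate(scores.items()):
    freq = 110 + i * 110
    mix += score * np.sin(2 * np.pi * freq * t)
mix /= np.abs(mix).max()  # normalise to avoid clipping

# Write the drone out as a 16-bit mono WAV file.
with wave.open("drone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(sample_rate)
    f.writeframes((mix * 32767).astype(np.int16).tobytes())
```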

Again, the data for me just didn’t “compute”, to use that word. For me, the data was purely a material I could play with. So what you’re seeing on the screen here are the very first visual prototypes for Hypertype, in which I was just pulling in the data and putting it on the screen. There were two basic constraints I gave myself for this project. The first was that it had to be textual, okay? That was it. It had to be based on text, so everything is a font. And secondly, the content should come, that’s what I said to myself, from the articles I’d read, from all the research I’d done about the technology. So those were the two constraints, and from there I basically started to develop. I’ll probably pass through a few slides because I’m sure we’re running out of time, but here I’m just showing you a number of iterations of the development of the project. I brought in colour. Of course, colour was an interesting parameter to play with, because you could probably think that with emotion you’d want to assign a certain colour to each emotion.
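To make the two constraints concrete, here is a minimal sketch of the kind of first prototype described above, not Hypertype’s actual code: hypothetical keyword data is set as type with Pillow, the strongest emotion score scales the size (my assumption), and colour is drawn at random from a placeholder palette rather than mapped to any emotion.

```python
import random
from PIL import Image, ImageDraw, ImageFont

# Hypothetical keyword data, in the shape the NLU API returns.
keywords = [
    {"text": "emotion", "emotion": {"joy": 0.6, "sadness": 0.1, "fear": 0.1,
                                    "disgust": 0.1, "anger": 0.1}},
    {"text": "machine", "emotion": {"joy": 0.1, "sadness": 0.5, "fear": 0.2,
                                    "disgust": 0.1, "anger": 0.1}},
]

# Placeholder palette; colour is deliberately decoupled from the emotion scores.
palette = ["#22223b", "#9a8c98", "#c9ada7", "#d8315b"]

img = Image.new("RGB", (800, 800), "white")
draw = ImageDraw.Draw(img)

for kw in keywords:
    # Assumption: the strongest emotion score scales the type size.
    size = int(12 + 88 * max(kw["emotion"].values()))
    font = ImageFont.truetype("DejaVuSans.ttf", size)  # placeholder font file
    xy = (random.randrange(700), random.randrange(700))
    draw.text(xy, kw["text"], font=font, fill=random.choice(palette))

img.save("hypertype_sketch.png")
```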

But that idea of mapping colours to emotions, again, I completely dismissed. In fact, a lot of the colour was generated randomly from various colour palettes that I wrote. Here’s a close-up of one of the pieces. Here are some more refined iterations. For those people who work with generative programs, you get a lot of results. And I think, as an artist, what is really, really difficult when you’re working with a generative piece is to find the right balance between something that has visual coherency across the whole series yet has enough variety. These two images are obviously very different, one of them quite minimal, the other a little more charged, yet you notice that they have a certain visual coherency. So, as a generative artist, you create a lot of these. I usually print out a lot, and I usually create websites where I just list all the images so I can see them. And then, eventually, yes, there was the exhibition. For the exhibition there were five pieces that I curated, five pieces I chose from the program. And then the program also went live for people to mint in an NFT environment. So those were the last pieces, and maybe I’ll stop there.