After the three experiments, people were asked two related questions. Question 1 covered enjoyment: what was most and least enjoyable about using the text generator.

Summary

People most liked the stimulation of a responsive ideas generator, rather than expecting the system to actually write coherent, relevant text. The ‘least interesting’ question produced more specific replies, and specific complaints about the randomness of the generated text; this might be due to raised expectations. There was also a learning curve: some people mentioned that the second experiment (News) worked better than the first, and one said ‘It seemed that the more text I created the better this generator worked’.

Text analysis: Q1 (enjoy most / least)

The comments on the image, the experiment design, and the tuning and slowness of the generator (52% of ‘least’ comments) are useful for future development, but these issues were unavoidable as this was a quick experiment and not a fully designed tool.

All other ‘least interesting’ replies were about the random and irrelevant nature of the text generation.

Where this quality was mentioned in the ‘most interesting’ replies, it was regarded as a way to stimulate new ideas (27% of comments). Nearly a fifth thought the text was responsive to what they entered (19%).

The most frequently mentioned ‘most interesting’ aspect was stimulation (38% of comments).

Sentiment analysis showed only a medium level of Joy in the ‘enjoy most’ answers, and medium levels of Joy and Sadness in the ‘enjoy least’ answers. Fear and Anger did not score. As usual in the experiment, the Tentative tone was high and the Confident tone did not score.
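For readers who want to reproduce this kind of scoring, the sketch below shows how the free-text answers to one question could be run through the IBM Watson Tone Analyzer service (the tool named in the Details section) via its Python SDK. The API key, service URL and the way the answers are concatenated are illustrative assumptions, not the exact pipeline used for this study.

```python
# Minimal sketch: scoring a question's free-text answers with IBM Watson Tone Analyzer.
# The credentials, endpoint and answer list are placeholders (assumptions).
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')  # hypothetical API key
tone_analyzer = ToneAnalyzerV3(version='2017-09-21', authenticator=authenticator)
tone_analyzer.set_service_url('https://api.eu-gb.tone-analyzer.watson.cloud.ibm.com')

# Concatenate all answers to one question (e.g. 'most interesting') into one document
answers = ['It is thought provoking.', 'Seeing the results ...']
result = tone_analyzer.tone({'text': ' '.join(answers)},
                            content_type='application/json').get_result()

# The service only reports document-level tones that score above roughly 0.5,
# which is why weaker tones (Anger, Fear, etc.) simply do not appear
for tone in result['document_tone']['tones']:
    print(tone['tone_name'], tone['score'])
```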

Details

Did you enjoy using the text generator?
[Likert 1–5: A lot – Strongly disliked]
What was most interesting? Please explain.
What was least interesting? Please explain.

Median – 2 (Liked)

Sentiment Analysis using IBM Watson Tone Analyzer

Most interesting – 26 responses

Joy 0.63
Analytical 0.91, Tentative 0.93 (Anger, Confident, Sadness, Fear < 0.5)

General themes in comments:

Stimulating = 10

Ideas/surreal  = 7

Responsive = 5

Comments mentioning specifics = 5 (1 paired)

Least interesting – 21 responses

Joy 0.50, Sadness 0.54
Analytical 0.84, Tentative 0.87 (Anger, Fear, Confident < 0.5)

Stimulating = 0

Irrelevant/random = 11

Responsive = 0

Comments mentioning specifics = 11 (1 paired)
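As a quick sanity check, the percentages quoted in the Summary follow directly from these theme counts and the response totals above (26 ‘most interesting’ and 21 ‘least interesting’ answers); the snippet below simply recomputes them.

```python
# Recompute the Summary percentages from the theme counts listed above
most_total, least_total = 26, 21

themes = {
    'Stimulating (10/26)':          10 / most_total,
    'Ideas/surreal (7/26)':          7 / most_total,
    'Responsive (5/26)':             5 / most_total,
    'Specific complaints (11/21)':  11 / least_total,
    'Image complaints (4/21)':       4 / least_total,
}
for name, frac in themes.items():
    print(f'{name}: {frac:.0%}')  # 38%, 27%, 19%, 52%, 19%
```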

Appendix

Q1 Enjoy

Answer Texts 

Q1 Most interesting
It is thought provoking.

The generator seems to ramble: it can start on one topic and then meander into something else entirely. I like to see what it produces.

I wrote a story that for me sort of paralleled the use of the generator. It seems to have responded in kind. I find that quite amusing.

The news one was the most enjoyable

Seeing the generated text and how it related to my input.

This is weirdly phrased. The question should be: “I enjoyed using the text generator panel” agree…disagree

I liked how easy it was to use and how the left was the generator and the right was the text box. This made it easy to cycle through generated text until you found one you liked that was relevant.

Seeing what would generate in real time

The writing that was generated.

Unusual phrases, juxtaposition. Would be useful for ideas

It prompted lots of fun ideas and allowed you to think s but differently

much Potential!!

I liked the generation of related words, sparked ideas.

It was fun to see what ideas it generated as it could bring up suggestions that was different from what I had in mind initially so it may help me approach my ideas from a different aspect or viewpoint.

Seeing the tangents generated and possibilities therein

doing the Q&A cuz its cool

It added a new angle which may have not occurred without the generator.

It was fun but honestly, it’s good to know machines still can’t grasp the intricacies of human logic 🙂

surreal perchance juxtaposition. puzzling at the possible sources of them

It was clever. The first example actually gave something back, so some AI aspect worked there.

The best thing was seeing the words appear like a ghost writing on the page. Anticipation of the narrative being developed or taking a sharp turn.

It was interesting how the generator seemed to help me the most in the news article, but I could see how it could easily work the opposite way to hinder one’s imagination.

Most – the novelty of seeing how this works now. A long way to go, methinks!

seeing the results …

The text generator just doesn’t work as a writing partner. The AI would have to be much more advanced to help write human-interest articles or fiction. As it currently stands, it can form sentences but doesn’t understand context or metaphor. It doesn’t “think” in images, so creative writing is lost on it.

How coherent its text was

IBM TA
Joy 0.63, Analytical 0.91, Tentative 0.93 (Anger, Confident, Sadness, Fear < 0.5)

Q1 Least interesting
Maybe seeing the same picture.

Because the generator rambles, the text it produces has little substance. I’m not really interested in trying to edit what’s essentially nonsense into a coherent story or article.

Least interesting was the first round because the generated text was less relevant. It seemed that the more text I created the better this generator worked. Maybe it needs inspiration also.

Having to write about the same image three times.

Anyway, I least enjoyed the horrible user interface from 1982 (yes I know that is far before netscape but still this shit is beyond old school)

The predictability of the AI. Needs to be better able to rerun different options rapidly to have more probability of sparking an idea or a concept outside the author’s normal range

The picture

Hard to use on a phone. Had to repeat generated process.

can i influence the “direction” of the Text?

The fiction didn’t write like fiction, it lacked a human quality. It read like it wasn’t trying to capture interest. I guess it does depend what you put in though and I didn’t write very much at all.

It generated ideas and sentences that didn’t make sense to my key words/ideas.

Syntactical inconsistencies slightly irritating

waiting for the text to generate

Least: That the system was not able to remember previous texts and so had to keep all text in the input box to retain a sense of continuity. Also that the system did not parse the image so did not take the content in the image into account.

Where the writing is more esoteric, the AI hasn’t anything to give.

The worst thing was when the story was disappointing or did not make any logical sense, and was without direction.

Answering questions on the same picture.

Least – trying to write about a photo that made little impact on me.

I did not enjoy using the text generator because it was not able to offer any extra comments or information that was relevant. This was especially the case with the short story where the computer failed to identify that the story was from the dog’s perspective. Any human would have recognised this but the computer failed to do so. This has something to do with human imagination.

the layout wasn’t particularly inspiring …

the shorter text gives it less to go on, so it’s more random

IBM TA
Joy 0.50, Sadness 0.54, Analytical 0.84, Tentative 0.87 (Anger, Fear, Confident < 0.5)
4 complaints about seeing the same image (dog and man) each time, out of 21 comments (19%)