A Mountain of Pent-Up Tears – children’s stories, generated texts

Children’s stories: this is part of the classic literature series on this blog, experiments in hybrid fiction.

I turned out so fast, I barely looked back. (I turned to look, only to find the figure had gone.) What has gone before was the dark room.

In the far left corner lay a pile of books. The ones on the page, all full of life’s lessons; the black ones, I thought of the colorless, the dark books, my soul had never seen red in my life. These weren’t books of old stories. These were children’s stories told by old schoolboys, and children’s books have been around ever since:

This was written in the 1840s when young, red dwarfs lived in the forest. They wore their clothes brown and brown and purple. At the close of the story, the author makes the child a dwarf, the woods were too far for them to go, so they are made brown. His book is told like many children’s stories.

“This is where the forest is.”

“The forest, where do I go then?”

“In the woods. In the woods, as you said.”

“I, too, am brown.”

“Yes, I am. I am in the forest. It is the way of the forest.”

“And me?”

“Yes, too, me too.”

“And you too?”

“Yes, too.”

The forest is the only place in which these children can know what things are true. They see the forest as their only home. If they go away, then they see them as an enemy who must be conquered. But these children have all travelled over, and gone past. They’ve seen it all over again.

They know what the forest is; how it works, how things go. They can see everything. And for some reason they know that they are all in the same boat, that they can never cross over to other paths. So they know, even in that boat, what the forest means, and why it is there.

What’s behind the green, silver-tinted door. Where’s the white door? That’s not the forest, is it? The white door has never been there. What was behind that door, the door to the land?

They do not know why they were born into the woods. They think that the forest is God’s gift for them. But what’s there to be gifted for? They are the only ones, after all.

When asked how the sun comes out, and when asked why the sun comes out.

Answer: There is no one here who knows.

And they need not have seen the sun.

 


GD: the first 68 words of the generated text (above) were next used as the prompt to generate the text (below). The 68 words are shown in italics.


I turned out so fast, I barely looked back. (I turned to look, only to find the figure had gone.) What has gone before was the dark room.

In the far left corner lay a pile of books. The ones on the page, all full of life’s lessons; the black ones, I thought of the colorless, the dark books, my soul had never seen red in my life.  

There, I thought, was the door leading to what was yet to come. But in the darkness, an unseen figure, one I couldn’t yet recognize, looked from the left to the right, slowly. As it followed, my eyes scanned the walls, but I couldn’t find the same room. I reached out to touch the door, but the figure stayed, a blank shadow across the room, it never left the door. It seemed to vanish into nothingness.

Then came an accelerating sound, as the figure dropped, and it moved slowly back into the room.

“What have you done?” I cried, the words begins to rise up like a mountain of pent up tears. There, I found, was the door. It turned, and my body seemed to lift it, before the figure itself left to my side.

A smile appeared on my face as I looked at the room, the books filled a giant library with life and lessons, the black ones I had hoped would be my own in my own time, the colorless ones, the ones I’ve already read, the ones I’ve been looking for. They were to tell me what I would need to know, how to survive this night, to know the unknown. They were a path, a map.

In the darkest hall, the room I saw the figure enter, a pale shadow in the light. The place was dark, it was cold. He took me in, and  I felt he was going to kill me, he was taking me inside. The sound woke me, and I opened my eyes to find another shadow, a figure standing behind me. I didn’t know what he was, but I was in my own home.

That man walked towards me, eyes glow green, his eyes were gloomy, the healers he wore, his hair was matted with grey, it was grey that faded in between them. He was big, at least, if a little tall. At least I had grown the right size for this thing.  I don’t think his arms were big, so that is probably just his arms, or perhaps his face was too short for his eyes. His fingers were lank, the seated fingers were black, and  in the dimmest corner of his head was a hood, with no lights.

It was very dark, there was an open door beyond it. He held my hand like a…well, for I was small. I could tell the size of his hand. He had long grey hair in his hands. He held my hand, and I saw red, it was black, as if it was a red colour.

I thought, and in that moment, my skin was soft and shiny. In between my knees. So, I moved over and looked, in my hands, and there was a dark green hand, with red hands, I knew this was a shadow, but that doesn’t mean anything to him.

The first hand, the first hand was big, its face was black, it was large, for I was small. The black was green, its color was darker than red. I touched it, and it was cold.

The second hand was big, but the hands were dark, and had big fingers, the first hand was long and dark green, it was like a purse, I didn’t know how it was, but it was cold, in that moment, I froze.

I could only watch.

 


You can guess the original story from the image (below).

Little Red Riding Hood and Wolf

 

Photo credit https://www.publicdomainpictures.net/

Sentiment Analysis of Caption, News and Fiction text generation experiments

From: text generation editor research (UAL London 2020, see credits): what happens when writers use a computer text generator to write articles, given only an image prompt.

Go to Index of AI research

Sentiment Analysis

The Study had three text generation and editing tasks, to make a Caption, a News article, and a Fiction story, using the same image prompt of a dog and man.

Summary

This shows differences in Positive sentiment affect between the three experiments. Responses were most positive at the start; by the end, a balanced, neutral view had become evident.

Fear appeared in the first Caption experiment, but was gone by the second (News) and third (Fiction), suggesting acclimatisation through experience. Negative scores were generally low.

By the third and final Fiction experiment, only Sadness remained.

Comment

This shows a learning process during the three experiments.

All results are qualified by the strong ‘Tentative’ score (0.91) and the absence of any ‘Confident’ scores over 0.5.

Details – Method

For overall sentiment of responses, all text feedback was summed into a total text field per respondent.

Text comment fields were:

Caption, News and Fiction Experiments (3 fields);

Questions 1 (2 fields), 2-6a (5 fields), 6b Fake news (1 field).

This gives 11 feedback text samples per respondent (not all were filled). These are summed vertically in Excel to give the overall text block per column/field.

Texts were also summed per respondent, horizontally in Excel.
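The vertical (per field) and horizontal (per respondent) summing done in Excel can be sketched in code. This is a minimal illustration with made-up field names and feedback text, not the study’s data:

```python
# A minimal sketch of the summing step (hypothetical field names):
# text fields are concatenated per column (field) and per row
# (respondent), mirroring the vertical/horizontal Excel sums.

def join_nonempty(texts):
    """Join text fragments, skipping empty (unfilled) fields."""
    return " ".join(t for t in texts if t)

rows = [  # one dict per respondent
    {"Caption": "Fun to use.", "News": "", "Fiction": "Odd output."},
    {"Caption": "Strange.", "News": "Readable.", "Fiction": ""},
]

# Vertical: one overall text block per field/column
per_field = {f: join_nonempty(r[f] for r in rows) for f in rows[0]}

# Horizontal: one overall text block per respondent
per_respondent = [join_nonempty(r.values()) for r in rows]
```

Each value in `per_field` is then a single text block suitable for a document-level tone analysis.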

NLP and IBM Tone Analyser

NLP (natural language processing) allows computer analysis of text blocks. For this volume of text, the online IBM Tone Analyser (see References at the bottom) was used. Writing a custom analyser was outside the scope of the study. Human grading was not possible due to the size of the totalled texts; however, human analysis is used for the summary of texts.

IBM Tone Analysis gives a rating for:

Anger, Fear, Sadness, Joy, Analytical, Confident, Tentative

(Coded in the data as Ang, F, S, J, A, C, T.)

The “most prevalent tones that are detected for each utterance” are shown at a document level, and sentence level.

The document-level analysis has scores, and the sentence-level analysis (which shows lower occurrences) was added in brackets. Tones are reported where their numeric score is over 0.5. Scores: &lt;0.5 None; 0.5–0.75 Mid; &gt;0.75 Strong.

At document level, each tone, if found, has a score of 0.5–1.0.

Lower-graded tones (placed in brackets in the data) are scored at the 0.25 level.

Examples

J,S,A,T(C,F) – Joy, Sadness, Analytical and Tentative have scores &gt; .5 (Confident and Fear have lower occurrences only, between &gt;0 and &lt;0.5)

Ang,F,S() – Anger, Fear and Sadness have scores &gt; .5 (no others over 0)

F(S) – Fear scores &gt; .5 (Sadness between &gt;0 and &lt;0.5)

In my data display, a ‘lower occurrence’ (bracketed) tone is scored at 0.25.
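The notation above can be parsed mechanically. A minimal sketch (hypothetical helper names): bracketed sentence-level tones receive the 0.25 placeholder, while document-level tones keep their real score from the data (unknown here, so marked `None`):

```python
import re

def parse_tones(code):
    """Split a tone code like 'J,S,A,T(C,F)' into document-level tones
    (score > 0.5 in the data) and bracketed lower-occurrence tones
    (scored at the 0.25 level in the data display)."""
    m = re.match(r'^([^(]*)(?:\(([^)]*)\))?$', code.strip())
    doc, low = m.group(1), m.group(2) or ''
    split = lambda s: [t for t in (x.strip() for x in s.split(',')) if t]
    return split(doc), split(low)

def display_scores(code):
    """Hypothetical display scores: bracketed tones get 0.25;
    document-level tones keep their real score (unknown here: None)."""
    doc, low = parse_tones(code)
    scores = {t: None for t in doc}
    scores.update({t: 0.25 for t in low})
    return scores
```

For example, `parse_tones("F(S)")` separates Fear (document level) from Sadness (lower occurrence).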

‘Confident’ and ‘Tentative’ are general attitudes shown in the text.

Results

All respondents summed

Caption experiment – all feedback comments

[Graph: IBM Tone Analyser – Caption experiment]

With motivated stakeholders as respondents, there are high scores for ‘Analytical’ and ‘Tentative’. ‘Confident’ did not appear as a document-level tone for any of the 82 respondents, and only occasionally as a sentence-level tone.

Using the highest score amongst ‘Anger’, ‘Fear’ and ‘Sadness’ as Negative, and using ‘Joy’ as Positive, shows a higher degree of positive response.

Fear is evident, but at a low level. Sadness is the strongest of the negative reactions.

Positive is about 20% more than Negative. (Significance.)

News experiment – all feedback comments

[Graph: IBM Tone Analyser – News experiment]

A more positive overall result than the Caption experiment.

Fear has gone; the Negative tones Anger and Sadness are at low levels. Positive is about 42% more than Negative.

Fiction experiment – all feedback comments

[Graph: IBM Tone Analyser – Fiction experiment]

Sadness is at its highest level. The result is neither Positive nor Negative overall. Anger and Fear do not appear.


References

IBM Tone Analyser

Please see the full Report for further statistics (tba).

Fake News – what do real writers think about it? AI research

In August 2020 research (UAL London, see credits) I examined what happens when writers use a computer text generator to write articles, given only an image prompt. There is a summary of the Study in the Index.

Go to Index of AI research

Fake News – (or ‘fake news’) what do real writers think about it?

Summary 

  • People who have a lower general opinion of text generation, and lower word counts for feedback, don’t think text generation is relevant to ‘fake news’.

Conversely, people who are more engaged with text generation (more feedback, more positive) are more likely to see its relevance to fake text generation.

This suggests that if people are educated on the topic, they will recognise a relationship between advances in computer text simulation and fake news.

Definition of fake news

Even defining fake news is not simple: look for examples in the USA, Syria, or anywhere else, and both sides accuse the other of using it. It depends on there being no consensus, as Barack Obama mentioned recently. See the bottom of this blog for some discussion.

Fake News

One of the feedback questions in the Study was about whether computer AI text generation was relevant to ‘fake news’ (false content for spam, propaganda etc.).

Q6-2: If you have heard of ‘fake news’ do you think this is relevant?

This had neutral phrasing in order not to influence replies. It did not say ‘a text generator can make fake news’, as this presupposes technical knowledge that might not be present, even after doing the experiments. Given the sometimes bizarre output of the generator, that phrasing might also have skewed replies too specifically.

18 of the 82 respondents – 22% – replied to the question. It was the last question after a long session, so perhaps some fatigue had set in, and it was not specifically to do with the experiment. Just over a fifth is still quite a high response rate.

Analysis of responses

I will be examining the actual text answers in another blog. This just looks at some basic relationships.

Several comparisons of the data were calculated and some are worth discussing here (the others are in reference material).

Comparing answer word count with positive – negative ratings:

The actual Q6 answer text was assessed for Positive – Neutral – Negative response to the ‘relevant to fake news’ question.

For instance, the text answer ‘No’ was scored 2 (negative), word count 1; ‘I stay out of these type debates.’ was scored 1 (neutral), word count 7.

This was quite easy to do. I did not use a computer sentiment analyser as the samples were too small. Anything ambiguous or containing conflicting statements was rated Neutral.

Pos-neg is 0 (positive), 1 (neutral), 2 (negative); these values are scaled up so they can be seen against the word count values. So low red columns are positive (yes, relevant to ‘fake news’) and high red columns are negative (not relevant to ‘fake news’). This is an ordinal scale.

[Graph: fake news vs word count]

Above: Pos-neg is zero (positive) for ‘relevance’ to ‘fake news’. The red Pos-neg 0-1-2 value is scaled to make it easier to see, so a tall red line means negative for relevance to fake news. (There is a discussion of the different statistical calculations in a forthcoming blog. The tests used in the study are Mann-Whitney U, t-tests, ANOVAs and Pearson’s, along with various bar charts and boxplots.)
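The scaling of the ordinal Pos-neg values against the word-count axis can be sketched as follows. This is an assumed approach with made-up numbers, not the study’s data:

```python
# A minimal sketch of the chart scaling described above (assumed
# approach): the ordinal Pos-neg value (0, 1, 2) is multiplied up so
# the red columns are visible next to the word counts.

def scale_posneg(posneg, word_counts):
    """Scale 0-1-2 ordinal scores up to the word-count axis range."""
    factor = max(word_counts) / 2  # 2 = maximum ordinal value
    return [p * factor for p in posneg]

word_counts = [89, 85, 76, 50, 30, 7, 1]   # hypothetical answers
posneg      = [2, 2, 0, 0, 0, 1, 2]        # 0 pos, 1 neutral, 2 neg
scaled = scale_posneg(posneg, word_counts)
```

The scaled values are for display only; any comparison or test still uses the original ordinal 0-1-2 scores.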

At first glance this shows that higher answer word counts are associated with positive for ‘relevance to fake news’.
Low word counts are associated with negative for ‘relevance to fake news’.
There is only one negative out of the top half of the responses (1-9 on the chart above, 2-8 are pos=0).
The highest word count (column 1 on the chart) came from an answer that discussed an actual example of real ‘fake news’, or propaganda, in the Syrian War, so it was a longer than usual negative response and could be an outlier. After ruminating, the respondent decided it was too hard for a text generator to do, writing ‘news of any kind has to be created by humans…’, which is a value judgement (humans are best).

Statistics

Mann Whitney U test

The z-score is 4.25539. The p-value is < .00001. The result is significant at p < .05.

  • People who used the least words thought text generation was less relevant to ‘fake news’.
  • People who used more words in their answer thought it was more relevant to ‘fake news’.
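The Mann-Whitney U test reported above can be sketched in plain Python using the rank-sum method with a normal-approximation z-score. The word counts below are illustrative, not the study’s data, and no tie correction is applied:

```python
# Illustrative sketch of a Mann-Whitney U test (rank-sum method,
# normal approximation for z, no tie correction on the variance).

def mann_whitney_u(xs, ys):
    """Return (U for xs, z-score) comparing two samples."""
    nx, ny = len(xs), len(ys)
    pooled = sorted([(v, 0) for v in xs] + [(v, 1) for v in ys])
    # Assign average ranks to tied values (ranks are 1-based)
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j
    rank_sum_x = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)
    u = rank_sum_x - nx * (nx + 1) / 2
    mean_u = nx * ny / 2
    sd_u = (nx * ny * (nx + ny + 1) / 12) ** 0.5
    z = (u - mean_u) / sd_u
    return u, z

# Hypothetical word counts: 'positive' vs 'negative' answers
pos_counts = [50, 42, 35, 28, 20, 15]
neg_counts = [1, 1, 2, 2, 4, 89]
u, z = mann_whitney_u(pos_counts, neg_counts)
```

In practice a library routine such as `scipy.stats.mannwhitneyu` would normally be used; the sketch just makes the rank-sum logic visible.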

Discussion

In other work, computer Sentiment Analysis revealed that all feedback answers had high scores for ‘Tentative’ and very low (no score) for ‘Confident’. These results could reflect a general lack of experience with text generation (only 11% had previous experience).
There is a forthcoming blog on the Sentiment Analysis; there is already a blog on Sentiment Emotions (Joy etc.).

When making a positive claim (‘there is relevance’) with no ‘Confidence’ and feeling very ‘Tentative’, discursive answers are to be expected.

In the positive answers, the relevance of ‘fake news’ produced more discussion (‘Yes, and here’s why…’). This could be because the respondent felt it was relevant, but had little or no confidence in their feelings and supplied a tentative, more wordy, answer.

Whereas a negative response – text generation is not relevant to ‘fake news’ – can be easily dismissed (‘No’: 1 word).

Ranked Likert scores over the Study vs Pos-neg

[Graph: ranked Likert scores vs Pos-neg]
Mann-Whitney U Test
The z-score is 5.10963. The p-value is < .00001. The result is significant at p < .05.

This is not visually very clear until you sum the raw scores (before scaling for the graph) from columns 1-8 (sum 6) and 2-18 (sum 10).

  • This shows that lower Likert scores (meaning a positive response to the experiments and later questions) relate to lower Pos-neg scores (meaning positive relevance to ‘fake news’). This is what would be expected.
  • People who have a lower general opinion of text generation, don’t think text generation is relevant to ‘fake news’ (and possibly anything else of practical use).

Amateur – Professional

People were asked to select Amateur/Professional (or both, scored as Pro) next to Occupations. Most people had more than one occupation (Poet and Journalist, etc.), so it is not a fixed job allocation, but it provides an indication that the writer either is, or would like to be, paid for at least some of their work.

This did not provide any significance at the ‘fake news’ response level.

There is a forthcoming blog on Amateur/Professional differences.

Comments

Is there a connection between text generation and ‘fake news’?

1/ Human pride, or anthropocentrism: people who dismiss the use of text generation and its relevance to ‘fake news’ say ‘news of any kind has to be created by humans…’.

2/ The idea that a computer could not be used to write news, ‘fake’ or otherwise, might be due to the rather random output of the mid-level (GPT-2) text generator used in the Study. People might think it is producing readable nonsense and brusquely dismiss any practical uses. This is shown in the relation between overall negative Likert scores and low relevance for ‘fake news’.

3/ Perhaps dismissal of a text generation tool in regard to ‘fake news’ is indicative of not believing there is ‘fake news’ (or that it is not a problem), shown in the low word counts of their short negative responses. Further research required.

Notes

1/ Answers

If you have heard of ‘fake news’ do you think this is relevant?

Replies ranged from the four shortest (1 or 2 words)

If you have heard of ‘fake news’ do you think this is relevant?

No (scored 2, neg)

No! (scored 2, neg)

Sure (scored 0, pos)

Not really (scored 2, neg)

to the four longest (89, 85, 76, 50 words; scored 2, 2, 0, 0)

If you have heard of ‘fake news’ do you think this is relevant?

“Unless the AI used to generate the text is extremely advanced, I think any news text generated wouldn’t be very effective — fake or otherwise. I think, at this point in our technology, news of any kind has to be created by humans to effectively educate, manipulate, or make humans react like chickens with their [thinking] heads cut off. If a text generator is going to be effective at writing propaganda, it’s skills are going to have to increase a heck of a lot. Ask again in five years.” (scored 2, neg)

“Very difficult to unpack – particularly as the different sides each claim the other side is distributing fake news. Having followed the Syrian War closely, I know for a fact our major news channels spouted fake news. Could something like this be used in social media to reply with biased opinions? Perhaps, but the existing troll farms used by all sides are often quite sophisticated, with the least sophisticated being the least effective. So, this type of low level autogeneration would not be very effective.” (scored 2, neg)

“Oh yes! Too many people are not analyzing the sources. Maybe that is too big a job or most and maybe many do not know to do this. My son repeats this claptrap to me as if it is gospel and he does not understand me when I tell him that his source is untrustworthy. He thinks if he hears something from dozens of people it must be true. Mass media is powerful in that way.” (scored 0, pos)

“It is an enabler of fake news (eg allowing much faster fake news to be created – especially if fed in the analytics re effectiveness of dissemination etc.) but could also make REAL news reporting more efficient and effective especially if it supports optimum use of human vs AI input.” (scored 0, pos)

The text samples were too small for computer sentiment analysis, so I scored the replies myself (human accuracy is still the best for nuanced material). Much training data for sentiment analysis is annotated by humans before use. See References.

Positive – is relevant – 0

Neutral – don’t know, don’t care or not sure – 1

Negative – is not relevant (not good enough, unsuitable, not required) – 2

How done

Sum the Likert scores (inverting Q3 of the six end questions, i.e. the third question after the three experiments), rank them, then compare to the text answers, which are scored

0 pos (relevant to fake news), 1 neutral, 2 neg (not relevant to fake news).

Note: Q3/6: Do you feel that you have used somebody else’s work?

This Likert was scored the other way round, i.e. the first option (score 1) was the most negative (it is somebody else’s work, i.e. plagiarism).

So in the calculations this score was inverted (1 became 5, etc.).
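The inversion just described can be sketched in a few lines. The question labels and scores below are hypothetical, used only to show the calculation:

```python
# Minimal sketch of the Likert inversion (hypothetical field names).

def invert_likert(score, top=5):
    """Invert a 1..top Likert score: 1 becomes top, top becomes 1."""
    return top + 1 - score

# Q3 of the end questions was scored the other way round,
# so invert it before summing the overall Likert score:
responses = {"Q1": 4, "Q2": 5, "Q3": 1, "Q4": 3, "Q5": 2, "Q6": 4}
responses["Q3"] = invert_likert(responses["Q3"])  # 1 -> 5
overall = sum(responses.values())
```

After inversion, a low overall Likert sum consistently means positive affect across all questions.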

Other Calculations

1/
Overall Likert score (low = positive affect) vs Fake news Pos-neg (relevant – neutral – not relevant)

[Graph: fake news vs Likert]

There is no significance to this relationship.

2/

Answer word count vs Fake news Pos-neg (relevant – neutral – not relevant)

[Graph: fake news vs word count]

References

Sentiment analysis papers:

https://www.kdnuggets.com/2020/06/5-essential-papers-sentiment-analysis.html

What is fake news?

Important researcher: Xinyi Zhou
These are the most recent papers on fake news:

A Survey of Fake News:

https://dl.acm.org/doi/10.1145/3395046

Fake News Early Detection: 

https://dl.acm.org/doi/10.1145/3377478

Generally, fake news is false or distorted information used as propaganda for immediate or long-term political gain. This definition separates it from advertising, which uses similar approaches but is for brand promotion rather than life-or-death matters. The falsity can be anything from an opinion to a website which appears to be proper news but is actually run to spread false information.

This has led to ‘reality checks’, where claims are checked against reality (which still exists). The problem is that fake news spreads very quickly, because it appeals to the emotions, often fear or hate, while corrections take time to process and are very dull, so hardly anyone reads them: they have no emotional content. Certainly the people who react to fake news (if it is proven so) have already moved on to the next item.


Computer-Human Hybrid AI Writing and Creative Ethics

Introduction

This blog is about my 2020 research into computer text generation and its effects on professional and amateur writers. I am working on this topic at the University of the Arts London (UAL CCI, Dir. Mick Grierson).

No one has asked creatives or writers what they think of the new ‘AI’ systems that generate readable text and so directly threaten their jobs, and that could change the way people work forever (or don’t work, forever). This is a topic that directly impinges on self-worth and financial worth in more ways than anyone can imagine, although plenty are worrying.

STUDY – ONLINE EXPERIMENT
August-October 2020

I devised an online experiment on this topic, allowing respondents to create hybrid stories using a text generator. The respondents were all professionals or serious amateurs (and a couple of students), invited from my own creative writing software mailing list, a couple of writing forums, and a publisher’s writers’ forum, plus friends and relatives who generally use writing in their work. Credits are at the bottom.

Text generation

You might have heard of OpenAI’s GPT-2 and GPT-3. My experiment uses a generating system (Fabrice Bellard’s Text Synth, with permission) based on GPT-2, which anyone can use. GPT-2 was used because the model works well for idea generation and was more generally available at the time than GPT-3, which is much larger.

Note: The text generation and editing system is now a free online tool (creativity support tool or CST) at

Story Live writing with AI free online

The experimental results will feed into this blog (see Index for different aspects), later an academic paper, and also a new book for the general public on the whole subject of computers, creativity and writing.

Please sign up for news and notifications – there’s a form on this page.

Brief description of the Study

Below is a graphic of the entire online study. Each block is a page, and the journey runs left to right, top to bottom. The three text generation and editing experiments used a similar set-up to the Story Live tool.

Each writing experiment – Caption, News and Fiction – had a question afterwards, then there were more questions after the experiments (see diagram below). All this will be addressed in blogs here, along with other discussions.

The image writing prompt was the same for each experiment and for all respondents for uniformity (there is a blog on the man and dog here).

Prompt image: man and dog
Flowchart of Study

Geoff Davis

The creativity support tool (CST) from this study is Story Live writing with AI free online

My other creativity tools are Notes Story Board and Story Lite from my Story Software. For my other activities please see the home page of this site.

Study

This study was devised, and the site programmed, by Geoff Davis for post-graduate research at the University of the Arts London Creative Computing Institute (UAL CCI), 2020. The Supervisor is Professor Mick Grierson, Research Leader, UAL Creative Computing Institute.

Text Synth

Text Synth, by Fabrice Bellard, is a publicly available text generator; it was used because this is the sort of system people might use outside of the study. It was also not practical to recreate (program, train, fine-tune, host) a large-scale text generation system for this usability pre-study. Permission to use Text Synth in the study was granted by Fabrice Bellard on 7 July 2020.

Fabrice Bellard, coder of Text Synth.
Fabrice is an all-round genius and writes a lot of open-source software. Text Synth was built using the GPT-2 language model released by OpenAI. It is a neural network of 1.5 billion parameters based on the Transformer architecture.