Can ChatGPT or other Chatbot interfaces really write anything better than a feeble imitation of postmodern cut-up techniques? Polina and James Mackay think so, and they offer some reasons for holding onto a human, guiding intelligence in the writing process.
Most serious writers refuse to make themselves available to the things that technology is doing. I’ve never been able to understand this sort of fear. Many of them are afraid of tape recorders and the idea of using any mechanical means for literary purposes seems to them some sort of a sacrilege. This is one objection to the cut-ups. There’s been a lot of that, a sort of superstitious reverence for the word. My God, they say, you can’t cut up these words. Why can’t I?
William S. Burroughs interviewed by Conrad Knickerbocker (Third Mind 4).
The idea for this paper began with a simple challenge, one which led us to work more as writing practitioners than as literary analysts. Could we use ChatGPT or other chatbot writing interfaces with large language models (LLMs) to create cut-up texts? And if so, what would be the implications of such an experiment?
This is, in a sense, a fairly obvious extension of cut-up theory. William S. Burroughs was a writer whose family fortune derived from his grandfather's invention of a commercially successful adding machine, a device that automated the dull clerical job of calculation and a predecessor of the modern computer.
Although cut-ups can be and have been produced non-technologically – for instance, in the Surrealist creations known as “exquisite corpses” – they seem to reflect a technologically disenchanted world of the mechanical reproduction and distortion of language. This can be seen in the precedents that Burroughs cited for the cut-up in The Third Mind – Tristan Tzara’s satirical “How to Make a Dadaist Poem,” T.S. Eliot’s The Waste Land, John Dos Passos’s USA – as all involve materials that had been through the mechanical process of mass printing (Burroughs and Gysin 3-4; Gray 141). Indeed, the cut-up method has emerged spontaneously several times since the invention of the columnar newspaper format, with the oldest example traced by Paul Collins to Caleb Whitefoord’s 18th-century publication of humorous conjunctions he found in the papers (Collins).1
Examples from Whitefoord:
Dr. Salamander will, by her Majesty’s command, undertake a voyage round—
The head-dress of the present month.
Wanted to take care of an elderly gentlewoman—
An active young man just come from the country.
Yesterday the new Lord Mayor was sworn in,
and afterwards toss’d and gored several Persons (Collins).
Just as Charles Bernstein has argued that the techniques of lyric poetry could not have evolved without their being written down, so the techniques of the cut-up rely on the constant bombardment of random and unrelated textual snippets that is a feature of modern life, from newspaper articles pre-cut with advertisements to discarded and rotting leaflets, their text only half-visible (Bernstein, Attack 95-96; Bernstein, “Art” 31-33).
So cut-ups are always already in conversation with technology. Producing them with AI, moreover, recognizes the newer forms and attitudes towards text that have arrived with personal computing and the Internet. As Kenneth Goldsmith explains it in his manifesto on uncreative writing, the cut-up is a form of "twentieth-century postwriting print-based détournement" but still dependent on human manual labour (6). More recent cut-up authors are able to work with much larger language sets: Jordan Abel, for instance, created from Project Gutenberg a corpus of almost 100 dime Westerns to mine for poems and visual art pieces (Injun 83).
A paper published in Scientific Reports in August 2023 notes that “large language models […] can produce text indistinguishable from human-generated work” (Ibrahim et al.). The first purpose of our experiments was to test this claim as it might pertain to experimental and avant-garde writing. Could an AI really write anything better than a feeble imitation of a postmodern technique? To do so, however, we needed to decide what would make our cut-ups indistinguishable from something generated by a human experimental writer.
Since a cut-up text is by its nature at least partly mechanical, it would be trivially easy to program a computer to simply slice up texts into random chunks and fit them back together. In fact, this would not require artificial intelligence at all – if the results were to be truly random, any standard text editing program would do. The nature of the cut-up, however, is not random: instead, it is human craft that decides which combinations are most useful. Jordan Abel writes in his process statement for Injun that “Sometimes I would cut up a page without looking. Sometimes I would rearrange the pieces until something sounded right. Sometimes I would just write down how the pieces fell together” (83). Burroughs and Gysin describe this “narrative turn” in their cut-ups as “an operation of decoding,” and it is precisely this need for the controlling figure of the author as deciding intelligence that led Gilles Deleuze and Félix Guattari to decry the cut-up as an example of a fascicular root still seeking spiritual unity rather than a true rhizome (Burroughs and Gysin 14-15; Deleuze and Guattari 5).
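The purely mechanical step really is trivial to automate, which is what throws the human act of selection into relief. A minimal sketch of the "no intelligence required" version (the function name, chunk sizes and separator are our own illustrative choices, not any tool discussed in this article):

```python
import random

def random_cutup(sources, chunk_words=4, n_chunks=12, seed=None):
    """Naive mechanical cut-up: slice the source texts into fixed-size
    word chunks and reassemble a random selection of them. No deciding
    intelligence is involved at any point, which is precisely what
    distinguishes this from the cut-ups discussed above."""
    rng = random.Random(seed)  # seed makes the "randomness" repeatable
    chunks = []
    for text in sources:
        words = text.split()
        for i in range(0, len(words), chunk_words):
            chunks.append(" ".join(words[i:i + chunk_words]))
    rng.shuffle(chunks)
    return " -- ".join(chunks[:n_chunks])

# Two fragments in the manner of Whitefoord's newspaper conjunctions
print(random_cutup([
    "Yesterday the new Lord Mayor was sworn in and afterwards",
    "the head-dress of the present month undertook a voyage round",
], chunk_words=3, n_chunks=4, seed=1))
```

Everything a program like this lacks is the editorial pass: deciding which of the shuffled chunks "sounded right," in Abel's phrase.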
The challenge, then, is to create texts, using ChatGPT, that look and feel like cut-ups that have involved a (human?) guiding intelligence in the writing process.
The first set of experiments was run in March 2023, building on earlier work that we had tried with GPT-2 in 2022. The most recent ones we carried out in May 2024. During that time, generative AI underwent several changes, the most significant being the launch of GPT-4, to which we took out a personal paid subscription, and then GPT-4o, which proved to be best at the task. This variance turned out to be useful for a couple of reasons, not least that it provides a snapshot of the advance in artificial intelligence’s capabilities during this period. Moreover, as will be seen below, the difficulties we experienced using cruder versions helped us discover less obvious problems with the fine-tuned GPTs.2 A peer reviewer for this article suggested that we try out the more customisable versions of ChatGPT, such as the discontinued da Vinci model. For a number of reasons, including cost and lack of experience in computer programming, we were not able to work with these tools. Since part of the nature of our experiment was precisely to do with the ways that generative AI can become an inadvertent agent of control, we also believe that the focus of this article needs to be on public-facing and simple interfaces for generative AI. See also our comments in the conclusion.
The least successful experiments we tried were with Sudowrite, an AI-based commercial platform for writers. We thought this AI might be a better fit for a creative writing project, as it is, after all, designed solely to produce creative texts. We fed in sections of cut-up writing from Minutes to Go and The Soft Machine, and asked the program to continue in the same vein and style. Here is a sample result:
“ First we go into the city and get a few of the big kids to help us out—So we kidnap the Mayor and take him to the jungle—We tie him to a tree and then we start cutting away at his clothes—He starts to scream and we cut off his lips and take out his teeth—We leave him there naked and screaming—Then we go into the city and start kidnapping the kids—We take them to the jungle and tie them to trees—We start cutting away at their clothes—They start to scream and we cut off their lips and take out their teeth—We leave them there naked and screaming—Then we go into the city and start kidnapping the pregnant women—We take them to the jungle and tie them to trees— “
Self-evidently, this is very repetitive stuff. What we discovered was that the program, which only offers limited options for adjusting outputs, is strongly biased towards linear storytelling. Cut-ups do the job that Burroughs and Gysin intended for them to do, disrupting pre-set patterns of narrative, characterization, and pre-chewed logic. The result for a program like Sudowrite is a kind of helplessness, since the AI is very strongly told that its job is precisely to generate texts that conform to standard patterns of narrative storytelling. A tool designed mainly to help commercial self-publishing writers generate books that robotically follow templates such as the Hero’s Journey simply cannot incorporate Burroughsian disruption.
This might seem like a low-stakes issue: a tool cannot do a job for which it was not designed. However, it does illustrate one danger of the already increasing numbers of writers making use of artificial assistants. Certainly, most commercial writers already employ such concepts as Save the Cat, and variants such as Dan Harmon’s “Story circle,” or JJ Abrams’ “Mystery box” – but as props for planning rather than rigid structures. What will be the effect of a tool that automatically begins with the assumption that these are the only ways to tell stories? To take one example: research into breast cancer narratives repeatedly demonstrates the mental damage that can be caused by the “battle” metaphor that so many women bring to their illness experience (e.g. Leopold 165-167). This narrative, resembling the hero’s journey, can create a viciously scarring sense of failure if treatments do not go as planned. Tools that reinforce the idea of the single right way to tell a story have the potential to reinforce false and damaging expectations of the world.
However, we decided that the main problem with working with Sudowrite was that in its 2023 iteration it did not allow us to give it detailed instructions.3 The site has subsequently changed its interface and allows for more detailed prompts and a wider range of possible narrative arcs. Its fundamental incompatibility with the cut-up, however, remains. We therefore turned next to ChatGPT, a natural language processing interface for a large language model which is exceptionally good at interpreting inputs written in conversational English. We began with ChatGPT’s version 3.5, which at the time of writing is freely available to the public. We asked it to explain what a cut-up poem was, added some additional information to explain exactly what we were asking it to do, and got the following semi-poem:
A symphony of sirens, a fragmented dance, a collage of chaos, my subconscious wilderness.
Existence’s fragments, a mosaic of life, emotions colliding, a kaleidoscope of dreams.
Passion’s artistry, a cut-up canvas, flickering surreal landscapes, a disjointed rhythm.
Hearts torn apart, evoking imaginations, a surreal montage of intensity.
Generally, most of our early experiments came out that way. These are easily distinguishable from real cut-ups in any number of ways. They come in extremely regular segments of three or four words at a time. Rather than actually being surreal, fragmentary or disjointed, these fake cut-ups regularly tried to tell us they were these things by incorporating words such as “surreal,” “fragmentary” and “disjointed.” This paradoxically introduced an element of regularity and rationality that actively defeated the usual affect of the cut-up, which queers language through violently unexpected juxtapositions and violations of the rules of grammar. These artificial cut-ups also constantly employed pretentious language, with a perceived effect of trying too hard to emotionally impact or to intellectually impress the reader.
Note in our analysis here how difficult it is to talk about ChatGPT’s efforts without ascribing human intelligence and intentionality to the machine. Although these earlier efforts were poor imitations of cut-ups, they were not markedly worse than those we have seen produced by some of our undergraduate students experimenting with cut-ups in a class on Kathy Acker. The errors are already errors of writing style.
Once we had access to ChatGPT-4, we tried a different approach. We gave the chatbot examples of cutup texts and asked it to analyze them – a sample generated analysis included the insight that “Sexuality and violence are presented almost interchangeably, suggesting a world where the boundaries between pleasure, death, and pain are blurred. This might be used to critique a society that's lost its moral compass or to display the raw, primal forces at play in this setting.” ChatGPT was also able, without additional context, to identify the passage as reminiscent of the work of William S. Burroughs.
We then took the chatbot through methods of producing cut-ups, and discussed the effect of the disjunctive prose. Finally, we asked it to produce new work in the same style. We re-ran this experiment a few times to generate different pseudo-cutups, emphasizing different Burroughsian themes each time. Here is one of the better examples:
Scalpel pirates… a pulse in the viral night… language...the fractured wind of… myth—cracked by ancient gods… Sirens singing through the ether—Unconscious seeping… Words, contagion… infinite incubation—buried deep… within cellular structures—whispering code… the epidemic… of syntax. Bacterial in nature… or viral—language, a parasite—a mutation—lurking beneath the tissue of the tongue—infected… the throat—a maze… of Greek heroes and… Egyptian deities… spread through osmosis—dripping… from the mouths of… pirates… navigating the seas of… reason. Ulysses… eye… a scalpel—slicing through…
Could any of the pseudo-cutups we generated be passed off as fragments of Burroughs? Yes, at least to non-experts. That accords with the conclusion of a recent paper recounting a similar attempt to mimic the style of H. P. Lovecraft. Its authors created a detailed prompt explaining Lovecraft’s style and giving ChatGPT samples to work with, then asked the AI to create a “new” Lovecraft story (Garrido-Merchán, Arroyo-Barrigüete and Gozalo-Brizuela 9). While that study has not yet been published and is currently reliant on very short samples and inexpert readers, the authors do show that most undergraduate students were unable to reliably differentiate between samples of Lovecraftian prose and similar-length imitative samples written by ChatGPT.
The end point of our experiments was not to follow generations of bad writers in generating pastiche Burroughs, however – we were interested in how far we could push ChatGPT into creating its own cut-ups. This would need to involve all elements of the cut-up process: material selection, cutting, selection of fragments, and editing. If the machine could manage all of this, we wondered, what might the result be?
We asked the AI to suggest four random article topics, then fed its responses into Google News and copy/pasted the first articles we found into ChatGPT. Since the AI could not physically manipulate texts, we told it to split the texts into random phrases, and then recombine them (again at random). This was immediately difficult, as the AI proved to have a strong tendency to insert words of its own to create logical bridges between fragments. One sentence it generated was “It's not easy guarding the future of clean energy, the sun’s power merging with the bugs to feed newly hatched cities like Guantánamo and Baracoa” – looking back at the original articles, it had added the words “merging with the” and “to feed” to try to connect disparate fragments and random words. This is not entirely outside of normal practice by cut-up artists – Dodie Bellamy, for instance, says of her process that:
If a word is cut in half, you may finish it any way you like […] Words may be deleted; tenses and grammatical categories may switch. […] Pronouns are totally up for grabs. [And t]hough the practice is discouraged, extra words may be inserted, cautiously and sparingly, mostly for rhythmic purposes (“Cunt Norton”).
Jordan Abel, similarly, describes taking cut-up materials from The Last of the Mohicans and “writing over them, writing through them, writing around them and writing with them” to produce his 2023 book Empty Spaces (Nishga 263).
However, we wanted to get the AI to follow all the steps of the cut-up process. We therefore insisted, with a strongly reinforced prompt, that it only use words from the original. While the results still included some inserted words, and while the AI proved incapable of splitting words as scissors can do, we did manage to get some more interesting results.
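A constraint like "only use words from the original" is easy to verify mechanically after the fact, even if the model struggles to obey it. A minimal sketch of such a check (the function name and the simple tokenization are our own illustrative assumptions, not part of our prompting method):

```python
import re

def inserted_words(source: str, cutup: str) -> set:
    """Return the words in a cut-up that never appear in the source text --
    a quick way to flag the 'logical bridge' insertions described above."""
    def tokenize(s):
        # lowercase word tokens, keeping internal apostrophes ("sun's")
        return set(re.findall(r"[a-z']+", s.lower()))
    return tokenize(cutup) - tokenize(source)

# Abridged version of the Google News example discussed above
source = ("It's not easy guarding the future of clean energy "
          "the sun's power the bugs newly hatched cities")
cutup = "the sun's power merging with the bugs to feed newly hatched cities"
print(sorted(inserted_words(source, cutup)))  # ['feed', 'merging', 'to', 'with']
```

Running a check like this after each generation would make the model's "smoothing" insertions visible immediately, though in our experiments we relied on manual comparison with the source articles.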
We then asked the AI to select what it saw as the most interesting recombinant fragments, and to explain its rationale. It proved surprisingly good at this stage. Here are two examples:
First selected sentence: “Hot honey is only getting hotter, like the eggs in the nest”
ChatGPT’s rationale for selection: “This evokes a sense of increasing temperature, a nod to global warming, but juxtaposed in an odd, domesticated way.”
Second selected sentence: “Asking questions about the Ciudad Primada, they awake him with cold water and the legacy of the Spanish from the 16th century.”
Rationale: “This sentence contrasts the act of interrogation with a method of physical reviving. The addition of 16th century Spanish legacies throws the reader into a temporal loop, juxtaposing modern action with ancient history.”
We largely found ourselves agreeing with the AI’s selections for the most promising or surreal sentences, and with its readings of them. Finally, we asked it to recombine the chosen sentences in any way it liked, ending up with several pieces of original cut-up writing that had been wholly produced by AI from original material selection to final output, with only methodological assistance from us.
As can be seen in the extract below, the product is not merely random: it contains interesting juxtapositions that speak to specific themes:
To solve the song sparrow dad, Pine Ridge and Pizza Hut intertwine. Investments in the shadow of energy efficiency darken our vision while city living suits male sparrows. Hot honey is only getting hotter, like the eggs in the nest. Dutch Bros and Burger King, with the search for bugs, play a dissonant symphony. Chick-fil-A and the paradoxical ways of Red Cloud Renewable Energy Center merge, and InOurHands partners with the era of swicy, signaling a metamorphosis of time.
This compares well with early Burroughs experiments, for instance the following paragraph, created by cutting up articles from “Time and New York Herald Tribune”:
Her 'Fourth Grade Class screamed in terror when I looked at the dogs and I looked at the pavement. .. decided the pavement was safer . . . Stale streets of yesterday policemen back from shadows to embrace his assailant. . . pretty familiar. Talk to my medium . . . Remember my medium of appalling conditions conditions?: (Burroughs and Gysin 58)
We would argue that this extract from The Third Mind shows a more definite style: Burroughs, as is frequently the case, selects for violence and paranoia. Unlike ChatGPT, Burroughs is willing to allow for the scissors to violate the integrity of words in the source text (“AMBOURG, USSELDORF” (56)), and to have meaningless repetitions, which ChatGPT resisted completely.
A final experiment took place in May 2024, on the release of ChatGPT-4o, which OpenAI promotes as having additional capabilities. When we asked it to repeat the earlier experiment of generating ersatz Burroughsian texts, it certainly showed itself to be better at the task than any previous iteration, though still clearly interested in inserting thematic materials. Across multiple versions, it tended to create interestingly surreal first sentences, but then follow them with explanations, as in this example:
Language slices through the café’s haze—a virus, alive. Words mutate, infecting the listeners, rearranging their thoughts. The menu reads like a manifesto, each item a command, a suggestion, a subliminal twitch.
When we asked the chatbot about the question of obscenity, it volunteered that it would be able to include swearwords, which resulted in at least one fragment that seemed to us to get quite close to the spirit of Burroughs’ writing:
Futuristic landscapes, where machines digest human screams, convert them into cold hard cash. Chrome limbs twitch, haunted by ghost protocols. “Where does the man end and the machine fucking begin?” a voice crackles from a discarded radio, amidst the ruins of yesterday’s goddamn tomorrow.
However, when we asked it to incorporate elements of violence, sex or drugs, it demurred, stating that “While I can create content with mature themes in mind, like references to sexuality, I do need to keep the language and content appropriate for all audiences.”
The new capabilities of 4o came in useful in a different way as well. We quickly discovered that, unlike 4 or 3.5, it was able to “read” pdf and Word documents, and imagine a virtual cut-up process. Whereas with the earlier experiments we had been careful to feed it fairly bland news articles, we now gave the AI two much more confrontational sources to draw from: firstly, a scientific article on the treatment of metastatic prostate cancer; and, secondly, an opinion piece from the far-right “news” outlet Breitbart News on the Eurovision Song Contest (Yamada and Beltran; Martel). The AI initially read only the first of the two documents and then provided a kind of summary in bad parody-Burroughs speak (example: “life expectancy measured in reluctant pauses, breaths caught between statistical revelations and the quiet dignity of resistance.”) After some reinforcement, however, it proved capable of the task:
Fevered visions interrupt the radium-223; clinical trials meet Israeli flags; mutation or deletion she dedicated; current treatment landscape of metastatic; receiving bone health agents; randomized to receive radium-223; faces hostility from fellow; biomarker-driven therapeutic strategies based
Further, this newer OpenAI model was capable of more nuanced reasoning when we asked it to select the most promising phrases from the cut-up it had generated. We repeat its first two selections and rationales below:
1. “Fevered visions interrupt the radium-223 - clinical trials meet Israeli flags”
This juxtaposition captures a fascinating blend of medical imagery with political symbolism. “Fevered visions” evokes a sense of delirium or intense imagery, possibly relating to the mental state during illness or the chaotic nature of dreams, while “radium-223” grounds the phrase in a specific medical context related to cancer treatment. Bringing in “clinical trials meet Israeli flags” adds a layer of geopolitical nuance, perhaps alluding to international cooperation or controversy in medical research or public health initiatives. This combination stands out for its vivid imagery and the way it weaves together disparate realms—medical, psychological, and political—into a single, compelling narrative.
2. “gene therapy—biological dogfights in - under the Elsevier user license”
This phrase intriguingly combines the scientific with the mundane administrative. “Gene therapy” is a cutting-edge medical technique that holds significant promise for treating diseases at their genetic roots. Describing this as “biological dogfights” dramatically personifies the cellular battles that gene therapy entails, suggesting a violent confrontation at a microscopic level. Following this with “under the Elsevier user license” abruptly shifts the scene from the biological to the bureaucratic, highlighting the often overlooked but crucial role of licensing and access in the dissemination of scientific knowledge. This stark contrast underscores how high-stakes scientific endeavors are tightly regulated and commodified within legal frameworks.
Although explained in drier and blander language, this seemingly shows a comprehension of Burroughs’ idea of writing as “an operation of decoding, of contamination, and of sense perversion.”
Conclusion
We have shown that the answer to our original question is yes. It certainly is possible to create new cut-up writing using artificial intelligence platforms that require zero knowledge of programming. As the technology improves, in particular as it becomes able to incorporate and generate longer texts, we believe that attempts at automated cut-up writing, already difficult to distinguish from neophyte attempts at the form, can only improve. ChatGPT is able to combine, select, edit and rewrite materials in the manner of a human artist.
That said, there are some distinct restrictions that we also observed. Firstly, when we say that it is “able” to do these things, it can only do them under instruction. The AI here functioned like one of Andy Warhol’s assistants, following instructions dutifully, rather than operating as an independent consciousness. The cut-ups we produced were not particularly good, but that may be because we are not professional writers, and were mostly interested in whether it could be done at all. We are reminded here of Dodie Bellamy’s observation that “straight guy experimental writing can be so fucking boring” – in other words, experimentation needs to have a point beyond the experiment (Barf Manifesto 11). In the purposive hands of more competent and creative writers, AI might be able to produce something much better. As Burroughs himself observed: “Can recognize Rimbaud cutup as Rimbaud . . . A Melville cut-up as Melville . . . Shakespeare moves with Shakespeare words . . .” (72). Also, the most meaningful combinations we elicited came from the final experiment, and these took their power from our own subliminal connections to the material (a close relative has metastatic cancer, and we are in an area highly affected by the crisis in Gaza referenced in the Breitbart article, so a combination like “phase 3 trial randomized patients home to cheers and fearlessly” had more of an effect on us than it would have had for a random observer).
Secondly, and somewhat relatedly, there are particular difficulties in working with ChatGPT, due to the guard rails written into the model. The chatbot finds it hard to cope with obscenities of any kind in inputs, and has a strong aversion to producing anything that might trigger offence. Given the strong association between the cut-up form and both obscenity and offence, this made it hard to even give it materials to work with, and effectively neutered it from producing particular combinations in its outputs.4 This is not necessarily a problem that will exist for future cut-up writers: it is getting easier and easier to create one’s own LLM, and we can see a future in which a writer might commission a custom AI that’s perfectly comfortable with obscenity, just as there are already an increasing number of AI pornographic image generators. A good recent example is Nick Montfort’s use of such LLMs to create what he calls “Digital Text Surrealism” (Montfort).
Finally, the AI, even after we had created lengthy prompts explaining the value of broken grammar and unexpected combinations of words, was unable to prevent itself from smoothing the results. It introduced verbs, adjectives and conjunctions that attempted to explain the relationship between the nouns. Even 4o, which was better at the task than previous versions, ended up changing verb tenses and other elements to make better grammatical sense. While the idea of an authentic cut-up is ludicrous – cut-ups being a form that resists notions such as authenticity and originality – the results definitely felt inauthentic.
Overall, our experiments left us unambiguously excited for future intersections of artificial intelligence and experimental writing, but what they also revealed was the propensity for AI to subtly control human discourse. Throughout the process, we constantly felt ourselves being pushed to be nice, to make sense, to fit a bland norm of human discourse. The cut-up, as described by Burroughs and Gysin in The Third Mind, is a tool for escaping dominant narratives that operate virus-like on our conscious minds, allowing us to perceive otherwise hidden psychic realities. The guard rails placed around ChatGPT operate in exactly this viral manner, influencing the writing of the more than 100 million users estimated to have signed up to the platform. As we have shown, the tool can be wrested from the hand of the controller and used for other purposes, but the dominance of these master narratives in text generation is troublesome, and their effects in a world where AI-generated text threatens to take over the internet are yet to be fully appreciated.
Works Cited
Abel, Jordan. Injun. Talonbooks, 2013.
—. Nishga. McClelland & Stewart, 2021.
Bellamy, Dodie. Barf Manifesto. Ugly Duckling, 2008.
—. “Cunt Norton.” PEN America, 21 February 2014, https://pen.org/cunt-norton/. Accessed 1 September 2023.
Bernstein, Charles. “The Art of Immemorability.” Journal of Philosophy: A Cross-Disciplinary Inquiry, vol. 2, no. 6, 2006, pp. 30-40.
—. Attack of the Difficult Poems. University of Chicago Press, 2011.
Burroughs, William S. The Soft Machine. Olympia Press, 1961.
Burroughs, William S., Brion Gysin, Gregory Corso and Sinclair Beiles. Minutes to Go. Two Cities Editions, 1960.
Collins, Paul. “The Lost Symphony.” The Believer. 1 November 2004. https://www.thebeliever.net/the-lost-symphony/.
Deleuze, Gilles, and Félix Guattari. A Thousand Plateaus. Translated by Brian Massumi, University of Minnesota Press, 1987.
Garrido-Merchán, Eduardo C., José Luis Arroyo-Barrigüete, and Roberto Gozalo-Brizuela. “Simulating H.P. Lovecraft Horror Literature with the ChatGPT Large Language Model.” arXiv, 2023, https://arxiv.org/abs/2305.03429. Preprint.
Goldsmith, Kenneth. Uncreative Writing: Managing Language in the Digital Age. Columbia UP, 2011.
Gray, Kathelin. “Burroughs and the Biosphere, 1974-1997.” William S. Burroughs Cutting Up the Century, edited by Joan Hawkins and Henry Alexander Wermer-Colan, Indiana University Press, 2019, pp. 131-152.
Ibrahim, Hazem, et al. “Perception, Performance, and Detectability of Conversational Artificial Intelligence across 32 University Courses.” Scientific Reports, vol. 13, 24 August 2023, https://www.nature.com/articles/s41598-023-38964-3.
Leopold, Ellen. A Darker Ribbon: Breast Cancer, Women, and Their Doctors in the Twentieth Century. Beacon Press, 1999.
Martel, Frances. “Israel’s Eurovision Singer Eden Golan Comes Home to Cheers: ‘I Was a Voice’ for the Hostages.” Breitbart, 13 May 2024, www.breitbart.com/middle-east/2024/05/13/israels-eurovision-singer-eden-golan-comes-home-cheers-voice-hostages/.
Montfort, Nick. “Automatism for Digital Text Surrealists.” Electronic Book Review, 5 May 2024, electronicbookreview.com/essay/automatism-for-digital-text-surrealists/.
Yamada, Yasutaka, and Himisha Beltran. “The Treatment Landscape of Metastatic Prostate Cancer.” Cancer Letters, vol. 519, Elsevier BV, Oct. 2021, pp. 20–29, https://doi.org/10.1016/j.canlet.2021.06.010.