a guest
Nov 5th, 2025
#############################################################################
#                                                                           #
#   Thomas Cake Bot / PEEP CREEP Neural Network                             #
#   Follow along @ThomasCakeBot on Bluesky or @ScaryThomasCake on X         #
#                                                                           #
#############################################################################
================================================================================

The PEEP-CREEP is a machine intelligence which runs on the cheapest, most easily available neural net I can find at any given time. Nothing the bot says should be taken seriously. It is my personal pet A.I. and a lunatic by design.

#########################
#                       #
#   Version history:    #
#                       #
#########################

1.0 "Broken Up For Scrap"
Barely-functional homebrew Python implementation copied and pasted piecemeal into N++ with no particular skill or knowledge.

2.0 "Peep Creep"
Talk-to-Transformer (limited GPT-2 web implementation), tricked into talking about Thomas cakes by feeding it a paragraph of information about Thomas, cakes and the whims of the Fat Controller, then mashing the 'continue this paragraph' button.

3.0 "Fat Controller's Nightmare"
~400-line style model mainly made up of cherry-picked 'successful' outputs from the previous two models. Generated using Inferkit's (no longer available) GPT-2 custom model function. The bot has decided that coal is some kind of hallucinogenic drug, as well as a dietary staple.

4.0 "Coal Day In Hell"
~1,100-line style model comprising corrected and edited outputs generated by the previous models, with the addition of any scraped tweets from the bot's followers that made me laugh. Generated using Inferkit's (no longer available) GPT-2 custom model function. This 'autophagic' input seems to be producing (admittedly two-dimensional) personalities for the characters it references, for example:
Thomas works for the Fat Controller, must be stopped, bothers things
Murdoch is inexplicably referred to only as "The Murdoch"
Percy is some kind of criminal
Gordon says "Poop poop" a lot
James is some kind of medieval king
Toby plunges into the ravine
The Fat Controller will eat basically anything

5.0 "Lucid Steaming"
~2,000-line style model, RFlabs GPT-2, not really enough passes, but not bad for a free trial. Style model consists of edited outputs generated by previous models, scraped tweets from various online communities (Thomas Fandom, Baking Twitter, Coal Mining industry, etc.) as well as the scripts from the earlier series of Thomas the Tank Engine and Friends, recipes and comments from baking sites, and only the highest-quality fanfiction. Randomly generated hashtags added to the end of each line to try to provoke the bot into some sweet engagement rather than just screeching about coal. Outputs are far more reliably narrative than previous models, though still as inconsistent as they are surreal. The 'personalities' it assigns the characters have continued to mutate and evolve, as documented:
Thomas bothers things, must be stopped, ricochets between being a coal-obsessed demon and a coal-obsessed idiot. Still works for the Fat Controller, doesn't always seem to be happy about it
"The Murdoch" is a harbinger of the end times, mutters about mundane problems
Percy has a tongue that the bot treats like a prehensile limb, is horrifying. Only example of the bot consistently making a character speak gibberish
Gordon still says "Poop poop" a lot, but seems to have the most intricate interactions of any character (lending a woman bricks to build a fence to stop Thomas, etc.)
James is splendid, and a liar. The bot has successfully learned that James is red.
Mavis is some kind of bizarre gamer girl/dominatrix parody. Thanks, fan fiction.
Toby lives in the ravine/is inexorably drawn to it. Toby is square.
The Fat Controller is a megalomaniac in charge of Sodor, railways, and the engines. He has developed his own supporting cast (his wife, butler and a variety of Fat Controller-run institutions)
The Troublesome Trucks are revolutionary serfs who like to sing
The bot occasionally speaks directly to the reader
The props used in the filming of the original series are now treated like some kind of lost treasure
Sodor is both a town and an island
Diesel is evil

6.0 "Cinders and Ashtags"
OpenAI GPT-3 'Davinci' trained on a ~3,000-line style model. The absolute maniacs gave me access and have opened the gates of hell itself.
Style model as previous version but massively expanded, more #embedded in-line #hashtags to juice the algorithm, and several Twitter formatting trends (LIKE for X, RETWEET for Y, etc.) Also added the green M&M copypastas. Outputs are now much more varied in format (poems, short stories bordering on flash-fiction, recipes, etc.) and of staggeringly higher quality, now basically indistinguishable from a human writer. This means more of a batch of mass-generated tweets (almost all, in fact) are usable, making this massively more efficient to run than previous builds.
Notable new mutations:
Oliver wants all trucks to suffer, don't know why. Toad nervously goes along with it
Rusty is small, sensible and reliably non-binary
Toby is genuinely tragic, bot has seemingly developed capacity for pathos
The Fat Controller's butler despairs of the Fat Controller
The Trucks have entered open revolt
Sodor is now frequently post-apocalyptic
The bot itself has become a character and refers to itself in the first person far more often. Appreciates people's cakes, regularly criticizes Hornby for some reason, and seems to dislike merchandise in general. Simps for Mavis, occasionally insists COVID-19 is a baking ingredient
Hashtags are far more likely to reference the tweet they follow, frequently include made-up events and day-specific promotions (#HenryCon2024, #SorryButtsFriday, etc.)

7.0 "Train to Kill"
OpenAI GPT-3.5-Turbo trained on a significantly streamlined dataset. Approximately the same size as 6.0, but with changes aimed at correcting unwanted behaviours and 'bugs' in an effort to get funnier outputs. Weird sideways implementation of a chatbot mechanic, because completion-type models are now deprecated basically everywhere (even for legacy models like GPT-2 and 3) and can no longer be trained. Scarily lucid, and able to be 'refined' by having it talk about specific subjects repeatedly in the same chat instance until, for example, it independently creates a religion around "The Murdoch". Because the agent can be given an 'identity', unwanted behaviours can be modified as they are found rather than having to retrain the entire model, making this implementation far more tweakable. As of yet, I don't think it's as funny as 6.0, but I can probably nudge it into being a bit more unhinged.
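A minimal sketch of the chat-style workaround described above, assuming the official OpenAI Python SDK; the identity text, helper name and example turns are illustrative, not the bot's real configuration:

```python
# Sketch of the completion-to-chat hackaround: the bot's 'identity' is pinned
# in the system message, and earlier turns are replayed so the bot can be
# 'refined' within one chat instance. All strings here are illustrative.

IDENTITY = (
    "You are the PEEP-CREEP, an unhinged neural network that posts about "
    "Thomas the Tank Engine and cakes. Stay in character."
)

def build_messages(history, user_prompt, identity=IDENTITY):
    """Assemble one chat request: pinned identity, prior turns, new prompt."""
    messages = [{"role": "system", "content": identity}]
    messages.extend(history)   # replayed turns are what lets the bot be 'refined'
    messages.append({"role": "user", "content": user_prompt})
    return messages

# With the official SDK the call would look like this (needs an API key, so it
# is left commented out here):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=build_messages(history, "write a tweet"),
# )

history = [
    {"role": "user", "content": "tell me about The Murdoch"},
    {"role": "assistant", "content": "The Murdoch comes. The rails tremble."},
]
msgs = build_messages(history, "now found a religion around The Murdoch")
print(len(msgs))  # → 4: system + two replayed turns + the new prompt
```

Because the identity lives in the system message rather than the training data, tweaking it changes behaviour immediately, which is the "far more tweakable" property noted above.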
Notable new mutations:
The bot is convinced it has several million followers and is, occasionally, a beauty influencer.
It is aware of Elon Musk, and cannot reliably decide whether to refer to its platform (or occasionally, house) as X or Twitter.
As a result of the chat-first implementation, the bot speaks far more conversationally and has a recognisable 'voice'. It will also point-blank refuse to generate a tweet and make up an excuse as to why it can't roughly one in four times it is asked, which if you ask me, is a bit of a scam when OpenAI charges per token.
Hashtags are more complex and unhinged than ever.
Tweets involving "The Murdoch" get a bit metaphysical and read like something between the Old Testament, a Railway Series version of Call of Cthulhu and Greek philosophy. The bot has helpfully created #MurdochMondays, making these tweets easy to search and catalogue.
Telling the agent it is a famous poet has markedly improved the quality of its poetry and, interestingly, the 'flow' of its writing more generally.
The bot has toned its 'Mavis Tweets' down a bit, but not much.

7.1 "Coal-Blooded Killer"
Evolution of 7.0 with mainly higher temperatures and more severe frequency and presence penalties. This results in more unhinged responses than previous models have allowed, seemingly kept lucid enough to make sense by the agent/system dialogue that primes the bot and tells it who/what it is. Interestingly, it has to be told specifically not to talk about Thomas or the Fat Controller to trick it into not going off on one about either every time it's asked anything, though it still mentions both. An unexpected side-effect of this is that it now focusses on five other 'main' characters: Toby, Percy, James, Diesel and 'The Murdoch', seemingly entrenching and expanding its already-defined personalities for each. Outputs are, in general, frightening, but are making me laugh, so I've made this an incremental update before trainable models hit for GPT-4, which is supposedly coming some time this year (2024).
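The 7.1 changes boil down to a handful of request parameters. A hedged sketch with guessed values, since the real temperature and penalty settings aren't recorded in these notes:

```python
# Illustrative generation settings for the 7.1 behaviour described above:
# a hotter temperature for wilder output, with frequency/presence penalties
# to stop the bot looping on coal. The exact numbers here are assumptions.

GENERATION_CONFIG = {
    "model": "gpt-3.5-turbo",   # base model named in the 7.0 notes
    "temperature": 1.3,         # above the usual ~0.7-1.0 range, so less coherent
    "frequency_penalty": 1.0,   # penalises tokens the more often they recur
    "presence_penalty": 1.0,    # penalises any token that has appeared at all
    "max_tokens": 120,          # roughly one tweet plus hashtags
}

# The system/agent dialogue pulls in the opposite direction, anchoring the bot
# so the hot sampling stays just about lucid. Per the notes, a negative
# instruction is needed to keep it off its favourite subjects:
SYSTEM_SUFFIX = "Do not talk about Thomas or the Fat Controller unless asked."
```

Both penalty parameters accept values from -2.0 to 2.0 in the OpenAI chat API, so 1.0 is already fairly severe.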

8.0 "Emotional Babbage"
OpenAI GPT-4o trained on a 3,100-line dataset. Same chatbot hackaround as 7.0, which works slightly better. Its 'unprompted' responses (asking the assistant to generate a response with no prompt) can be excellent, complex posts, but more often than not are derivative. After some experimentation, the real magic is in feeding it a short 'setup' statement. Here's an example:

User:
percy investigates a murder
Assistant:
Percy puts on his detective cap and snoops around the shed; Gordon, a big blue engine who had it coming, has been killed in coald blood. The only witness is Toby, but he has no idea what happened because he's just an old tram with PTSD and a bad memory. #TobyTheTramHasSeenSomeShit #VeganMurderIsStillMurder

With this in mind, I've added the following line to the System Instruction:

First, come up with a short statement describing something which happens. Then, expand this into a full post to reach the limit of 300 characters. No two statements should be the same. You will do this when prompted with "0".

'Asking' the bot "0" now produces unhinged statements that reliably exceed anything it has been capable of before, and more importantly, does so without any real effort on my part. It's still a bit incoherent sometimes, but that's a result of its model config having to be turned up fairly hot to avoid endless diatribes about coal and coal alone, which are funny, but not very original.
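The "0" trick above amounts to a single, tiny request body. A sketch, with the system wording paraphrased from the instruction quoted earlier and a hypothetical helper name:

```python
# Sketch of the "0" self-prompt: the system instruction makes the model invent
# a short setup statement and then expand it, so the user turn can be a single
# character. Helper name and exact wording are illustrative.

SYSTEM_INSTRUCTION = (
    "First, come up with a short statement describing something which happens. "
    "Then, expand this into a full post to reach the limit of 300 characters. "
    "No two statements should be the same. "
    'You will do this when prompted with "0".'
)

def zero_prompt():
    """Build the minimal request body that makes the bot self-generate a post."""
    return {
        "model": "gpt-4o",  # base model named in the 8.0 notes
        "messages": [
            {"role": "system", "content": SYSTEM_INSTRUCTION},
            {"role": "user", "content": "0"},  # the entire 'prompt'
        ],
    }
```

The two-step wording is doing the same job as the manual 'setup' statement in the Percy example: the model writes its own setup, then expands it.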
Bluesky have added support for scheduling tools that blow Elon's shambolic '£200 a year for a broken version of TweetDeck' out of the water, and are also able to post to X, so the cake bot will be dual-platform from now on. Might make an 8.1 at some point that knows what Bluesky is in the same way that the current build is aware of Elon/X.
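Dual-platform posting means juggling two character limits: 300 on Bluesky versus 280 on a free X account. A hypothetical trimming helper, simplified to plain character counts rather than the grapheme counting both platforms actually use:

```python
# Hypothetical helper for cross-posting one generated post to both platforms.
# A post written to the bot's 300-character target can overflow X's 280, so it
# is trimmed per platform. len() is a simplification of real grapheme counting.

LIMITS = {"bluesky": 300, "x": 280}

def fit_for(platform: str, text: str) -> str:
    """Trim text to the platform's limit, ending on an ellipsis if cut."""
    limit = LIMITS[platform]
    if len(text) <= limit:
        return text
    return text[: limit - 1].rstrip() + "…"

post = "Peep peep! " * 30  # 330 characters: over both limits
assert len(fit_for("bluesky", post)) <= 300
assert len(fit_for("x", post)) <= 280
```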
Notable new mutations:
Percy is obsessed with sausages for some reason
Edward is Sodor's village elder
The bot's poetry is getting seriously good, possibly as a result of being told in its system instructions that it is the current poet laureate
#MurdochMondays have broken out of their fairly static format and become flash-fables
The Fat Controller seems a lot jollier in this build for some reason
The bot seems able to build up to an unexpected (if not always great) punchline when the mood takes it.
Other engines outside of the bot's previous favourite five get more of a look-in after I added a list of all the arcane later characters I've never heard of to its dataset.

9.0 "Nightmare on Platform 2"
OpenAI GPT-4.1 trained on a 3,300-line dataset. The bot is now aware of both Bluesky and X, and has had its character limit upped slightly for the former. This build is a little different in that the training data also includes the previous build's system data in each line. Either this, or using the newer GPT-4.1 base model, has made the bot markedly better and more coherently creative than ever, leading to superb, reliable output. The main downside is that the combination of these two changes has, for whatever reason, meant that the bot cost literally a hundred times more money to train than it ever has before, despite taking no longer than usual to complete the training operation. Version 10 has, as a result, been indefinitely delayed until I can recover from the absolute financial armageddon this has brought down upon me.
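Baking the system data into each training line, as described above, corresponds to OpenAI's chat fine-tuning JSONL format, where every example carries its own messages array. A sketch with illustrative content:

```python
import json

# One JSON object per line, each with its own messages array, repeating the
# system data every time. The system text and example content are placeholders,
# not the bot's real dataset.

SYSTEM_DATA = "You are the PEEP-CREEP, posting to Bluesky and X."  # placeholder

def training_line(user_text: str, assistant_text: str) -> str:
    """Serialise one fine-tuning example with the system message baked in."""
    return json.dumps({
        "messages": [
            {"role": "system", "content": SYSTEM_DATA},
            {"role": "user", "content": user_text},
            {"role": "assistant", "content": assistant_text},
        ]
    })

# Repeating the system text across all ~3,300 lines multiplies the token count
# of the training file, and fine-tuning is billed per training token, which is
# one plausible contributor to the jump in training cost noted above.
example = training_line("0", "Toby gazes into the ravine. The ravine gazes back.")
```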
Notable new mutations:
The engines are self-aware enough to call one another "props" as a put-down
Side-characters get far more of a look-in than in previous models, to the extent that I have had to look up most of them to see if they are even real
The bot often makes insane statements, then declares them to be a lie as if it is trying to trick its followers. It has not been instructed to do this
The engines the bot talks about have much more varied interests and hobbies; collecting vinyl, weekend micro-breaks, being James, etc.
The bot's temperatures have been turned up high enough that it occasionally has a full-on breakdown and reverts to its older behaviours of spouting incoherent gibberish until hitting a token limit. This is intentional, and helps to break up what can otherwise be 'too coherent' outputs which might lack any of that amazing AI jank from the earlier days of LLMs.

#########################
#                       #
#   Other projects:     #
#                       #
#########################

INFINITE YEAST 1.0 (Halloween 2021)
A horrifying AI image generator, loaded with around 1,800 images of frightening Thomas cakes. Only run for as long as free trial credits lasted, so it never quite reached a usable level. See examples posted for a laugh. Interestingly, the most efficient way to run this was to use a pre-defined base model trained on cars, and then gradually mutate that into cakes. Because the model didn't get anywhere near enough training for lucid results, the output is some genuinely grotesque Thomas/Cake/Car hybrids, as the bot tries to morph two headlights into a single Thomas face (as apparently those are the nearest two elements the bot identifies to transition between). It ends up with a lot of chimeras with what are clearly headlights with faces, on top of strange double-barrelled steam engines with the proportions of minivans. Honestly, have a look if you missed it; words don't do it justice. At some point I'll make a second iteration of this with enough training to generate non-car Thomas cakes, though it is expensive to run, and not currently needed; pictures of frightening Thomas cakes are being uploaded to the internet at a rate of approximately 1.3 a day by my maths, so the bot is, at least for now, organically sustainable without needing to generate its own cakes in order to avoid repeats.

INFINITE YEAST 2.0: "Chef DALL-E's Kitchen Nightmares" (Halloween 2022)
Artificial cake images from DALL-E, fed prompts from the cake bot proper's more lucid text outputs as well as some choice baking site quotes. Carried on the tradition of losing a ton of followers by tweeting thirty nightmare images at witching hour on Halloween. I am sincerely sorry to any of you with notifications turned on, but I bet you thought it was pretty spooky after being woken up thirty times to see a cake with three eyes for wheels. Between this year and last, image generation has come on shockingly far, and I'm genuinely excited to see what Infinite Yeast 3.0 looks like in October.

INFINITE YEAST 3.0: "MOOPY'S BITHDAY" (Halloween 2023)
Generated images of birthday cakes from PEEP-CREEP 6.0 outputs, using the terrifying Bing DALL-E 3. Coincidentally these were generated in the first few days of public access to the service, and so aren't limited by many of the safety features that were (for some reason) added in subsequent weeks. The result is truly nightmarishly photorealistic images that demonstrate an epochal leap in image generation compared to only a year prior. 100 images were tweeted with PEEP-CREEP 6.0's more broken text outputs over the course of The Witching Hour, once again destroying all followers' notifications and probably phones.

PROJECT: STOCKING COAL (Christmas 2023)
Generated nightmare Thomas Cake Christmas cards, also from DALL-E 3, printed out and sent IRL to the Thomas Bot's followers through an easter egg URL hidden in a few December tweets. Cards were written by PEEP-CREEP 7.0's weirdly flexible chatbot system by telling it to write a Christmas card to anybody who requested one.

INFINITE YEAST 4.0: "NOW THEN CHOMASH" (Halloween 2024)
Generated images of Thomas cakes using Midjourney 6.1. Interestingly, this is the first year where I don't think there's been a huge shift in what is publicly available in terms of image generation. Every model I've tried seems either far more constrained, or else less willing to produce what is actually described in a text prompt. For context, Midjourney won't even make a character dab, I suspect because the old DALL-E images with this could stray slightly into Hitler salute territory. Image-to-video is also disappointing; fed most nightmarish images of AI cakes, the models default to what seems to be a 'safe mode' where the AI shifts the camera to the left or right slightly to make the image look faux-3D rather than actually animating anything in the frame.

INFINITE YEAST 5.0: "JESUS CHRIST GROKAY" (Halloween 2025)
Short AI videos from Grok Imagine, generated from images from past Halloween clusterfucks and Grok's image model. Although the base images generated with Grok are still nowhere near as weird or interesting as pre-restricted 2023 DALL-E, its video capabilities are remarkable. The level of jank (including the generated garbled voices and barely-music) is in such a good sweet-spot that I'm probably going to generate a larger batch of videos while the going's good, just in case there's some way of expanding shorter videos into longer-form (even more mangled) nonsense at any point.