Sometimes I make simple Twitter bots as an excuse to toy with some piece of tech or language.
Usually they vomit out some kind of procedurally-generated or randomly-selected content on an infrequent schedule, sometimes under the guise of a fictional persona.
Is this art? I suppose this is some kind of art.
A bot that generates periodic public-address announcements for the fictional Black Mesa Research Facility, from the Half-Life series. These announcements take the form of short videos with a synthesized robotic voice speaking over them, occasionally accompanied by sound effects. Written in Python.
In Half-Life itself, the announcement system is used throughout the game as a minor environmental detail as the disaster unfolds, occasionally hinting at the state of the wider facility. Discrete sentences for the announcement system (among other things) are marked up in a semi-human-readable format inside a 'sentences.txt' file; the system speaks them by matching each word to a sound file and playing the files back in sequence. This accessible format enables level designers and modders to construct and adjust announcements without touching any code, and after experimenting with it for a while, it occurred to me that it should be relatively easy to generate new announcements procedurally.
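The lookup at the heart of that system is simple enough to sketch in a few lines. This is a minimal, hypothetical version: the real sentences.txt markup also carries per-word pitch and volume tags, which this ignores, and the helper function is my own invention rather than anything from the game or the bot.

```python
# Minimal sketch of the word-to-sound-file lookup described above.
# 'sound/vox' is the directory used by the game's stock announcer
# voice; the helper itself is hypothetical.

def words_to_sound_files(sentence, sound_dir="sound/vox"):
    """Map each word of an announcement to the clip played for it."""
    return [f"{sound_dir}/{word.lower()}.wav" for word in sentence.split()]
```

Speaking a sentence then amounts to playing the returned files back to back.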
At its most basic level, the bot operates on a Markov model trained on the existing contents of 'sentences.txt', mimicking the announcements made in-game. However, due to the relatively small amount of training material provided, I decided the model was insufficient for a bot that was going to be assembling multiple sentences a day ad infinitum. To keep things fresh, I developed a series of chaos factors. A secondary Markov model based on random arrangements of words in the bot's vocabulary was merged into the primary model with a lower probability weighting, ensuring a small chance of schisms in the middle of sentences. I also developed a lexicon that would query the Wiktionary API for the words in the bot's vocabulary and classify each word based on its English grammatical roles (noun, adjective, conjunction, preposition, etc.). Sentences could then be post-processed by occasionally swapping words with other words in the bot's vocabulary, without disrupting the grammatical structure of the sentence—in theory. In truth, English is complex, and words can have multiple grammatical roles depending on context—something that the bot is nowhere near clever enough to be sensitive to—but as with most machine learning, the best I can hope for is that the average long-term results are acceptable.
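The two-model blend can be illustrated with a toy bigram chain: a primary chain trained on real announcement lines, merged with a lightly-weighted "chaos" chain built from shuffled vocabulary. The corpus lines below are invented stand-ins, and the weights are arbitrary—this is a sketch of the idea, not the bot's actual code.

```python
import random
from collections import defaultdict

def train(chains, sentences, weight):
    """Accumulate weighted bigram counts into a shared transition table."""
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            chains[a][b] = chains[a].get(b, 0) + weight

corpus = ["attention all personnel", "all personnel report to sector c"]
vocab = sorted({w for s in corpus for w in s.split()})

chains = defaultdict(dict)
train(chains, corpus, weight=10)  # primary model: real announcements
chaos = [" ".join(random.sample(vocab, 3)) for _ in range(20)]
train(chains, chaos, weight=1)    # secondary model: random word salad

def generate(max_words=30):
    """Walk the merged chain until the end token or a length cap."""
    word, out = "<s>", []
    while len(out) < max_words:
        choices, weights = zip(*chains[word].items())
        word = random.choices(choices, weights)[0]
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)
```

Because the chaos chain carries a tenth of the primary chain's weight, most transitions follow the real announcements, with an occasional mid-sentence swerve.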
After assembling a sentence string, the bot pieces together its audio output by stitching together the appropriate sound files in sequence—essentially, recreating the game engine's approach—and then uses FFmpeg to pair it with a randomly selected screenshot of an area from the game. I curated these screenshots to feature desolate spaces without any motion, giving the final impression of the announcement playing over 'footage' of the area.
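Both halves of that step are straightforward to sketch. The stitching below uses Python's standard-library wave module and assumes every clip shares the same sample rate and format (as the game's announcer clips do); the FFmpeg flags are a standard still-image-over-audio recipe, not necessarily the bot's exact invocation.

```python
import subprocess
import wave

def stitch(clip_paths, out_path):
    """Concatenate same-format WAV clips into one announcement track."""
    with wave.open(out_path, "wb") as out:
        for i, path in enumerate(clip_paths):
            with wave.open(path, "rb") as clip:
                if i == 0:
                    out.setparams(clip.getparams())
                out.writeframes(clip.readframes(clip.getnframes()))

def make_video(image_path, audio_path, out_path):
    """Loop a still image over the audio track. Illustrative flags only."""
    subprocess.run([
        "ffmpeg", "-loop", "1", "-i", image_path, "-i", audio_path,
        "-shortest", "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path,
    ], check=True)
```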
A small bot that takes advantage of the open storage format of skyboxes in GoldSrc-based games (and the proliferation of user-generated content in many of them). Written in Python.
In GoldSrc, skyboxes are assembled from six distinct image files, each of which is rendered as one of the interior faces of an infinite cube encompassing the entire map, giving the primitive impression of a surrounding landscape. Skyboxes could be hand-drawn, assembled from photographs, or rendered out by a specialised skybox generator such as Terragen. Some were simple landscapes, some were strange and abstract, some were very clearly real-world locales. Due to the limitations of the engine—each individual image must be a Truevision TGA file no larger than 256x256 pixels—many were blurry and dreamlike. As a long-time collector and curator of Half-Life maps, I had accumulated examples of all of the above, and found their vistas alluring enough to share.
The bot is a simple image processing pipeline that randomly selects a skybox view, scales it up, and posts it with the correct bearing (i.e. what surface of the imaginary cube it corresponds to). To make it more interesting, I also wrote a small library of 'status updates' for it to occasionally pair with an image. These status updates present the bot as a transdimensional satellite (hence the name) that is drifting perpetually through the depicted scenes; a sort of self-aware reality-hopping Voyager probe. Sometimes the result is poignant, other times ironic. That's the curse of procedural generation, I suppose.
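The selection step can be sketched as follows. GoldSrc names the six faces of a skybox set with a fixed filename suffix (ft/bk/lf/rt/up/dn); the caption wording is my own, and the upscaling itself would be done with an image library such as Pillow before posting.

```python
import random

# Map GoldSrc face suffixes to human-readable bearings.
BEARINGS = {"ft": "front", "bk": "back", "lf": "left",
            "rt": "right", "up": "up", "dn": "down"}

def pick_face(face_paths):
    """Select a random skybox face and derive its bearing caption."""
    path = random.choice(face_paths)
    stem = path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    return path, f"Bearing: {BEARINGS[stem[-2:]]}"
```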
A bot that posts images of LaserTank puzzles. Written in C#/.NET.
LaserTank is a free/open-source puzzle game played on a grid of tiles, first released in 1995. The player is able to move the tank in any of the cardinal directions and fire the 'laser', which can be used to manipulate or destroy distant objects. While it is possible to die, there is no action involved; the player must simply figure out the sequence of moves to get to the exit. Due to the game's popularity and easy-to-use level editor, it has many thousands of levels, either packaged with the game itself or as part of bonus packs. I played it a lot as a kid due to its modest system requirements, and one day I remembered it and thought, "sure, why not?".
Since I couldn't just tell the bot to open the game and take a picture (I mean, I probably could have, but where's the fun in that?), I had to develop a process to render levels to an image as they'd appear in-game. Building a primitive sprite sheet for the game's various tiles was easy enough, but I also had to reverse-engineer the level format to figure out how to parse it. With a hex editor and a bit of observation, I was able to determine which bytes corresponded to level data, where the boundaries between levels fell, and how the game interpreted them. Fortunately, the amount of information associated with each level is fairly minimal: a name, an author, a difficulty setting, a hint, and the 16x16 board.
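Parsing a fixed-size record like that reduces to unpacking a handful of fields. The bot itself is C#, but the idea translates to a few lines of Python; the field offsets and sizes below are illustrative placeholders, not the real on-disk layout.

```python
import struct

# Hypothetical record layout mirroring the fields the text lists:
# 16x16 board, name, author, hint, difficulty. Sizes are invented.
RECORD = struct.Struct("<256s31s31s256sH")

def parse_level(raw):
    """Unpack one level record into a board grid plus its metadata."""
    board, name, author, hint, difficulty = RECORD.unpack(raw)
    text = lambda b: b.split(b"\x00", 1)[0].decode("latin-1")
    return {
        "board": [list(board[r * 16:(r + 1) * 16]) for r in range(16)],
        "name": text(name), "author": text(author),
        "hint": text(hint), "difficulty": difficulty,
    }
```

Rendering is then just a matter of blitting the sprite for each of the 256 tile values onto a canvas.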