Escaping the AI Vampire Castle

While I’m not a huge fan of ebooks, I read them out of convenience. I like that I can download free ebooks from the library as well as pick up older books for free online through websites such as Archive.org and Project Gutenberg. My Kindle displays ads in sleep mode (unless you pay Amazon $20 to remove them), and, lately, these ads are almost exclusively for what I’m nearly certain are AI-generated children’s stories. This is just part of a flood of scammy AI books on Amazon.

The AI children’s books advertised on my Kindle combine titles that have an English-as-a-second-language vibe, vaguely manga-style cover illustrations, and author names such as “Leanor Varelade” that either yield no search results or are close to the names of real people (Leonor Varela is a Chilean actress). In short, these books are what you would get if you took a statistical average of the entirety of the internet and barfed it out as a book. This is, of course, what “artificial intelligence” actually is. We don’t, and likely never will, understand what human “intelligence” is, let alone come up with a model of it. What we call AI is just an enormous statistical modeling game, not “intelligence,” whatever that is. AI is part of yet another hype cycle out of Silicon Valley, giving us things that move fast and suddenly break, leaving the world a worse place. We’ve seen this with self-driving cars, cryptocurrency, social media, and Elon Musk’s tunnels to nowhere and failed Hyperloop.

There are certainly useful things we can do with these large statistical models. Writing children’s books, however, is not one of them. It reflects a fundamental misunderstanding of the nature of language, how humans gain experience, and how creativity works. I met a translator recently who does English subtitles for Japanese movies and TV shows. Her work is beginning to be replaced with computer-based translations. She expressed her frustration that the studio bosses don’t understand that the Japanese language is not in some kind of one-to-one relationship with English, that it carries cultural associations and subtleties that no computer will ever be able to parse. In short, translation is interpretation, and human beings need to be involved in that process.

What I got when I asked Google’s AI, Gemini, to create a Thomas Kinkade painting with brutalist buildings.

Part of me admires these AI children’s book hustlers. There’s a creative history to be told about the long arc of scammers, from the card sharks of earlier centuries to the crypto bros of the present. If I taught creative writing, I’d suggest that my students go ahead and try these tools and see what happens. Maybe there’s a great post-modern novel in this technology, a true “death of the author.” And the early, wonky days of AI images produced some hilarious results. But I suspect that the real scam is selling people on the hustle of selling AI books, not actually creating and selling the books. I’ve been unable to figure out whether these titles are the work of an individual or some kind of foreign scam farm. I suspect the latter, since someone has the capital to buy a lot of Kindle ads.

What AI text tools like ChatGPT really excel at is filling out bureaucratic forms, those documents nobody actually reads. An admission: I once used ChatGPT for this purpose and got complimented on my writing skills. Maybe we can replace the bosses with AI bots who will simultaneously generate and read this textual nonsense, leaving us all more time to garden, handcraft chairs, and go for long walks. But that’s not, of course, the way things will work out. AI will likely just put already vulnerable people out of work.

As for predictions of an AI apocalypse, what I fear more is a grinding idiocy. I’m getting fatigued with seeing AI-generated images that are just a kind of summary of the most uninteresting “illustration”-type artwork. Not surprising, as this kind of boring art is likely the majority of the visual content of the internet. I especially hate that moment when you spot it and have to spend precious brain time discerning whether it’s AI or not. AI reminds me of the vampires and daemons joked about by both Marx and St. Paul. As Zizek puts it,

A dead person loses the predicates of a living being, yet he or she remains the same person; an undead, on the contrary, retains all the predicates of a living being without being one — as in the above-quoted Marxian joke, what we get with the vampire is “the ordinary manner of speaking and thinking purely and simply — without the individual.”

2 Comments

  1. I think that the advent of “smart” AI engines will lead to the demise of the internet as the source of any useful information whatsoever.

    An AI engine extracts information from the internet and then tries to do something clever with it. Unfortunately, there is already a lot of garbage on the internet, and the AI engine may not be all that clever. The output from the AI engine will therefore contain all the garbage that it extracted from the internet, plus new garbage that it generated itself due to its lack of cleverness. This output will be put back onto the internet, increasing the quantity of garbage already there. Repeat this a few zillion times and the internet will end up holding 99.9% garbage.

    This problem has been known about since the development of the earliest large-scale computer databases. Unless the input to a database is rigorously validated, the database will rapidly fill up with garbage and become useless. However, having rigorous input validation will only postpone this unfortunate condition, hopefully until the designers have moved on to better jobs.

    • 100% agree. Garbage in, garbage out! Good point about how this will end up being self-reinforcing.
