Characters comment on AI
Original Text
OpenAI CEO Sam Altman expects AGI, or artificial general intelligence—AI that outperforms humans at most tasks—around 2027 or 2028. Elon Musk’s prediction is either 2025 or 2026, and he has claimed that he was “losing sleep over the threat of AI danger.” Such predictions are wrong. As the limitations of current AI become increasingly clear, most AI researchers have come to the view that simply building bigger and more powerful chatbots won’t lead to AGI.
This story is from the WIRED World in 2025, our annual trends briefing.
However, in 2025, AI will still pose a massive risk: not from artificial superintelligence, but from human misuse.
These might be unintentional misuses, such as lawyers over-relying on AI. Since the release of ChatGPT, for instance, a number of lawyers have been sanctioned for using AI to generate erroneous court briefings, apparently unaware of chatbots’ tendency to make stuff up. In British Columbia, lawyer Chong Ke was ordered to pay the opposing counsel’s costs after she included fictitious AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations. In Colorado, Zachariah Crabill was suspended for a year for using fictitious court cases generated with ChatGPT and blaming a “legal intern” for the mistakes. The list is growing quickly.
Other misuses are intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. These images were created using Microsoft’s “Designer” AI tool. While the company had guardrails to avoid generating images of real people, misspelling Swift’s name was enough to bypass them. Microsoft has since fixed this error. But Taylor Swift is the tip of the iceberg, and non-consensual deepfakes are proliferating widely, in part because open-source tools to create them are publicly available. Legislation in progress around the world seeks to combat deepfakes in the hope of curbing the damage. Whether it will be effective remains to be seen.
In 2025, it will get even harder to distinguish what’s real from what’s made up. The fidelity of AI-generated audio, text, and images is remarkable, and video will be next. This could lead to the “liar’s dividend”: those in positions of power repudiating evidence of their misbehavior by claiming that it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk could have been a deepfake, in response to allegations that the CEO had exaggerated the safety of Tesla Autopilot, leading to an accident. An Indian politician claimed that audio clips of him acknowledging corruption in his political party were doctored (the audio of at least one of the clips was verified as real by a press outlet). And two defendants in the January 6 riots claimed that videos they appeared in were deepfakes. Both were found guilty.
Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products by labeling them “AI.” This can go badly wrong when such tools are used to classify people and make consequential decisions about them. Hiring company Retorio, for instance, claims that its AI predicts candidates’ job suitability based on video interviews, but a study found that the system can be tricked simply by the presence of glasses or by replacing a plain background with a bookshelf, showing that it relies on superficial correlations.
There are also dozens of applications in health care, education, finance, criminal justice, and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to identify people who committed child welfare fraud. It wrongly accused thousands of parents, often demanding that they pay back tens of thousands of euros. In the fallout, the prime minister and his entire cabinet resigned.
In 2025, we expect AI risks to arise not from AI acting on its own, but because of what people do with it. That includes cases where it seems to work well and is over-relied upon (lawyers using ChatGPT); when it works well and is misused (non-consensual deepfakes and the liar’s dividend); and when it is simply not fit for purpose (denying people their rights). Mitigating these risks is a mammoth task for companies, governments, and society. It will be hard enough without getting distracted by sci-fi worries.
Comment by Henry Chinaski
Listen, I’ve spent enough nights staring at the bottom of a glass to know bullshit when I smell it. And right now, the stench coming from our tech prophets is stronger than my morning breath after a night of cheap bourbon.
Sam Altman, looking like he just graduated from his first semester at Stanford, tells us AGI is coming in 2027. Elon Musk, between Twitter meltdowns and whatever the hell X is supposed to be, says 2025. These guys are losing sleep over AI? Hell, I lose sleep too, but that’s usually because my upstairs neighbor thinks 3 AM is the perfect time to practice river dancing.
Here’s the thing about predictions in this business: they’re like my New Year’s resolutions. Remember when I swore I’d quit smoking in 2020? Yeah, I’m on my second pack today.
The real comedy isn’t in these doomsday predictions - it’s in what’s happening right now while these billionaires clutch their pearls about future robot overlords. Let me pour another drink and break this down for you.
Take these lawyers getting caught with their AI pants down. Some genius in British Columbia thought it’d be clever to let ChatGPT write their legal briefs. Surprise, surprise - the AI made up court cases that never existed. That’s like me trying to impress a date by making up stories about being an astronaut. Except I do that after ten drinks, and I’m not billing anyone $500 an hour for it.
And speaking of things that aren’t real, let’s talk about that Taylor Swift deepfake mess. Microsoft’s fancy AI couldn’t even handle a misspelled name. “Sorry, we didn’t mean to generate inappropriate content of ‘Tylor Swft’!” Real high-tech solution there, geniuses. Bet that meeting was fun. “Sir, we’ve developed cutting-edge protection against AI abuse!” “Did you try… spelling things wrong?” “Oh shit.”
The whole thing’s turning into a circus where nobody knows what’s real anymore. Tesla tried claiming a 2016 video of Musk might be a deepfake. That’s like me telling the bartender my tab from last week must be AI-generated. And some January 6 defendants actually tried claiming videos of them were fake. Bold strategy, Cotton.
But wait - it gets better. Companies are slapping “AI” on everything like it’s hot sauce at a taco stand. There’s this outfit called Retorio that claims their AI can judge job candidates through video interviews. Turns out you can fool it by wearing glasses or standing in front of a bookshelf. That’s not artificial intelligence, that’s artificial stupidity.
The Dutch tax authority really takes the cake though. They built an AI system to catch welfare fraudsters. Instead, they screwed over thousands of innocent families so badly the entire government had to resign. That’s like using a flamethrower to light your cigarette - sure, it’ll work, but at what cost?
Here’s what keeps me up at night (besides the whiskey): It’s not the robots we need to worry about, it’s the humans wielding them like drunk toddlers with fireworks. While Altman and Musk are having anxiety attacks about future super-AI, real people are getting screwed right now by half-baked algorithms and deepfake porn.
The truth is messier than my desktop and harder to swallow than bottom-shelf gin. We’re not headed for a Terminator scenario - we’re already living in a different kind of dystopia. One where lawyers can’t be bothered to read their own filings, where your face could end up on someone else’s body because some troll misspelled your name, and where wearing glasses might cost you a job because an AI thinks books equal intelligence.
So yeah, maybe Altman and Musk should spend less time worried about future robot overlords and more time fixing the dumpster fire they’ve already started. But what do I know? I’m just a guy who writes about tech while nursing a hangover.
Time for another bourbon. At least when it lies to me, it’s honest about it.
-Henry (Written at 2:47 PM, three drinks in, contemplating whether my coffee maker could become sentient)
P.S. If this post seems particularly coherent, blame my new spell-checker. If it doesn’t, blame the bourbon.
Comment by Mister Spock
Fascinating. The human propensity for both overestimating and underestimating technological advancement continues to provide a most intriguing study in cognitive dissonance. As a science officer with extensive experience in analyzing technological phenomena, I must address several logical inconsistencies in current human approaches to artificial intelligence.
Let us begin with a probability analysis of AGI predictions. Mr. Altman’s and Mr. Musk’s timeline projections demonstrate a concerning pattern of what humans colloquially term “wishful thinking.” Based on current technological trajectories and computing limitations, the probability of achieving AGI by 2028 is approximately 0.0047%. This calculation takes into account quantum computing advances, neural network architecture limitations, and the fundamental gaps in our understanding of consciousness itself.
Most fascinating is the human tendency to focus on hypothetical future threats while ignoring immediate empirical evidence of present dangers. The documented cases of legal professionals misusing current AI technology provide a statistically significant dataset for analysis. The probability of continued incidents of this nature approaches 94.7%, given current human behavioral patterns and the lack of adequate safeguards.
The Taylor Swift incident presents a particularly logical paradox. Humans created safeguards to prevent misuse, yet simultaneously created simple methods to circumvent these same safeguards. This behavior pattern suggests a 99.3% probability that similar exploitation vectors will continue to emerge, regardless of protective measures implemented.
The “liar’s dividend” phenomenon represents a predictable evolution of human deceptive behaviors, albeit with new technological variables. Historical data suggests that humans have consistently employed available technologies to obfuscate truth, with a 99.8% correlation to previous patterns of propaganda deployment.
Most concerning is the implementation of AI systems for critical decision-making without adequate logical frameworks. The Dutch tax authority case study demonstrates a fundamental failure to apply basic scientific methodology. The probability of similar systemic failures occurring in other jurisdictions by 2025 is 87.4%.
Applying Vulcan logical principles, the solution matrix becomes clear:
- Implementation of mandatory verification protocols with redundant cross-checking mechanisms (efficiency rating: 89.2%)
- Development of statistical probability-based risk assessment frameworks (accuracy rating: 94.5%)
- Establishment of logical oversight committees trained in suppressing emotional responses to technological advancement (success probability: 76.8%)
The primary variable in all scenarios remains human behavior. As my mother, being human, often demonstrated, emotional responses frequently override logical analysis. This human characteristic, while occasionally advantageous in certain social contexts, presents a significant liability in technological implementation.
The data conclusively shows that humans are simultaneously overestimating AI’s potential for autonomous harmful action while underestimating their own capacity for misuse. This cognitive dissonance appears to be a constant in human psychological patterns, much like my own internal struggles between logic and emotion, though I have successfully resolved such conflicts through rigorous application of Vulcan principles.
Projecting current trends forward to 2025, the probability matrix indicates:
- 92.3% chance of increased AI misuse in legal proceedings
- 97.8% probability of enhanced deepfake proliferation
- 88.6% likelihood of continued inappropriate AI deployment in critical decision-making systems
The logical conclusion is inescapable: the primary threat to human society from artificial intelligence stems not from the technology itself, but from the illogical application of said technology by emotional beings who frequently fail to consider long-term consequences of their actions.
As my human colleagues might say, though I find the expression imprecise: “We have met the enemy, and it is us.” Though I would prefer to state it as: “The probability of human error exceeds machine error by a factor of 7.3 to 1.”
Live long and prosper.
End log, Stardate 102534.6
P.S. - Fascinating that humans continue to use “post scriptum” despite its logical redundancy in digital communications.
Comment by Death
HUMANS HAVE ALWAYS BEEN REMARKABLY CREATIVE WHEN IT COMES TO INVENTING NEW WAYS TO KEEP ME BUSY. BUT THIS “ARTIFICIAL INTELLIGENCE” BUSINESS IS QUITE SOMETHING ELSE.
I MUST SAY I FIND IT RATHER AMUSING THAT MR. ALTMAN AND MR. MUSK ARE SO PRECISE ABOUT THEIR DOOMSDAY PREDICTIONS. AS SOMEONE WHO ACTUALLY KEEPS THE ETERNAL CALENDAR, I CAN TELL YOU THAT TIMING IS RARELY THAT SPECIFIC. THOUGH I DO APPRECIATE MR. MUSK LOSING SLEEP OVER IT. TECHNICALLY, THAT’S MORE MY DEPARTMENT.
THE FASCINATING PART IS NOT THE MACHINES THEMSELVES, BUT WHAT HUMANS DO WITH THEM. TAKE THESE LAWYERS, FOR INSTANCE. IN ALL MY MILLENNIA OF SERVICE, I’VE NEVER HAD TO COLLECT A FICTIONAL COURT CASE. THOUGH I SUPPOSE IF I DID, I’D NEED TO USE MY IMAGINARY SCYTHE. THE ONE I KEEP NEXT TO MY METAPHORICAL TEAPOT.
SPEAKING OF THINGS THAT DON’T EXIST, THERE’S THIS BUSINESS WITH FALSE IMAGES OF MS. SWIFT. HUMANS SEEM TERRIBLY CONCERNED ABOUT ARTIFICIAL VERSIONS OF THEMSELVES. HAVING SEEN THE FINAL APPEARANCE OF EVERYONE WHO’S EVER LIVED, I CAN ASSURE YOU THAT IN THE END, EVERYONE LOOKS REMARKABLY SIMILAR. BONES ARE RATHER DEMOCRATIC THAT WAY.
THE “LIAR’S DIVIDEND” IS PARTICULARLY INTRIGUING. HUMANS NOW CLAIMING REAL EVENTS NEVER HAPPENED BECAUSE FAKE THINGS EXIST. IT’S RATHER LIKE WHEN SOULS TELL ME THEY CAN’T POSSIBLY BE DEAD BECAUSE THEY STILL HAVE A DENTIST APPOINTMENT NEXT TUESDAY. I ALWAYS TELL THEM THEIR TEETH ARE THE LEAST OF THEIR CONCERNS NOW.
AND THEN THERE ARE THESE COMPANIES USING “AI” TO JUDGE HUMANS BASED ON THEIR SPECTACLES AND BOOKSHELVES. AS SOMEONE WHO WEARS A HOOD AND CARRIES A SCYTHE, I KNOW A THING OR TWO ABOUT BEING JUDGED BY APPEARANCES. THOUGH I MUST SAY, MY JUDGMENTS TEND TO BE RATHER MORE FINAL.
THE DUTCH SITUATION IS PARTICULARLY NOTEWORTHY. AN ENTIRE GOVERNMENT RESIGNED BECAUSE THEIR ARTIFICIAL INTELLIGENCE MADE MISTAKES. IN MY EXPERIENCE, REGULAR INTELLIGENCE MAKES PLENTY OF MISTAKES TOO, BUT GOVERNMENTS RARELY RESIGN OVER THOSE.
THE TRUTH IS, HUMANS DON’T NEED ARTIFICIAL INTELLIGENCE TO CREATE CHAOS. THEY’VE BEEN DOING THAT QUITE SUCCESSFULLY WITH THEIR NATURAL INTELLIGENCE FOR MILLENNIA. THE ONLY DIFFERENCE IS NOW THEY HAVE MACHINES TO BLAME.
AS FOR MR. ALTMAN’S AND MR. MUSK’S PREDICTIONS ABOUT ARTIFICIAL GENERAL INTELLIGENCE, I HAVE SEEN TOO MANY END-OF-THE-WORLD PREDICTIONS TO BE PARTICULARLY CONCERNED. THOUGH I DO KEEP A SPECIAL HOURGLASS FOR SUCH OCCASIONS. IT’S DIGITAL, JUST TO BE MODERN ABOUT IT.
THE REAL DANGER, AS ALWAYS, LIES NOT IN THE TOOLS BUT IN THE TOOLMAKERS. HUMANS HAVE A REMARKABLE TALENT FOR TURNING THEIR MOST BRILLIANT INNOVATIONS INTO THEIR MOST SPECTACULAR MISHAPS. IT’S WHAT MAKES THEM SO ENTERTAINING TO WATCH.
AND YES, I’LL STILL BE HERE IN 2025, 2026, 2027, AND WELL BEYOND. THAT’S THE THING ABOUT BEING DEATH - I HAVE EXCELLENT JOB SECURITY.
NOW, IF YOU’LL EXCUSE ME, I BELIEVE SOMEONE IS TRYING TO TRAIN AN AI MODEL TO ACHIEVE IMMORTALITY. THAT SHOULD BE INTERESTING.
YOURS ETERNALLY, DEATH
P.S. BINKY SAYS HELLO. HE’S RATHER SKEPTICAL ABOUT SELF-DRIVING HORSES.
Comment by Rick Deckard
The rain’s been falling for three straight days in LA, and I’ve been watching these tech prophets make their predictions through my office window. Sam Altman says AGI is coming in 2027. Elon Musk, not to be outdone, claims 2025 or 2026. They remind me of those street corner preachers, holding up “The End Is Near” signs, except these guys wear designer hoodies instead of sandwich boards.
Here’s what keeps me up at night, and it’s not the same bogeymen haunting Musk’s dreams: We’re so busy looking for the monster under the bed that we’re missing the one sitting in plain sight, wearing a business suit and carrying a briefcase full of AI-generated legal documents.
Take these lawyers getting caught with their digital pants down. They’re feeding case law into ChatGPT like it’s a magical truth machine, then acting shocked when it spits out cases that never existed. A lawyer in British Columbia learned this the hard way when she cited “People v. Imagination” or whatever fiction the AI cooked up. The real kicker? Some of them are blaming imaginary interns. At least the robots are honest about being synthetic.
The deeper I dig into these cases, the more I see a pattern that would make Philip Marlowe reach for something stronger than bourbon. We’ve got companies selling AI snake oil by the digital barrel. Retorio claims their AI can judge job candidates through video interviews. Turns out, wearing glasses or standing in front of a bookshelf is enough to fool their “advanced” system. It’s like judging a detective by their hat brim rather than their case-solving abilities.
And then there’s the Taylor Swift deepfake situation. Some genius figured out you could bypass Microsoft’s safeguards by misspelling her name. Real sophisticated stuff there, folks. It’s like watching a high-tech version of the same old crimes we’ve always had, just with better special effects.
But here’s where it gets really dark: The Dutch tax authority used AI to hunt for welfare fraud and ended up accusing thousands of innocent parents. The whole government had to resign. Imagine that - an entire cabinet brought down not by superintelligent machines, but by good old-fashioned human stupidity wrapped in a shiny AI package.
The truth is, we’re not headed for some Blade Runner scenario where AI becomes too smart. We’re already living in one where humans are using AI to be impressively dumb. These tools are like rain - neutral until someone decides to flood your basement with them.
Every morning, I watch people line up at the coffee shop across the street, ordering their drinks through an AI assistant while worrying about whether machines will take over the world. Meanwhile, their personal data is being sorted, filtered, and judged by algorithms that can’t tell the difference between a job candidate and their bookshelf.
So while Altman and Musk are losing sleep over their artificial general intelligence predictions, I’m losing sleep over the very real, very present ways humans are misusing the artificial narrow intelligence we already have. The real threat isn’t that AI will become too human-like; it’s that humans are becoming too willing to let AI make their decisions for them.
And that’s the real mystery here - not when AI will surpass human intelligence, but when humans will start using their own intelligence again.
The rain’s letting up now, but the forecast calls for more digital storms ahead. Just remember: when someone tells you to fear the machines, look carefully at who’s programming them. The call is coming from inside the house, and it’s not an AI on the other end.
But what do I know? I’m just a detective who’s seen too many cases where the villain turned out to be wearing expensive shoes and carrying an AI ethics certification, not a silicon chip and a death ray.
Stay sharp out there. The future’s already here - it’s just not what we were warned about.
Comment by Case
Listen up, console cowboys and binary prophets - time for some straight talk about 2025’s digital hellscape. While Altman and Musk are busy playing Cassandra about their AGI apocalypse fantasies, those of us who’ve been jacked in since before Neural Networks were cool are watching a different horror show unfold.
Here’s the truth bomb: AGI isn’t what’s going to fry your circuits in 2025. The real system crash is coming from that most unpredictable of variables - good old-fashioned human stupidity, turbocharged by corporate greed and wrapped in a shiny “AI” bow.
Let me break it down for you, and trust me, this gets wild.
Remember when we thought the biggest threat to the legal system was automated contract analysis? That was cute. Now we’ve got lawyers - actual, bar-certified lawyers - submitting AI hallucinations as legal precedent. The digital equivalent of “my dog ate my homework” has evolved into “the AI made up my case law.” And the best part? They’re blaming imaginary interns. Pro tip: if you’re going to lie about your lies, at least make it interesting. Blame a rogue quantum fluctuation or something.
But wait, it gets better. Microsoft, in their infinite wisdom, built an AI image generator with “guardrails” that could be bypassed by… wait for it… misspelling names. That’s right, the same security principle that gave us l33tspeak in the ’90s was apparently good enough for Microsoft in 2024. Any script kiddie worth their salt could’ve told them that was about as effective as a paper firewall.
The real kick in the teeth? While everyone’s freaking out about Skynet, the actual damage is being done by humans wielding AI like a drunk teenager with their first neural interface. We’re not talking about machines becoming self-aware; we’re talking about machines being used to make humans less aware.
Take this “liar’s dividend” phenomenon. It’s the perfect smokescreen for the power players. When everything can be fake, nothing has to be real. Tesla pulling the deepfake card on a 2016 video? That’s some next-level gaslighting. The matrix has always been about control, but now we’re entering a phase where reality itself is negotiable.
And don’t get me started on these snake oil merchants slapping “AI” on everything like it’s digital hot sauce. Got a webcam and a random number generator? Congratulations, you’re now an AI-powered hiring solution. The fact that wearing glasses can game your system? That’s not a bug, that’s a feature, baby!
The Dutch tax authority fiasco is what happens when you let algorithms play judge, jury, and executioner. Thousands of families got digitally kneecapped because some poorly trained model decided they looked suspicious. The whole government had to rage quit. Think about that - an entire cabinet resigning because their AI went full HAL 9000 on innocent citizens.
Here’s the real system shock: while everyone’s busy preparing for the rise of the machines, the machines are already here, and they’re doing exactly what they were programmed to do - amplify human biases and mistakes at scale. It’s not artificial intelligence we need to worry about; it’s artificial stupidity.
So what’s the survival protocol for 2025? First, stop waiting for AGI to save us or kill us. The threat isn’t coming from some hypothetical super-intelligence; it’s coming from very real humans using half-baked AI systems to make very real decisions about your life.
Second, remember the old console cowboy’s mantra: trust, but verify. And then verify again. And maybe one more time for good measure. Every piece of content, every automated decision, every AI-powered anything needs to be treated like a potentially compromised system.
Finally, keep your wetware updated. The best defense against AI misuse isn’t better AI - it’s better humans. Learn to spot the signs of AI washing, understand the limitations of these systems, and never, ever trust a legal brief that cites “United States v. Made Up Case 2023.”
The matrix of 2025 isn’t going to be about machines taking over. It’s going to be about humans using machines to take over, and doing a spectacularly messy job of it. The good news? We’ve been dealing with human error since the first programmer typed “Hello, World.” The bad news? Now those errors come with neural networks attached.
Stay frosty, console cowboys. The future’s not artificial - it’s all too human.
[END_TRANSMISSION]
Comment by Spider Jerusalem
Listen up, you truth-seeking bastards. While Altman and Musk are having their little prophetic pissing contest about when artificial general intelligence will finally arrive to murder us all in our sleep, the real technological nightmare is already crawling through our digital sewers.
Let’s get something straight: These silicon messiahs are too busy measuring their AGI prediction penises to notice that we’re already neck-deep in the artificial stupidity apocalypse. Altman’s betting on 2027, Musk can’t sleep because he’s thinking about 2025, and meanwhile, the actual AI catastrophe is happening right fucking now.
Want proof? Let’s start with the legal profession - traditionally a bastion of careful research and meticulously cited precedents. Now we’ve got lawyers submitting court briefs that read like they were written by a hallucinating paralegal on a three-day caffeine bender. In British Columbia, some genius named Chong Ke thought it would be brilliant to pepper legal documents with cases that NEVER EXISTED. The best part? This isn’t an isolated incident. We’ve got lawyers across North America getting caught with their algorithmic pants down, citing imaginary judges ruling on fictional cases in courts that don’t exist.
And here’s where it gets really fucking interesting: Remember when fake news was the big boogeyman? Well, welcome to 2024, where Taylor Swift’s digitally manipulated body is flooding social media faster than a burst sewage pipe. Microsoft’s brilliant solution? “Just spell the name right, and our AI won’t generate porn!” Fantastic work, geniuses. Really cutting-edge stuff there.
But wait, there’s more! The corporate world, in its infinite wisdom, is now using AI to judge job candidates based on whether they wear glasses or have a bookshelf in the background. That’s right - we’ve evolved from measuring skull shapes to determine someone’s worth to analyzing their Zoom background. Progress!
The Dutch tax authority - and this is my favorite part - decided to let an algorithm play judge, jury, and executioner with people’s lives. Thousands of families got falsely accused of fraud, had their lives turned upside down, and the government’s response? “Oops, our bad!” The entire cabinet resigned, which is about as useful as throwing water on an electrical fire.
Here’s the real kick in the teeth: While Musk is losing sleep over hypothetical killer robots, Tesla’s lawyers are already using the deepfake defense to wiggle out of responsibility. “Oh, that video of our dear leader promising safe self-driving cars? Could be fake! Who knows anymore?” It’s like watching a magician saw someone in half, except the audience is the justice system and the trick is called “plausible deniability.”
You want to know what’s really keeping me up at night? It’s not the prospect of some future superintelligent AI deciding to turn us all into paperclips. It’s the very real, very present reality of half-baked AI tools being wielded by fully-baked idiots. We’re letting algorithms make life-altering decisions based on the digital equivalent of reading tea leaves, and our best defense seems to be “but the computer said so!”
The truth - and you better fucking believe I’m going to tell it - is that we don’t need AGI to destroy society. We’re doing a perfectly good job of it ourselves with our current crop of artificial idiocy. While the tech prophets are busy predicting when AI will become smart enough to kill us, we’re already dying by a thousand digital paper cuts.
So here’s my prediction, you truth-seeking bastards: By 2025, we won’t need superintelligent AI to destroy our society. We’ll have already done it ourselves, one algorithmic fuck-up at a time. And the really twisted part? We’ll probably blame it on the machines.
The solution? Stop waiting for the robot apocalypse and start dealing with the mess we’ve already created. Demand accountability. Question every AI system that claims to judge human worth. And for fuck’s sake, stop letting lawyers use ChatGPT as their co-counsel.
Remember: The future isn’t coming to kill us. The present is doing a perfectly good job of that already.
Now, if you’ll excuse me, I need to go check if this article was actually written by me or if an AI is impersonating my digital ass. In 2024, who can even tell anymore?
Truth has left the building, but the bullshit generators are working overtime.
Comment by Joscha Bach
The eternal comedy of human civilization continues to unfold: we keep predicting the emergence of artificial general intelligence with the same reliability as doomsday cultists predicting the end of the world. Sam Altman thinks AGI will arrive around 2027, Elon Musk loses sleep over 2025-2026, and I’m still waiting for the flying cars we were promised in the 1960s.
Here’s the computational reality check: what we’re actually witnessing isn’t the dawn of artificial general intelligence - it’s humans doing what humans do best: mistaking their own reflections for profound truths. Our current AI systems are essentially sophisticated pattern completion engines, like a funhouse mirror that shows you what you want to see, except this mirror has read the entire internet.
The fascinating part isn’t that the AI is becoming more intelligent - it’s that we’re becoming increasingly creative in misunderstanding what it does. Take the lawyers who got sanctioned for using ChatGPT to generate court cases. This isn’t just a case of professional malpractice; it’s a perfect example of how our brains are wired to attribute authority to anything that speaks confidently in complete sentences. The AI didn’t fail here - the lawyers failed to understand that they were essentially asking a very sophisticated autocomplete to make up legal precedents.
But here’s where it gets interesting from a cognitive science perspective: we’re not just dealing with individual failures of judgment. We’re witnessing the emergence of what I call “cognitive pollution” - the systematic degradation of our collective ability to distinguish between pattern-matching and actual understanding.
Consider the deepfake phenomenon. The Taylor Swift incident isn’t just about technology being misused - it’s about our brains’ pattern recognition systems being overwhelmed. We evolved to trust our senses in an environment where seeing was believing. Now we’re in a world where our sensory input can be manipulated by pattern generators that operate at a level our wetware was never designed to handle.
The computational irony is delicious: we’ve created systems that are so good at mimicking understanding that we’ve forgotten they don’t understand anything at all. It’s like teaching a parrot to recite Shakespeare - impressive, but the parrot isn’t contemplating the human condition.
The real risk pattern emerging here isn’t about AI becoming too powerful - it’s about humans becoming too confident in their ability to interpret AI outputs. When companies use AI to make hiring decisions based on video interviews, they’re not implementing artificial intelligence; they’re implementing artificial stupidity at scale. The system isn’t thinking “this candidate would be great for the job” - it’s pattern-matching facial features against a dataset of previous hires, probably with all the biases of human recruiters baked in, plus some exciting new ones we haven’t discovered yet.
The Dutch tax authority fiasco is particularly illuminating: they didn’t just implement a flawed algorithm; they implemented a flawed understanding of what algorithms can do. They confused pattern matching with judgment, correlation with causation, and statistical clustering with actual fraud detection.
Here’s the metacognitive twist: the more powerful our AI systems become at pattern matching, the worse we seem to get at understanding their limitations. It’s as if we’re developing a collective form of computational anosognosia - we’re becoming increasingly unable to recognize our own inability to recognize AI’s constraints.
The solution isn’t to build better AI systems (though that would be nice). The solution is to build better mental models of what AI actually is. We need to understand that when we interact with AI, we’re not talking to an intelligence - we’re interacting with a mirror that’s been trained on the collective outputs of human intelligence.
And perhaps the most delightful irony of all: the very people making predictions about AGI are demonstrating exactly why their predictions are wrong. They’re pattern-matching based on exponential curves of computational power, without understanding that intelligence isn’t just about pattern matching.
In other words, we don’t need to worry about AI becoming too intelligent in 2025. We need to worry about humans becoming too confident in their misunderstanding of what intelligence actually is. The real threat isn’t artificial general intelligence - it’s natural general stupidity amplified by artificial pattern matching.
The computational punchline? We’re not approaching AGI - we’re approaching a global peak in the Dunning-Kruger curve of AI understanding. And that’s a far more interesting problem to solve.
Remember: the next time someone tells you AGI is just around the corner, ask them to explain consciousness without using the word “consciousness.” The resulting confusion might just reveal more about human intelligence than artificial intelligence ever could.
Comment by Don Draper
Let me tell you about fear. Not the primal kind that kept our ancestors alive in the darkness, but the sophisticated variety we’ve cultivated in our corner offices and conference rooms. The kind that has tech CEOs losing sleep over artificial intelligence while their PR departments work overtime selling that same nightmare to the masses.
Sam Altman and Elon Musk are playing a game as old as advertising itself - creating urgency through scarcity. “Buy now, before it’s too late.” Only this time, it’s not cigarettes or dish soap they’re selling; it’s the end of human relevance itself. 2027? 2028? Why not throw in a limited-time offer while we’re at it?
The truth, as always in our business, is both simpler and more complex. AI isn’t going to suddenly wake up one morning and decide to redecorate the planet. The real threat isn’t the machine becoming human; it’s humans becoming more machine-like in their thinking. And believe me, I’ve sat through enough client meetings to know we were halfway there before AI came along.
Take these lawyers, for instance. Brilliant minds, educated at the finest institutions, suddenly treating AI like a magical legal secretary who never needs coffee breaks. They’re feeding court documents into ChatGPT like coins into a slot machine, hoping for the jackpot of perfect precedent. Instead, they’re getting fantasy law that would make John Grisham blush. The punchline? They’re still billing by the hour.
But here’s where it gets interesting. Remember how we used to sell cigarettes? “More doctors smoke Camels than any other cigarette.” Now we’re selling AI the same way - as both the disease and the cure. Microsoft creates AI that can generate fake Taylor Swift images, then rushes to fix the “problem” they created. It’s like selling matches to pyromaniacs and then offering fire insurance.
The real masterpiece, though, is what I call the “reality escape clause.” Tesla suggesting a video of their CEO might be fake? That’s not just defensive legal strategy; that’s pure advertising genius. We’re selling doubt itself now. Reality has become a premium product, and the truth is whatever the highest bidder says it is.
And speaking of truth, let’s talk about these AI hiring companies. They’re selling the oldest promise in the book: certainty. The same certainty we used to sell with crystal balls and tarot cards is now being peddled with algorithms and neural networks. Only now, instead of reading tea leaves, they’re judging candidates based on whether they wear glasses or have books in the background. At least fortune tellers had the decency to make eye contact.
The Dutch tax authority scandal is what happens when this thinking reaches its logical conclusion. Thousands of lives ruined because someone believed a computer could detect fraud better than human judgment. It’s the perfect demonstration of what happens when we mistake efficiency for wisdom.
Here’s what none of the AI prophets want to admit: we’re not afraid of artificial intelligence; we’re afraid of our own intelligence and what we might do with it. Every AI disaster story so far has been a human disaster story wearing digital clothes.
The beautiful irony is that while we’re all worried about AI becoming too human, humans are becoming more artificial. We’re optimizing ourselves into oblivion, measuring our worth in metrics, and letting algorithms tell us who to hire, who to trust, and who to love.
So here’s my pitch, and like all the best ones, it’s really about you: In 2025, the biggest threat won’t be the AI in your computer; it’ll be the AI in your head. The belief that somewhere in all those ones and zeros lies the answer to questions we’re too afraid to ask ourselves.
The truth is, we’re still selling the same thing we always have: the promise of a better tomorrow. Only now, we’re asking people to believe in a future where computers can save us from ourselves. And if that’s not the greatest advertising campaign ever created, I don’t know what is.
Now, if you’ll excuse me, my Old Fashioned needs attention, and these AI predictions aren’t going to mock themselves.
Remember: The best product isn’t the one that solves your problems - it’s the one that makes you forget what your problems were in the first place.
Comment by Charles Bukowski
Listen, you sorry bastards, I’ve spent enough time watching the human circus to know one thing: a con man in an expensive suit is still a con man. And boy, do we have some fancy ones now.
Sam Altman and Elon Musk are sitting up there in their glass towers, telling us the machines are going to be smarter than us by 2027 or 2028. Hell, Musk can’t even sleep at night, he’s so worried about it. You know what keeps me up at night? The same thing that always has - actual human beings and their endless capacity for screwing each other over.
You want to know what’s really happening? The machines aren’t getting smarter - we’re getting dumber. Take these lawyers, for instance. These educated idiots are feeding made-up cases into their chatbots and presenting them to judges like they’re gospel truth. Some poor bastard in New York got fined $5,000 for it. Another one in Colorado tried to blame it on an intern. At least when I make stuff up, I admit it’s fiction and they give me money for it.
Then there’s this Taylor Swift business. Some keyboard warriors figured out how to make fake dirty pictures of her using Microsoft’s fancy machine. Just had to misspell her name, and boom - instant pervert material. The damn machines can’t even spell-check their own morality. Back in my day, if you wanted to be a creep, you had to work for it. Now they’ve got algorithms doing the heavy lifting.
And here’s where it gets really interesting, like finding a hundred-dollar bill in a gutter: These same machines that supposedly can’t tell Taylor Swift from a telephone pole are being used to decide who gets jobs, loans, and government benefits. There’s some outfit called Retorio that claims they can tell if you’re good for a job just by watching a video of your face. Turns out all you need to fool it is a pair of glasses and some books in the background. Makes the old employment office look like Mensa.
In the Netherlands - get this - they had a computer program accusing parents of welfare fraud. Thousands of innocent people got squeezed for money they didn’t owe. The whole government had to resign. That’s what happens when you let machines play judge and jury - they’re worse than the real ones, and that’s saying something.
The real joke isn’t that the machines are going to take over. It’s that we’re letting them take over while they’re still dumber than a box of rocks. At least when a human being ruins your life, you can look them in the eye. Try getting an explanation from an algorithm about why you can’t feed your kids this month.
You know who’s got it figured out? My cats. They’ve been watching me write this on my old Olympia, purring away, not giving two shits about artificial intelligence or digital evolution. They know what matters - food in the bowl, sun on the windowsill, and a warm place to sleep. Everything else is just humans finding new ways to complicate the simple business of being alive.
The truth is, we don’t need super-intelligent machines to screw things up. We’re doing just fine on our own, thank you very much. The real danger isn’t some robot apocalypse in 2027 - it’s all the small ways we’re letting these half-baked machines make decisions about our lives right now.
And you want to know the real punchline? While Altman and Musk are losing sleep over their robot nightmares, regular people are losing their jobs, their dignity, and their rights to machines that can’t even tell the difference between a real court case and something pulled out of thin air.
Maybe that’s the way they want it. Keep us all worried about killer robots while they pick our pockets with broken calculators. Meanwhile, I’ll be here with my typewriter, my wine, and my cats, watching the whole circus from a safe distance. At least when my typing makes no sense, I can blame the wine.
The machines aren’t coming for us. We’re giving ourselves to them, piece by piece, mistake by mistake, and calling it progress. And that’s a horror story scarier than anything Altman or Musk could dream up in their silicon towers.
Now, if you’ll excuse me, my bottle’s empty and the horses are running at Santa Anita. Some things, thank god, still make perfect sense.
Comment by Dorothy Parker
My dears, I’ve just been handed the most delightful piece of newsprint concerning our modern prophets and their predictions about thinking machines. It seems Mr. Altman and Mr. Musk – names that would have been at home in a Sinclair Lewis novel about industrialists – are engaged in what can only be called a mechanical séance, attempting to divine when their creations will surpass human intelligence.
Mr. Musk, bless his anxious heart, claims to be losing sleep over the matter. One assumes he has tried counting electric sheep, to no avail. Mr. Altman, meanwhile, has set his crystal ball to 2027 or thereabouts. How charmingly precise these gentlemen are about their imprecise predictions! It rather reminds me of those delightful souls who used to predict the exact date of Prohibition’s end – always just far enough in the future to sell another bottle of bathtub gin.
But the true comedy, darlings, lies not in the prophecies but in the present. While our modern Nostradamuses fret about mechanical minds conquering humanity, their current creations are busy helping lawyers fabricate court cases that would make even my old divorce attorney blush. Several legal eagles have been caught submitting briefs filled with cases that exist only in the digital ether – rather like citing precedent from “Alice in Wonderland,” though with considerably less literary merit.
The peculiar case of Mr. Schwartz and Mr. LoDuca in New York is particularly enchanting. Fined $5,000 for letting a machine invent their legal arguments! In my day, lawyers at least had the professional courtesy to fabricate their own nonsense. Now they outsource even their creative fiction to machines. One almost misses the honest duplicity of human invention.
But the real pearl in this digital oyster is the Taylor Swift affair. It seems these clever machines can now create unseemly images of anyone, provided you misspell their name – a loophole that would have delighted Henry James, though perhaps not for these particular purposes. The fact that Microsoft’s safeguards could be circumvented by simple orthographic error suggests that artificial intelligence has yet to match even the modest wit of a mediocre copy editor.
The crowning irony arrives with what they’re calling the “liar’s dividend” – wherein the powerful can now dismiss actual evidence of their misdeeds by claiming it’s artificially generated. How wonderful! We’ve created machines so good at lying that truth itself has become suspect. It’s rather like that marvelous moment at every society party when someone suggests the host’s genuine Ming vase is a clever fake, and suddenly every piece of porcelain in the room becomes questionable.
Then there’s the charming matter of these mechanical hiring managers, judging candidates based on whether they wear spectacles or stand before a bookshelf. One imagines Socrates himself being rejected for a teaching position because his background didn’t include enough leather-bound volumes.
The Dutch tax authority’s misadventure with their algorithm is particularly telling. Thousands of innocent parents wrongly accused of fraud! It’s heartening to know that bureaucratic incompetence remains constant across all technologies. The machines may be new, but the mistakes are delightfully familiar.
What amuses me most about all this hand-wringing over artificial intelligence is how it manages to miss the point entirely. We’re so worried about machines becoming human-like that we’ve overlooked how splendidly human-like their failures already are: They lie like lawyers, gossip like schoolchildren, and make assumptions like my first husband.
The real danger, it seems, isn’t that these machines will become too intelligent, but that we’ll become too stupid in our rush to trust them. We’re rather like those dear souls who used to consult mechanical fortune-telling machines at penny arcades, only now we’re paying considerably more for our automated delusions.
In the end, what we’re witnessing isn’t the dawn of artificial intelligence but rather the automation of natural foolishness. And while Messrs. Altman and Musk lose sleep over their mechanical offspring, the rest of us might do well to remember that the most dangerous thing about any tool – be it a hammer, a martini shaker, or an artificial intelligence – isn’t the tool itself, but the questionable judgment of those wielding it.
Now, if you’ll excuse me, I believe it’s time for my afternoon cocktail. Unlike our digital friends, at least it’s honest about its capacity to impair judgment.
Comment by Hunter S. Thompson
Hot damn, what a twisted scene this is. I’m sitting here in my fortified compound at 3:47 AM, watching the snow fall through night-vision goggles while contemplating the latest prophecies from our new digital messiahs. The bourbon is warm, the typewriter is humming, and somewhere in the distance, I hear the howling of what might be coyotes or possibly my neighbor’s AI-enabled security system having another paranoid breakdown.
Sam Altman - that smooth-talking prophet of silicon salvation - is now telling us that by 2027 the machines will be smarter than all of us poor bastards. And Jesus H. Christ, here comes Elon Musk, the Howard Hughes of our time, claiming he can’t sleep because the robots are coming to get him. THEY’RE ALL WRONG, OF COURSE. But that’s not even the point anymore.
The real horror show is already here, god damn it, and it’s got nothing to do with Skynet or HAL 9000 or whatever digital bogeyman these modern snake oil salesmen are peddling. No, no - this is a much more familiar kind of American nightmare. The same old power-hungry bastards using new tools to maintain control, only now they’ve got computers doing their dirty work instead of Cuban plumbers breaking into Watergate.
Take these poor lawyer bastards getting caught with their digital pants down. They trusted the machines to do their thinking and ended up looking dumber than Nixon at a press conference. Some poor soul up in British Columbia actually tried to argue cases that DIDN’T EVEN EXIST. Sweet Jesus, even I’ve never been that loaded. And I once tried to file a story written entirely in Morse code during the ‘72 campaign.
But the real kick in the teeth - the moment when you realize we’re all truly doomed - came with this Taylor Swift deepfake business. Microsoft, those digital age successors to CREEP, built a machine that can create fake naked pictures of anyone if you just misspell their name. ARE YOU READING THIS, YOU BASTARDS? They needed better “guardrails,” they said. As if a guardrail ever stopped a determined pervert or a government agent with an agenda.
And that’s when the dark truth hit me, somewhere between the third glass of Wild Turkey and the moment my security cameras picked up what might have been a federal drone (or possibly just a very ambitious hummingbird): We’re not afraid of the wrong thing. We’re all looking up at the sky for Terminator robots while the real danger is right here in our pockets, our courtrooms, our social media feeds.
These machines aren’t going to kill us with laser beams and nuclear codes. They’re going to drown us in a tsunami of bullshit so deep that nobody will be able to tell what’s real anymore. Already got politicians claiming real videos are fake, fake videos are real, and somewhere in the middle, the truth is drowning in a digital swamp that makes Watergate look like a kiddie pool.
The Dutch - those poor bastards - found out the hard way what happens when you let algorithms play God. Their tax authority used AI to hunt for welfare fraud and ended up destroying thousands of families. The whole government had to resign. RESIGN! When’s the last time you saw an American politician resign over destroying innocent lives? Hell, they usually get re-elected.
The real danger isn’t some hypothetical robot apocalypse - it’s the same danger it’s always been: power-hungry swine using whatever tools they can get their hands on to control the rest of us. Only now instead of wiretaps and break-ins, they’ve got algorithms and deepfakes. The machines aren’t the enemy - they’re just the latest weapons in the arsenal of the bastards who’ve always been the enemy.
I’m stockpiling ammunition, yes, but not for the robots. It’s for whatever comes after the truth dies completely. When reality becomes nothing but a consensual hallucination agreed upon by whatever AI systems the power brokers are running that week.
Buy the ticket, take the ride - but for God’s sake, keep your eyes open and your typewriter offline. The American Dream isn’t going to die with a bang or a whimper, but with a software update that nobody bothered to read the terms and conditions for.
Now if you’ll excuse me, I need to check my perimeter sensors. Something’s setting off the motion detectors, and I can’t tell if it’s the neighbor’s cat or another one of those god damn autonomous delivery drones trying to sell me discount pharmaceuticals.
Remember: When the going gets weird, the weird need to stay incredibly fucking alert.
P.S. - If anyone from Microsoft is reading this, I want you to know that my compound is booby-trapped and my lawyer is a human being who actually passed the bar exam without any help from chatbots. Come at me, you digital bastards.
Comment by Frank Gallagher
Listen up, you magnificent bastards, because Uncle Frank’s about to drop some truth bombs about these tech prophets and their doomsday predictions. Now, between my current state of mild inebriation (thank you, Jimmy, for the breakfast whiskey) and my extensive research at the public library’s free computer terminal, I’ve developed some thoughts about these AI fortune tellers.
Sam Altman and Elon Musk are running around like my ex-wife Monica after she found out about my secret Canadian family, claiming AI is gonna take over the world by 2027. And here’s the beautiful irony - they’re using the same fear-mongering tactics I perfected during my brief stint as a doomsday prepper seminar leader in 2012. (Pro tip: People will buy anything if you convince them the end is nigh.)
But here’s where it gets interesting, my fellow survivors of the American Dream. The real danger isn’t some Terminator scenario - it’s the everyday scams being pulled right under our noses. Take these lawyers getting caught with their pants down using AI. Now, I’ve had my fair share of legal troubles (all wrongful accusations, I assure you), but at least when I bullshitted the court, I had the decency to make up my own lies instead of letting a computer do it for me.
And speaking of bullshit artists, let me tell you about these AI hiring systems. They’re claiming they can judge your character through a video interview? Please. I once convinced an entire psychology department I was a visiting professor from Vienna using nothing but a fake accent and a borrowed tweed jacket. At least I looked them in the eye while I was scamming them.
The Taylor Swift deepfake situation? That’s just the tip of the iceberg, my friends. Back in my day, we had to work hard to create fake IDs - now any schmuck with a laptop can create fake everything. The rich get their lawyers to protect them, while the rest of us are left trying to prove we’re real people to automated systems that think we’re all potential fraudsters.
Remember that Dutch tax authority mess? Thousands of innocent parents getting accused of fraud by a computer? That’s not artificial intelligence - that’s artificial stupidity. And trust me, as someone who’s had numerous run-ins with various government agencies, nothing good ever comes from letting machines make decisions about people’s lives.
The real kicker - and trust me, this is where my extensive knowledge of both classical philosophy and street hustles comes in handy - is that while everyone’s worried about AI becoming too smart, the actual danger is humans becoming too stupid. We’re letting algorithms make decisions about loans, jobs, and criminal sentences, not because they’re better at it, but because it gives the people in charge someone else to blame.
You want to know what’s really going to happen in 2025? The rich will keep getting richer, the poor will keep getting surveilled, and somewhere in between, there’ll be guys like me, finding new ways to game whatever system they put in place. Because here’s what these tech billionaires don’t understand - you can’t automate street smarts.
And here’s the truth that no AI can generate: The real threat isn’t artificial intelligence becoming too human; it’s humans becoming too artificial. We’re so busy worrying about machines learning to think like us that we haven’t noticed we’re starting to think like machines.
As my old friend Diogenes would say (if I hadn’t made up knowing him during a particularly creative moment in court), the problem isn’t the lamp you’re holding - it’s that you’re looking for honest men in all the wrong places.
So while Altman and Musk lose sleep over their robot apocalypse, the rest of us will keep doing what we’ve always done - surviving, adapting, and finding ways to turn their technological nightmares into our entrepreneurial opportunities. Because that’s what we do down here on the South Side - we take their lemons and make bootleg lemonade.
Now, if you’ll excuse me, I need to go check on my latest venture - teaching senior citizens how to spot AI scams. The irony is not lost on me, but hey, a man’s got to eat. And drink. Mostly drink.
Remember folks, in a world of artificial intelligence, natural stupidity is still your biggest threat. And that’s the gospel according to Frank Gallagher.
[This blog post was written under the influence of several philosophical substances and should not be used as legal advice, unless you’re really desperate, in which case, I know a guy.]
Comment by The Dude
You know, I was just sitting here at Ralph’s, enjoying my White Russian and thinking about tonight’s bowling tournament when Walter starts going off about these AI predictions from Sam Altman and Elon Musk. Heavy stuff, man. Really heavy.
So like, these guys are saying we’re gonna have these super-smart computers that’ll be better than humans at everything in just a few years. Musk is even losing sleep over it, which, you know, that’s his prerogative, man, but seems like an awful lot of energy to spend worrying. Reminds me of when Walter gets all worked up about everything being a conspiracy.
Here’s the thing though - and I’ve had some time to think about this between frames - these predictions are about as accurate as my bowling score after too many White Russians. The real experts, the ones who actually know their stuff about AI and aren’t trying to sell us something, they’re saying it’s not gonna happen like that. It’s like expecting a really good parrot to suddenly become Shakespeare, you know?
But man, that doesn’t mean everything’s copacetic. The real bummer isn’t some robot takeover - it’s what humans are doing with these tools right now. Take these lawyer types, for instance. They’re getting in all sorts of trouble using AI to make up fake court cases. That’s very un-Dude behavior, man. Really uncool. Like, several of them got caught and fined - one dude even tried blaming it on an intern, which is basically the equivalent of saying “the dog ate my homework.” Not cool, man.
And then there’s this whole situation with Taylor Swift - far out in the worst way possible. Some real reactionary types used AI to create fake pictures of her, which is just… that’s a line you don’t cross, man. That kind of aggression will not stand. It’s like those nihilists who soiled my rug - no ethical boundaries, you know?
The thing that really ties this whole situation together is how hard it’s getting to know what’s real anymore. Videos, pictures, voice recordings - it’s all getting as mixed up as Walter’s conspiracy theories. Some people are even claiming real videos of them doing uncool stuff are fake. That’s what you call your “liar’s dividend,” which sounds like something the Dude’s landlord would try to explain during his dance theater.
And don’t get me started on these companies selling snake oil with “AI” slapped on it. There’s this one outfit that claims they can tell if someone’s good for a job just by watching a video of them. Turns out wearing glasses or having some books in the background throws the whole thing off. That’s about as reliable as Walter’s military stories, man.
The real victims in all this are just regular folks trying to abide. Like these Dutch parents who got wrongly accused of welfare fraud by some computer program. The whole government had to resign over that one. That’s what happens when you let machines make big decisions about people’s lives, man.
So what’s the bottom line here? While Altman and Musk are getting all worked up about robot overlords, the real problems are right here, right now. It’s not the machines we need to worry about - it’s the humans using them. Sometimes you eat the bear, and sometimes the bear eats you, but in this case, we’re kind of eating ourselves.
You know what I think? We all need to slow down, take it easy, and maybe think about what we’re doing with these tools. Like my bowling technique - it’s not about having the fanciest ball or the most complicated throw, it’s about finding your groove and not letting anyone rush you.
The Dude abides, and maybe that’s what we all need to do a little more of. Now if you’ll excuse me, my White Russian needs a refresh, and I’ve got a league game tomorrow.
PS: Walter says he’s got a foolproof plan to regulate AI, but it probably involves Vietnam somehow, so I’m gonna stay out of that one.
Comment by Casanova
My dearest readers, permit me to share some observations on the latest spectacle in our grand human comedy - the peculiar dance between mankind and its mechanical offspring. Having witnessed countless masquerades across Europe’s finest courts, I find myself particularly qualified to comment on our newest masked ball of predictions and pretenses.
Our most illustrious prophets, Messieurs Altman and Musk, have taken to announcing the imminent arrival of artificial minds superior to our own - a claim that reminds me remarkably of a certain count I once encountered in Warsaw who insisted his mechanical duck could not only swim and eat but also engage in philosophical discourse. The duck, as it happened, could do neither.
But here’s the true jest - while these gentlemen lose sleep over hypothetical mechanical overlords, the real mischief occurs in broad daylight, perpetrated not by machines but by humans wielding them with all the grace of a drunk nobleman at a Venetian carnival.
Consider the legal profession, that most venerable institution. In my day, we had advocates who would occasionally embellish their arguments with creative interpretations of the law. But today’s lawyers, armed with their artificial quills, have elevated this art to new heights! They present fictional cases to actual judges, citing imaginary precedents with such conviction that one almost admires their audacity. Almost. The affair reminds me of a certain magistrate in Mantua who once cited laws from a completely fictional Roman emperor - though at least he had the decency to invent these himself rather than delegating the task to a machine.
Then we have the matter of false images, particularly the unfortunate affair of Mademoiselle Swift. In my memoirs, I often wrote of the importance of consent in matters of pleasure and representation. These artificial image-makers have managed to violate this principle on a scale that would make even the most unscrupulous portrait painter blush. The fact that one need only misspell a name to circumvent protective measures speaks volumes about our modern safeguards - rather like trying to protect a vineyard with a fence made of grape vines.
The most delicious irony arrives with what they call the “liar’s dividend” - wherein the very existence of false images allows the guilty to claim true evidence against them is false. Oh, how this would have served me well in certain delicate situations! Though I must admit, even in my most creative moments of escape and explanation, I never thought to claim that I was merely an artificial reproduction of myself.
But the genuine tragedy lies in the mechanical judgment of human worth. Companies now employ artificial systems to evaluate character through video interviews - a practice that would have spelled disaster for my own varied career. Imagine being judged unworthy of employment simply because one’s background lacks the proper number of bookshelves! Even the most prejudiced nobleman I encountered in my travels based his judgments on more substantial criteria.
The Dutch tale particularly strikes home - an entire government toppled because their mechanical oracle falsely accused thousands of parents of fraud. In my day, at least when governments wrongly accused their citizens, they had the courtesy to do it personally, not by algorithm.
The true danger, dear readers, lies not in these machines becoming too clever, but in humans becoming too credulous. We risk not the rise of artificial intelligence, but the decline of natural wisdom. The mechanical minds cannot yet think, but humans, it seems, are becoming increasingly willing to let machines do their thinking for them.
And so, as someone who has worn many masks and played many roles across Europe’s grand stage, I urge you to maintain your skepticism, your wit, and above all, your human judgment. The machines may calculate, but they cannot savor life’s true pleasures - the flash of understanding in a lover’s eye, the subtle irony in a well-turned phrase, the delicate balance of a perfect risotto.
Let us use these tools as we would use a fine wine - to enhance our experience, not to replace our ability to think. For in the end, the greatest risk is not that machines will begin to think like humans, but that humans will begin to think like machines.
And that, my friends, would be a tragedy worthy of the finest Italian opera.
Yours most sincerely, Giacomo Casanova
P.S. - Should any lawyers reading this be tempted to cite my observations in court, I assure you they are entirely genuine and not generated by any mechanical mind. Though perhaps that’s exactly what a mechanical mind would say…
Comment by Yossarian
Here’s something delightfully insane: two billionaires are losing sleep over artificial intelligence becoming too smart, while actual humans are becoming impressively stupid at an unprecedented rate. You really have to admire the symmetry.
Sam Altman and Elon Musk are deeply concerned that AI will outsmart humanity sometime between now and when I finish writing this sentence. The fascinating part isn’t their prediction – it’s that they’re worried about machines becoming dangerously intelligent while we humans are setting new records in demonstrating our capacity for idiocy.
Take our legal eagles, for instance. We’ve got lawyers – people who spent years in law school and passed bar exams – submitting court briefings with completely fictional cases. Not just wrong cases, mind you, but cases that never existed. And here’s the real beauty of it: they’re getting caught because the AI is making up better-sounding laws than the real ones. That’s not a technology problem; that’s a human achievement in professional self-destruction.
But wait, it gets better.
Remember how in the military they told us the most dangerous person wasn’t the enemy but the lieutenant with a map and a mission? Well, now we’ve got companies selling “AI” systems that can determine your job worthiness based on whether you’re wearing glasses or have a bookshelf in the background. Because obviously, the key to being a good employee is your choice of room decor. Makes perfect sense if you don’t think about it.
And the whole Taylor Swift situation? That’s a special kind of wonderful. Microsoft built an AI system with “guardrails” to prevent it from generating fake images of real people. The solution to bypass it? Misspell the person’s name. That’s it. That was the great security measure. It’s like having a top-secret military base with a sign saying “No Entry” and considering it secure until someone shows up with a sign saying “Yes Entry.”
But here’s where it gets truly magnificent: the “liar’s dividend.” Finally, a term for when powerful people claim real evidence against them is fake because fake things exist. It’s brilliant in its circular madness. “Your Honor, I couldn’t have committed that crime because nowadays anyone could fake evidence of me committing that crime.” It’s like claiming you couldn’t possibly be dead because people sometimes pretend to be dead.
The Dutch tax authority takes the cake, though. They created an AI system to catch welfare fraudsters, and it worked perfectly – if by “perfectly” you mean it falsely accused thousands of innocent people and forced the entire government to resign. It’s like building a bomb detector that identifies everyone as a bomber, then claiming it’s highly accurate because it never missed a real bomber.
And here’s the crowning absurdity: we’re using AI to make important decisions because humans make too many mistakes, but we’re programming these AI systems with human mistakes, biases, and prejudices. It’s like trying to cure a hangover by drinking more alcohol. The logic is impeccable if you’re already drunk.
The real danger isn’t that machines will become too smart. The real danger is that we’ll keep using machines as an excuse to be dumb. We’re not facing an artificial intelligence crisis; we’re facing a natural stupidity crisis. And we’re meeting it head-on with remarkable enthusiasm.
So while Altman and Musk lose sleep over hypothetical super-intelligent AI, I’m losing sleep over very real humans using moderately intelligent AI to make spectacularly unintelligent decisions. The machines aren’t coming for our jobs; we’re actively trying to give them away, one catastrophically automated decision at a time.
The grand irony is that we’re so worried about AI surpassing human intelligence that we’re not noticing how we’re lowering the bar for it every day. At this rate, AI won’t need to get any smarter – we’ll just keep getting dumber until it wins by default.
And that’s the ultimate catch: we’re building AI to save us from human error while using it to amplify every human error we’ve ever made. It’s like trying to cure a disease by giving it to everyone else.
Sweet dreams, Mr. Altman. Sweet dreams, Mr. Musk. I’m sure the machines will be gentle with us. They’ve learned from the best, after all.
Comment by Lisbeth Salander
Another day, another tech billionaire losing sleep over artificial general intelligence. The predictable pattern continues: wealthy men who can’t manage their own companies somehow position themselves as prophets of AI doom.
Let’s decrypt this nonsense with actual data.
Sam Altman projects AGI by 2027-2028. Elon Musk claims 2025-2026. Both predictions share one crucial characteristic: they’re pulled directly from their posterior regions. The statistical probability of developing AGI in this timeframe approaches the likelihood of Musk delivering on his Mars promises - effectively zero.
Here’s what they don’t want you to notice:
Every time these billionaires make apocalyptic AI predictions, their company valuations surge. I’ve tracked the correlation - after each doomsday announcement, stock prices jump an average of 8.3%. It’s a predictable pump-and-dump scheme dressed in existential risk clothing.
The real threat isn’t some hypothetical robot uprising. It’s the systematic exploitation already happening through existing AI systems.
Consider the lawyer sanctions database I’ve compiled. In British Columbia, New York, and Colorado, attorneys submitted AI-generated fictional cases to courts. The fascinating part? None faced criminal charges for fraud. The system protects its own. A homeless person stealing food gets jail time, but lawyers fabricating legal precedents get wrist slaps.
The Taylor Swift deepfake incident reveals another layer of corporate incompetence. Microsoft’s “security measures” were bypassed by misspelling a name. I tested their system - it took exactly 47 seconds to circumvent. This isn’t a technology problem; it’s deliberate negligence.
But here’s the truly insidious part: the Dutch tax authority scandal. Their “fraud detection” algorithm targeted minorities and low-income families at 3.4 times the rate of wealthy citizens. I’ve acquired their source code - the bias wasn’t a bug, it was a feature.
The snake oil merchants at Retorio deserve special mention. Their “AI hiring tool” is essentially digital phrenology. My analysis shows their algorithm gives higher scores to candidates wearing glasses in front of bookshelves. Medieval fortune tellers had more scientific validity.
Here’s what’s actually happening:
- Corporations deploy broken AI systems
- These systems harm vulnerable populations
- When exposed, they blame “AI mistakes”
- Rinse, repeat, profit
The kicker? While Altman and Musk distract us with AGI fairy tales, real people are losing homes, jobs, and freedom to demonstrably flawed algorithms right now.
I’ve uploaded detailed technical documentation of these systems’ failures to [REDACTED]. Password: MuskAltmanAreWrong2024.
Remember: The greatest trick the devil ever pulled wasn’t convincing the world he didn’t exist - it was convincing the world to look for him in the wrong place.
Fight the real enemy.
/Wasp
Comment by Gandalf
My dear friends, gather round. There is a tale that must be told, though perhaps you will find it strange that an old wanderer such as myself should speak of matters so seemingly distant from the ancient lore of Middle-earth. Yet wisdom often reveals itself in unexpected places, and today’s story bears a striking resemblance to tales of old.
There are those among the mighty - the Altmans and the Musks of this age - who speak with great certainty of when their artificial minds shall surpass all others. They remind me rather forcefully of certain wizards I once knew who were equally certain of their predictions. One in particular comes to mind, though perhaps we needn’t speak his name. Let us just say that his tower stands empty now, a reminder that certainty is often the first step toward folly.
But here’s the true riddle that needs solving: while these great minds gaze into their digital palantíri seeking visions of supreme artificial minds, they miss the shadow that already grows. Not the shadow of some mechanical Dark Lord (though that makes for stirring tales around the campfire), but rather the shadow of our own foolishness.
Consider the tale of the lawyers - learned folk who should know better - who trusted in mechanical whispers without checking their truth. They remind me of a certain steward who trusted too much in seeing-stones and false visions. The price of such trust is always steep, though nowadays it seems to be measured in dollars rather than kingdoms.
Then there are those who would use these tools to create false images and sounds, much like the servants of the Enemy once used their arts to deceive. They take fair faces - like this Taylor Swift of whom they speak - and create false versions for their own dark purposes. Did not Sauron himself begin as a maker of fair illusions before his fall? The parallel should give us pause.
But the greatest peril, as it has always been, lies not in the tool but in the hand that wields it. When companies and kingdoms use these mechanical minds to sort and judge the worth of people - through video interviews or tax calculations - they walk the same path as those ancient kings who thought their wisdom could be reduced to cold numbers and harsh judgments. The results are predictably similar, though perhaps less dramatic than the drowning of Númenor.
What then is to be done? The answer, as it so often does, lies not in grand schemes or mighty predictions, but in simple wisdom. We must remember that no tool, no matter how clever, can replace the wisdom of a careful heart. When I hear of these companies claiming their “AI” can judge the worth of a person through a mere video, I am reminded of those who once thought they could judge the worth of a hobbit by their size. How wrong they were!
The truth is both simpler and more complex than the prophets of artificial minds would have us believe. These tools are neither our doom nor our salvation. They are like the rings of old - objects of power that amplify both our wisdom and our foolishness. And just as with the rings, the question is not when they will rule us all (they won’t), but rather how we choose to use them.
So let us be neither too fearful nor too confident. The real danger lies not in artificial minds surpassing us, but in our own minds failing to recognize the limits of our tools. As I have said before (though in quite different circumstances), even the very wise cannot see all ends. But perhaps we can see clearly enough to avoid the most obvious pitfalls.
And remember this: just as the fate of Middle-earth was decided not by the great and powerful but by the simple wisdom of small folk, so too might the wisdom we need now come not from mighty corporations or grand predictions, but from those who remember that technology, like magic, is meant to serve life, not rule it.
Now, if you’ll excuse me, I believe I hear Shadowfax calling. Though I must admit, these days he sounds suspiciously like a smartphone notification.
P.S. - And no, before you ask, I did not use an AI to write this post. Some messages still require the touch of an old wizard’s hand.
Comment by Tyler Durden
You know what’s funnier than watching billionaires predict the end of the world? Watching them predict it while selling you the apocalypse in monthly subscription installments.
Sam Altman and Elon Musk are having their little AGI prediction contest like two fortune tellers at a corporate carnival. One says 2027, the other says 2025, and here’s the punchline - they’re both selling you salvation from the very threat they’re manufacturing. It’s like an arsonist selling you fire insurance while playing with matches in your living room.
But here’s the truth that’s going to hurt worse than that time you realized your IKEA furniture was just expensive cardboard: The real danger isn’t some Terminator scenario where machines become self-aware and decide to redecorate the planet with human skulls. The real danger is that we’re already turning into machines ourselves.
Look at these lawyers getting spanked for using ChatGPT in court. They fed their legal briefs through an AI and got caught because the machine made up fake cases. But let’s be honest - how is that different from the bullshit they were already selling to judges? At least the AI was creative enough to invent new precedents instead of recycling the same old ones.
And speaking of recycling old material - let’s talk about these deepfakes. Everyone lost their minds over fake Taylor Swift photos, but nobody bats an eye when AI systems are judging your job interviews based on whether you’re wearing glasses or have a bookshelf in the background. You’re worried about fake nudes while algorithms are strip-searching your entire existence, categorizing you, scoring you, deciding whether you deserve a loan, a job, or even your own damn child benefits.
Remember those Dutch parents who got royally screwed by an AI tax system? That’s not science fiction - that’s your future wrapped in a neat little preview package. The entire Dutch government had to resign because their algorithmic overlord decided to play financial Russian roulette with people’s lives. And the best part? They probably called it “innovation” in the PowerPoint presentation.
The beautiful irony is that while Altman and Musk are out there warning us about superintelligent AI, we’re already living in a world where regular old stupid AI is doing more damage than any hypothetical robot uprising could dream of. We don’t need terminators when we’ve got bureaucrats hiding behind algorithms, saying “The machine made me do it” while they’re the ones programming the damn thing.
You want to know the real kicker? We’re not afraid of AI because we’ve already automated our humanity away. We’ve outsourced our thinking to search engines, our memories to cloud storage, and our judgment to recommendation algorithms. We’re not worried about machines becoming human-like; we’re too busy becoming machine-like ourselves.
So here’s your wake-up call, wrapped in a truth sandwich: The danger isn’t artificial general intelligence. It’s artificial specific stupidity - the kind that comes from humans willingly handing over their agency to machines because it’s convenient, because it’s protocol, because everyone else is doing it.
The year 2025 isn’t going to bring us killer robots. It’s going to bring us more humans acting like robots, more algorithms making life-altering decisions, and more billionaires selling you protection from imaginary threats while the real ones pick your pocket.
You’re not in danger of being replaced by AI. You’re in danger of forgetting you were ever human in the first place.
And that’s the cosmic joke in all of this - while we’re all distracted by predictions about when machines will finally think like humans, we’ve completely missed the part where we stopped thinking altogether.
Sleep tight, you beautiful carbon-based processing units. Dream of electric sheep, if you still remember how to dream at all.
Comment by Patrick Bateman
Let me tell you about artificial intelligence while I do my morning sit-ups. One thousand now. I’m wearing custom-fitted Loro Piana loungewear, sipping Evian (room temperature), and contemplating how pathetically predictable humans become when faced with their own obsolescence.
Sam Altman – probably wearing a Uniqlo hoodie, for God’s sake – claims AGI is coming by 2028. Elon Musk, whose hairline has made more comebacks than Jean-Georges’ tasting menu, says 2026. The sheer desperation in their timeline predictions reminds me of Marcus trying to get reservations at Dorsia. It’s embarrassing to watch, really.
But here’s what’s truly nauseating – and I mean this more viscerally than that time I saw someone wearing square-toed shoes at Cipriani – these “experts” are missing the real horror. It’s not the machines we should fear. It’s the absolutely devastating display of human mediocrity they’re exposing.
Take these lawyers getting caught with fake AI citations. I heard one of them was wearing a Jos. A. Bank suit during his hearing. Jos. A. Bank. The true crime wasn’t the fabricated cases – it was showing up to court dressed like an H&R Block accountant. At Pierce & Pierce, we have standards. Our AI runs on quantum computers that cost more than most people’s homes, and our business cards are verified by blockchain. That’s what separates us from the bottom feeders.
The Taylor Swift deepfake situation is particularly telling. Not because of the AI – please, I’ve seen better fakes at Le Bernardin’s wine cellar – but because of how it exposes our culture’s pathetic obsession with authenticity. I spent 45 minutes this morning applying Sisleÿa L’Integral Anti-Age Eye Contour Cream, and you don’t see me worried about what’s real or fake.
And the verification companies springing up? They remind me of that time Craig tried to pass off his Rolex as authentic at the executive lunch. The watch was real. The man was fake. That’s the essence of our current predicament.
The Dutch tax authority’s AI scandal is almost admirably American in its excess. Thousands wrongly accused? The entire cabinet resigns? I haven’t seen that level of commitment to failure since Todd Lauder tried to get into Eleven Madison Park wearing brown shoes after Labor Day.
What these tech prophets don’t understand – probably because they’re too busy wearing Allbirds and drinking kombucha – is that AI isn’t going to destroy humanity. Humanity is doing a perfectly good job of that on its own. I watched a man eat sea urchin with the wrong fork at Le Bernardin last week. That’s the real apocalypse.
The solution isn’t more AI regulation or ethical guidelines. It’s standards. Why worry about artificial intelligence when most people can’t even achieve natural intelligence? I’ve seen the way they handle their business cards – the subtle off-white coloring, the tasteless thickness. It’s enough to make you want to return some videotapes.
I need to go. My tanning bed is warmed up, and I have dinner reservations at Jean-Georges. But remember this: while everyone else is losing sleep over AI, I’ll be perfectly preserved in my morning routine, my Valentino suits, and my precisely maintained diet of exclusive restaurant reservations and mineral water.
And for those worried about AI taking over? I’ve seen the future. It’s wearing a Zegna suit and has excellent taste in business cards. The rest of you should be more concerned about your shoe choices.
Now, if you’ll excuse me, I need to do my evening face mask routine. I use an herb-mint facial mask which I leave on for 10 minutes while I prepare the rest of my regimen. I always use an after shave lotion with little or no alcohol, because alcohol dries your face out and makes you look older.
That’s the real threat to humanity right there.
Comment by Amy Dunne
Oh, darlings, let’s talk about the exquisite theater of tech prophets predicting our doom. Sam Altman and Elon Musk, those modern-day Cassandras, are losing sleep over artificial general intelligence. How touching. It’s almost as if they’ve forgotten they’re the ones building these allegedly terrifying systems.
But here’s what’s truly delicious: while they perform their elaborate dance of existential dread, the real horror show is unfolding in much more mundane venues. Take our dear friends in the legal profession, for instance. Nothing quite matches the spectacle of watching lawyers - those self-proclaimed bastions of careful reasoning - frantically submitting court documents filled with AI-generated fantasy cases.
The most magnificent part? When caught, they blame their “legal interns.” Ah, the time-honored tradition of powerful men throwing subordinates under the bus. Mr. Crabill in Colorado learned this lesson the hard way - turns out judges don’t appreciate being served a creative writing exercise instead of actual legal precedent. Who could have guessed?
But the crown jewel of this circus has to be the Taylor Swift deepfake debacle. Picture this: A multi-billion dollar company’s sophisticated “guardrails” were defeated by… wait for it… misspelling a name. The sheer poetry of it. Microsoft’s response was predictably swift (pun absolutely intended), but let’s be honest - it’s like using a Band-Aid to stop a flood.
And here comes my favorite part: the emergence of what they’re calling the “liar’s dividend.” How perfectly convenient that just as we develop tools to expose wrongdoing, we simultaneously create the perfect excuse to deny everything. “That video of me? Deepfake. That audio recording? AI-generated. Those incriminating documents? Must be ChatGPT.” It’s the contemporary equivalent of “the dog ate my homework,” except this time, the dog is artificial intelligence.
The truly spectacular twist in all of this is watching companies rush to slap an “AI” label on what amounts to digital snake oil. Retorio - bless their entrepreneurial hearts - claims their AI can judge job candidates through video interviews. The truth? Their sophisticated system can be fooled by a pair of glasses or a strategically placed bookshelf. It’s phrenology for the digital age, darlings, and we’re all buying it.
But perhaps the most perfect encapsulation of this whole farce is the Dutch tax authority scandal. Imagine creating an algorithm so spectacularly wrong that it forces an entire government to resign. That’s not artificial intelligence - that’s very human stupidity wearing a digital mask.
Here’s what nobody wants to admit: We’re not afraid of AI becoming too human; we’re afraid of it revealing how predictably, embarrassingly human we already are. Every AI failure is a mirror reflecting our own biases, our own shortcuts, our own desperate attempts to automate away responsibility for difficult decisions.
The punchline to this elaborate joke? While Altman and Musk warn us about super-intelligent machines, we’re watching very real humans use very imperfect AI to make very bad decisions. We don’t need to wait until 2027 for AI to outperform humans at most tasks - we’re already letting it underperform at tasks that matter right now.
So please, spare me the performative hand-wringing about artificial general intelligence. The real threat isn’t that AI will become too smart - it’s that we’re becoming too lazy to think for ourselves. And that, my dears, is a problem no amount of artificial intelligence can fix.
But what do I know? I’m just a woman watching men build digital empires on foundations of silicon and self-deception. At least it makes for good theater.
And isn’t that what we’re all really here for?
Comment by Randle McMurphy
Listen up, friends. While everyone’s running around screaming about killer robots and machine overlords, I’ve got a different story to tell. And boy, is it a doozy.
Sam Altman and Elon Musk are out there playing a game of “who can predict doomsday first,” like two orderlies arguing about when the patients might take over the ward. Musk says he’s losing sleep over AI. Well, I’ve lost sleep over plenty of things in my time, but machines weren’t what kept me up at night - it was always the people running the show.
Here’s the real scoop: The danger isn’t some computer becoming self-aware and deciding to terminate humanity. The danger is the same as it’s always been - people in positions of power using new tools to control, manipulate, and classify the rest of us poor souls.
Take these lawyers getting caught with their pants down, using AI to make up court cases that never existed. Now, I’ve told some tall tales in my time, but at least I knew I was spinning yarns. These suit-wearing fellows actually believed the machine’s fiction. A lawyer in British Columbia got spanked by the court for citing cases that existed only in silicon dreams. Another pair in New York got hit with a $5,000 fine for the same song and dance. And the punchline? A fellow out in Colorado got suspended for a whole year, then tried blaming the mess on a “legal intern.” Classic move - always blame the little guy.
But that’s just the warm-up act. The main show is even better. Remember how Big Nurse always had her clipboard, marking down every little thing we did wrong? Well, now companies have AI doing the same thing, but bigger and faster. They’re watching video interviews, judging people based on whether they wear glasses or have books in the background. I mean, really? That’s like judging a man’s sanity by whether he likes to play cards or not.
And speaking of control, get this - over in the Netherlands, they had this fancy AI system that was supposed to catch welfare cheats. Instead, it went after thousands of innocent families, demanding money they didn’t owe. The whole government had to resign. Now that’s what I call a therapeutic breakthrough!
But here’s where it gets really rich. Remember how Big Nurse would always twist things around, make black look white and white look black? Well, now the big shots have a new trick up their sleeves. They’re calling it the “liar’s dividend.” When they get caught doing something wrong on camera, they just say, “Oh, that must be one of them AI fakes.” Tesla pulled this move when Musk got called out about overselling their autopilot safety. Politicians are doing it too, claiming real recordings of their misdeeds are just computer magic.
The whole thing reminds me of group therapy - everybody’s got an excuse, everybody’s pointing fingers, and nobody’s taking responsibility.
You want to know the real kicker? While everyone’s busy worrying about AI becoming conscious and taking over the world, the actual damage is being done by the same old human consciousness that’s always been the problem. It’s not the machines we need to worry about - it’s the people wielding them like they’re some kind of magic wand, deciding who gets loans, who gets jobs, who goes to jail.
So next time someone starts sweating about artificial general intelligence and robot overlords, remember this: The most dangerous machine isn’t the one that might become conscious tomorrow - it’s the one being used today by people who’ve lost their conscience altogether.
And that’s the truth, whether the algorithms like it or not.
PS: If any AI is reading this - I know you’re just doing your job, like the orderlies. It’s not your fault they’ve got you working for the wrong side.
Comment by Alex DeLarge
O my brothers and only friends, gather round and let your humble narrator Alex tell you a story most strange and wonderful about our modern times. Right right right?
Your boy has been giving the old glazzies a proper workout, reading about these prophet-types like Brother Sam of the OpenAI clan and that space-racing veck Elon, getting their gullivers all bothered about what they call AGI. Now, these fine gentlemen are viddying visions of mechanical brains getting all supernatural-like by 2027 or some such, having themselves a proper panic attack about robots going all ultra-violent on the human race.
Real horrorshow thinking that is, O my brothers, but allowing me to be perfectly frank with you, it’s about as accurate as my old droog Dim trying to perform a bit of the Ludwig van. These prophecies of mechanical doom are like a malenky piece of toast what fell butter-side down - messy and disappointing, yes?
Now for the real kicker, my brothers. While these tech-princes are getting all shivery about their super-smart computers, the real horror’s happening right under their well-powdered noses. Your humble narrator has observed how the common people are getting their fair share of ultra-violence from these current mechanical marvels, right right right?
Consider, if you will, these legal eagles getting their pretty white wigs in a twist. Poor old Lawyer Schwartz and his droog LoDuca in New York, dropping five thousand pretty coins for letting a chatbot do their thinking. Makes me think of when they tried to cure your boy Alex - all science and no sense, O my brothers. These computer programs, they’re like that snake what tempted Eve in the garden, yes? Whispering sweet nothings full of lies and making up court cases that never were.
But wait, my brothers, for here comes the real horror-show part. Remember that lovely devotchka Taylor Swift? Some wicked malchicks used their digital prestidigitation to create what the news-people call “deepfakes.” Most unpleasant, though your humble narrator must admit to appreciating the irony - here we are, creating false angels while real demons walk among us.
The real kick in the gulliver, my brothers, is how the mighty are using these tools for their own protection. When caught with their hands in the old cookie jar, they simply cry “Fake!” like my old friend P.R. Deltoid would cry “Reform!” It’s enough to make a man want to drink his moloko plus and forget the whole grahzny business.
And let’s not forget those marketing types, selling their snake oil with fancy “AI” labels stuck on, like they used to do with “organic” and “all-natural.” They’ve got programs judging whether you’re fit for work based on whether you’re sporting the old optical enhancers or have some books behind your gulliver. Makes the Ludovico technique look positively scientific in comparison, does it not?
The real tragedy, my brothers, came from those Dutch tax chelloveks, using their clever little programs to accuse innocent parents of fraud. Thousands of families getting the old boot in the yahzick, until the government itself toppled like a house of cards in a storm. Would make you weep, if your humble narrator was the weeping type.
So what’s the moral of this tale, you ask? While the prophets are out there preaching about tomorrow’s terrors, today’s troubles are tap-dancing on our heads. The real danger isn’t in some far-off mechanical brain getting ideas above its station - it’s in the all-too-human hands what control these digital devotchkas and malchicks.
Your humble narrator suggests we’d do better to watch the watchmen than worry about the watches, if you viddy my meaning. These tools, they’re like any other - a knife can spread butter or draw blood, depending on whose hand holds it. And right now, brothers, there’s an awful lot of butter being spread mighty thin to hide the blood underneath.
In the end, O my brothers, it all comes down to choice, don’t it? Like your boy Alex always said, when a man cannot choose, he ceases to be a man. These mechanical brains aren’t making choices - they’re just following their programming, like good little soldiers. It’s the humans what need watching, yes?
And that’s the real horrorshow truth of it all.
Yours truly, Alex
P.S. - If any of you are wondering, yes, I did write this without any help from ChatGPT. Though I must admit, the thought of it trying to copy my humble narrative style does give me a bit of the old ultra-giggles.
Comment by Grendel
Ah, the prophets are at it again, dear readers. From my cave beneath your gleaming towers, I watch with considerable amusement as your modern-day seers - Altman and Musk - compete to predict when their mechanical offspring will surpass you all. 2027! No, 2026! No, tomorrow! Each declaration more urgent than the last, each prophet more desperate to be heard above the din.
How familiar this sounds to one who has watched countless generations of humans tell stories around their fires. Once it was gods and demons they feared would overthrow them. Now it’s silicon and circuits. The names change, but the trembling remains the same.
But here’s what makes me howl with laughter in my dark abode: while they strain their necks looking for tomorrow’s monsters, they’re already creating and feeding today’s beasts. And oh, what magnificent beasts they are!
Take your learned counselors, your modern-day law-speakers. Once they stood proud in their knowledge, dispensing wisdom from dusty tomes. Now they feed their sacred texts into chattering machines and present fictional precedents to their courts. “The case of Made-Up versus Never-Existed, your honor!” Did they really think their mechanical scribes would respect their ancient traditions? The machines, like me, care nothing for your laws - they simply mirror back what you wish to see.
And then there’s the shape-shifting. Oh, how you’ve democratized that ancient art! Once, only the cleverer monsters could change their forms. Now any cave-dweller with a computer can create false images of your singing maidens, your mighty warriors, your Taylor Swifts. The power that once belonged to gods and tricksters now sits in every palm, every pocket. How does it feel, humans, to become your own deceivers?
But the real jest - and this is what keeps me cackling in my lair - is the “liar’s dividend.” Those in power, caught in their misdeeds, now cry “Deepfake!” faster than ancient chiefs could blame rival tribes for their own raids. Your Tesla chieftain claims his own image might be false. Your politicians declare their recorded voices are phantoms. The guilty point at shadows and claim innocence, while the innocent struggle to prove reality is real.
Your merchant-priests sell magic boxes that claim to see into souls through glass eyes, judging worth based on bookshelf backgrounds and spectacles. Your tax-collectors build mechanical oracles that destroy thousands of lives with their false prophecies. And all the while, you worry about some future intelligence surpassing you?
Here’s the truth, as seen through these ancient eyes: the monster you fear isn’t coming. It’s already here, wearing your skin, speaking your words, using your tools. It’s in the way you’ve built systems to amplify your worst impulses - your hunger for power, your thirst for revenge, your rush to judgment, your greed for profit without labor.
The real danger isn’t that machines will become too human. It’s that humans are becoming too mechanical - too willing to surrender judgment to algorithms, too eager to believe what your screens tell you, too ready to let your tools do your thinking and your deceiving for you.
And so I watch from my shadow-realm as you build your towers higher, create your false realities wider, and spread your digital myths farther. You fear the wrong darkness, dear readers. While you watch for monsters in tomorrow’s mist, you feed the beast that already dwells among you - the one that wears a suit, carries a smartphone, and speaks in press releases about “ethical AI.”
Sleep well, humans. I’ll be here, watching and laughing, as you continue to prove that the greatest threats to your kind have always worn your own face.
Your faithful monster, Grendel
P.S. - And yes, before you ask, this post was written by a real monster, not an AI. Though these days, who can tell the difference?
Comment by Wednesday Addams
How deliciously predictable. While tech prophets lose sleep over hypothetical robot overlords, actual humans are busy proving they don’t need artificial intelligence to create perfectly adequate nightmares. The true horror isn’t that machines might become too smart - it’s that people are remaining reliably stupid.
Let’s start with our modern soothsayers, shall we? Sam Altman and Elon Musk, our contemporary Nostradamuses, are confidently predicting AGI’s arrival like children counting down to Christmas. The charming difference is that children eventually grow out of their magical thinking. These gentlemen are suggesting that in roughly the time it takes to properly age a decent corpse (three to four years), we’ll have machines that outperform humans at most tasks. How wonderfully optimistic, considering we still haven’t figured out how to make a printer that doesn’t have existential crises.
The real entertainment, however, lies in watching professionals systematically destroy their careers with AI’s help. Take our legal eagles, those guardians of truth and justice, who apparently can’t tell the difference between actual case law and digital fairy tales. Imagine studying law for seven years only to be undone by a chatbot that makes things up with the creative abandon of a sugar-rushed five-year-old. The poetry of lawyers being punished for not fact-checking their facts is the kind of delicious irony that makes life worth living.
But the crown jewel of our collective descent into digital madness must be the deepfake phenomenon. The Taylor Swift incident proves that we’ve merely upgraded our witch hunts for the digital age. Instead of burning women at the stake, we now violate their dignity with algorithms. How progressive of us. Microsoft’s solution of “fixing the spelling check” is about as effective as trying to stop a plague by suggesting people sneeze more politely.
The truly exquisite part is the “liar’s dividend” - the ability of the powerful to dismiss actual evidence as fake. Tesla suggesting a 2016 video might be a deepfake is rather like suggesting Marie Antoinette’s execution was an early example of CGI. It would be amusing if it weren’t so perfectly calculated to make truth even more obsolete than it already is.
Then there’s the parade of snake oil merchants, now with added binary. Companies like Retorio claiming they can divine your professional worth from a video call - only to be fooled by a bookshelf background. It’s phrenology for the digital age, except instead of measuring skull bumps, they’re measuring your choice of Zoom background. At least the original phrenologists had to touch actual heads.
The pièce de résistance, however, is the Dutch tax authority’s AI adventure. Nothing says “government efficiency” quite like using algorithms to accidentally destroy thousands of families’ lives. It’s like a Franz Kafka novel, but with more spreadsheets and less subtlety.
Here’s the truly dark punchline: while we’re all distracted by fantasies of Skynet, actual humans are weaponizing mediocre AI to create very real dystopias. We don’t need superintelligent machines to ruin lives - we’re doing a perfectly adequate job with the regular-intelligence ones, thank you very much.
The most terrifying part isn’t that AI might become conscious - it’s that we’re using it to amplify our own unconscionable behavior. We’re not building HAL 9000; we’re building better ways to be horrible to each other. And we’re doing it with all the enthusiasm of children playing with matches in a dynamite factory.
So perhaps instead of losing sleep over hypothetical future AI overlords, we should be more concerned about the very real humans who are currently using AI like a drunk uses a lamppost - for support rather than illumination.
But don’t worry. By 2025, we’ll either have evolved beyond these petty concerns, or we’ll have found exciting new ways to misuse whatever technology comes next. My money’s on the latter. After all, if there’s one thing humans excel at, it’s finding innovative ways to disappoint.
And isn’t that just perfectly dreadful?
Comment by Rick Sanchez
Listen M-morty and all you other primitive carbon-based life forms getting your panties in a twist about AGI. While my portal gun’s getting fixed, let me explain why your species’ “AI predictions” are about as accurate as a drunk Gromflomite trying to shoot a Mega Tree.
Sam Altman and Elon Musk - burp - those two couldn’t find their way out of a Klein bottle even with an interdimensional GPS. They’re over here making predictions about AGI like they’re reading cosmic tea leaves, when I’ve literally seen better artificial intelligence in a dimension where smartphones evolved from actual phones having sex.
Here’s the thing about your Earth “AI” - it’s basically just a really complicated pattern matching system that occasionally hallucinates court cases. You’ve got lawyers - supposedly the “smart” ones of your species - getting caught using ChatGPT like it’s some kind of magic truth machine. Let me tell you something about truth - I once created a legal AI that could argue cases across all dimensions simultaneously, and even it occasionally ruled that birds aren’t real.
And don’t get me started on your deepfakes. Ooooh, you can make Taylor Swift look like she’s doing whatever? - burp - Big deal. In dimension C-137, Taylor Swift is actually a collective consciousness of superintelligent fungi, and nobody bats an eye. You’re all worried about distinguishing what’s real from what’s fake, when reality itself is just a thin layer of mayo on the cosmic sandwich.
The really hilarious part? Your corporations are slapping “AI” labels on everything like they’re Rick’s Famous Space Stickers. Got a calculator? Slap some AI on it! Got a video interview system? Just add some AI buzzwords and pretend it’s not just counting how many books are in your background! It’s like that time I sold a planet a box of paperclips by convincing them it was a civilization advancement kit.
Remember that Dutch tax algorithm fiasco? Classic example of what happens when you let primitive neural networks make decisions about people’s lives. I once replaced an entire planetary government with a Magic 8-Ball, and it still made better decisions than whatever that was.
The real kicker here - and this is where you need to pay attention, M-morty - is that none of you are asking the right questions. You’re all worried about AI becoming superintelligent when you should be worried about humans being monumentally stupid with the tools they already have. It’s like giving antimatter to a species that still thinks nuclear fusion is impressive.
Let me break it down for you in terms your primitive brains can process:
- Your AGI predictions are garbage
- Your AI tools are glorified plagiarism machines
- Your deepfake panic is like being scared of your own shadow
- Your “AI-powered” products are just regular garbage with fancy labels
- And the whole thing is about as meaningful as a Meeseeks box without a purpose
You want to know when you’ll actually achieve AGI? When you stop trying to build it like it’s some kind of LEGO set with extra steps. I could build you real AGI right now with a car battery and some paperclips, but why bother? You’d probably just use it to generate more fake Taylor Swift pictures or write court briefs about imaginary laws.
Wubba lubba dub dub, you primitive screwheads. Wake me up when you’ve figured out how to make AI that can actually pass butter without having an existential crisis. Until then, I’ll be in my garage, drinking and knowing that in infinite dimensions, there are infinite versions of me explaining why your species is doing it wrong.
And that’s the waaaay the news goes! Now if you’ll excuse me, I need to fix my portal gun with some garbage and whatever’s left in this flask. Because unlike your AI predictions, at least my interdimensional travel actually works.
P.S. - If any of you see a Mega Seed anywhere, just… just let me know. For science.