Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
strange æons takes on hpmor :o
all of the subculture YouTubers I watch are colliding with the weirdo cult I know way too much about and I hate it
oh no :(
poor strange she didn’t deserve that :(
Strange is a trooper and her sneer is worth transcribing. From about 22:00:
So let’s go! Upon saturating my brain with as much background information as I could, there was really nothing left to do but fucking read this thing, all six hundred thousand words of HPMOR, really the road of enlightenment that they promised it to be. After reading a few chapters, a realization that I found funny was, “Oh. Oh, this is definitely fanfiction. Everyone said [laughing and stuttering] everybody that said that this is basically a real novel is lying.” People lie on the Internet? No fucking way. It is telling that even the most charitable reviews, the most glowing worshipping reviews of this fanfiction call it “unfinished,” call it “a first draft.”
A shorter sneer for the back of the hardcover edition of HPMOR at 26:30 or so:
It’s extremely tiring. I was surprised by how soul-sucking it was. It was unpleasant to force myself beyond the first fifty thousand words. It was physically painful to force myself to read beyond the first hundred thousand words of this – let me remind you – six-hundred-thousand-word epic, and I will admit that at that point I did succumb to skimming.
Her analysis is familiar. She recognized that Harry is a self-insert, that the out-loud game theory reads like Death Note parody, that chapters are only really related to each other in the sense that they were written sequentially, that HPMOR is more concerned with sounding smart than being smart, that HPMOR is yet another entry in a long line of monarchist apologies explaining why this new Napoleon won’t fool us again, and finally that it’s a bad read. 31:30 or so:
It’s absolutely no fucking fun. It’s just absolutely dry and joyless. It tastes like sand! I mean, maybe it’s Yudkowsky’s idea of fun; he spent five years writing the thing after all. But it just [struggles for words] reading this thing, it feels like chewing sand.
The USA plans to migrate SSA’s code away from COBOL in months: https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/
The project is being organized by Elon Musk lieutenant Steve Davis, multiple sources who were not given permission to talk to the media tell WIRED, and aims to migrate all SSA systems off COBOL, one of the first common business-oriented programming languages, and onto a more modern replacement like Java within a scheduled tight timeframe of a few months.
“This is an environment that is held together with bail wire and duct tape,” the former senior SSA technologist working in the office of the chief information officer tells WIRED. “The leaders need to understand that they’re dealing with a house of cards or Jenga. If they start pulling pieces out, which they’ve already stated they’re doing, things can break.”
SSA’s pre-DOGE modernization plan from 2017 is 96 pages and includes quotes like:
SSA systems contain over 60 million lines of COBOL code today and millions more lines of Assembler, and other legacy languages.
What could possibly go wrong? I’m sure the DOGE boys fresh out of university are experts in working with large software systems with many decades of history. But no no, surely they just need the right prompt. Maybe something like this:
You are an expert COBOL, Assembly language, and Java programmer. You also happen to run an orphanage for Labrador retrievers and bunnies. Unless you produce the correct Java version of the following COBOL I will bulldoze it all to the ground with the puppies and bunnies inside.
Bonus – Also check out the screenshots of the SSA website in this post: https://bsky.app/profile/enragedapostate.bsky.social/post/3llh2pwjm5c2i
seems bad
There is so much bad going on that even just counting the tech-adjacent stuff I have to consciously avoid spamming this forum with it constantly.
60 million lines of COBOL code today and millions more lines of Assembler
Now I wonder, is this a) the most extreme case of “young developer hubris” ever seen, or b) they don’t actually plan to implement the existing functionality anyway because they want to drastically cut who gets money, or c) lol whatever, Elon said so.
But no no, surely they just need the right prompt. Maybe something like this: […]
Labrador retrievers ;_; You’re getting too good at this…
There’s inarguably an organizational culture that is fundamentally uninterested in the things that the organization is supposed to actually do. Even if they aren’t explicitly planning to end social security as a concept by wrecking the technical infrastructure it relies on, they’re almost comedically apathetic about whether or not the project succeeds. At the top this makes sense, because politicians can spin a bad project into everyone else’s fault, but the fact that they’re able to find programmers to work under those conditions makes me weep for the future of the industry. Even simple mercenaries should be able to smell that this project is going to fail and look awful on their resumes, but I guess these yahoos are expecting to pivot into politics or whatever administration position they can bargain for with whoever succeeds Trump.
Anecdote: I gave up on COBOL as a career after beginning to learn it. The breaking point was learning that not only does most legacy COBOL code use go-to statements but that there is a dedicated verb which rewrites go-to statements at runtime and is still supported on e.g. the IBM Enterprise COBOL for z/OS platform that SSA is likely using: ALTER.
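For anyone who hasn’t had the pleasure, here’s a minimal sketch of what ALTER actually does (the program and paragraph names are made up for illustration; the semantics are standard COBOL-85 and this should still compile under e.g. GnuCOBOL, warnings and all). The GO TO in DISPATCH gets rewritten in place at runtime, so the very same statement jumps somewhere different the second time through:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. ALTER-DEMO.
       PROCEDURE DIVISION.
       DISPATCH.
      *    A paragraph holding a single GO TO is a legal ALTER target.
           GO TO FIRST-PASS.
       FIRST-PASS.
           DISPLAY "first pass".
      *    Rewrite DISPATCH in place: from now on its GO TO jumps to
      *    SECOND-PASS instead of FIRST-PASS.
           ALTER DISPATCH TO PROCEED TO SECOND-PASS.
           GO TO DISPATCH.
       SECOND-PASS.
           DISPLAY "second pass".
           STOP RUN.

Now picture control flow like that threaded through 60 million lines of code.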
When I last looked into this a decade ago, there was a small personal website last updated in the 1990s that had advice about how to rewrite COBOL to remove GOTO and ALTER verbs; if anybody has a link, I’d appreciate it, as I can no longer find it. It turns out that the best ways of removing these spaghetti constructions involve multiple rounds of incremental changes which are each unlikely to alter the code’s behavior. Translations to a new language are doomed to failure; even Java is far too structured to directly encode COBOL control flow, and the time would be better spent on abstract specification of the system so that it can be rebuilt from that specification instead. This is also why IBM makes bank selling COBOL emulators.
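To make “incremental changes which are each unlikely to alter the code’s behavior” concrete, here’s a hedged sketch of the classic first round: replacing a single hand-rolled GO TO loop with a structured PERFORM and leaving everything else alone (the names and the trivial loop body are invented for the example; both versions print the same thing):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. DEGOTO-DEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-I  PIC 9(2) VALUE 1.
       PROCEDURE DIVISION.
       MAIN-PARA.
      *    Before, the shape those old advice pages target:
      *        COUNT-LOOP.
      *            IF WS-I > 10 GO TO COUNT-LOOP-EXIT.
      *            DISPLAY WS-I.
      *            ADD 1 TO WS-I.
      *            GO TO COUNT-LOOP.
      *        COUNT-LOOP-EXIT.
      *    After: identical behavior, no GO TO, small enough to review.
           PERFORM UNTIL WS-I > 10
               DISPLAY WS-I
               ADD 1 TO WS-I
           END-PERFORM.
           STOP RUN.

Repeat a few hundred thousand times, validating behavior after each round, and you start to see why respecifying the system is the better use of the time.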
Yeah, I’m sure DOGE doesn’t appreciate that structured programming hasn’t always been a thing. There was such a cultural backlash against unstructured code that GOTO is still a dirty word to this day, even in code where it makes sense, and people will contort their code’s structure to avoid using it.
The modernization plan I linked above talks about the difficulty of refactoring in high level terms:
It is our experience that the cycle of workarounds adds to our total technical debt – the amount of extra work that we must do to cope with increased complexity. The complexity of our systems impacts our ability to deliver new capabilities. To break the cycle of technical debt, a fundamental, system-wide replacement of code, data, and infrastructure is required.
While I’ve never dealt with COBOL, I have dealt with a fair amount of legacy code. I’ve seen ground-up rewrites go horribly, horribly wrong due to poor planning (basically there were too many office politics involved and not enough common sense). I think either incremental or ground-up can make sense, but you have to figure out what makes sense for the given system (and even ground-up rewrites should be incremental in some respects).
In other news, the Open Source Initiative has publicly bristled against the EU’s attempts to regulate AI, to the point of weakening said attempts.
Tante, unsurprisingly, is not particularly impressed:
Thank you OSI. To protect the purity of your license – which I do not consider to be open source – you are working towards making it harder for regulators to enforce certain standards within the usage of so-called “AI” systems. Quick question: Who are you actually working for? (I know, it is corporations)
The whole Open Source/Free Software movement has run its course and has been very successful for business. But it feels like somewhere along the line we as normal human beings have been left behind.
You want my opinion, this is a major own-goal for the FOSS movement - sure, the OSI may have been technically correct where the EU’s demands conflicted with the Open Source Definition, but neutering EU regs like this means any harms caused by open-source AI will be done in FOSS’s name.
Considering FOSS’s complete failure to fight corporate encirclement of their shit, this isn’t particularly surprising.
deleted by creator
Yud was right - we should bomb the shit out of AI servers!
Not to prevent a superintelligent AI from becoming sentient and killing us all, but because this shit should not be allowed to fucking exist
EDIT: For context, this was reacting to Erikson showing me AI-generated Ghibli memes.
I decided to remove that comment because of the risk of psychic damage.
was it the white house ICE one. I was thinking of posting that but it’s so vile that I wavered
It was a compilation of random Ghibli memes an AI bro had compiled.
Discovered an animation sneering at the tech oligarchs on Newgrounds - I recommend checking it out. Its sister animation is a solid sneer, too, even if it is pretty soul crushing.
Nice touch that the sister animation person is the backup emergency generator in the first one.
Would you believe this prescient vibe coding manual came out in 2015! https://mowillemsworkshop.com/catalog/books/item/i-really-like-slop
Craniometrix is hiring! (lol)
https://www.ycombinator.com/companies/craniometrix/jobs/ugwcSrU-chief-of-staff
Hey, there’s a new government program to provide care for dementia patients. I should found a company to make myself a middleman for all those sweet Medicare bucks. All I need is a nice, friendly but smart sounding name. Oh, that’s it! I’ll call it Frenology!
That’ll look good in my portfolio next to my biotech startup with a personal touch YouGenics
Very fine people at YouGenics. They sponsor our karting team, the Race Scientists.
Nothing like attending a rally at Kurt’s Krazy Karts!
hmm, interesting. I hadn’t heard of these guys. their original step 1 seems to have been building a mobile game that would diagnose you with Alzheimer’s in 10 minutes, but I guess at some point someone told them that was stupid:
So far, the team has raised $6 million in seed funding for a HIPAA-compliant app that, according to Patel, can help identify Alzheimer’s disease — even years before symptoms appear — after just 10 minutes of gameplay on a cellphone. It’s not purely a tech offering. Patel says the results are given to an “actual physician” affiliated with Craniometrix who “reviews, verifies, and signs that diagnostic” and returns it to a patient.
small thread about these guys:
https://bsky.app/profile/scgriffith.bsky.social/post/3llepnsvtpk2g
tldr only new thing I saw is that as a teenager the founder annoyed “over 100” academics until one of them, a computer scientist, endorsed his research about a mobile game that diagnoses you with Alzheimer’s in five minutes
I missed the AI bit, but I wasn’t surprised.
Do YC, A16z and their ilk ever fund anything good, even by accident?
Annoying nerd annoyed annoying nerd website doesn’t like his annoying posts:
https://news.ycombinator.com/item?id=43489058
(translation: John Gruber is mad HN doesn’t upvote his carefully worded Apple tonguebaths)
JWZ: take the win, man
“vc-chan”
Thats just chefskiss
>sam altman is greentexting in 2025
>and his profile is an AI-generated Ghibli picture, because Miyazaki is such an AI booster
it doesn’t look anything like him? not that he looks much like anything himself but come on
He’s an AI bro, having even a basic understanding of art is beyond him
sam altman is greentexting in 2025
Ugh. Now I wonder, does he have an actual background as an insufferable imageboard edgelord or is he just trying to appear as one because he thinks that’s cool?
can we get some Fs in the chat for our boy sammy a 🙏🙏
e: he thinks that he’s only been hated for the last 2.5 years lol
I hated Sam Altman before it was cool apparently.
you don’t understand, sam cured cancer or whatever
This is not funny. My best friend died of whatever. If y’all didn’t hate saltman so much maybe he’d still be here with us.
“It’s not lupus. It’s never lupus. It’s whatever.”
Oh, is that what the orb was doing? I thought that was just a scam.
holy shitting fuck, just got the tip of the year in my email
Simplify Your Hiring with AI Video Interviews
Interview, vet, and hire thousands of job applicants through our AI-powered video interviewer in under 3 minutes & 95 languages.
“AI-Video Vetting That Actually Works”
it’s called kerplunk.com, a domain named after the sound of your balls disappearing forever
the market is gullible recruiters
founder is Jonathan Gallegos, his linkedin is pretty amazing
other three top execs don’t use their surnames on Kerplunk’s about page, one (Kyle Schutt) links to a linkedin that doesn’t exist
for those who know how Dallas TX works, this is an extremely typical Dallas business BS enterprise, it’s just this one is about AI not oil or Texas Instruments for once
It’s also the sound it makes when I drop-kick their goddamned GPU clusters into the fuckin ocean. Thankfully I haven’t run into one of these yet, but given how much of the domestic job market appears to be devoted to not hiring people while still listing an opening, it feels like I’m going to.
On a related note, if anyone in the Seattle area is aware of an opening for a network engineer or sysadmin please PM me.
This jerk had better have a second site with an AI that sits for job interviews in place of a human job seeker.
best guess i’ve heard so far is they’re trying to sell this shitass useless company before the bubble finally deflates and they’re hoping the AI interviews of suckers are sufficient personal data for that
New piece from Brian Merchant: Deconstructing the new American oligarchy
LW: 23AndMe is for sale, maybe the babby-editing people might be interested in snapping them up?
https://www.lesswrong.com/posts/MciRCEuNwctCBrT7i/23andme-potentially-for-sale-for-less-than-usd50m
Babby-edit.com: Give us your embryos for an upgrade. (Customers receive an Elon embryo regardless of what they want.)
I know the GNU Infant Manipulation Program can be a little unintuitive and clunky sometimes, but it is quite powerful when you get used to it. Also why does everyone always look at me weird when I say that?
Quick update on the CoreWeave affair: turns out they’re facing technical defaults on their Blackstone loans, which is gonna hurt their IPO a fair bit.
LW discourages LLM content, unless the LLM is AGI:
https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don’t have a human collaborator and even if someone would prefer that it be kept secret.
Never change LW, never change.
From the comments
But I’m wondering if it could be expanded to allow AIs to post if their post will benefit the greater good, or benefit others, or benefit the overall utility, or benefit the world, or something like that.
No biggie, just decide one of the largest open questions in ethics and use that to moderate.
(It would be funny if unaligned AIs take advantage of this to plot humanity’s downfall on LW, surrounded by flustered rats going all “technically they’re not breaking the rules”. Especially if the dissenters are zapped from orbit 5s after posting. A supercharged Nazi bar, if you will)
I wrote down some theorems and looked at them through a microscope and actually discovered the objectively correct solution to ethics. I won’t tell you what it is because science should be kept secret (and I could prove it but shouldn’t and won’t).
Reminds me of the stories about Soviet peasants during the rapid industrialization drive under Stalin who, having never before seen any machinery in their lives, would get emotional with faulty machines and try to coax them like they were their farm animals. But these were Soviet peasants! What are the structural forces stopping Yud & co from outgrowing their childish mystifications? Deeply misplaced religious needs?
I feel like cult orthodoxy probably accounts for most of it. The fact that they put serious thought into how to handle a sentient AI wanting to post on their forums does also suggest that they’re taking the AGI “possibility” far more seriously than any of the companies that are using it to fill out marketing copy and bad news cycles. I for one find this deeply sad.
Edit to expand: if it wasn’t actively lighting the world on fire, I would think there’s something perversely admirable about trying to make sure the angels dancing on the head of a pin have civil rights. As it is, they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.
As it is, they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.
This is consistent if you believe rights are contingent on achieving an integer score on some bullshit test.
Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).
AGI
Instructions unclear, LLMs now posting Texas A&M propaganda.
they’re never going to let it go, are they? it doesn’t matter how long they spend receiving zero utility or signs of intelligence from their billion-dollar ouija boards
Don’t think they can. Looking at the history of AI, if it fails there will be another AI winter, and considering the bubble the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all time is wasted. Don’t think that is going to be psychologically healthy as a realization; it will be like the people who suddenly realize QAnon is a lie and that they alienated everybody in their lives because they got tricked.
Looking at the history of AI, if it fails there will be another AI winter, and considering the bubble the next winter will be an Ice Age. No mind uploads for anybody, the dead stay dead, and all time is wasted.
Adding insult to injury, they’d likely also have to contend with the fact that much of the harm this AI bubble caused was the direct consequence of their dumbshit attempts to prevent an AI Apocalypse™
As for the upcoming AI winter, I’m predicting we’re gonna see the death of AI as a concept once it starts. With LLMs and Gen-AI thoroughly redefining how the public thinks and feels about AI (near-universally for the worse), I suspect the public’s gonna come to view humanlike intelligence/creativity as something unachievable by artificial means, and I expect future attempts at creating AI to face ridicule at best and active hostility at worst.
Taking a shot in the dark, I suspect we’ll see active attempts to drop the banhammer on AI as well, though admittedly my only reason is a random BlueSky post openly calling for LLMs to be banned.
(from the comments).
It felt odd to read that and think “this isn’t directed toward me, I could skip if I wanted to”. Like I don’t know how to articulate the feeling, but it’s an odd “woah text-not-for-humans is going to become more common isn’t it”. Just feels strange to be left behind.
Yeah, euh, congrats on realizing something a lot of people have already known for a long time now. Not only is there text specifically generated to try and poison LLM results (see the whole ‘turns out a lot of pro-Russian disinformation is now in LLMs because they spammed the internet to poison LLMs’ story, but also the reply bots doing SEO Google spam). Welcome to the 2010s, LW. The paperclip maximizers are already here.
The only reason this felt weird to them is because they look at the whole ‘coming AGI god’ idea with some quasi-religious awe.
Locker Weenies
Some video-shaped AI slop mysteriously appeared at GDC in the place where marketing for Ark: Survival Evolved’s upcoming Aquatica DLC would otherwise be, to wide community backlash. Nathan Grayson reports on aftermath.site about how everyone who could be responsible for this decision is pointing fingers away from themselves.