i don’t get horror movies, why doesn’t the protagonist just act in a way which will prevent a story from being told or a piece of entertaining media from being made
generative AI literally makes me feel like a boomer. people start talking about how it can be good to help you brainstorm ideas and i’m like oh you’re letting a computer do the hard work and thinking for you???
There are many difficult things that have been replaced with technology, and it wasn't a bad thing. The washing machine replaced washing clothes by hand. Nothing wrong with that. The spinning wheel replaced the drop spindle. Nothing wrong with that.
Generative AI replaces thinking. The ability to think for yourself will always be important. People who want to control and oppress you want to limit your ability to think for yourself as much as possible, but continuing to practice it allows you to resist them.
"This tool replaces thinking," is a technology problem we (humans) have faced before. It's a snark that I've seen pro-AI contenders take as well: I bet these same people would have complained about calculators! And books!
Well. They did, at the time.
We have records from centuries -- even millennia back -- of scholars at the time complaining that these new-fangled "books" were turning their students lazy; why, they can barely recite any poems in their entirety any more! And there are people still alive today who remember life before widely available calculators, and some of them complained -- then and now -- that bringing them into schools dealt a ruinous blow to math education, and now these young people don't even know how to use a slide-rule.
And the thing is:
They weren't wrong.
The human brain can, when called on, perform incredible feats of memorization. Bards and skalds of old could memorize and recite poems and epics that were thousands of lines long. This is a skill that is largely lost to most of the population. It's not needed any more, and so it is not practiced.
There is a definite generational gap, between the people who were trained on slide-rules and reckoning and the generation that was taught on calculators. There came a year, when that first generation grew up and entered the workforce, when you suddenly started encountering grown adults who could not do math -- not even the very basic arithmetic needed to count down from one hundred. I would go into a shop, buy an item for sixteen dollars, give the cashier a twenty and a one because I want a fiver back, and have them stare at the money in incomprehension -- what do? They don't know how to subtract sixteen from twenty-one. They don't know how to calculate a fifteen-percent tip. They did not exercise the parts of their brain that handle this, because they always had a calculator to do it for them.
Nowadays, newer point-of-sale machines compensate for this; they will automatically calculate and dispense the change, no subtraction necessary on the part of the operator. Nowadays everyone carries a phone, and every phone carries a calculator, so if you need to do these calculations, the tool is right there. As more and more transactions go electronic and card, and cash fades further and further out of daily life, these situations happen less and less; it's not a problem that most people can't do math (until it is).
The people who complained that these tools-that-replace-thinking would reduce the ability of the broad population to exercise these cognitive skills weren't wrong. It's simply that, as the pace of life changed, the environment changed so that in day-to-day life these skills were largely unnecessary.
So.
Isn't this, ChatGPT and Generative AI, just the latest in a long series of tool-replaces-thought that has, broadly, worked out well for us? What's different about this?
Well, two things are different.
1) In the previous instances of tool-replaces-thinking, the cognitive skill that it replaced was a discrete and, on a day-to-day basis, unnecessary outlay of energy. Most people don't need to memorize thousands of lines of poetry, or anything else for that matter. Most people don't need to do more than cursory levels of math on a day-to-day basis.
This, however, is different. The cognitive skill that is being obsoleted here is more than "how to write an essay" or "what is the capital of Rhode Island." It encompasses the entire field of being able to generate new thoughts; of being able to consider and analyze new information; of being able to follow logical trains to their conclusions; of being able to order your thoughts to construct rational arguments; or indeed of being able to express yourself in any structured way. These cognitive tools are not occasional-use; they are needed every day, all the time.
2) In the previous instances of tool-replaces-thinking, the tool was good at what it did.
Calculators may have replaced reckoning, but calculators are also pretty good at what they do. The calculator will, as long as you give it the right input, give the right answer. ChatGPT cannot be relied on to do this. ChatGPT will tell you, confidently and unhesitatingly and dangerously, that 2+2=5, and it will not care that it is wrong.
Books may have replaced memorization, and books certainly could be wrong; but a fact, once in a book, is pretty stable and steady. There is not a risk that the Guy Who Owns All The Encyclopedias might wake up one day and decide -- to pick a purely hypothetical example -- that the Gulf of Mexico is called something else, and suddenly all the encyclopedias say that.
Generative AI fails on both these counts. It fails on every count. It's inaccurate, it's unethical, it's unreliable, it's wrong.
---
I remember some time ago seeing someone say (it was a video about medieval footwear, actually) that "humans have a great energy-saving system: if we can be lazy about something, we are."
This is not an ethical judgment about humans; this is how life works. Animals -- including humans -- will not do something the hard way if they can do it the easy way; this basic principle of conservation of resources is universal and morally neutral. Cognition is biologically expensive, and though our environment is not what it once was, every person still goes through every day choosing what is valuable enough to expend resources on and what is not.
Because of this, I don't know if there is any solution, here. I think pushing back against the downhill flush of the-easy-way-out is a battle both uphill and against the tide.
So I'll just close with this warning, instead:
Generative AI is a tool that cannot be trusted. Do not use it to replace thought.
“They’re trying to convince people they can’t do the things they’ve been doing easily for years – to write emails, to write a presentation. Your daughter wants you to make up a bedtime story about puppies – to write that for you.” We will get to the point, she says with a grim laugh, “that you will essentially become just a skin bag of organs and bones, nothing else. You won’t know anything and you will be told repeatedly that you can’t do it, which is the opposite of what life has to offer. Capitulating all kinds of decisions like where to go on vacation, what to wear today, who to date, what to eat. People are already doing this. You won’t have to process grief, because you’ll have uploaded photos and voice messages from your mother who just died, and then she can talk to you via AI video call every day. One of the ways it’s going to destroy humans, long before there’s a nuclear disaster, is going to be the emotional hollowing-out of people.”
Justine Bateman on AI in this article from The Guardian
Georgia O’Keeffe: Living Modern Exhibition at the Brooklyn Museum.
Georgia O’Keeffe’s Closet at Abiquiú & her After Alexander Calder “OK” brooch.
Woah. has anyone tried smoking weed and listening to music after? I think this might be big
THE ENTIRE WEST IS BEING PUT UP FOR SALE AND I AM BEGGING YOU TO CALL YOUR SENATORS
Trump’s budget bill has many, many things in it, but buried amongst it is the MILLIONS OF ACRES OF PUBLIC LAND FOR SALE.
This is the entirety of the Arizona state forests, the entire Cascades mountain range. Swathes of pristine desert around the national parks in Utah. On the doorstep of Jackson Hole.
THIS BILL IS BIG, BUT IT CAN BE AMENDED, AND IT ABSOLUTELY MUST NOT PASS AS IS. Please.
If you have ever enjoyed the wilderness, we stand to lose it all forever.
CALLING your senators - NOT JUST IN THE WEST, ALL SENATORS - is CRUCIAL.
Outdoor Alliance has a great resource for reaching out.
I don’t have a huge following, but please - everywhere I have ever loved, the forests I grew up playing in, the land I got married on, all of it is at risk, and I am begging.
the original paper, for anyone interested
"Here's the terrifying part: When researchers forced ChatGPT users to write without AI, they performed worse than people who never used AI at all.
It's not just dependency. It's cognitive atrophy.
Like a muscle that's forgotten how to work."
THIS IS THE METAPHOR THAT I KEEP USING ALL THE TIIIIIIIME
I 100% believe that Nathan Fielder made a deliberate choice in focusing the episode around footage of him interacting with two autism "advocates" who are ultimately ableist and reductive in their understanding of autism. A congressman who doesn't even know what masking is, and an advocacy organization founder who uses outdated tests and won't acknowledge that non-autistic folks might benefit from rehearsing difficult social situations? That's not an accident.
If you look up Doreen Granpeesheh, you'll see that she is known for promoting the idea of autism "recovery," and that she has a history of publicly supporting the claim that there's a link between vaccines and autism. Her Wikipedia page makes very clear that she is a problematic figure whose work has been critiqued, and that she should not be taken seriously. Fielder, along with his writers and producers, would have known her reputation when booking her for the show.
A screenshot from Granpeesheh's website. Yes, it would appear she is actually proud of this headline.
And I think he's using the meeting with Cohen as a commentary on how autistic folks (and minoritized people in general, most likely) are treated by people in authority. Instead of masking and politely leaving the room, instead of picking up signals that Cohen is wrapping up the meeting without wanting to announce he's doing it on camera, Fielder purposely doesn't "take the hint" so that Cohen has to flounder and keep trying to wrap up the meeting in a way that is ultimately vague, dismissive, and rude. The longer the audience has to sit and watch that dynamic play out, the more likely we are to recognize Cohen as the bad guy in the situation rather than Fielder. It's brilliant.
And it's the exact same strategy he's using by spending the first half of the season ostensibly focusing on the first officer in those cockpit interactions, while deliberately giving screen time to guys like the "banned from every dating app" pilot to make it clear who is actually the source of the problem (and to hopefully trigger an FAA sexual harassment investigation in that one instance). In all three of these situations, he's showing us how a problematic person in power holds all the cards and is unwilling to budge.
I know there are differing opinions on what aspects of the show and his character are exaggerated or performed. As a very self-aware autistic comedy writer, this is my assessment: I think he's semi-deliberately not filling silences with masking behaviors, and asking questions he probably knows are uncomfortably direct, to create a space where others (often the neurotypical folks in these situations) have no choice but to fill in the silence, which ultimately makes them say or do something relevant. I think he also acts like an unaware, unbiased observer in situations where he has a strong idea of what's going on. So whenever he says "I didn't know why" or "I didn't understand," he probably mostly does know and understand, but he knows that performing the role of an unbiased observer is a stronger strategic choice to get his message across.
He's basically playing the role of a journalist who knows that two of the most effective tools in his toolkit are a) silence when he wants a subject to reveal crucial information, and b) an "unbiased" narrative frame that makes the audience feel as if they're coming to a conclusion on their own, rather than being told what to think.
It's a nuanced approach but I think it's a smart one, especially considering that autistic-coded folks are very easily dismissed when speaking truth to power. And yeah, he's not gonna get his Congressional hearing. But pointing a camera at the problem and airing it for a massive audience, while saying "Me? I don't have an agenda; this data just presented itself in response to my neutral, unbiased question" is a pretty autistic—and often effective—approach to problem-solving.