74 Comments

Wes Collingsworth:

The short-term thrill I had, thinking that Andy Weir had written a new book that I somehow hadn’t heard about, was absolutely crushed by the point of this article! 🤣

Lee Thomas:

Same!

Beth Ross:

SAME!! And then the reality hit and I was sad. Haha.

Morgaine:

I knew a lot of this already because I’m so wary of AI and how it’s breaking down our brains. Students are using it to do assignments, homeschoolers are using it to lesson plan, workers are using it instead of their brains to read, etc etc. “But it saves time,” they all say. Bah! I feel like a grumpy old lady about it.

Not to mention the horrible environmental impact.

I hate that I can’t escape AI when I google something, too! Like, let me just use my brain to parse out what link I want, thank you.

Beth Ross:

Grouchy-about-AI middle school teacher here! It is indeed frustrating and discouraging, especially as the short-term impact looks good! "But students are writing better!" "Teachers get to save time--have AI grade!"

I tried to explain to my students that it's about a skills deficit. If you are not developing the skills for writing or research at a young age (or an older age), you're missing out and will need to depend on AI even more in the future. We're largely going back to notebooks and handwritten drafts next year, and I'm very excited about it!

Morgaine:

Yes!! I was reading that a lot of college professors are doing the blue book exams. Love it.

Sara:

I learned from making mistakes and from truly well-intentioned, brilliant teachers explaining the problem and how to fix it. Those are the lessons I remember. We truly are forgetting the human and emotional impact of teaching and learning, and it's devastating to me. I'm not a teacher, but good teachers made learning a lifelong passion (my health problems had other ideas, but still). Not to mention that the creativity of figuring out how to get through to students must be a satisfying part of the job (it was for me when I taught).

On top of that, in college, I had several years of advanced math classes made up of me and only one or two other students. We did everything at home, including exams. Because this was the early days of the internet, resources for cheating just didn't exist. We worked so hard, harder than we would have in a normal classroom setting, and learned such valuable lessons. I imagine that professors couldn't even allow this kind of thing anymore, because it's too easy to simply have it solved for you.

Sorry, I'm just rambling, but I feel for you and hate the situation.

Beth Ross:

I appreciate hearing your perspective, Sara! I'm right there with you! Bringing the human element and relationship and fun back to education is my number one goal. :) There are defo those of us who are refusing to conform!!

Sara:

Teachers are the best. You guys can change everything for the better. I wish you the luck of the entire universe.

Cynthia:

I’ve listened to a couple of podcasts about AI in schools, and I still don’t understand what the new educational goals are for kids who are being told that AI is going to take any potential job they could have. (Is this right?) This must be a wild time to be a teacher.

Beth Ross:

This definitely isn't the rhetoric that our kids are receiving. There are jobs in danger from AI, but the kids still have so many opportunities. Definitely not easy!!

Cynthia:

I’m very glad to hear that kids aren’t being told that their hard work and acquired skills are all for naught. I was worried about them after listening to the aforementioned podcasts. I still can’t imagine being that age and being able to use your phone to generate an essay for a class. In the 90s, we had an attitude about “grammar check,” and the pager was the coveted technology of 12 year olds everywhere. These kids are living in a totally different reality.

Amanda:

Oh interesting! Is the choice to go back to notebooks happening in just your classroom, your district, or your state? I watched as my youngest siblings started integrating laptops into their middle school curriculum, and wondered if it would become the norm or if it would revert at some point, after witnessing a rise in online gaming and social media.

Paige:

This past school year, my son's sophomore English teacher made them write every bit of their essays in class on paper (and those drafts never went home). Then they transferred their writing to the computer, also during class time, to turn it in. This method means the teachers don't have to worry about who generated what online and trying to bust students who may or may not have used AI. My friend teaches junior English and says all of the online tools to help teachers find AI are junk, so she basically gave up on trusting them. If she suspects someone used AI, she'll confront them, and if they stick to their story of not using it, there's nothing she can do.

Amanda:

Oh, I like that idea because the students rely on their own writing skills AND they get to practice their typing skills--it's like language arts and typing class combined into one. Very thoughtful solution! 👍

Beth Ross:

Definitely!!

Sara:

I absolutely applaud this and it also makes me sad. My favorite way to write was super late at night, with dictionaries and my thesaurus all around me. I completely understand why teachers have to do this, but it also shrinks down the creativity and diversity of the learning process. I was terrible at doing things in a classroom, but flourished when I had my routines and late-night creativity by myself. There is no winning, I guess :)

Beth Ross:

It's largely a decision of my team at our school, and it is certainly not district-wide. The main reason we started one-to-one laptops was Covid, when the kids actually needed them, but now it's come back to bite us a little bit. They are also far less social with them, and now more than ever we need to be helping kids forge meaningful bonds with one another!

Ellie:

I really love this thread, and I really think that middle school, high school, and potentially even college kids should be learning to write and read all on their own without AI. Absolutely.

The only slight pushback I’ll give: I’m 43, and I am dyslexic, dysgraphic, and have a couple of other auditory processing issues. I love to write. I love it. My friends all tell me I’m a beautiful writer; I just don’t always use the correct sentence structure or spelling. Until recently, my best friend, who is a publisher, would always edit my work for me; now I have AI do it. My friend has children, a life, and work of her own; I don’t need to bother her with my writing.

AI has given me a way to express myself at such a different level than before. I can write and write and write, and it can correct my work, and I’m learning as it corrects my work. It is a freeing feeling to finally feel like I’m getting what’s in my head onto paper, so to speak, for those around me to digest.

I hope that teachers are eventually able to help kids learn how to do both. Know how to write, read and digest information. That way they can use AI to enhance their work, not replace them doing the work.

Beth Ross:

I am with you 100%! All of my kiddos with dyslexia obviously have access to their Chromebooks, and I find that it's never them making the bad decisions with them. I love how you framed it: using AI as a tool once they know how to write, versus building those skills poorly. They are also wonderfully creative, and it's obviously going to be a tool at their disposal later, but teaching literacy in that regard is so important to me!

Ellen Mae:

I learned recently that you can add a space and then -ai after your Google search, and then you won't get the AI-generated crap. But it's annoying that you have to do it every time. I am considering using a different search engine.
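
For anyone who would rather automate that than retype the suffix, here is a rough sketch (plain Python, using Google's standard search URL; whether the -ai trick keeps suppressing the AI summary is entirely up to Google, so treat this as an illustration, not a guarantee):

```python
import urllib.parse
import webbrowser

def search_without_ai(query: str) -> None:
    # Append the "-ai" exclusion described above before opening the search.
    encoded = urllib.parse.quote_plus(query + " -ai")
    webbrowser.open("https://www.google.com/search?q=" + encoded)

# Example: opens the default browser with the modified query.
search_without_ai("best rain boots for kids")
```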

Jennifer Adams:

I wish there was a setting in Google to just turn it off if you don't want it.

Jaime:

If you google “turn off google ai” it will tell you how to do so 🙂

Theresa Jones:

That is helpful. Thank you!

Elle Doorman:

I hate to tell you this, but you can escape AI when googling. Just add a curse word (I find the F-word particularly effective), and then it won’t search with AI. 🙃

Morgaine:

I know this! But I can’t do it at work 😂

Elle Doorman:

Not with that attitude! 😆

Sara:

I love that word and will now use it even more liberally! Thank you.

Ellie:

This is amazing!

Sonya:

I work in a field where I should probably know more about AI than I do. However, I have put my energy toward understanding what it is, not what it can do for me, and also toward how to spot the fakes. While there are some fun things out there with AI, I'm too wary of the bad impacts it has. Plus, it just seems it's going to give the people who want to rewrite things more ways to lie.

Sara:

I refuse to use it. I know that a lot of what I do is fed into AI systems, but I refuse to use it for anything. I think it can be a brilliant tool in the right situations, but mainly it's just another thing that makes people useless.

Timothy Patrick:

I just wanna take a moment to give journalists the standing ovation they deserve. Not only is this article so important for people to understand (thanks, Sharon!), but I don’t think enough people realize how much we need investigative journalists calling BS on the government. If not for journalism, we would be having laws passed based on hallucinated, fake science compiled by HHS. And given how much funding has been stripped away from journalism in the past few decades, how much of the BS is going undetected?

It makes me wonder how we got to this point where people say they trust “the media” so little. How do we get the image of “the media” to be less about talking-head 24-hour news and more about the investigative journalism that’s going to be more and more essential to democracy as our institutions continue to crumble?

Gina S Meyer:

Timothy, thank you! Journalism matters!

Amber:

One way to combat this is local news and newspapers. I know local newspapers are not available everywhere. My town is extremely lucky to have a vibrant local paper. I need to start subscribing to our regional paper too. Maybe this is the kick in the butt I need to do it!

Ellie:

Very well said!

Cynthia:

This is a topic I want to hear more about. Right now I’m reading books and listening to podcasts about AI, but I’m trying to figure out how to responsibly navigate the digital world with this flawed but incredibly powerful technology being unleashed upon us. I’m more focused on not ending up like Sandra Bullock in The Net than I am on incorporating AI into my life, but I definitely want to hear more of the Governerd take on AI. 🤖 Thanks, Sharon.

Sonya:

I've focused on understanding what it is and its impact, plus spotting the fakes/incorrect information. I don't have any desire to incorporate it into my life, but I do wish it weren't creeping into productivity tools where I don't need it.

Erin Dutka:

Between the hallucinations and the HUGE environmental impacts of AI, I try to steer clear as much as possible. The environmental impacts alone make my stomach turn. And it is being pushed on us everywhere. Every site or app has its own little AI helper or tool now that pops up. I don’t want it!! I’ve seen directions for turning these “tools” off, and it usually isn’t an easy task. You have to go digging and know specifically where to look to turn these bots off. I really fear what AI has done to people/society already in such a short period of time.

Jane:

Except AI is not new. Some of the ways you can access and use it are new, but people have been using AI for YEARS, and most likely used it a couple of times today whether they think about it or not. The algorithm that powers any search engine = AI. Siri and Alexa = AI. Your phone = AI (unless you have a very, very, very basic phone, and even then some of it is AI). If you got served a "you might also like" suggestion after you bought something = AI. Where things are placed in grocery stores = AI. What is sold in grocery stores = AI. We are just talking about it more now, which is good. People need to understand what it is and how to use it properly, as mentioned above. And how to have a healthy skepticism.

Allison Adkins:

I want to hear more about the environmental impacts of AI, as well as the language that is in the BBB prohibiting states from regulating AI for the next 10 years. I imagine a (not very distant) future where humans are competing with AI not only for jobs but for resources like electricity. I feel like this could/should be a truly bipartisan issue. We need more information and more education about AI, and we need it now. We are still trying to figure out how to live with social media and smartphones and what those things are doing to us, and here comes AI to mess us up even more! Please keep writing about this topic, Sharon.

Ellie:

I agree. I’d love to hear more.

Amber:

AI has many interesting uses. But, like with the HHS report, a human is still needed at some point in the process. And not a human looking to rubber-stamp something. A human is needed to provide quality checks and editing. It makes you wonder if the people at HHS tasked with writing this report were not capable, or if their opinions were not yes-man enough for the people who wanted it written. I would love to think they have learned their lesson, but I highly doubt it.

Timothy Patrick:

Even though the focus of this piece (AI in general) meant we couldn’t linger on the HHS report in depth, I love that it did succeed in quickly pointing out the real takeaway: it’s not necessarily the “citation errors” as their coverup wording suggests. It’s that it proves RFK’s motivations are insincere.

Some studies were invented to make it look like doctors were overprescribing meds to kids. Others used real studies but invented conclusions that were anachronistic: for instance, saying a 2007 study was blaming a 2013 policy for negative trends. Those are just a couple of examples.

So it’s not the “oopsie daisy” mistake they want you to believe, that this is just a few minor citation errors. It reveals that someone writing this report had a preconceived agenda and then requested something like “find me some scientific studies that fit my goal of convincing people that medicinal science is out of control.”

If your movement is based on something real, then WHY would you need AI to give you examples to illustrate your point?

I wonder how a self-proclaimed “MAHA Warrior” would react if you asked them about this. Does it not, at the very least, call into question the expertise of these officials? And just as likely, does it not prove that these people have been using circular logic to reinforce their worldview long before AI could be misused?

Amber:

I agree, it should make us question those officials. Just like with any scientific paper I read, I'm going to look at what other science they have read and thought about prior to making their hypothesis and then forming their research. You don't write your conclusion first and then work backward. And that seems to be how they went about this paper and many of their claims.

Gina S Meyer:

Amber, yes. The way they went about this paper and many of their claims is called lying.

Gina S Meyer:

Timothy, yes. Please call into question their expertise and ethics!

Selina S:

The number of people I know who are convinced that AI is the future? Terrifying. If what you want is a future full of things and people who will say and do anything to appease whoever they perceive (or are programmed to perceive) as the individual (or organisation) to be appeased.

Trying to reason with them? No dice. They enjoy it. It's fun! It's pretty! The fact that it takes an order of magnitude more computing power than non-AI internet searches? People can't comprehend that, any more than they can comprehend the order of magnitude of difference between a million and a billion.

Or they just don't care. They have children and sometimes grandchildren, and they have no concept or care that they're basically destroying the clean water of their descendants solely so they can have their pretties and their fun.

Sara:

It has been a huge blow to my faith in humanity to realize how many people are like this. They care for themselves in the immediate moment only. I was not raised with that worldview and it hurts to see.

justkima:

I think my age is showing in that I just really don't want to deal with AI. 🤣 I know that it's coming, heck, it's here, but all I think about is the movie WALL-E. I know it will be able to be used for great good, but with that comes the opposite too. I'm just not sure we (the government/Congress) are up to the task of navigating and understanding the complexities of it, or of rationally discussing the implications and making laws and regulations to protect us. Frankly, I don't have much or any faith in Congress these days.

Carolyn Spivak:

Sharon is on point about the dangers of misuse. But it’s not the whole story, even for us members of the general public.

If you don’t use AI, you are cutting yourself off from a learning tool that, if properly used, will help you understand complicated concepts in fields you are not trained in. I know we all love learning here, so I feel like this is important!

AI is quite useful for quickly getting an overview of a complicated topic I am ignorant about. (I like Claude because Anthropic is less evil than the others; also, Claude is reasonably well guardrailed.) Of course you have to verify all the details!

Because AI indeed will tell you wrong facts. (Sharon’s examples are of wrong facts.) But if what you want is general consensus, to help guide you in (for example) reading a legal document or science article, it’s great. Also medical reports (be extra careful, but it’s great to help you come up with a list of questions to ask your doctor when you have a new diagnosis). You can ask it to explain stuff you don’t understand and it patiently does that without making you feel like an idiot.

Again, it’s just a first step. You have to follow up by using human sources!! (An expert like your doctor, a carpenter, a lawyer, or a mechanic, or a real science article in a reputable journal.) But it will help you to talk to them in a more informed and less confrontational way.

Going back and forth between a research article, a legal brief, a medical report, a contract etc. and an AI is extremely empowering. You can get the gist of very complicated topics and it can help in dealing with life’s problems. Which are complicated.

I’ve said this before, but think of an AI as your old wise grandpa who knows a lot but has dementia and gets facts wrong.

Amber:

This, to me, is why we need regulations around AI. Not to deter use, but to make sure there are safeguards. This reminds me of the rise of the internet for millennials/Gen X. No one taught us how to use it; we just did. And then we realized later on that if we wanted the next generations to use it effectively, someone needed to teach them how and provide some regulations. Hopefully we learned our lesson there, and we teach people how to use AI while also regulating it.

Ellie:

This is exactly how I feel. You summed it up perfectly! Thank you.

A perfect example:

We got a rescue puppy this winter, who was five months old and had a plethora of issues. My husband and I are both very well-versed in puppy training, but because of family circumstances, we were really overwhelmed with this new pup. I started running questions through AI, and it reminded me of training I’d forgotten. It helped me break down separation anxiety, abuse patterns, anxiety issues, and illnesses, so I could chat with our vet and help our pup. It was so helpful, as I already knew the information; I had just forgotten that I knew it. It was such an easy tool compared to spending $500 on a personal trainer to come to the house and help us with the dog (he had pneumonia and Giardia and was not allowed in any group settings for quite a long time). We were able to help him through all of his quirky issues, and he is now a thriving, goofy one-year-old pup. He’s still freaking quirky, but that’s just his personality 😜 thank goodness he’s adorable.

Carolyn Spivak:

Yes, such a great example, including chatting with the vet as a sanity check! Here’s another one: I’m in the process of putting in a meadow on a spare bit of land. I know zero about meadows or farming, but the AI has been very helpful in filling in background info. So when I talked to the person at the seed company, I was at least somewhat educated and could ask her about putting in a cover crop without having to take up hours of her time.

It allows you to talk to actual human experts more comfortably and efficiently. This is a real issue for a lot of people and I think is one source of people’s mistrust of experts.

Ellie:

Exactly. My husband’s been using AI to organize and plan our vegetable garden; he’s a classically trained chef, and he’s very particular. It’s helping him choose the varieties and organize them to yield the best return. It’s going incredibly well. I hope your meadow goes well! It will be beautiful once it’s all done!

Jane:

People have been using AI for YEARS, and most likely used it a couple of times today whether they think about it or not. The algorithm that powers any search engine = AI. Siri and Alexa = AI. Your phone = AI (unless you have a very, very, very basic phone, and even then some of it is AI). If you got served a "you might also like" suggestion after you bought something = AI. Where things are placed in grocery stores = AI. What is sold in grocery stores = AI. We are just talking about it more now, which is good. People need to understand what it is and how to use it properly, as mentioned above. And how to have a healthy skepticism.

Carolyn Spivak:

We’re talking about large language models (LLMs) here. They are quite different from previous AI. Also, they are not algorithmic in the way I understand the term. I used to test software for a living, so bear with me here; I don’t mean to be pedantic, but the nature of an LLM really does bring special challenges. For example, it’s impossible to figure out why they do what they do. That makes it super hard to fix bugs.

Gina S Meyer:

“The Big Beautiful Bill” allocates $500 million for AI-driven modernization of federal IT systems.

It also has a provision that would create a 10-year moratorium on state and local AI regulations. This means states would be prohibited from enforcing any laws that "limit, restrict, or otherwise regulate" artificial intelligence models, systems, or automated decision systems for a decade.

What could go wrong? Please discuss.

Sonya:

And after reading this about the stuff it makes up, you just know it's not going to be used faithfully and will just promote more lies.

Krause Kim:

While I don’t argue that AI can be helpful, it’s only useful if the people using it have morals and character. Elon knew it was wrong, used it anyway, and hoped people wouldn’t notice. Just one more reason we need to stay vigilant against this administration and not take anything at face value!

Meredith Mackin Rilley:

I’ll be forwarding this article to everyone I know who uses AI, even those who use it only sparingly, and will pray they don’t use AI to summarize the article (though that would be an interesting exercise)!

Stephanie Longhi:

We still NEED humans making sure the checks and balances remain strong. And we need to vote for people who at least want to understand AI and to make sure it's being used appropriately by the government.

Sara:

This harkens back to the same old problem - AI is not a replacement for expertise, intelligence, professionalism, and attention to detail. It can be a tool used by qualified people, but it can't turn an imposter into an expert. Lord knows a lot of idiots are trying, though.

Theresa Jones:

We need common sense leadership for this issue- something sorely lacking today. I just don’t get why we would be ramping up using such a mistake riddled tool that is also enormously expensive. It is nonsensical.
