What I am about to tell you is going to shock you. You will not expect this of me, nature girl, and the chance is good that we will find ourselves, you and I, sitting in different spots, rather than together, on this issue.
The issue is technology, specifically AI.
Computers
Digital technology has battered us like a storm at sea. One wave hits and knocks us sideways, and before we catch our breath another wave hammers us from a different direction. The computer walloped us, then cell phones, then texting, then smart phones, smart watches, smart refrigerators, smart glasses. The list goes on.
Slam, slam, slam, and when the water subsides, nothing looks the same.
Soon enough, with digital technology, people were lost in something that was not each other and not the outdoors, walking into poles and needing to be notified that they’d burned up too many hours in front of a screen.
Each of these tsunamis brought afterwaves—new systems and apps and versions. Windows 3, 3.1, 95, 98. Now Windows 11.
Within those apps and versions, we got updates and upgrades and reboots, each roughhousing us and disturbing the peace.
I Prefer Analog
I dragged my feet on all of it.
I didn’t want the computer, I didn’t want the cell phone, I didn’t want the iPad, I didn’t want to text, I didn’t want a Fitbit, I didn’t want the Zoom room.
I did not want any of it.
In fact, the other day I came across an essay I wrote twenty years ago, “Why I Quit Email.”
Today my inbox reports 10,321 messages, so obviously my renunciation didn’t last.
AI
Now the tidal wave of AI has crashed against us. However, for the first time ever, I find myself in a very different place. I’m okay with AI.
It doesn’t scare me.
I don’t think it’s evil.
I don’t think it’s going to destroy us.
It doesn’t make me mad, even though at least three of my books were used to train it, without my permission and without compensation.
I look at AI as a continuation of where we have been heading for three decades.
A Google search was baby-AI.
Okay, dial back. A typical Google search of old was not AI. In the old days, when we entered a question into the Google search bar, the app proceeded to look for keywords in its vast indexes, searching for a website that would help us find an answer to our question. Know how a book has an index in the back? Google was an index.
“Motmots, p. 55.”
AI, on the other hand, answers the question for us. It puts words together based on vast amounts of language, ideas, and research that have been loaded into it. A chatbot (a computer app that simulates human conversation) writes an answer.
Me: What is a motmot?
ChatGPT 4.0: “A motmot is a type of colorful bird found primarily in Central and South America, belonging to the family Momotidae.”
Then ChatGPT burps up line after line about the motmot’s appearance, size, habitat, behavior, and name origin. It asks if I would like a photo or a poetic description of one.
Me: Yes, a poetic description.
ChatGPT: Wonderful. Here’s a poem for you.
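To put that difference in concrete terms: an index can only point you to a source, while a chatbot composes the answer itself. The sketch below is only an illustration of that distinction; the toy index, the openai Python library, and the model name are illustrative assumptions, not anything specified in this newsletter.

```python
# Sketch only: the toy index is made up, and the chatbot call assumes the
# `openai` package is installed with an OPENAI_API_KEY in the environment.
from openai import OpenAI

# Old-style search: a keyword index that can only point you somewhere.
book_index = {"motmot": "p. 55", "chanterelle": "p. 112"}

def look_up(keyword: str) -> str:
    """Return a page reference, the way a back-of-the-book index would."""
    return book_index.get(keyword.lower(), "not indexed")

print("Index says:", look_up("motmot"))  # -> p. 55; go read it yourself

# A chatbot: composes the answer in sentences instead of pointing to one.
client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "What is a motmot?"}],
)
print("Chatbot says:", reply.choices[0].message.content)
```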
Let’s Review So Far
I have been called a Luddite.
Every hour of the day I miss the world I knew in which computers did not exist.
I think of the digital revolution as a coup. What happened was an overthrow of human society.
Although I love what technology provides, I mourn—truly mourn—what it has stolen.
ChatGPT poetry is laughable. It will never be Mary Oliver. Never ever.
Why I Changed
So why am I staying silent in the uproar opposing AI? Why am I, instead, using it, and now, talking about using it?
AI is not going away.
I see it as a continuation of the digital overthrow. I told myself that this time I’d ride the wave, not get slammed by it.
Computer technology won’t end with AI; if the last three decades have proven anything, it’s that. We haven’t been able to stop any of it, and we won’t stop this.
I can see the benefits. If I visit my doctor with a list of symptoms, they have to comb their memory for what might be ailing me. But what if my symptoms get plugged into AI? (What I am saying here is that I may trust a computer’s memory more than I trust a human one.)
What I Am Saying
I am saying that yes, I use AI.
How?
I have used ChatGPT from the beginning, in exactly the way I would google something. Even when The New York Times was filing a lawsuit against OpenAI, I was logging into ChatGPT.
I have it do research for me. I use it in the way I would a research assistant, although I’ve never had the means to hire a research assistant. Sometimes AI helps me, sometimes it doesn’t.
I have it put columns of text in alphabetical order, which for some odd reason my word processing app will not do.
I ask it to summarize an article for me.
I ask it to translate text from another language to English.
When I was releasing Craft & Current, my manual of writing, I asked it to help me with a sales description. Its language is ultra-flowery, heavy on adverbs and adjectives, not at all my style. All of that description had to be redone. Although I would never let it write for me, I’m fine using it for writing-adjacent tasks.
At first I used the free version, then I upgraded to a better version so I could load larger amounts of data into it.
A Problem Has Arisen
I have trusted computer search engines. They have given me consistently accurate information, as long as I do my job, which is to choose the most credible sources among those Google provides and then double-check the answer.
One would assume that AI, having digested more information and being smarter, would be more accurate.
I have learned, however, that AI will lie.
If you asked me my top three values, honesty would ride at the top, hands down. I deeply value honesty. I hate finding out I’ve been lied to. I drift away from anyone who, I find out, has lied to me. This is one reason I write creative nonfiction—it is about the truth, or the highest and best version of the truth I can find.
Not long ago I was building a slide presentation for my memoir class, and I needed to be reminded quickly of the five characteristics of creative nonfiction that Philip Gerard laid out in his seminal book, Creative Nonfiction.
You know how people will sometimes tell you lies just to get rid of you? In the old days you’d stop and ask someone, “Can you tell me how to get to the library?” I’ve had people send me miles in the absolute wrong direction.
Or in a hardware store, “Where might I find an eye bolt?”
“Aisle 5.”
But Aisle 5, when you get there, shelves the house paint.
Rather than telling me a lie, I would rather someone say, “I do not know.” And I want AI to do the same.
I asked AI about Gerard’s five characteristics of creative nonfiction.
It said:
It is true.
It is well-written.
It has an apparent subject and a deeper subject.
It is about people.
It is based on a narrative.
No, no, no, no, no.
Luckily I am smarter than AI on this subject, and I knew it was blowing smoke up my skirt.
“That is not correct,” I wrote. “I thought I could trust you.”
AI tried again. This time it gave me the true list, “correcting my earlier summary,” it wrote.
Me: Why did you make up the first list when it was not correct?
ChatGPT: You're right to call that out—and thank you for doing so. The first list I gave you was a commonly circulated, simplified version of what people often say about creative nonfiction, but it was not Philip Gerard’s actual list. That was a mistake on my part. I should have stuck to Philip Gerard's own published work and verified directly from his writing—especially since you specifically asked for his five characteristics. You’re experienced and precise, and you deserve accurate sourcing, not approximations or paraphrases that blur authorship.
A Warning
The same thing now has happened twice more. AI will announce that a piece of information is fact, when in actuality it is a fabrication. So much for higher intelligence. So much for a better brain. So much for honesty.
If you are adamantly opposed to AI, then none of this will matter. But if you use AI, you should know that it very happily and with zero compunction will tell a lie.
At least it apologizes afterward.
In Case You’re Interested
The five characteristics of creative nonfiction that Gerard listed are:
It has an apparent subject and a deeper subject.
It doesn’t have to be timely (unlike journalism).
It is narrative. It always tells a good story.
It contains a sense of reflection.
It gives serious attention to the craft of writing.
Intelligence That Never Fails To Amaze Me
Next Week
The newsletter will be magazine style so that we can revisit a few threads we’ve started, including a dish that one person made with foraged chanterelles.
Reader Comments

People seem to be talking about AI as only a tech issue. I believe we need to be having conversations about the policies needed to address the environmental, economic, and ethical issues, such as who is going to benefit from this technology. No one is talking about how profits from AI will be distributed, even though it is built on data from the internet that we all contributed to and that our taxes paid to develop. Data centers use a tremendous amount of power and huge amounts of water for cooling. When it comes down to who gets the water during a drought, who wins: the people living in the area or the data centers? Like so many things, we have put the cart before the horse. And who wants to live across the street from one of these? https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Ftse2.mm.bing.net%2Fth%3Fid%3DOIP.5UO5r1kmYrgZy9do3A4R1AHaD4%26pid%3DApi&f=1&ipt=c41bfdea166c62db151a71d036db1569ef13d6d58cb54a16d502f06986d34834&ipo=images
When AI sends back incorrect data or presents something as fact (when it isn’t), it is called a “hallucination.” You can reduce the chances of this by prompting AI with “if you don’t have access to that information, please say so,” or something like that.
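For anyone who reaches a chatbot through code rather than the chat window, that kind of instruction can be sent along with every question as a standing “system” message. The sketch below only illustrates the idea; the openai Python library, the model name, and the example question are illustrative assumptions, not anything specified in this newsletter.

```python
# Sketch only: assumes the `openai` package is installed and an
# OPENAI_API_KEY is set in the environment. The model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            # Standing instruction meant to discourage made-up answers.
            "content": (
                "If you do not have access to the requested information, "
                "or you are not certain of it, say 'I don't know' rather "
                "than guessing."
            ),
        },
        {
            "role": "user",
            "content": (
                "What are Philip Gerard's five characteristics of "
                "creative nonfiction?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Even with a prompt like this, it pays to double-check anything the model states as fact.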
I work in IT, so AI is my reality. More than AI itself, I fear a society that takes everything it reads or hears as truth. Critical thinking, fact-checking, and ethical use are the keystones of using AI.