Writing in the Age of Chaterina

No. 132 | By Christine Carron

In one of my entrepreneurial groups, members are enthusiastically swapping prompts they’ve fed into ChatGPT to speed up content generation. Prompts directing the tool to brainstorm ideas, organize thoughts, draft marketing copy, and more. One entrepreneur said that the tool had helped them “write a book in three hours,” which they were now using as a free giveaway.

That post generated a particular swell of excitement, especially when the entrepreneur shared the exact prompts they had given “Chaterina” to generate the book, along with the book itself.

I took a peek at the end product. The entrepreneur had listed their name on the cover as the author with no indication that ChatGPT/Chaterina was involved. 

I could not join the kudos-fest. I was experiencing a snit of righteous indignation and judgment.  

Luckily, I hadn’t lost all my reasoning skills, despite the twirl-up. I knew that part of my emotional response was due to the “write a book in three hours” bit. The other part was their passing the book off as their own work, with no acknowledgement of AI’s involvement.

The third part of my spin-up was and is, of course, my internal struggle to find my own ethical solid ground when it comes to AI. 

For example, I wrote this post, then fed the draft into ChatGPT, and asked it to spit out ten titles. The published title is one of those suggestions. I, like many of the entrepreneurs on those threads, admit to feeling a twinge of guilt about this.

"This feels a little like cheating," some admit in those Facebook threads. 

But is that true? Are we cheating? Should I feel guilty for generating a title when many publishing houses determine the title of a book for an author?

Should the entrepreneurs using ChatGPT to generate an outline feel guilty when many authors and professors ask a research assistant to do the very same thing?

Should the entrepreneur who “wrote a book in three hours” with ChatGPT’s help feel guilty when many big-name authors use co-writers to generate more books faster?

And then there is what is admittedly the biggest sticking point for me as a writer around that AI-generated book: where do we draw the line between what is our work and what is not?

These kinds of questions and situations are just a few of the many ethical dilemmas AI is raising in pretty much every industry you can think of.

Chaterina Has Entered the Building

AI has been in the technology ether since the 1940s, but it’s only in relatively recent times, with the advent of tools like ChatGPT, that it’s gone fully mainstream, putting it in the hands of regular folk like us. According to a UNESCO article about AI, “ChatGPT is estimated to have over 100 million users globally and is, by many measures, the fastest spreading digital application of all time, surpassing the vertiginous growth of social media applications, such as Instagram, Snapchat and others.”

That’s a whole lot of people playing with a powerful technology without a lot of guidance or guardrails. We are not alone. Even organizations and institutions are scrambling to catch up. 

For example, the same UNESCO article, which reports on a recent global survey the organization conducted about AI standards, indicates that “fewer than 10% [of schools and universities] have developed institutional policies and/or formal guidance concerning the use of generative AI applications.”

UNESCO did produce the first-ever attempt at a global standard on AI ethics in 2021, the Recommendation on the Ethics of Artificial Intelligence, but there is no real consensus. Even if there were, we would still have to find our own ethical solid ground.

Whether we do it consciously or not, we are all in the process of setting personal boundaries around whether or not we use AI in our writing. And, if we do use it, establishing how, in what ways, and to what degree we do so.

I generally prefer to be thoughtful about technology adoption. Following are the principles I'm using in my dance with AI, the same principles I use whenever I adopt any new technology.

Principle #1: Handle Your Emotional Charge

Like most technology, AI is filled with wonderful possibility. It is also highly disruptive and sometimes unsettling. I suspect most of us are going to be discombobulated many times over as we experiment with AI and/or are on the receiving end of others’ (companies, institutions, individuals) experiments with the technology. 

Living through this kind of upheaval reminds me on a daily basis how important it is to be in charge of our emotional equilibrium. To be able to recognize when we’ve spun out and then take the action we need to recalibrate.

For me, that might be a walk with the Wonder Dog, or a dance break, or simply a pause from learning about AI, reading posts about AI prompts, and the like.

Reflection questions: What are the ways you get yourself back to center? What are the ways you get yourself to actually take those actions to get back to center?

Principle #2: Go at Your Pace, Your Priorities

When we feel behind, we often try to go faster . . . and faster . . . and faster. That is often the worst thing we can do, because in all the rush we can lose connection to our own priorities.

Even if I could drop everything and spend every waking minute trying to understand and grasp the full possibilities and ethical implications of AI, I still would be hopelessly behind. 

Instead of that stressing me out, it frees me up.

Since there is no way I could wrap my non-AI-enhanced brain around it all anyway, I don’t have to put myself into some kind of extreme panic learning mode to try to do so.

I can slow down and proceed at a pace that matches my priorities, which at the moment are these: I absolutely do want to pay attention and learn, but I also want to live my life, write, and be of service.

Reflection questions: Where does AI currently fit into your priorities? How much time do you want to dedicate to exploring it?

Principle #3: Remember Tech Serves You

The big zeitgeist fear around AI seems to be the possibility that we are finally stepping into a Terminator-esque dystopia where tech is sentient and intent on exterminating us. For the moment, however, technology is still supposed to be serving us.

Yet, with the constant bombardment of new apps and updates to existing apps, it can feel like, even now, we are there to serve it. Chase after it, catch up with it, embrace it, and adopt it immediately or be the laughable Luddite. 

For that reason, I greatly appreciate voices like Jane Rosenzweig, the director of the Writing Center at Harvard College and the author of the “Writing Hacks” newsletter, who recently tweeted:

“All these months in, I still worry about how much educators are having to focus on how education has to change to accommodate AI instead of focusing on whether AI actually solves real teaching and learning problems and helps students learn/think.”

Perspectives like Rosenzweig's gently remind us to ask questions like: What is my purpose for using this tool? What problem will it solve for me? Does it really save me time?

And perhaps most importantly, at least for me: How do I ensure I use this tool in a way that aligns with my integrity and values? 

Reflection questions: See above.

Principle #4: Experiment and Play

I don’t want you to walk away from this post thinking I am anti-AI, which is why I decided to put this principle last.

I’ve always found that the most effective (and fastest) way to learn tech is to experiment. Simply play around with it. And to do so with an air of curiosity and openness.

When we approach something with playfulness, it immediately changes the dynamic. It adds a lightness to our engagement. In the case of all the actual (as well as existential) churn AI is creating, I assert that a little lightness is a really good thing.

Reflection questions: How could you infuse more playfulness and curiosity into how you are engaging with AI? If you are not engaging at all yet, but want to, what are three ways you could imagine starting off your AI journey with more fun and curiosity?

AI is Not Leaving the Building 

AI is here. It is upon us. It is not leaving the building, but I am going to stay hopeful that we have not yet reached Terminator times. If my hope is on point, then it follows that we are not leaving the building either.

As writers, we have a whole range of options in regard to AI and our personal standards for engaging with it: from attempting not to engage at all to embracing it whole hog with nary a thought for the ethical implications.

Most of us will fall somewhere in the middle, slowly sorting out our boundaries as we go. That middle ground is often the place of less certainty. The murky gray area where clarity and surety are more tenuous.

That’s why I find principles like the ones I shared so useful. Emotional recalibration, clear priorities, remembering tech’s purpose, and a sense of playfulness give us anchor points that can help us stand in the midst of all this uncertainty with more confidence and ease. 

Which I think is a really good thing, because despite its inherent uncertainty, the middle ground is also the place of exploration, learning, and growth. And, of course, the place where we can figure out how we want to proceed without losing ourselves—our inherent sense of who we are and how we show up in the world—in the process. 

Steady on, y'all. You've got this!

Don't miss a single dollop of Goodjelly

Subscribe for the Latest Blog Posts & Exclusive Offers!

You can easily unsubscribe at any time.

Plan your writing year, Goodjelly-style!

Learn the annual planning approach writers are calling "remarkably useful," "hopeful," and "REAL." Powerful process smarts you can integrate ANY time of the year.

Learn More