‘Lean In. Test These Things Out.’ A Lot Is Being Written by and About AI; Here’s Some Help

In March, The Atlantic CEO Nicholas Thompson wrote a note to his business-side staff about AI. “The first point I want to make is that we should all be curious. Lean in. Test these things out. We will be getting some company subscriptions to GPT-4… Secondly, I want us to experiment.” Joe Amditis, who has written a ChatGPT guide for publishers, said, “If we don’t pay attention to this… we’re going to get tricked ourselves, and we’re going to lose credibility with our audiences.” It’s a lot to navigate right now.

I just looked at an impressive, if a bit impersonal, video posted on LinkedIn by Jeremiah Owyang. He gave this prompt to AI: “Write a short story, in first person, about a girl [who] moves to the big city, launches a business, overcomes challenges, and finally succeeds. 300 words.”

He then spent a brief time selecting video clips and music. “I wrote zero of the script. I estimate it would have taken me 20-30 hours to create; it took 15 minutes.”

Reactions ranged from practical tips (you can now drop it into Munch or Opus, then use Midjourney to create an avatar and ChatGPT for the description), to outrage (“We are going to be flooded with fake human experiences!”), to plain amazement at what we can now do.

Flooded is probably the right word for where we are now in AI-land. At our Editorial Council meeting in April we heard from one editorial director who’s all in on letting AI create stories, and another who, for now, is sticking to tasks such as ideating and reading through gobs of text to come up with questions. Others are still learning and remain much warier of potential accuracy, ethics and bias problems.

(I’m planning for the next Editorial Council meeting to take place on June 22 about editorial uses for AI. Stay tuned.)

Meanwhile, here are five resources that I’ve come across:

Experiment. “Join the waiting list for Bard,” wrote The Atlantic’s Thompson. “Try Poe and Bing. Read about Anthropic. See if you can get human hands to look good in Midjourney five. Learn how to be a good prompt engineer… We’re already trying to use these systems to help tag stories. Next, maybe we can build a bot to help us onboard new subscribers. Maybe we can build a bot that helps guide people to the archives. Maybe we can create a more efficient and personalized recommendation engine…”
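Thompson doesn’t describe how The Atlantic’s story-tagging experiment actually works, so purely as an illustration, here is a minimal sketch of the general idea using the OpenAI Python client. The tag vocabulary, model choice, and helper function are my own assumptions, not The Atlantic’s setup.

```python
# Illustrative sketch only -- not The Atlantic's actual tagging pipeline.
# Assumes the official OpenAI Python client (openai>=1.0) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# A hypothetical in-house tag vocabulary; a real newsroom would use its own.
TAGS = ["politics", "technology", "culture", "health", "business"]

def suggest_tags(headline: str, body: str) -> list[str]:
    """Ask the model to pick tags from a fixed list for one story."""
    prompt = (
        f"Choose the most relevant tags for this article from {TAGS}. "
        "Return a comma-separated list only.\n\n"
        f"Headline: {headline}\n\nArticle:\n{body[:4000]}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep tagging as repeatable as possible
    )
    return [t.strip() for t in response.choices[0].message.content.split(",")]

print(suggest_tags("City council passes budget", "The council voted 5-2 on..."))
```

Constraining the model to a fixed tag list, rather than letting it invent labels, is one simple way to keep the output consistent with an existing CMS taxonomy.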

A full-on test case. At Nieman Lab last week, new staff writer Sophie Culpepper wrote this excellent story: “Can AI help local newsrooms streamline their newsletters? ARLnow tests the waters.” Scott Brodbeck, founder of the Virginia-based media company Local News Now, already had an automated afternoon newsletter but wanted “a morning email with more voice. [He] began experimenting with a completely automated weekday morning newsletter comprising an AI-written introduction and AI summaries of human-written stories. Using tools like Zapier, Airtable, and RSS, ARLnow can create and send the newsletter without any human intervention.” Now he wants to do a daily update on YouTube and is “experimenting with using AI to look for typos and other errors in newly published articles; categorize articles into positive, neutral and negative buckets for potential social media purposes; and drive a chatbot to help clients write sponsored articles.”
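ARLnow’s pipeline runs on Zapier, Airtable, and RSS with no custom code; purely for illustration, here is a Python-only sketch of the same pattern. The feed URL, prompt wording, and output format are my assumptions, not Brodbeck’s actual setup.

```python
# Rough sketch of an AI-assembled morning newsletter, assuming the
# feedparser and openai libraries; ARLnow's real workflow chains Zapier,
# Airtable, and RSS rather than custom code like this.
import feedparser
from openai import OpenAI

client = OpenAI()
FEED_URL = "https://example.com/feed/"  # placeholder, not ARLnow's feed

def summarize(title: str, text: str) -> str:
    """One short newsletter blurb for a single human-written story."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Summarize this local news story in two sentences "
                       f"for a morning newsletter.\n\n{title}\n\n{text}",
        }],
    )
    return resp.choices[0].message.content.strip()

# Pull the latest human-written stories and wrap them in an AI-written intro.
feed = feedparser.parse(FEED_URL)
items = [f"* {e.title}\n  {summarize(e.title, e.summary)}\n  {e.link}"
         for e in feed.entries[:5]]
newsletter = "Good morning! Here's what's new today.\n\n" + "\n\n".join(items)
print(newsletter)  # in production this would be handed to an email service
```

The key point Culpepper’s story makes is that the articles themselves stay human-written; the automation only handles the summarizing, assembling, and sending.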

An AI-for-editorial handbook. Joe Amditis, an assistant director for products and events at the Center for Cooperative Media at Montclair State University, has put out the “Beginner’s prompt handbook: ChatGPT for local news publishers.” The guide walks users through crafting effective prompts; explains the technology’s terms; and tells how to clean up transcripts, create outlines, and “red-team” your story ideas. It also advises how to use AI as institutional memory for your newsroom. “Picture this,” Amditis writes in an article on Medium. “An AI model trained on your newsroom’s archives and its entire body of work, along with any of the other community- or org-specific reports, information, documentation, and data you can find and upload. By analyzing this vast trove of data, an LLM could identify patterns and connections that might not be immediately apparent to human analysts.”
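Amditis is describing a model trained on the archive; a lighter-weight way many newsrooms approximate the same “institutional memory” idea today is retrieval, where the archive is embedded and the closest passages are pulled out for the model to read. A minimal sketch, assuming the OpenAI embeddings endpoint and a tiny invented archive (this is my illustration, not the handbook’s method):

```python
# Minimal retrieval sketch of the "archive as institutional memory" idea.
# This is retrieval over embeddings, not the archive-trained model Amditis
# describes; the sample archive and question are invented for illustration.
from openai import OpenAI

client = OpenAI()

archive = [
    "2019 council story: the Elm Street bridge repair was delayed twice.",
    "2021 feature: local schools piloted a four-day week.",
    "2022 investigation: the water utility raised rates 12 percent.",
]

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

question = "What have we reported about the water utility?"
doc_vecs = embed(archive)
q_vec = embed([question])[0]
best = max(range(len(archive)), key=lambda i: cosine(doc_vecs[i], q_vec))
print("Most relevant archive item:", archive[best])
# That item (plus the question) would then be passed to a chat model.
```

A real newsroom archive would need a vector database rather than an in-memory list, but the shape of the idea is the same: the model answers from what the newsroom has already published.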

Hugging Face? Why not. Earlier this month, Team Twipe put out this article: “Navigating the AI Dust Storm: A guide for publishers.” There are full definitions of concepts and tools like Hugging Face, which “allows you to locally download multiple LLM models and provides datasets to train them. It also provides courses and educational materials.” And it had this tidbit: “OpenAI has recently introduced a new function called ‘Code Interpreter’ that allows users to upload and download files such as tables or code and use GPT-4 to evaluate, modify, and save them locally…”
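For anyone who hasn’t touched Hugging Face, “locally download an LLM” is less exotic than it sounds. A minimal sketch with the transformers library, with a model chosen by me purely as an example (the first run downloads the weights to a local cache):

```python
# Minimal Hugging Face example: download a model locally and run it.
# Assumes `pip install transformers torch`; the model choice is arbitrary.
from transformers import pipeline

# The first call downloads the model weights to a local cache (~1.6 GB here).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The county board approved a new transit plan on Tuesday, adding two bus "
    "routes and extending evening service after months of public comment."
)
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```

Once the weights are cached, the model runs entirely on your own machine, which is part of why Twipe flags Hugging Face as worth knowing for publishers wary of sending content to third-party APIs.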

Tests show inconsistency. On The Atlantic site, Ian Bogost wrote this article: “We Programmed ChatGPT Into This Article. It’s Weird. Please don’t embarrass us, robots.” “So I started testing some ideas on ChatGPT (the website) to explore how we might integrate ChatGPT (the API),” he writes. “One idea: a simple search engine that would surface Atlantic stories about a requested topic… In some of my tests, ChatGPT’s responses were coherent, incorporating ideas nimbly. In others, they were hackneyed or incoherent. There’s no telling which variety will appear above. If you refresh the page a few times, you’ll see what I mean.” One thing is for sure, Bogost writes, “You can no longer assume that any of the words you see were created by a human being.”
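Bogost doesn’t publish the code behind the experiment, so the following is only a guess at the shape of the “surface stories about a requested topic” idea, with invented headlines and my own prompt wording; the point it illustrates is his observation that the same request comes back differently each time.

```python
# Illustrative guess at a "surface stories about a topic" integration;
# Bogost's actual implementation isn't published, and these headlines
# are invented for the example.
from openai import OpenAI

client = OpenAI()

headlines = [
    "What the New Space Telescope Saw First",
    "The Quiet Collapse of the Lunch Hour",
    "Why Cities Keep Flooding",
]

def surface_story(topic: str) -> str:
    prompt = (
        f"A reader asked about: {topic}\n"
        "From this list of headlines, pick the most relevant one and write a "
        "single sentence inviting the reader to it:\n" + "\n".join(headlines)
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(surface_story("climate and infrastructure"))
# Re-running this gives different phrasing each time -- the inconsistency
# Bogost describes when he tells readers to refresh the page.
```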
