‘Moderation Remains Key for Us’; Graham Uses AI to Build a Better Comments Page

MediaNews Group has shut down all of its comment sections. And earlier this year, Gannett ended online commenting for most of its news sites. B2B media organizations have gone back and forth on their comments sections over the years. But done right, those sections can bring engagement, audience and even revenue. Enter AI and Graham Media Group’s First Dibs Comment Bot.

One of my favorite sections of The Washington Post is its weekly Free For All comments page. This past Saturday, comments covered the paper’s failure to spell out acronyms on first use, its ignoring of Canada Day, National Geographic’s layoffs, and an obituary that called the late Alan Arkin’s character in Catch-22, Yossarian, “paranoid.” (“The obituary had inadvertently reversed the entire meaning of the book and movie.”)

But this Free For All is obviously closely guarded. While comments sections can be positive and conversation-enhancing—especially on webinars—they can also become unruly and time-consuming to moderate. Looking at past articles on the topic this morning, I found words like “toxic,” “attacks,” “vile” and “personalized.”

“It’s something we’re all grappling with,” Boston Globe editor Brian McGrory told NiemanLab last year, but “there is an expectation that people should have a say.” Added Gabriel Escobar, editor and VP of The Philadelphia Inquirer: “Engagement and the time that people spend on the site is an important digital metric, and if people are allowed to comment, they’re obviously on the site longer.”

While more outlets shut down their comments, one is working hard to keep its comments going safely and successfully. And it wants to do it through AI and ChatGPT. Say hi to the First Dibs Comment Bot.

Graham Media Group’s Dustin Block recently presented the First Dibs project at the AI & Local News Challenge Demo Day sponsored by NYC Media Lab. Here are some details from that video presentation and from Sophie Culpepper’s story on NiemanLab.

Now is the time. Comments can be valuable. The “cookie-pocalypse” is here, making first-party data critical for publishers. “Commenting is a huge driver for user registration and on-site registration [for us],” Block said. [But] without quality guidance, “users disengage because of lack of quality discourse. And getting talent to participate in comments… It has to be safe and quality conversation.”

It’s nicer than you might think. The First Dibs Comment Bot posts the first comment on stories to guide conversations, grow participation and drive users deeper into engagement funnels, Block said. It’s crafted just to post questions and be friendly. It finds the theme of the story and asks an open-ended, positive question, making for consistently good prompts. Including “please” in a prompt for a question “substantially changed the results,” Culpepper wrote, “and explicitly asking the AI to keep questions positive ‘really helps the conversation.’”
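A minimal sketch of how such a question-only first-comment prompt might be assembled. This is illustrative only: the function name and the exact wording are assumptions, not Graham’s actual implementation, though it follows the ingredients described above (a “please,” a request for positivity, and an instruction to ask a question rather than state facts):

```python
def build_first_comment_prompt(headline: str, summary: str) -> str:
    """Assemble an LLM prompt that asks for a single friendly,
    open-ended question about a story -- questions only, no statements."""
    return (
        "Please read the story below and write one friendly, positive, "
        "open-ended question that invites readers to share their views. "
        "Ask a question only; do not assert any facts or opinions.\n\n"
        f"Headline: {headline}\n"
        f"Summary: {summary}\n"
    )

# Example usage with a hypothetical story:
prompt = build_first_comment_prompt(
    "City council debates new bike lanes",
    "The council heard public comment on a plan to add protected "
    "bike lanes downtown.",
)
```

The resulting string would then be sent to a chat model such as ChatGPT; constraining the output to a question is what makes the bot “less prone to hallucination,” since it never asserts facts of its own.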

It loses the “hallucinatory.” One of the most common criticisms of ChatGPT is that it can sometimes make stuff up; “hallucinatory” has been the most popular word to describe these misadventures. But because the bot is told here only to pose questions, not statements, “it’s less prone to hallucination.” “It hasn’t asserted any facts, so it hasn’t given any wrong information,” said Michael Newman, Graham’s director of transformation. (He would have fit perfectly in my story on interesting job titles last week.) No hallucinations or missteps so far.

It treats sensitive stories well. “We’re very concerned about sensitive stories,” Block said. “We didn’t think the bot would handle that well. We’ve been a little surprised at how well it does.” On a sensitive story about a heroin overdose, the bot asked: “What steps should law enforcement take to ensure the safety of both officers and individuals who may be experiencing a drug overdose during an arrest?” “Not bad—probably better than our human editors would do, certainly faster,” Block said. “It has repeatedly been able to take these difficult stories and at least not be wildly out of line for what we’re looking for,” and do it much faster.

You still need people. “Moderation remains key for us,” Block said. “The bot would be moderated [by our] external moderation team.” Feedback has also “led to a practice of asking the bot to generate five different prompts, instead of just one,” Culpepper writes, “and allowing the writer to pick the prompt they like best, Newman said. He’s interested to see whether this tweak ultimately helps drive more human engagement.”
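The five-prompts-with-a-human-picker workflow could be sketched roughly like this. The generator here is a stand-in stub, an assumption in place of a real LLM call, since the point is the human-in-the-loop pattern, not the model:

```python
from typing import Callable, List

def candidate_questions(
    generate: Callable[[str], str], story: str, n: int = 5
) -> List[str]:
    """Call the generator n times so a writer or moderator can
    review the candidates and pick the one they like best."""
    return [generate(story) for _ in range(n)]

# Stub generator standing in for an actual LLM call (assumption).
def fake_generate(story: str) -> str:
    return f"What do you think readers should know about {story}?"

options = candidate_questions(fake_generate, "the new downtown bike-lane plan")
chosen = options[0]  # in practice, the writer selects interactively
```

Keeping the selection step human preserves the “moderation remains key” principle: the bot proposes, but a person decides what actually posts.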

Revenue is the goal. Comments mean engagement, and engagement can mean dollars. Graham Media Group wants to grow logged-in users by 10%, doubling the average revenue per user; it has the same goal for user registrations. In the Q&A, Block emphasized how much more comments can mean to their “logged-in users” than to those just visiting. And that can also mean dollars.

Time will tell. First Dibs is set to go live in August and will be well-looked after. “We don’t want to put this out in the world if it’s going to do something that even offends the person who’s reviewing it before it posts up on the site,” Newman said.


Leading AI Companies and White House Commit to Advancing Responsible AI for a Safer Future

The following statement can be attributed to Paul Lekas, Senior Vice President, Global Public Policy, Software & Information Industry Association.

We congratulate the White House and leading AI companies – including SIIA members Amazon, Google and Meta – on the commitments announced today to manage the risks posed by AI. We agree that in order for AI to reach its potential to improve American lives, it must be developed in a way that ensures safety, security and trustworthiness. The commitments made by these companies set the bar for the AI ecosystem and the business of information, immediately moving the United States to the front of the global effort to advance responsible AI.


Proposed Merger Guidelines: A Departure from Antitrust Principles Threatening Consumer Welfare and Innovation in the U.S. Economy

The following statement can be attributed to Paul Lekas, Senior Vice President, Global Public Policy and Morten Skroejer, Senior Director for Technology Competition Policy, Software & Information Industry Association.

The Department of Justice (DOJ) and the Federal Trade Commission (FTC) yesterday published new proposed guidelines for horizontal and vertical mergers. The proposal represents a radical shift in how the two enforcement agencies aim to evaluate mergers. We are concerned that the proposed guidelines will create confusion and unpredictability and diverge from years of jurisprudence, with a significant negative effect on the business of information and the U.S. innovation environment.

We agree with the White House that “[c]ompetition is imperative to the functioning of product, service, and labor markets.” It is in this spirit that we note the proposed guidelines’ departure from the lodestar of antitrust enforcement – consumer welfare – in favor of an untested mix of factors, including “thirteen principles that the agencies may use when determining whether a merger is unlawfully anticompetitive under the antitrust laws.” This marks a venture into the unknown that will undoubtedly have negative effects on consumers and businesses alike.

Indeed, the approach outlined by the FTC and DOJ appears to disregard decades of judicial precedent and, in proposing an approach to regulate “the modern economy,” draws selectively from case law, predominantly from the 1960s and 1970s.

The credibility of the merger guidelines will ultimately rest on the courts’ willingness to view them as a well-balanced explanation of existing law and attendant agency practice, rather than a political exercise. Courts are not bound by the merger guidelines, and we are concerned they will look with skepticism on a wholesale revision that disregards decades of precedent.

It seems odd that the DOJ and the FTC would double down on an enforcement strategy that, to date, has very little to show for it. Just last week, the FTC was told by a federal judge that its last-minute attempt to block a high-profile merger had no chance of success. Add to this that these proposed changes in merger enforcement come directly on the heels of another proposal that would substantially change the premerger notification regime.

Periodic reviews of the efficacy of agency policies are prudent. But to the extent that merger enforcement has been lacking – a dubious proposition – there is no reason to think that revising the merger guidelines would change that. And this is to say nothing of the impact that bad merger guidance is likely to have on the economy more broadly. The DOJ and the FTC would do well to heed the old adage that changing too much at once rarely ends well.


SIIA’s Blueprint for AI Regulation

The Software & Information Industry Association (SIIA) has consistently advocated for the involvement of the U.S. government in fostering the responsible implementation of artificial intelligence (AI).

In order to mitigate risks during the pre-deployment phase, SIIA recommends that the government establish guidelines and tailored requirements for AI systems that pose significant safety and rights concerns. Additionally, SIIA emphasizes the importance of advancing AI innovation, enhancing government adoption, and fostering strong public-private collaboration to effectively address the multifaceted challenges and opportunities presented by AI.

By taking the following steps, SIIA believes we can enhance oversight and regulation of AI:

1. Establish Foundational Principles: AI policy should be based on a risk-based approach, ensuring that advancements in technology align with principles of responsibility, ethics, and trustworthiness. Responsible AI governance should go hand in hand with innovation, recognizing that fostering responsible practices is essential for the development and deployment of AI. To achieve this, effective AI policy demands a novel model of public-private collaboration to promote the development of sound governance and ensure that AI benefits society as a whole.

2. Advance a Framework for High-Risk AI System Oversight: A federal law ought to establish a foundational framework for overseeing high-risk AI systems, with a precise definition of what constitutes “high risk.” The emphasis of the regulation should be on effective oversight rather than strict enforcement. To achieve this, relying on the expertise of the National Institute of Standards and Technology (NIST) and the NIST Artificial Intelligence Risk Management Framework (AIRMF) would be beneficial.

3. Advance a Sector-Based Approach for High-Risk AI System Oversight: To enhance governance and accountability, Congress should establish formal oversight responsibility for high-risk systems within crucial agencies. These key agencies must define the criteria for identifying high-risk systems within their respective domains. Additionally, a collaborative interagency coordination mechanism should be established to oversee the implementation and regulation of artificial intelligence. Instead of creating a new regulatory agency specifically for digital matters, existing structures should be leveraged. Lastly, it is crucial to explore and implement tailored, use-based regulations for general-purpose AI systems to ensure responsible and effective deployment.

4. Appropriate Funds for AI Oversight and Government-Led AI Innovation: Allocate adequate funding to oversight agencies responsible for regulating AI technologies to ensure they can effectively monitor and govern AI advancements. Simultaneously, commit resources to support fundamental AI research and the development of practical AI applications, fostering innovation and responsible implementation of AI across various sectors.

5. Promote Domestic and International Regulatory Alignment: It is crucial for federal law to take precedence over state law in governing AI oversight. Additionally, there should be continuous endeavors to foster global cooperation and coordination in the development of AI standards and regulations.

6. Reinforce a Tech-Neutral Approach to Intellectual Property: Copyright and patent laws offer robust frameworks to tackle the challenges posed by generative AI.


SIIA Calls on Congress to Authorize the National AI Research Resource

Since the release of the National AI Research Resource (NAIRR) Task Force report earlier this year, SIIA has consistently called on Congress to authorize the NAIRR to revolutionize the AI research landscape in the United States, ensure the United States continues to lead in AI innovation and responsible AI, and address core national security challenges. SIIA and other organizations wrote to leadership of the House and Senate Science Committees to prioritize NAIRR authorization in the current session.