Joe Amditis Talks AI: Finding Hope Amid Journalism's Tech Tumult
Plus, is AI regulation on the horizon?
Welcome to The Upgrade
Welcome to my weekly newsletter, which focuses on the intersection of AI, media, and storytelling. A special welcome to my new readers from Apple, McClatchy, Wells Fargo, and many other top organizations — you’re in good company!
In today’s issue:
The Week’s Top AI Stories 📰
🎓 Join the AI Boost course starting on February 13th!⚡️
After Biden & Swift Deepfakes, Is AI Legislation Coming? 🧐
🎙️The Big Interview: Joe Amditis of the Center for Cooperative Media discusses AI in newsrooms!
The Week’s Top AI Stories
Top AI Headlines
Can This AI-Powered Search Engine Replace Google? It Has for Me. — The New York Times (Opinion)
AI spam is already starting to ruin the internet — Business Insider
The era of the AI-generated internet is already here — Mashable
Taylor Swift deepfakes spark calls in Congress for new legislation — BBC
Regulation & Policy
FCC targets AI-generated robocalls after Biden primary deepfake — The Hill
AI Regulation Process Is Well Underway, White House Says — Inc.
Reining in AI means figuring out which regulation options are feasible, both technically and economically — The Conversation
Ethics & Safety
X blocks searches for Taylor Swift after explicit AI images of her go viral — BBC
A candidate targeted by a deepfake has an AI warning for America — CNN (Video)
Sam Altman says OpenAI has a plan to combat election misinformation. It’s not enough — The SF Chronicle (Opinion)
Why AI needs to learn new languages — The Economist
Legal & Copyright
This new image generator is like Midjourney, except a human illustrates your prompts — Fast Company
Copyright law in the age of AI — Marketplace
Battle Between Newspaper Giant and Generative AI Boils Down to Definition of Fair Use — IPWatchdog
In the Workplace
The uncomfortable truth about AI’s impact on the workforce is playing out inside the big AI companies themselves — Fortune
OpenAI Announces Preview Fix for ChatGPT 'Laziness' — PCMag
The New York Times is building a team to explore AI in the newsroom — The Verge
Younger workers are actually using AI on the job less than Gen X and millennials — Business Insider
After Biden & Swift Deepfakes, Is AI Legislation Coming? 🧐
The conversation around the need for AI legislation has intensified this past week following recent controversies surrounding AI-generated impersonations of public figures such as President Joe Biden and pop star Taylor Swift. These incidents, including a robocall that mimicked Biden's voice and the circulation of fake explicit images of Swift, underscore the potential for deepfake technology to spread misinformation and harm reputations. Politico reports on the challenges regulators face in clamping down on deepfakes ahead of the 2024 election, highlighting the urgency of addressing these issues.
The legislative response to the threats posed by AI and deepfakes is gaining momentum. NBC News notes that lawmakers in at least 14 states have introduced legislation aimed at combating AI-generated misinformation, reflecting a bipartisan effort to mitigate the risks associated with these technologies. This legislative push signifies a growing awareness among policymakers of the need to create a regulatory framework that can adapt to the rapid advancements in AI technology.
Developing effective legislation to regulate AI and deepfakes presents a complex challenge. Lawmakers must navigate the fine line between protecting individuals from the potential harms of AI while ensuring that regulations do not stifle innovation or infringe upon free speech. The Washington Post discusses the intricate balance required in addressing deepfakes, with at least 10 states having already enacted laws related to this issue.
The incidents involving deepfakes of Biden and Swift could serve as a catalyst for broader legislative efforts to regulate AI technologies. As the debate continues, the need for a swift and effective legislative response becomes increasingly apparent. The challenge lies in crafting laws that can keep pace with technological innovation while safeguarding democratic values and individual rights. ABC News highlights the White House's concern over AI-generated content and the call for legislation to regulate its use, emphasizing the critical juncture at which society faces AI's rapid evolution.
🎓 AI Boost: Starts in February! 💻
AI Boost for Professional Communicators and Marketers covers the essentials of Generative AI for media and marketing professionals with novice and beginner-level experience with AI tools. The live 90-minute sessions will take place on Tuesdays, starting on February 13th, at 2pm ET / 11am PT. Spots are already filling up!⚡️
SAVE 20% WITH CODE: THEUPGRADE20
🎙️Interview: Joe Amditis
Joe Amditis is Assistant Director of Products and Events at the Center for Cooperative Media at Montclair State University.
Note: This interview has been edited for brevity and clarity.
Peter: Joe, you have a reputation for being at the forefront of AI within the news industry. When did you first start experimenting with AI tools & technology?
Joe: In 2016, while pursuing my master's at the CUNY Graduate School of Journalism, I jumped into the world of early chatbots under the guidance of Professor Jeremy Kaplan. These rudimentary chatbots, essentially a series of if-then statements tied to keywords and responses, sparked my interest in the ethical implications of AI communication tools. This curiosity expanded to encompass various AI applications, from Amazon Echo's flash briefings to instant news updates, exploring the blend of audio, accessibility, and technology.
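The early chatbots Joe describes were little more than keyword lookups wired to canned replies. A minimal sketch in Python of that if-then pattern (the keywords and responses here are invented for illustration):

```python
# A minimal keyword-matching chatbot of the kind Joe describes:
# a series of if-then rules tying keywords to canned responses.
# The rules below are invented for illustration.

RULES = [
    ("weather", "Here's today's forecast for your area."),
    ("news", "Here are this morning's top headlines."),
    ("hello", "Hi there! Ask me about the weather or the news."),
]

def respond(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    return "Sorry, I don't understand that yet."

print(respond("Hello!"))
print(respond("What's the weather like?"))
```

No model, no learning: just string matching, which is why these bots broke the moment a user phrased something unexpectedly.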
Just months before ChatGPT's public debut, I noticed a surge in large language model tools. Among them was Octi, a tool that offered a Gen Z slang translator — a playful experiment before the technology became mainstream. I'd also used Otter, an AI transcription service, for years. This was before the term "AI" was widely applied to tech products for marketing purposes, so Otter didn't market itself as AI-centered.
The release of ChatGPT caught my attention on TikTok, leading me to explore its capabilities firsthand. I tested it by seeking spoilers for Brandon Sanderson's "The Lost Metal," a book I had just finished. The experience of interacting with an AI that could discuss recent literature was astonishing, marking a significant moment in my ongoing exploration of AI's potential and its intersection with journalism and technology.
Peter: Tell me about what you've been building, working on, and experimenting with in the AI realm lately.
Joe: I'm trying to get into everything I can, keeping up with what's happening in AI. I've been chatting with many newsrooms, journalists, and editors all over, not just in the U.S. but in Canada, Mexico, and overseas. I'm wrestling with how these AI tools can serve us best, especially in the smaller newsrooms that might only have a handful of people.
The whole vibe around AI is shifting. It's less about the end-of-days fear that robots are going to take over, and more about how we can integrate these tools without losing our shirts. Sam Altman and the like have moved from doomsday predictions to a more tempered, "Hey, this won't change everything overnight, but keep funding us" stance. And it's interesting because more newsrooms are warming up to the idea of using AI, not so much for creating flashy content but for the grunt work, the back-office tasks that don't get much glory but are crucial.
So, what I'm really digging into is the practical side of AI—how it can handle the admin and documentation stuff that eats up so much time. It's about sifting through what's just noise and what's actually useful signal. That's where I see the gold in these tools for newsrooms. It's not about the shiny, front-facing AI applications but the behind-the-scenes operations that can really benefit from a bit of automation.
Peter: Given the increasing reliance on AI tools in journalism for tasks like transcribing interviews and meetings, how do you balance efficiency and accuracy, especially when factual integrity is paramount?
Joe: Jeff Jarvis recently made a point in a Senate subcommittee hearing that really stuck with me. He essentially said that AI tools aren't fact tools and shouldn't be relied on for factual accuracy. This might make any journalist pause—when are facts not crucial? But here's the thing: there are times when AI, like Otter for rough auto transcripts of webinars or press briefings, is incredibly useful. It's about setting the right expectations: these are not definitive transcripts but rough guides. You shouldn't act on this information without a thorough review and verification.
For example, I had a bit of a laugh (and a scare) with a Zoom meeting recap bot. After mentioning I overate at a staff retreat, the bot's recap suggested concern for my well-being, which was amusing but also a bit alarming when sent out unreviewed. It highlights the quirky errors AI can make, emphasizing the importance of not treating these tools as the sole source of record.
The key is not to rely on AI for critical, factual content without close scrutiny. Working alongside these tools with a clear understanding of their limitations ensures they're helpful without being misleading. It's crucial, especially when important decisions hinge on the accuracy of the information provided. This balance between leveraging AI for convenience and maintaining rigorous fact-checking protocols is vital in the current media landscape and isn't likely to change soon.
Peter: With the rapid integration of AI into search engines and the broader web, there's much discussion about its potential impacts on media and publishers. What are your main concerns regarding those ongoing developments?
Joe: My biggest worry is the integration of AI with search functions by giants like Microsoft and Google. This pairing has already fundamentally altered what was once a reliable tool for factual information. AI, designed to generate convincing content rather than factual accuracy, is now linked with search engines, setting us up for misinformation. The blend has proven problematic, as AI-generated content lacks the factual basis we expect from search results, leading to confusion and mistrust among users.
Moreover, this shift has significant implications for SEO, introducing a new layer of complexity for web and content managers. The SEO landscape is now even more opaque, making it harder for professionals to adapt and maintain visibility online. Additionally, the rise of AI-generated content risks flooding the web with low-quality, plagiarized, or outright deceptive material, exacerbating existing issues with online misinformation and scams. The ease with which AI can produce convincing scams or misinformation is alarming, threatening to undermine the integrity of online content and search reliability.
Peter: Considering the challenges AI poses to factual integrity and the spread of misinformation, how can news organizations and publishers adapt to or mitigate these risks? Is there any hope for the future of journalism?
Joe: Definitely, there's hope, but it requires strategic adaptation. A common reaction has been to erect paywalls, but 404 Media's approach offers a promising middle ground, balancing the need to protect quality journalism from dilution by freely available low-quality content. This situation creates a dilemma: high-quality journalism, crucial for an informed democracy, becomes accessible only to those willing and able to pay, potentially alienating a broader audience needing factual information.
However, I see a silver lining in the public's discernment and fatigue with AI-generated content. The novelty of AI art and similar outputs is wearing thin, suggesting a societal self-correction towards valuing authenticity and quality. This cycle of boom and bust in consumer attention can help realign focus towards reputable sources. News organizations that have invested in building genuine community relationships stand to regain trust and attention, especially at the local level. Authentic engagement, not just content generation, will be key in distinguishing valuable journalism from the sea of artificially generated nonsense. Organizations like Harkin, Outlier Detroit, and Resolve Philly exemplify this by prioritizing community engagement and transparency.
Don’t be shy—hit reply if you have thoughts or feedback. I’d love to connect with you!
Until next week,
Psst… Did someone forward this to you? Subscribe here!