
Accenture tech expert: AI "already impacting how we tell stories"

Plus AI Sportscasters Flop, top headlines + ChatGPT video tutorial!

Welcome to The Upgrade

Welcome to the third edition of my weekly newsletter, which focuses on the intersection of AI, media, and storytelling. A special welcome to my new readers from The Wall Street Journal, The BBC, tech startups, and many other organizations — you’re in good company!

In today’s issue:

  • The Week’s Top AI Stories 📰

  • Case Study: AI Sportscasters Flop in the Philippines 🤖 

  • The Interview 🎙️: Naomi Nishihara, researcher for Accenture's annual Technology Vision report, on AI trends

  • ChatGPT Video Tutorial 🎥: How to Access the Live Internet

The Week’s Top AI Stories

Gen AI Tools

  • ChatGPT can now see, hear, and speak — OpenAI

    The features will be available in 2 weeks to ChatGPT Plus and Enterprise users

  • Getty made an AI generator that only trained on its licensed images — The Verge

  • Google’s Bard Just Got More Powerful. It’s Still Erratic. — The New York Times

  • Amazon steps up AI race with up to $4 billion deal to invest in Anthropic — Reuters

Regulation & Policy

  • Everything you need to know about the government’s efforts to regulate AI — Fast Company

  • Sen. Warner on AI regulation: ‘We probably can’t solve it all at once’ — Politico

Ethics & Safety

  • Stuart Russell wrote the textbook on AI safety. He explains how to keep it from spiraling out of control. — Vox

  • The A.I. Wars Have Three Factions, and They All Crave Power — The New York Times (Opinion)

Privacy & Security

  • Should AI Surveillance Be Restricted? WSJ Readers Share Their Thoughts — The Wall Street Journal

  • Don't Want Google to Use Your Website for AI Training? You Can Now Opt Out — PCMag

Legal & Copyright

  • Authors' lawsuit against OpenAI could 'fundamentally reshape' artificial intelligence, according to experts — ABC News

  • AI Detection Startups Say Amazon Could Flag AI Books. It Doesn't — Wired

In the Workplace

  • The writers strike is over; here’s how AI negotiations shook out — TechCrunch

  • A top economist who studies AI says it will double productivity in the next decade: ‘You need to embrace this technology and not resist it’ — Fortune

  • Why Your Boss Is About to Inflict A.I. on You — Slate

Case Study: AI Sportscasters Flop 🤦 

Last week, major Filipino broadcaster GMA rolled out Maia and Marco, bilingual AI sportscasters, to a deafening roar… of outraged viewers.

The top comments responding to the announcement include:

  • “How tone-deaf and insensitive can you guys be to premiere AI sportscasters on NCAA, a college league with literally thousands of masscom students hoping to get jobs soon and not be replaced by AI?”

  • “GMA, it’s not too late to shelve this misguided attempt at using AI. This does not add anything to sports journalism. It really feels like a gimmick, and one that belittles the talent of so many sports journalists who are ready and willing to work with you.”

  • And, my personal favorite: “Perfection is boring.”

After the initial blowback, the company tried to strike a conciliatory tone. A GMA senior executive, Oliver Victor B. Amoroso, said, “Maia and Marco are AI presenters, they are not journalists, they can never replace our seasoned broadcasters and colleagues who are the lifeblood of our organization.”

However, GMA refused to pull the plug on Maia and Marco, instead doubling down. In its statements, the broadcaster cited the pressures of an industry disrupted by rapid AI advancements and a need to innovate as justification for the AI sportscaster project. It remains to be seen who they are innovating for. It doesn’t appear to be their audience or employees…

Special thanks to reader Jaemark Tordecilla for sharing this article!

The Takeaways:

  • Create AI guidelines for your organization

  • Decision-makers, ask yourselves, “What problem are we solving for? Why is AI the right solution?”

  • Always have human review built into any AI product development process: for example, focus group testing!

The Interview: Naomi Nishihara of Accenture

Naomi is a technology trend researcher and the content development lead for Accenture’s Technology Vision report, which forecasts key trends that will impact businesses over the next 5-10 years.

The following responses are Naomi’s opinions, not Accenture’s, and have been edited for brevity.

Peter: How do you envision the role of generative AI in the future of storytelling? Are there any specific examples or case studies you could point to that highlight this?

Naomi: I think there are two major ways that generative AI is already impacting how we tell stories. The first is that image, video, and audio generation are significantly democratizing content creation. A single person might be able to produce something that would have taken a whole team of animators and artists in the past, so there could be serious productivity gains here. To give you an idea of what this might look like in practice, check out Bestever.ai, a startup producing generative AI commercials and marketing content.

The second thing is that generative AI is opening new entertainment and storytelling mediums. You could imagine training an AI chatbot with information about a certain topic, and then letting people ask it questions, or generating AI-powered characters that can interact with people and let them experience stories in new ways. An interesting example of this is Wol, by Niantic. Wol is a mixed reality experience that transforms your space into a redwood forest and lets you interact with an AI-powered owl. You can ask questions about the owl’s ecosystem, and using generative AI, it can naturally converse with you.

Peter: What are the biggest potential pitfalls or ethical considerations for storytellers integrating generative AI into their narratives?

Naomi: This is a really important question. I’m sure new considerations and challenges will continue to come up, but for now, the first things that come to me are hallucinations and copyright infringement.

Generative AI hallucinates, so we need to be extremely careful when it comes to trusting what these models tell us. You don’t want to be like the lawyers who used ChatGPT to prepare a court filing without realizing that the court cases ChatGPT cited to demonstrate precedent had all been made up. It even made up quotes from people!

Creators should also be aware of copyright infringement risks. Generative AI platforms are trained on lots of data (much of it scraped from the internet), and since the models use these examples to learn patterns and rules to then create new content, the relationship between generated content and the model’s training data is unclear. There have been a lot of debates about this, and George R.R. Martin and other authors recently sued OpenAI for using their books to train AI models. It’s going to be a really interesting area to watch.

Peter: The report predicts the integration of the physical and digital worlds and refers to AR in this context. How do you envision the convergence of AR and AI in practical applications?

Naomi: The convergence of AR and AI is really interesting! In some ways, the future of AR depends on AI. To give some context: a lot of the commercial AR we see today is just an information overlay on top of our surroundings that doesn’t truly interact with our environment. This can certainly be useful, but more sophisticated AR would use AI for object recognition, looking at the world around you in real time and layering it with contextually useful information. A classic example is wearing AR glasses and walking through a grocery store. If your glasses could recognize what you’re looking at, they could show you relevant information like nutrition or sustainability details about the products in front of you. With generative AI and AR combined, you might also have a virtual assistant that observes the world and makes recommendations to you.

🎥 ChatGPT Tutorial: How to Access Live Internet

🔗 You asked for a YouTube video last week, so here it is! 🎥 

Techniques to Counter Deepfakes

For those waiting for the cliffhanger from last week’s Big Think, here are three tactics for countering deepfake imagery. I will cover each in detail in future posts!

Watermarking & Fingerprinting

  • The embedding of invisible watermarks or fingerprints within images. These embedded markers can provide crucial information about the image's origin and authenticity.
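To make the idea concrete, here is a minimal sketch of how an invisible watermark can hide provenance information in an image. This is my own toy illustration (simple least-significant-bit steganography on a list of grayscale pixel values), not the scheme any particular vendor uses; real systems are far more robust to cropping and compression.

```python
# Toy invisible watermark: hide a short message in the least significant
# bit of successive pixel values, then read it back. Changing the lowest
# bit of a 0-255 pixel value is imperceptible to the eye.

def embed_watermark(pixels, message):
    """Store each bit of `message` in the lowest bit of one pixel."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    stamped = list(pixels)
    for i, bit in enumerate(bits):
        stamped[i] = (stamped[i] & ~1) | bit  # overwrite the lowest bit
    return stamped

def extract_watermark(pixels, length):
    """Recover `length` characters from the lowest bits of the pixels."""
    out = bytearray()
    for c in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[c * 8 + i] & 1) << i
        out.append(byte)
    return out.decode()

# A tiny 64-pixel grayscale "image", all mid-gray
image = [128] * 64
marked = embed_watermark(image, "GMA-2023")
print(extract_watermark(marked, 8))  # prints "GMA-2023"
```

Because each pixel changes by at most 1, the marked image looks identical to the original, yet anyone who knows the scheme can recover the embedded origin tag.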

Detection of AI-generated Content

  • Companies like Optic have developed specialized tools to determine whether AI has generated an image. These tools identify specific patterns or "artifacts" within the image that might indicate its AI origin.

Content Credentials

  • Adobe, in collaboration with other entities, is pioneering the "Content Credentials" system, likened to a nutrition label for digital content. This system would attach detailed information to the image, including its creator, the date of creation, location, and any subsequent edits or modifications.
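To show what such a "nutrition label" might carry, here is a toy sketch of my own (not Adobe's actual Content Credentials format): a manifest of provenance details plus a hash that ties the label to the exact image bytes, so any later tampering is detectable.

```python
# Toy provenance label: a JSON manifest describing an image, bound to the
# image bytes via a SHA-256 hash. If the image changes, verification fails.
import hashlib
import json

def make_credential(image_bytes, creator, created, location, edits):
    manifest = {
        "creator": creator,
        "created": created,
        "location": location,
        "edits": edits,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def verify_credential(image_bytes, credential):
    manifest = json.loads(credential)
    return manifest["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()

photo = b"...raw image bytes..."  # placeholder for a real file's contents
cred = make_credential(photo, "Jane Doe", "2023-10-01", "Manila",
                       ["crop", "color-correct"])
print(verify_credential(photo, cred))         # True: image matches its label
print(verify_credential(photo + b"x", cred))  # False: any change breaks it
```

Real content-credential systems add cryptographic signatures so the manifest itself can't be forged, but the core idea is the same: bind human-readable provenance to the exact bytes of the content.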

Thanks for reading! Next week, I will have some exciting news…

How was today's newsletter?

Please let me know!


If you liked this edition, please share it with someone who might enjoy reading it. 📧 

Don’t be shy—hit reply if you have thoughts or feedback. I’d love to connect with you!

Until next week,

Psst… Did someone forward this to you? Subscribe here!
