
Microsoft's AI chief needs a history lesson to get off the Kool Aid

Guest post by Ricky Sutton. Plus, special AI course offer, and a new podcast drop!

Welcome to The Upgrade

Welcome to my weekly newsletter, which focuses on the intersection of AI, media, and storytelling. A special welcome to my new readers! Drop me a note here, and let’s get acquainted. 😊 

Over the next few weeks, we’re piloting a new initiative: a series of guest essays by prominent voices in AI and media! We’ll be back to our regular programming in August. Today, a fantastic essay on the theft of the open web by Big Tech authored by a long-time executive and media watchdog:

  • ✍🏻 Guest Post: Microsoft's AI chief needs a history lesson to get off the Kool Aid by Ricky Sutton, CEO of Future Media

🎓Learn AI with MindStudio Academy! 💻

Ready to learn the fastest way to build no-code AI-powered apps and automation? The Upgrade is partnering with MindStudio to lead the MindStudio Academy! ⚡️

The next cohort takes place on Saturday, July 27th. Hope to see you there!

SAVE 20% with code: THEUPGRADE20

Microsoft's AI chief needs a history lesson to get off the Kool Aid by Ricky Sutton

Will Big Tech Caesars' theft of the open web for AI repeat history's greatest tragedy, and set humanity back 1,000 years...

On the radio recently, an interviewer asked my opinion of the claim by Microsoft’s new AI chief that everything on the open web was freeware for him to use.

I retorted that it represented the greatest destruction of human intellectual property since the Library of Alexandria was razed 2,000 years ago.

Coming off air, I mused on why Mustafa Suleyman’s comments at the Aspen Ideas Festival had so triggered me.

“With respect to content that’s on the open web, the social contract since the 90s has been that it’s fair use,” he said.

“Anyone can copy it, recreate with it, reproduce with it. It’s been freeware if you like. That’s been the understanding.”

Suleyman’s no mug. I’ve followed his progress for years. He’s the English son of a taxi driver and a nurse, so not a US tech bro.

Until now, he’s been humble, and smart enough to avoid obvious landmines like this, so what changed?

Has Microsoft’s $4 billion cheque led him to the Kool Aid? Has he pivoted to the Big Tech playbook? Has the FTC’s probe into his deal turned him to the Dark Side?

I resolved to break down his interview line-by-line to analyse just how much one of AI’s blue flame thinkers has changed.

Doing so was also insightful for me to explore how Big Tech buys time, palms off regulators, and weaponises AI to frighten politicians, so it can grow unhindered.

Surprisingly, my research exposed a fragility in Big Tech’s strategy that can be used against them, and it suggests it has only three years left before the wheels fly off.

My search for the cause of Suleyman’s change of tone began 2,309 years ago. Let’s go…

Suleyman’s Aspen interview will be remembered as one of the most enlightening and honest descriptions of Big Tech’s true thinking around AI.

It was to the point, unapologetic, brutal at times, and uncompromising. He believes AI will usher in a future where human knowledge will be created by machines, for free.

And his views matter, as he’s the most powerful person in AI, navigating the world into a new era. He has his hand on the tiller of history and societal change.

He’s the Alter-Altman, willing to speak openly about AI’s potential to spark war with China, and how its future may be derailed by a lack of new data.

Late in the interview, a teacher in the audience asked Suleyman if Silicon Valley had lost touch with humanity, and whether it could learn from history’s scholars.

“I’m with you on that,” Suleyman responded.

“If you look back in history, the greatest scholars have all been multidisciplinary. There wasn’t this acute distinction between science and humanities.

“I’m a great believer in both. Multidisciplinary skills are going to be the essence of the future.”

It echoed something I know about Suleyman.

Tech tends to attract people who prefer the predictability of ones and zeroes over the ambiguity of real-life humans, which he sees as a flaw for the future of AI.

He’s advocated for tech companies to embrace the humanities, to better reflect the needs of people as well as the pursuit of profits. Dollars and sense.

I had hoped that his moderate voice would help techies realise that being tone deaf to the value of news and democracy, at humanity’s cost, would ensure that AI fails.

But his comments in Aspen suggested a change of heart, and that he had moved closer to Big Tech’s dogmatic view that winning at AI at any cost was what mattered.

And that took me back to ancient Egypt to place a lens of history on his comments.

The Library of Alexandria was established to store the knowledge of the world.

It was the brainchild of Ptolemy I Soter (above), trusted general of Alexander the Great, and built as an intellectual centre of excellence.

As a war veteran, Ptolemy believed that advancing mathematics, astronomy, medicine, and literature would unite peoples.

Librarians gathered 70,000 scrolls from across the world, and the library became a magnet for scholars to research, exchange ideas, and collaborate regardless of nationhood or religion, launching a golden age of scientific and cultural learning.

His goal was to preserve the IP of the world for future generations to benefit, making him one of the founders of the humanities concept that Suleyman so admires.

When Alexander died in 323 BC, Ptolemy installed himself as Pharaoh of Egypt, launching a 300-year dynasty that lasted until the death of Cleopatra.

Key to its longevity was its willingness to embrace diverse cultures in a fusion known as Cultural Syncretism.

The term syncretism itself was coined centuries later by the Greek philosopher Plutarch, its most famous advocate.

Plutarch, also an early advocate of vegetarianism, believed that blending distinct belief systems could result in new, unique cultures that benefited all.

This could be achieved through shared goals, respecting intellectual property, collaboration, the ethical use of knowledge, education, and partnership.

But Plutarch’s grand design came with a warning: It would fail if profit was the motive.

Plutarch wrote the biography of Lycurgus, the legendary lawgiver of Sparta (depicted in the statue above at Brussels’ law courts).

Lycurgus feared wealth’s corrupting influence would undermine Spartan society and passed strict laws requiring citizens to live austere lives and put society above profit.

It’s why we use the word spartan to describe simple or frugal today.

To cement his plan, he insisted that Sparta use iron as its currency, because it was too heavy to be carried in quantity. An ox was required to move even a small sum.

It was also quenched in vinegar to make it brittle and impractical for any other use.

Making currency valueless, and difficult to transport, discouraged the accumulation of wealth and focused Spartans on equality. In essence, the humanities.

Until…

One of his generals, Lysander, brought home gold and silver from a military victory over Athens in the Peloponnesian War, and the introduction of luxury lit a fuse.

The sudden pursuit of individual wealth undermined the Spartan economy, and sparked a social shift, leading to violence and corruption.

And after 1,000 years of growth, Sparta was ruined in just 33 years.

OK, so what does this have to do with Suleyman, and his changing narrative in Aspen suggesting that stealing the world’s information from the open web is OK?

Well, the open web in the 21st Century is synonymous with a global vision for learning and social cohesion. It’s the modern day’s great library.

The decision of publishers, and billions of people, to publish their knowledge, wisdom and IP there, is actually the “social contract” that society has made to share.

It is not, and never was, since the 90s or any other time, a licence to hand the IP of the world to Suleyman and Big Tech’s trillionaires for profit over people.

The accumulated knowledge of humanity is not freeware for venture capitalists. Capiche?

So, let’s allow Suleyman to justify himself in his own words. This is what he said on stage, and what it means in 2024.

Copyright, IP and the open web

Question: A lot of the information AI has been trained on came from the open web. Who’s supposed to own the IP and get value from it? Do you think AI companies have stolen the world’s IP?

“Yeah, I think that’s a very fair argument,” Suleyman responds.

“With respect to content that is already on the open web, the social contract of that content since the 90s has been that it’s fair use.

“Anyone can copy it, recreate with it, reproduce with it. It’s been freeware if you like. That’s been the understanding.

“There’s a separate category where a website, publisher, or news organisation, has explicitly said don’t scrape or crawl for any reason other than indexing me so that others can find that content.

“That’s a grey area, and that’s going to work its way through the courts.”

Question: What does that mean, it’s a grey area? 

“Well, so far, some people have taken that information - I mean, I don’t know who hasn’t - but that’s going to get litigated, and rightly so.”

OpenAI and Microsoft are both being sued by the New York Times, which is demanding billions in damages. The case is due to be decided in September.

Question: Do you think IP laws should be different?

“The economics of information are about to radically change because we’re going to reduce the cost of the production of knowledge to zero.

“In 15 to 20 years’ time, we’ll be producing new scientific and cultural knowledge at almost zero cost. 

“It’ll be widely open source, and available to everybody, and it’s going to be a true inflection point in the history of our species.

“What are we as humans other than a knowledge and intellectual production engine?

“We produce knowledge (and) our science makes us better. What we really want, in my opinion, are new engines that can turbocharge discovery and invention.”

If and when will AI take over?

Question: I want to talk about artificial general intelligence. That’s what people fear? Is that on your decade-long roadmap?

“It’s not a good idea to get fixated on super intelligence, when AI can do everything that a human can, but better. 

“If you push me, I have to say that theoretically it’s possible, and we should take the safety risks associated with that super seriously. 

“I’ve been advocating for the safety and ethics of AI for 15 years, so I care, but people lunge at it as though it’s inevitable, desirable, and coming tomorrow.

“As a species, we’re going to need to figure out a global governance mechanism, so tech continues to serve us, to make us more healthy and happier, more efficient, and add value in the world.

“There’s a risk that the technologies that we develop this century end up causing more harm than good. That’s just the curse of progress. 

“We have to face that reality, because these technologies are going to spread far and wide.

“Everybody is going to have the capacity to influence people at scales previously considered to be unimaginable. That’s where you’re going to need governance and regulation.”

How should AI be regulated?

Question: Leaders like you say we need regulation, but is it sincere, or is it a protection to say: Look, we told you there could be problems? And you know you’re never going to be regulated anyway. Look at social media…

“This is a false frame. Maybe because I’m a Brit with European tendencies, but I don’t fear regulation in the way that everyone (in the US) seems to think is evil.

“There’s no tech in history that hasn’t been successfully regulated. Look at the car, and it’s not just the car itself, it’s the streetlights, traffic lights, zebra crossings... 

“So, this is a healthy dialogue that we need to encourage and stop framing it as black and white. It’s a great thing.

“Technologists, entrepreneurs and CEOs, like myself and Sam (Altman of OpenAI), are sincere. 

“There will be downsides. I’m not denying that. Poor regulation can slow us down, make us less competitive, create challenges with our international adversaries…”

This is one of the more dog-eared pages from Big Tech’s handbook, designed to spook politicians by insinuating that blocking them hands the future to China.

Is AI safe?

Question: Microsoft has a deep relationship with OpenAI. One of the safety team said: I’m concerned we aren’t on a trajectory to get it right. What’s happening?

“I’m proud we live in a country, and operate in a tech ecosystem, where there can be whistleblowers, and they’re encouraged and supported.”

Question: When people are having this debate, what’s it over? Resources? How much money is devoted to safety? How quickly to move forward?

“Smart people can sincerely disagree about the same observations.

“Some individuals argue there have been measurable improvements. The models produce fewer hallucinations, absorb more factual information, integrate real-time information...

“People who are most scared argue they see a path over the next three to five years where that capability doesn’t slow down and gets better and better.

“The counter argument is that we’re actually running out of data. We need to find new sources of information to learn from.

“And we don't know that it’s just going to keep getting better.”

Running out of fresh human data, which forces models to train on AI-generated output, leads to what is known as model collapse. I described it as asparagus ice cream back in April, before it emerged in Google’s AI answers as glue on pizza.

Roughly 60 per cent of the world’s publishers have now made code changes to inform AI companies they do not want their content scraped to train AIs.
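Those code changes are typically robots.txt directives. A minimal sketch of how such rules work, using Python’s standard library parser: the user-agent tokens GPTBot and Google-Extended are the published opt-out names for OpenAI’s and Google’s AI crawlers, while the site and the SearchBot name are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a publisher might serve: block AI-training
# crawlers site-wide while still allowing ordinary search indexing.
robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
""".strip().splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# AI-training crawlers are refused everywhere on the site...
print(parser.can_fetch("GPTBot", "https://example.com/article"))    # False
# ...while a generic crawler may still index the same page.
print(parser.can_fetch("SearchBot", "https://example.com/article"))  # True
```

Crucially, compliance is voluntary: the file expresses the publisher’s wishes, and nothing technically stops a crawler from ignoring it.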

It’s why AI companies, with OpenAI at the forefront, are on a strategic mission to sign specific publishers up to AI deals to silence opposition and set precedents.

News Corp ($250 million), Axel Springer ($80 million), Financial Times (£7 million), The Associated Press ($5 million) are among the most prominent.

My first article in this newsletter, a year ago, urged tech and publishers to align, as I predicted AI would add $2 trillion in market value. That came true a fortnight ago.

But, back to Aspen…

Question: What restrictions should be put in place?

“We have to be careful about fearing the downside of every tool.

“There are going to be shifts that we have to make, but net net, I believe we embrace this tech, and respond with governance in an agile way.

“It’s clear that some of our kids were getting phone addiction. They were spending too much time on social media, making them feel anxious and frustrated.

“Frankly, it was probably obvious to those of us in technology sooner than we made a big fuss about it.

“So, we have to make sure that we are reacting with our interventions and governance as fast as we create the technology.”

Teenage angst, terrorism online, political polarisation, hate speech, and hallucinations are utterly obvious to Big Tech, but it is incentivised to do nothing.

When I tackled Google and Meta face-to-face on stage in the US on this, I predicted their executives would end up in jail. I still think that’s going to happen.

As I beat them up over Cambridge Analytica and addicting kids, they turned to the playbook and urged Government action, just as Suleyman does above.

If they throw the problem to the Government to fix, they know it will land in committees that are too slow, too old, too flat-footed, and too conflicted, to act.

Question: There are a number of entrepreneurs in this room. What would you advise them?

“There’s going to be a bifurcation. Those with the greatest capital infrastructure are going to be producing (AI) models which are eye-wateringly impressive.

“We’ll see knowledge spread faster than we’ve seen in the history of humanity.

“Open-source models, which are 100 per cent free, are within 18 months of (the same) quality and performance that the private space was in just a moment ago.

“(OpenAI’s) GPT3 cost tens of millions of dollars to train and is now available free and open source. You can operate it on a phone.

“That's going to make the raw materials necessary to be creative and entrepreneurial cheaper and more available than ever before.”

This infuriated me. ChatGPT was trained on stolen open web data. Whether it cost tens of millions, or nothing at all, is beside the point. It’s illegal.

Telling me that what you took is now free for everyone, and that’s so kind of you and a good thing, makes the anger hotter.

The tech playbook is to say let’s wait until the New York Times case ends.

Here’s my prediction. OpenAI will milk the Times’ lawyers until the last minute and then settle, leaving the copyright conundrum undetermined, in one of those “grey areas” of limbo so beloved of Big Tech.

Question: You’re now at one of the giants of tech. How do you think about the concentration of power as it relates to AI? You ended up at Microsoft, but there’s Google. Amazon’s trying to build AI. Even OpenAI decided to partner with Microsoft. Is this a bad thing? 

“Yeah, it makes me very anxious. Everywhere we look, we see rapid concentrations.

“Whether it’s in news media; the power of the New York Times, the Financial Times, the Economist, the great news organisations…

“Or the concentration of power around a few big metropolitan elite cities…

“Whether it’s in technology companies… 

“The fact is that over time, power compounds.

“Power has a tendency to attract more power because it can generate the intellectual resources, financial resources, to be successful and out compete open markets.

“On the one hand, it feels like an incredibly competitive situation between Microsoft, Meta, Google and so on.

“You know, it’s clearly also true that we’re able to make investments that are just unprecedented in corporate history.”

Here are some pages from the playbook.

  1. Obfuscation. Pointing to the consolidation of media companies as anticompetitive is cynical and self-serving.

But media companies consolidated not because they wanted more power, but because they could not survive alone under Big Tech’s monopoly, so it’s a furphy. Even Obama admits it.

  2. Inevitability. The claim that power always compounds is only true in the absence of competition, and that’s the reason antitrust laws exist.

Microsoft is the only Big Tech firm not to be currently facing antitrust action in the US courts, but a few more comments like this will soon change that.

  3. Monopoly. The fight between the tech firms on AI is so intense, it’s creating a competitive market.

Big Tech argues this means consumers will ultimately win through better products and lower prices. That’s a defence in antitrust. Only it’s not what’s actually happening.

  4. Exceptionalism. Big Tech argues that it’s writing big cheques that will make America great again.

But it’s doing so because its market dominance is so all-encompassing that the established economics of financial markets have been broken.

Buying companies commonly boosts the acquirer’s market cap so much that the deal effectively pays for itself.

Ultimately, the playbook exists to persuade politicians to maintain the status quo so Big Tech can invest in America beating the world on AI, because this is all about China, as I investigated last August.

And China made it to the agenda in Aspen, where Suleyman was uncommonly frank.

Question: Where are we relative to China?

“With due respect to my friends in Washington DC and the military industrial complex, if we approach this with an adversarial mindset, it can only be a new cold war.

“It will become a self-fulfilling prophecy. We’re going to be adversarial, so they have to be adversarial, and this is only going to escalate and end in catastrophe. 

“We have to find ways to cooperate, be respectful of them, while acknowledging that we have a different set of values.

“When I look out over the next century, peace is going to be a product of America knowing how to gracefully degrade the empire that we’ve managed over the previous century.

“(China) is a rising power of phenomenal scale with a different set of values than us.

“We have to find ways to coexist without going to war, because that would be terrible for both of us.”

On this, we agree.

US exceptionalism is baked into the American dream, and the pursuit of wealth is interwoven into that fabric.

Big Tech’s free run - and the governmental inaction that enabled it - has been a protection racket to put brakes on the shift of global power from the West to the East.

OpenAI recently dropped its objection to supplying AI tools to the US military. AI and cybersecurity are the new frontline in geopolitics.

Raising this primes American politicians to think that letting US tech grow unchecked is preferable to saving the media industry that sustains democracy.

Else, America burns…

And so it was for Ptolemy’s vision of a great library that would sow social cohesion around shared knowledge.

In 48 BC, Rome’s military leader Julius Caesar laid siege to Alexandria. He torched his ships to block the enemy fleet, and the flames spread to the dock, and then the library.

Centuries of knowledge and wisdom, as many as 700,000 scrolls from Assyria, Greece, Persia, Egypt, and India by some estimates, were turned to cinders.

It remains the most significant loss of human knowledge in history, and symbolises the fragility of heritage and the impact war has on human achievement.

The greatest loss of IP in history saw unique and irreplaceable literature, science, philosophy, and history obliterated. (That’s CTRL-ALT-DEL, geek dudes.)

Scholars estimate that it took humanity 1,000 years to return to the same level of sophistication that it had before greed took over.

One. Thousand. Years.

Copilot, now run by Suleyman, says:

Consensus among historians and scholars is that the loss represented a dark age for intellectual progress.

It disrupted the intellectual continuity that could have accelerated the progress of human civilisation.

Analyses suggest that the knowledge contained within the library could have propelled advancements in various fields, potentially altering the course of history.

So…

The open web, the modern-day’s Library of Alexandria, is being raided by profiteers.

Can we learn anything from Lycurgus and Sparta’s fall at the hands of greed?

Are there parallels between how a 1,000-year-old society ended in 33 years, noting that democracy’s 2,400 years old and Google just turned 25, and Microsoft 49?

Do we run the risk that by failing to protect IP, we might repeat the mistakes that set our species back 15 generations?

Might it be time to rein in today’s Caesars to safeguard global knowledge?

I’ll leave it to you, but perhaps think twice if you intend to Google it.

Ricky Sutton is a seasoned innovator and CEO of Future Media, a fast-growing Substack dedicated to exploring the intersection of Big Tech and Big Media. With a career spanning three decades, Ricky has a unique blend of experience in journalism and technology. He has reported from conflict zones, led global newsrooms, and advised major organizations such as News Corp, CNN, and Microsoft.

In the early 2000s, Ricky transitioned to the tech industry, where he founded Oovvuu, an AI-driven company that connects videos with articles to enhance storytelling. At Future Media, Ricky focuses on antitrust issues, the impact of AI, emerging business models for media, and the global antitrust movement, providing deep insights into the evolving media landscape.

Don’t be shy—hit reply if you have thoughts or feedback. I’d love to connect with you!

Until next week,

Psst… Did someone forward this to you? Subscribe here!
