Breaking Things at the Speed of Thought

In the second issue of our AI+ blog series, where we examine the interplay of AI with other facets of our lives, Jon Minton argues that AI development demands patience rather than speed: patience protects the interests of users and lets AI build better infrastructures instead of threatening the ones we have.

I graduated high school in 1996, the year the U.S. Congress passed the Communications Decency Act. Section 230 of the law shielded companies that created software or provided internet and other computer services from liability when a third party misused their systems. This made sense in the internet’s infancy, when we first grabbed the tiger by the tail. At the time, the internet consisted mainly of static information, so the comparison people reached for was a bookstore: a bookstore shouldn’t be liable for selling a book that caused harm. The analogy nobody thought to make was to a bartender selling alcohol to a minor, or to an automobile manufacturer using defective parts that only caused problems when driving at 80mph on the highway.

I started programming for a living in 2003, just before Facebook adopted its motto, “Move fast and break things.” In the new world of tech, companies wore breaking things as a badge of honor. In the early 2000s, this was easy to swallow because the internet seemed remote, like a book or a video game that might seem scary but couldn’t hurt you. Back then, we couldn’t have imagined that these broken things would include a rise in suicide rates (especially among children), breakdowns in democracy, and ethnic violence. We were cautiously optimistic in 2014, when Facebook’s motto changed from “move fast and break things” to “move fast with stable infrastructure.” But Zuckerberg didn’t follow the new motto with action, and, in 2023, he apologized to parents for the harm his ‘stable infrastructure’ is not only continuing to cause but causing with more efficiency and profitability than ever before. And we wait to see what strapping the jet engine of AI to this thing-breaking model will do to us individually and collectively.

AI is not well understood.

Most of us understand AI through applications like ChatGPT. But we won’t see its full reach until companies, schools, and governments use Application Programming Interfaces (APIs) to integrate engines like GPT-4 into existing software. I’m currently programming in the behavioral health industry, which is in the same bind as every other industry: struggling to understand what AI is and how to use it. Businesses will integrate AI the way they integrate every technology, guarding their trade secrets from competitors. Developers will learn to use AI in a bubble, without help from the engines’ creators, relying on books, articles, and trial and error.
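To make the API-integration point concrete, here is a minimal sketch of what wiring an AI engine into existing software might look like. The endpoint, model name, and the behavioral-health summarization use case are illustrative assumptions, not details from any real product; actually sending the request would also require an API key and an HTTP client, which are deliberately omitted.

```python
import json

# Hypothetical endpoint and model id, shown for illustration only.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_summary_request(clinical_note: str, model: str = "gpt-4") -> dict:
    """Build the JSON payload a behavioral-health app might send to a
    hosted model API to summarize a clinician's free-text note.

    This is the 'integration' step the article describes: the app wraps
    its own domain context (the system prompt) around the user's data.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": (
                    "You summarize behavioral-health progress notes "
                    "in two sentences, omitting identifying details."
                ),
            },
            {"role": "user", "content": clinical_note},
        ],
        "temperature": 0.2,  # low randomness for consistent summaries
    }

payload = build_summary_request("Client reports improved sleep this week.")
print(json.dumps(payload, indent=2))
```

The point is that the engine’s creators supply only the model behind the endpoint; everything else, from the prompt design to how the response is used, is built in exactly the kind of bubble the paragraph above describes.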

AI presents unique challenges because we’re struggling not only with how to implement it but also with what to do with it in the first place. The problems that Zoom and FaceTime solve are easy to understand. With a hammer, we know we need one to drive a nail and don’t need one to saw a board. With AI, questions of use aren’t nearly as clear-cut. This was clear when Amazon limited e-book publications to three per day, a reaction to the flood of AI-generated books and complaints about their quality. Part of the problem is that none of the players could agree on what problem AI should solve. Writers want help with research and grammar. Publishers want a mechanism to produce cheap content. Consumers want as much high-quality content as they can get.

Silent in these debates are the creators of the engines, who promise they’re gifting us a Swiss Army knife able to tackle any household chore, yet offer no practical help when their technology is misused. When schools and universities realized how prolific cheating with AI had become, educators let out a collective sigh and cracked open a copy of AI for Dummies. Educators do have technological allies, such as Turnitin’s AI-detection tools. But Turnitin is not an AI company, and universities reported mixed results, in some cases abandoning the tools entirely.

This process of diffusion and adoption is standard in technology. We watched it most recently when COVID-19 forced many into virtual work or learning. That transition was relatively seamless at large corporations, which could throw money at the problem and were already on the bleeding edge of remote work. Smaller businesses couldn’t train workers properly or invest in the necessary infrastructure, and record numbers of small and medium-sized businesses closed because they couldn’t adapt. Rolling out teleconferencing on a mass scale was challenging enough; AI threatens to move faster and grow exponentially more complex.

AI runs on stolen works.

And what powers these engines? A teleconferencing engine, by contrast, requires nothing more than what the programmer puts into it. An AI application like ChatGPT requires developers to create the learning model, but it also demands vast amounts of training material. In 2016, Facebook claimed its facial recognition model was more accurate than the FBI’s. If that claim was true, I’m not surprised: Facebook has a readily available dataset of faces. They have vaults of faces that we entrusted them to store securely. Facebook has our voices, our words, and our thoughts. The company is sitting on mountains of our children’s photos, records of the places we’ve been, and personality-revealing patterns in the pages we’ve looked up. And when their AI engines demand the oil of big data, they’re quick to adopt America’s energy policy: “Drill, baby, drill.”

Creators of AI placate us with ethics panels and assurances of safeguards. But if those safeguards existed, why are US authors suing OpenAI and Microsoft? Despite all the technical jargon, ethics panels, and empty promises, the authors suing for copyright infringement recognized that the makers of these engines had taken something that did not belong to them and called it their own. They have a legitimate claim that this is not an isolated problem with how companies use the technology, but that AI engines run on the works of others.

Many countries require sync licenses to sample someone else’s music in your own music or videos. They require citations when you quote others’ writing in your own. So why are Microsoft and OpenAI being sued for copyright infringement? Because their creators tout AI engines as entities that learn about a style of music or writing and are inspired to create their own, rather than as the next evolution in sampling. Even more worrisome is the speed at which they create these works. Hundreds of AI-generated books forced Amazon to cap e-book publications at three per day. Amazon, with all its resources, could not move as fast as AI and hit the brakes.

Humans move slowly. Computers don’t.

In 1934, the US Congress passed the Communications Act, a replacement for the temporary Radio Act and the catalyst for the Federal Communications Commission (FCC). Despite its critics, the FCC remains a regulatory body to this day, attempting to keep the radio spectrum open and available to everyone and to prevent a takeover of the airwaves by a few large entities. But the critics have a point: the FCC has largely failed to regulate the internet, and we’re at risk of a technological oligarchy. That’s concerning if we think of the internet as existing since the early nineties, but these technologies are much older. In 1983, the Defense Data Network officially adopted the TCP/IP standards, the protocols that still carry internet traffic today, and even that was just a step beyond the original network in use since the 1960s. Forty years to figure things out, and we’re still struggling. AI has been adopted at massive scale only since 2017, and it threatens to evolve and disrupt in ways the internet hasn’t.

The EU Artificial Intelligence Act and other international actions give me hope that the dangers of emerging AI and its applications are becoming better known and that people are demanding safeguards. But it is still a proposed law and the first real effort. Will we continue to advocate for our safety as businesses, governments, financial institutions, militaries, and criminal justice systems install their shiny new AI engines?

We stand at a crossroads, watching AI threaten to pass us by, wondering how we will keep up. But the truth is we will never be able to move fast enough, and the question should be, can we slow things down?

Beyond all the hype, AI is a tool that should help build better infrastructures, not threaten the ones we have. Despite what Microsoft and Google tell us about the need for speed, AI demands patience. 
