A New AI Arms Race
By Alex Fink, CEO of the Otherweb
The internet has seen its share of arms races in recent decades. The advent of viruses set off an ongoing battle between those who write viruses and those who write antivirus software. The increase in spam made our email accounts unusable without spam filters. The proliferation of annoying ads made ad blockers necessary to maintain any semblance of sanity while browsing the web.
What is the most likely scenario, then, with regard to the recent breakthroughs in AI technology – namely, the large language models (LLMs) that most people know as ChatGPT or Bard?
Predictions vary from the catastrophic to the utopian. And to be sure, both scenarios are possible. But I would suggest that the most predictable outcome is substantially more mundane than either of these options.
The inevitability of junk
The power of large language models lies in their ability to generate text that resembles what could only have been produced by humans before. For the time being, their output can rarely be classified as original or brilliant; more often than not, it is derivative and superficial. But therein lies the rub – most of the content that humans produce is derivative and superficial, too.
Here’s an assortment of headlines from some of the best publishers of online content, to illustrate my point:
(screenshots taken by the author from cnn.com, nytimes.com, twitter.com, forbes.com and nbcnews.com)
We’ve already seen how, over the past 20 years, respectable outlets have gone from old-school journalism to elephants blowing bubbles. What do we expect to happen if the production of such bubbles takes 1/10th of the time it previously took? Or perhaps 1/100th?
As a general rule, lower cost results in higher quantity. And so, just as the increase in spam made our emails unusable without spam filters, the use of LLMs in online writing will make the entire internet unusable without junk filters.
AI vs AI
If we continue to follow the spam analogy, we might suspect that filtering junk is – in the general sense – an unsolvable problem. Every filter we create, no matter how perfect it is at a particular moment in time, will inevitably be circumvented by new tools and techniques.
Nevertheless, filtering will likely be necessary to discern any kind of signal in an ocean of noise. What tools might we use to try to compete with the various AI techniques that junk producers might employ?
This part of my prediction is less certain, but I still feel confident enough to make it publicly: the only filtering technology that can adapt to AI-based content generation must itself be AI-based. Rule-based systems require humans to articulate a solution after the problem has already been identified; with generative AI, it’s often impossible to articulate why things were done a certain way. The model learns, and what it has learned cannot be exported in legible form.
And thus, we have a problem that keeps morphing and cannot be legibly defined. The only toolkit with any hope of solving it is machine learning.
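To make the contrast concrete, here is a minimal sketch (in Python, using scikit-learn) of what a learned junk filter might look like. Everything specific here is hypothetical: the training examples, the labels, and the choice of TF-IDF features with logistic regression are illustrative, not a description of any real filter.

```python
# A minimal sketch of a learned junk filter: instead of hand-written rules,
# the model infers its own decision criteria from labeled examples.
# The training data below is invented for illustration; a real filter would
# need a large, regularly refreshed corpus to keep up with new junk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = junk, 0 = substantive.
texts = [
    "You won't BELIEVE what happened next!",
    "10 weird tricks doctors don't want you to know",
    "The central bank raised interest rates by 25 basis points on Tuesday.",
    "Researchers published a peer-reviewed study on protein folding.",
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: the model learns which word
# patterns correlate with junk, without anyone writing a rule for them.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["This one simple trick will change your life!"]))
```

The point is not this particular model, but where its decision criteria live: in learned weights rather than in rules anyone wrote down, which is exactly why they cannot be exported in legible form.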
Low-hanging fruit
As with other kinds of filters, we are likely to encounter a Pareto distribution, whereby a small number of filters accounts for a large fraction of the filtering. The vast majority of bad content – whether created by humans or by large language models – could likely be filtered out by relatively simple systems that focus on form, style, and other surface-level patterns. Each additional improvement to the filters will bring diminishing returns: more and more effort will be required to improve filtering capacity by a few percentage points.
It might make sense, then, to focus our initial attention on the low-hanging fruit. Content that is obviously bad (like the examples I provided above) should be filtered out first. Complex disinformation campaigns, orchestrated by content creators with a large expenditure of resources, can be handled later, once the bottom 90% is already taken care of.
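For illustration, a low-hanging-fruit filter might be little more than a handful of form-and-style heuristics. This sketch is one possible reading of "simple systems that focus on form and style"; the patterns and the capitalization threshold are invented for the example, not drawn from any production system.

```python
import re

# Illustrative clickbait patterns; invented for this example.
CLICKBAIT_PATTERNS = [
    r"you won'?t believe",
    r"\d+\s+(weird|simple)\s+(tricks?|reasons?)",
    r"what happened next",
    r"doctors hate",
]

def looks_like_junk(headline: str) -> bool:
    """Flag headlines that match obvious clickbait patterns or are mostly caps."""
    lowered = headline.lower()
    if any(re.search(pattern, lowered) for pattern in CLICKBAIT_PATTERNS):
        return True
    # Excessive capitalization is another cheap stylistic signal.
    letters = [c for c in headline if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.6:
        return True
    return False

print(looks_like_junk("You Won't Believe These 7 Weird Tricks"))        # True
print(looks_like_junk("Central bank raises rates by 25 basis points"))  # False
```

Cheap checks like these will never catch sophisticated junk, but per the Pareto logic above, they don't have to; they only have to catch most of it.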
Generalizing to other spheres
In all likelihood, writers are not unique in their capacity for derivative and superficial work. As generative AI models improve, their use will likely generalize to audio, images, videos, and other forms of content that are created exclusively by humans today.
It’s neither the end of the world nor utopia. Rather, we are entering an age of broken mirrors where most content will be fake, junk, or fake junk – and we must develop new tools to find needles in these haystacks of bad information.
There’s an obvious need here, and an obvious need typically means a market.
And the market for content filters will likely be proportional in size to the market for generated content.
About the Author
Alex Fink is the Founder and CEO of the Otherweb, a Public Benefit Corporation that helps people read news and commentary, listen to podcasts, and search the web without paywalls, clickbait, ads, autoplaying videos, affiliate links, or any other junk. The Otherweb is available as an app (iOS and Android), a website, a newsletter, or a standalone browser extension.
FOR MORE INFORMATION VISIT: www.otherweb.com