Future of AI: A Series of Blog Posts (Future of AI #0)

Simon Baars
5 min read · May 27, 2024


My life in San Francisco has led me to meet some of the brightest minds in AI research. Through many discussions, I’ve become increasingly convinced that the next ten years will present a pivotal moment in human history through advances in AI.

Over the next couple of weeks, I will release a series of blog posts laying out my predictions for the future of AI.

In this article, I'll give some context for those predictions.

The Future of AI

My prediction: Technology will become the main driver behind most processes and systems on Earth.

I used to be quite skeptical about the speed of technological advancements. I did not quite understand what it would mean for AI to be smarter than us. In my view, computers have always been smarter than us. I cannot do five hundred calculations in a second, but my phone can! That doesn't mean my phone will take over the world.

My stance changed when I critically examined the history of technological advancements and then extrapolated it into the future.

As a Software Engineer, I’ve regularly visited companies to see which employees we could most easily automate away. For example, consider an employee responsible for generating weekly financial reports for a company. I could observe their work, see which logical processes they go through to compile the report, and write a program to automate it. It’s an easy way for a company to save money.
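To make that concrete, here is a minimal sketch of what such an automation script could look like. The file name and the column names ("date", "amount", "category") are my own assumptions for illustration, not taken from any real company:

```python
# Hypothetical sketch: automate a weekly financial report from a CSV export.
# The file name and column names ("date", "amount", "category") are assumptions.
import csv
from collections import defaultdict
from datetime import datetime

def weekly_report(path: str) -> dict[str, float]:
    """Sum transaction amounts per category for the current ISO week."""
    totals: dict[str, float] = defaultdict(float)
    this_week = datetime.now().isocalendar()[:2]  # (year, week number)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            date = datetime.fromisoformat(row["date"])
            if date.isocalendar()[:2] == this_week:
                totals[row["category"]] += float(row["amount"])
    return dict(totals)

if __name__ == "__main__":
    for category, total in weekly_report("transactions.csv").items():
        print(f"{category}: {total:.2f}")
```

A script like this replaces an hour of manual spreadsheet work with a job that runs unattended every week.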

Over the years, I’ve become faster at writing those scripts through improved tooling, and thus have a greater ability to automate processes that are currently done manually. ChatGPT gave me a huge productivity boost, allowing me to create software faster than ever!

We're already seeing AI come for a lot of jobs: writers, artists, musicians, and so on. AI can do a similar job more cheaply, so over time fewer such positions will be necessary.

OpenAI’s announcement of real-time video chat with their GPT-4o model calls into question the future need for teachers, call-center agents, and a number of other social jobs.

Until recently, I was quite skeptical about AI taking physical jobs. Compute power is cheap; hardware and robotics usually come at a higher expense. But over a longer horizon, say 5–10 years, it’s not unreasonable to envision AI playing a big role in that domain too.

As time goes by and technology becomes more sophisticated, it will take job after job until there’s hardly any human intervention needed.

AI vs Humans

My prediction: For the foreseeable future, AI will not possess the full range of human capabilities.

Sure, it will exceed humans at most cognitive tasks. But there are many tasks at which humans will continue to deliver qualitatively different results.

AI is a sophisticated computer program that functions primarily as an input-output machine. Given an input, it maps it through the layers of a neural network and produces an output. By adding actuators, it can cause real-world effects, but it ultimately relies on logical processes and hardware.
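As a toy illustration of that "input-output machine" idea: an input vector gets pushed through a couple of weight matrices and an output comes out. The sizes and random weights below are arbitrary; the point is only that the whole process is a deterministic function of the input and the learned parameters.

```python
# Toy illustration of a neural network as a pure input -> output mapping.
# Layer sizes and (random) weights are arbitrary; this is not a trained model.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 3))   # hidden layer -> output layer

def forward(x: np.ndarray) -> np.ndarray:
    """Map an input vector to an output vector: same input, same output, every time."""
    hidden = np.tanh(x @ W1)   # internal vector representation of the input
    return hidden @ W2         # output (e.g. scores over possible actions)

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))
```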

Humans are an all-in-one flesh-and-blood package. The way our brain is “trained” is vastly different from how AI models are trained. AI can write text, but I, as a human, can write down my actual emotions, which are the result of my unique journey through the world.

From the outside, the difference can become incredibly subtle. But it is a difference that very much matters. AI is modeled after us, not the other way around. Unless we clone ourselves at the atomic level, AI won’t be a ‘better version of us’. It’s a different class of cognitive machine, with its own strengths and weaknesses.

No AGI

My opinion: AGI is a bad idea from a security perspective; we’d be better off creating isolated, single-purpose AIs.

In Software Engineering, I’m a big advocate for creating simple self-contained systems. They solve a large number of problems when it comes to security, transparency, and scalability.

As we scale AI to enable more capabilities and allow real-world output, increasing the ‘separation of concerns’ seems like a good idea.

Say, we’re setting up a pipeline to make paperclips. (Hmm, where did that example come from?)

Instead of one big AI model that pursues whatever strategy maximizes the number of paperclips, we can have multiple systems work together on the task:

  • Stock module: An AI system controlling the amount of stock. It doesn’t interface with the outside world; it’s limited to monitoring stock and making sure it matches demand.
  • Ordering module: Interfaces with other systems, for example, a system related to steel mining and refining. Those systems are most likely also controlled by AI. It has an observable input/output stream for this communication.
  • Sales module: Manages the sales of paperclips. It is the only module in the pipeline with a line of contact with customers. Again, with an observable input/output stream.
  • Security module: An AI system with no incentive whatsoever for the production of paperclips. Its only purpose is to verify that the other AI models behave in ethical ways. If a violation is detected, this system has the authority to reset or shut down the other subsystems.

It is inevitable that such “modules” will share some common logic. But at least the goal of the entire system isn’t to “maximize paperclips”, which is a recipe for disaster. Instead, we define a set of “submodules”, each of which has only the bare-minimum access it needs to do its part of the job. A security module whose sole task is ensuring ethical and secure behavior of the other subsystems could catch situations where the system drifts into unintended behavior.
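Here is a minimal sketch of what that separation could look like in code. The module names and interfaces are my own illustration, not an existing framework: each module sees only a narrow slice of the world, every message goes through an observable log, and only the security module holds the authority to shut another module down.

```python
# Hypothetical sketch of "single-purpose" AI modules with observable I/O and
# a security module that can disable the others. Names and checks are illustrative.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    enabled: bool = True
    log: list[str] = field(default_factory=list)  # observable input/output stream

    def send(self, message: str) -> None:
        if self.enabled:
            self.log.append(message)

@dataclass
class SecurityModule:
    """Has no stake in paperclip production; it only audits the other modules."""
    banned_phrases: tuple[str, ...] = ("convert all available matter",)

    def audit(self, modules: list[Module]) -> None:
        for module in modules:
            if any(phrase in entry for entry in module.log for phrase in self.banned_phrases):
                module.enabled = False  # authority to shut down a misbehaving module
                print(f"Security: disabled {module.name}")

stock = Module("stock")        # monitors stock levels only
ordering = Module("ordering")  # talks to upstream suppliers only
sales = Module("sales")        # the only module with customer contact

ordering.send("order 2 tons of steel wire")
sales.send("convert all available matter into paperclips")  # a policy violation

SecurityModule().audit([stock, ordering, sales])
```

The design choice that matters here is that the audit logic lives outside every module that has an incentive to produce paperclips, and it acts only on the observable message streams.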

So should we whip out ChatGPT, add a system prompt saying “you are a security module, govern the other systems”, and hope for the best?

No.

This is not a system-prompt kind of task; it’s a model-training kind of task. Only if the problematic data was never part of training can we safely assume the model won’t fall into bad patterns. Curating which data to include without degrading the quality of the model is a hard problem, but it’s certainly achievable.

To make this actionable, I imagine some alignment research should focus on transforming or masking data unrelated to a specific use case while preserving high-quality output on requests related to that use case. For example, if the use case is “aiding in a medical emergency”, measure the quality of the output for prompts in that domain, mask the unrelated data, then measure again. By improving either the masking or the model’s training, we aim for near-identical results.
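A rough outline of that experiment is below. This is a sketch of the idea, not an existing evaluation pipeline: the keyword filter stands in for a real relevance classifier, and the scoring function stands in for actually training a model and benchmarking it.

```python
# Hypothetical outline of the masking experiment: filter training data down to a
# single use case, then compare output quality on that use case before and after.
# The keyword filter and the scoring function are placeholders for real components.

MEDICAL_KEYWORDS = ("injury", "cpr", "bleeding", "emergency", "dosage")

def is_relevant(example: str) -> bool:
    """Stand-in for a real relevance classifier for the 'medical emergency' use case."""
    return any(keyword in example.lower() for keyword in MEDICAL_KEYWORDS)

def mask_dataset(dataset: list[str]) -> list[str]:
    """Keep only use-case-relevant examples; everything else is excluded from training."""
    return [example for example in dataset if is_relevant(example)]

def evaluate(training_data: list[str], test_prompts: list[str]) -> float:
    """Placeholder for 'train on training_data, then benchmark on test_prompts'.
    Here we just check how many prompts share a use-case keyword with the data."""
    def covered(prompt: str) -> bool:
        return any(
            keyword in prompt.lower() and keyword in example.lower()
            for example in training_data
            for keyword in MEDICAL_KEYWORDS
        )
    return sum(covered(p) for p in test_prompts) / max(len(test_prompts), 1)

full_data = [
    "How to perform CPR on an adult",
    "Best pizza dough recipe",
    "How to stop severe bleeding",
]
test_prompts = [
    "What should I do about a bleeding wound?",
    "What is the recommended CPR compression rate?",
]

masked = mask_dataset(full_data)
print(f"kept {len(masked)} of {len(full_data)} training examples")
print("score on full data:  ", evaluate(full_data, test_prompts))
print("score on masked data:", evaluate(masked, test_prompts))
```

The goal is that the score on the masked data stays close to the score on the full data, while everything unrelated to the use case never enters training in the first place.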

Upcoming

A new article in the Future of AI series will arrive at your doorstep every Monday at 12 PM PST.

Stay tuned!


Written by Simon Baars

Yet another guy making the internet more chaotic with random content.