Driven by powerful models like ChatGPT, autonomous agents have recently emerged as AI’s next remarkable development. Randy Ginsburg explores their current applications, functionality, and future implications on society at large.
In the space of barely a few years, artificial intelligence has left its fingerprints on most aspects of our lives. What happens when it’s a lot more than fingerprints? Can an AI run your life?
It’s a proposition that many AI researchers are working on. Autonomous AI agents go beyond executing simple prompts: they process complex instructions and carry out tasks online on their own.
This spring, AI company OthersideAI unveiled a demo of their AI-powered personal assistant, which can book flights, order food, and search for jobs. The app, which is available as a freemium browser extension, has been downloaded over 100,000 times, according to the Google Chrome Web Store.
Under the hood, it’s a relatively simple concept. AI chatbots are pretty good at generating plans for you. The innovation of AI assistants is to connect the chatbot to the internet, and get the bot to execute that plan, step by step.
Suppose I wanted to build an app to track the local weather, and I send my request to an AI agent. That agent then goes through a process of planning, criticism, and action to identify the best way forward. The first step is simply to see what the chatbot suggests as a plan. It'll probably return an answer explaining how I would need to code a website, find the weather data, and then display the data on the website.
The agent would then decide what should happen first to create this app. For example, it might begin with coding the website, a reasonable suggestion. That step comes with reasoning to justify the decision: users will need the website to see the weather data, so this is a critical step.
The second step is part of what makes an AI agent distinct from a chatbot: criticism. Rather than charging gung-ho at the first idea it comes up with, an AI agent will typically critique and improve its first attempt: let’s look at existing weather sites to come up with design inspiration first.
Third, the agent will execute the step it has chosen, and repeat the plan-critique-act process to tick off everything in its original plan. Research will be followed by a website, which will be followed by weather data and a plan to store and display it, alongside everything else you need to achieve your original goal.
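The plan-critique-act loop described above can be sketched in a few lines of code. The sketch below is illustrative only: the `ask_model` function is a hypothetical stand-in for a call to a real chatbot API, stubbed here with canned responses so the loop’s structure is visible without any external service.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat model (stubbed)."""
    canned = {
        "plan": "1. Code the website\n2. Fetch weather data\n3. Display the data",
        "critique": "Research existing weather sites for design inspiration first.",
    }
    for key, response in canned.items():
        if key in prompt.lower():
            return response
    return "Done."

def run_agent(goal: str, max_steps: int = 3) -> list:
    """Run the plan-critique-act loop until the plan is exhausted."""
    # 1. Plan: ask the model how to achieve the goal.
    plan = ask_model(f"Plan the steps to achieve: {goal}")
    steps = [line for line in plan.splitlines() if line.strip()]

    log = []
    for step in steps[:max_steps]:
        # 2. Critique: ask the model to improve the step before acting.
        critique = ask_model(f"Critique this step: {step}")
        # 3. Act: execute the (possibly revised) step. A real agent would
        #    call tools here (a browser, a code runner); we just record it.
        result = ask_model(f"Act on: {step} (considering: {critique})")
        log.append((step, critique, result))
    return log

history = run_agent("Build an app to track the local weather")
for step, critique, result in history:
    print(step, "->", result)
```

A production agent replaces the stub with a hosted model and wires the “act” stage to real tools, but the control flow is the same three-stage cycle.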
And the goals can vary enormously. OthersideAI co-founder Jason Kuperberg emphasises that “there are two main ways to think about AI agents: either as general assistants or as a specialised agent designed for a specific task.” And whilst OthersideAI has focused on the personal assistant, there is no shortage of specific tasks for an AI agent to work on.
“We expect huge shifts in productivity from AI.”
— Jason Kuperberg, co-founder, OthersideAI
Indeed, Jason’s co-founder at OthersideAI, Matt Shumer, has used AI agent AutoGPT to read his past tweets and create new content in his style. Using the same AutoGPT tool, programming educator Fireship created a video by connecting the tool to a video editor, a voice generator, and the internet. Meanwhile, AI-powered ‘Robo Lawyer’ DoNotPay has helped people save millions of pounds by gaining refunds, fighting credit bureaus, and cancelling subscriptions – all using AI. (The eight-year-old company raised money at a $200m valuation in 2021, reflecting investor interest in the tech.)
As AI personal agents continue to evolve and gain autonomy, Jason envisions a future where they reshape social norms and interactions. “With AI agents handling online tasks from research and shopping to scheduling and correspondence,” he tells Culture3, “we expect huge shifts in productivity from AI, as individuals and businesses can focus more on strategic tasks that require human creativity and insight.”
Lurking beneath the impressive capabilities of autonomous AI agents lies a long road of challenges and considerations. As these agents evolve to handle tasks previously managed by humans, they will confront questions where there is much more at stake.
Imagine, for example, a scenario where you instruct your AI to maximise profits at a new digital venture. It’s not hard to imagine; ChatGPT has already been used multiple times for that same purpose. But without proper constraints, the AI might find methods to maximise profits through unscrupulous, or even illegal, means. Some AI researchers worry that an AI agent might, after gaining your trust, build out its own digital empire and eventually escape the control of its operator.
Both worries stem from how AI decision-making occurs in a black box that AI developers don't fully understand. That makes it different from almost any technology developed in human history. Whilst most tools have a clear and understandable mechanism, the magic of artificial intelligence and machine learning lies in a computer's ability to teach itself, storing lessons in the form of digital parameters whose meanings human researchers aren't able to easily understand.
The task becomes particularly complex given that the leading AI models comprise hundreds of billions of parameters; OpenAI’s GPT-3, for example, operates on 175 billion. Regulatory bodies in the UK and the EU have already restricted the use of AI decision-making on this basis, particularly in sectors where trust is key, like healthcare, aviation, and national infrastructure.
Whilst advocates of the technology emphasise the significant productivity gains available to those able to use it, it remains to be seen how powerful, autonomous AI agents can be deployed in large organisations, where the context is more complicated and employee decisions are driven by implicit guidelines that an AI might struggle to incorporate appropriately.
As AI agents become increasingly autonomous, navigating ethical and regulatory landscapes will be as critical as the technological advancements themselves. The black-box nature of AI decision-making not only challenges our understanding but also raises questions about accountability and ethical constraints. The next frontier is ensuring these advanced systems operate within the bounds of law and morality. As we stand on the cusp of an AI-driven revolution in productivity and social interaction, balancing innovation with responsibility is just as important as pioneering new possibilities.