AI and the future of robotics and automation
AI is likely to change our lives. We believe that the advances already made in the area will significantly improve productivity and create large market opportunities for the robotics and automation industry.
Artificial intelligence (AI) has come a long way since its inception in the 1950s. From the early days of rule-based systems used to perform analysis and make predictions, to more recent advances in machine learning and deep neural networks enabling its use in more creative, generative tasks, AI today is vastly more powerful and offers exciting possibilities for many industries.
In the coming years, AI will probably change our lives and likely the entire world; however, experts struggle to agree on exactly how. In this article, we explore how AI may shape the future of robotics and automation, creating significant growth opportunities as well as challenges, and providing a powerful long-term secular driver for the theme.
Key takeaways
- Even if we are still many decades away from achieving general AI, the advances already made in today's narrow AI give robotics and automation systems a greater level of autonomy.
- With greater autonomy, robots become incrementally more useful and easier to use, and their addressable market increases significantly.
- As AI continues to advance, the market opportunities for the robotics and automation industry will continue to grow.
Mute and brute robots
Factory robots are high-performance tools that operate with great precision and speed, and often run for more than ten years with very little downtime, 24 hours a day, seven days a week. This makes them extremely useful in large-scale manufacturing, where they can be programmed to perform the same task tirelessly, for a huge volume of goods. For humans, this type of work would be considered dull and repetitive – and often physically exhausting.
Such robots are found most often in car and flat-panel manufacturing and in food and chemical production; more specialized robotics can be found in semiconductor fabrication. However, in many other industries there is a surprising lack of robots. The average robot density in manufacturing is just 14.1 robots for every 1,000 factory workers.1 The reason is that most robots and automation systems follow a pre-programmed set of instructions, and re-programming a robot to perform a different task can take weeks or even months.
For many manufacturers, that lack of flexibility is not practical. They need to be able to switch production from one item to another quickly and cannot afford the time or technical expertise to re-program a line of robots. In fact, factory robots are suitable only for the few high-volume, low-variability industries (autos, food, chemicals, semiconductors, etc.).
Andrea Thomaz, associate professor at the University of Texas and co-founder and CEO of Diligent Robotics, describes pre-programmed robots as "mute and brute"2 – machines that silently perform a routine of tasks repeatedly with great efficiency and precision, but are unaware of their surroundings. This makes them potentially dangerous to anyone nearby, so robots are typically placed in physical or virtual safety cages to avoid accidents. Inside a safety cage, however, their usefulness is reduced, since they must be able to perform the entire task without outside help. This means that for most tasks in most factories in the world, the human worker remains far more adaptable and useful than their synthetic counterparts.
A robot evolution
That is now starting to change. Over the last 20 years, the power of computer processors has increased exponentially and, at the same time, economies of scale and Moore's Law have lowered the costs. Platform technologies have also evolved. Internet speeds and coverage (for mobile and fixed line) continue to increase and cloud service providers, such as AWS, Azure, Google, and Alibaba, offer data storage and compute services on demand.
Although the origins of AI date back 70 years, the recent increase in computing power, the availability of platform technologies, and the vast proliferation of data enable AI to be far more powerful today than ever before.
AI is the brain of the robot
In robotics and automation, you can think of AI as the brain of the system. Before AI, robots were automatons, performing a pre-determined set of coded instructions. With AI technology, robotic systems are becoming increasingly autonomous – able to respond to and learn from changes in their surroundings. More autonomous robots are also easier to set up and use, and are safer. Each of these benefits significantly expands the useful applications of robotics and extends their addressable market potential:
1. Dynamic autonomy
In combination with sensors and machine vision, AI can give robots the ability to learn and adapt to new situations on the fly. In other words, they can make decisions based on their surroundings and adjust their behavior accordingly.
For example, a robot tasked with sorting items in a logistics center can learn how best to pick up an unknown object. An autonomous vehicle can learn to correctly identify and respond appropriately to obstructions in its path, in different driving conditions, and can share that information with all the other cars on the road.
Greater autonomy can allow robots to operate independently in places where there are no humans, or in environments that may be hazardous to humans. For example, ANYmal, the four-legged autonomous robot for industrial inspection tasks from ETH Zurich spin-off ANYbotics, is in commercial use at power and chemical plants, performing inspections and gathering data from systems in the plants.3 It can also perform customized tests such as thermal imaging to ensure that critical parts are not overheating.
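To make the contrast with pre-programmed automation concrete, below is a minimal sketch of the sense-decide-act loop that underpins this kind of dynamic autonomy. Everything here is a hypothetical stand-in: read_sensors, decide, and act are invented names for a robot's perception, policy, and control layers, and a real system would use trained models rather than the hand-written rule shown.

```python
import random

def read_sensors():
    """Stub for the robot's perception stack: here, just the distance
    (in meters) to the nearest detected obstacle."""
    return {"obstacle_distance": random.uniform(0.0, 5.0)}

def decide(observation, min_clearance=1.0):
    """Choose an action from the current observation. In a real system
    a learned policy would replace this hand-written rule."""
    if observation["obstacle_distance"] < min_clearance:
        return "steer_around"
    return "continue"

def act(action):
    """Stub for the motion controller."""
    print(f"executing: {action}")

# Sense-decide-act loop: each cycle the robot re-reads its surroundings
# and adjusts its behavior, rather than replaying a fixed program.
for _ in range(5):
    act(decide(read_sensors()))
```

The point of the loop is that behavior is derived from live observations on every cycle, rather than from a fixed script written in advance.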
2. Faster set-up
Traditional robotic systems require extensive programming to set up, which can be time-consuming and expensive. With the use of AI, robots can be trained using data and debugged through simulation software. One method is programming by demonstration (PbD), where the robot is shown the task being performed by a human and mimics it.
Most collaborative robots, or cobots, can be set up with PbD, making them suitable for small-batch tasks where frequent set-up changes are needed. In some systems, AI also enables voice control, further simplifying set-up and operation.
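In its simplest form, PbD can be pictured as recording poses during a human-guided demonstration and then replaying them. The sketch below illustrates only that basic idea; record_pose and move_to are hypothetical stand-ins for a cobot controller's interfaces, and real PbD systems generalize across demonstrations rather than replaying a single one verbatim.

```python
import time

recorded_waypoints = []

def record_pose(joint_angles):
    """Sample one demonstrated pose, e.g. while a human physically
    guides the cobot arm through the task."""
    recorded_waypoints.append({"t": time.time(), "joints": list(joint_angles)})

def replay(move_to):
    """Reproduce the demonstration by driving the arm through each pose."""
    for waypoint in recorded_waypoints:
        move_to(waypoint["joints"])

# Demonstration phase: three poses sampled from a human-guided motion.
for pose in ([0.0, 0.5, 1.0], [0.2, 0.4, 0.9], [0.4, 0.3, 0.8]):
    record_pose(pose)

# Execution phase: the cobot mimics the demonstrated trajectory. Here
# move_to is a print stub; on real hardware it would be the controller's
# motion command.
replay(lambda joints: print("moving to joint angles", joints))
```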
3. Safer and cleaner
Robots with sensor arrays can use AI technology to detect objects in their environment and respond appropriately by slowing down or pausing operation. They can also be programmed to shut down in the event of a malfunction or emergency. These advances can reduce the risk of accidents in the workplace. Of course, robots that are safe for humans to work with can perform a wider variety of tasks; since the safety cage may no longer be required, a human worker may step in to assist the robot when needed.
AI can also enable greater precision and efficiency, and thereby lower energy consumption (see example below: Rolls Royce), reduce material waste and the production failure rate, and enable more sustainable manufacturing practices.
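The slow-down-or-pause behavior described above can be illustrated with a toy speed-scaling rule, in the spirit of the speed-and-separation monitoring used with collaborative robots. The thresholds below are invented for illustration; in practice they would be derived from safety standards and a risk assessment of the specific installation.

```python
def safe_speed(person_distance_m, stop_dist=0.5, full_speed_dist=2.0,
               max_speed=1.0):
    """Scale the commanded speed by proximity to the nearest detected
    person: protective stop inside stop_dist, full speed beyond
    full_speed_dist, and a linear ramp in between."""
    if person_distance_m <= stop_dist:
        return 0.0
    if person_distance_m >= full_speed_dist:
        return max_speed
    ramp = (person_distance_m - stop_dist) / (full_speed_dist - stop_dist)
    return max_speed * ramp

for distance in (0.3, 0.8, 1.5, 3.0):
    print(f"person at {distance} m -> commanded speed "
          f"{safe_speed(distance):.2f} m/s")
```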
4. Big data in real time
In robotics, we often think about physical tasks: manipulating objects on a production line, sorting parcels in a logistics center, or transporting goods autonomously. However, AI is often even more effective in the purely digital realm, where it is not constrained by the limitations of physical hardware.
Pattern recognition – the ability to ingest vast amounts of data and identify relationships and anomalies very quickly and accurately – is a particular strength of AI. The US Postal Service now uses machine vision and an edge AI system to identify and track the more than 100 million letters and parcels received every day, with each server in the system processing more than 20 terabytes of image data per day.4 Two more examples of AI used in the analysis of big data are described below.
AlphaFold is an AI program developed by DeepMind, the UK start-up acquired by Google in 2014 that also developed AlphaGo. It was designed to address one of the fundamental challenges in biology: predicting the 3D structure of proteins. The team trained the software on the roughly 170,000 protein structures available in public repositories, and in 2021 it began publishing a free database that has grown to 200 million predicted structures to help accelerate scientific research, increasing the number of known protein structures roughly a thousandfold.
Computational biologist and co-founder of Critical Assessment of Structure Prediction (CASP), John Moult, said that AlphaFold allowed him to determine in half an hour a protein structure that he had failed to solve for 10 years.5
Rolls Royce and other commercial aircraft engine makers monitor the health of their engines in flight from operation centers on the ground. Rolls Royce's Engine Health Management system monitors approximately 8,000 flights per day, measuring thousands of parameters in real time from sensors embedded in the engines and analyzing them against the engine's performance history and the rest of the fleet, in the context of the operating environment, to spot anomalies. This information is used to improve fuel consumption, reduce engine wear, predict maintenance needs, and provide critical in-flight alerts to pilots.
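Rolls Royce's analytics are proprietary, but the basic idea of checking each new reading against a parameter's recent history can be sketched with a rolling z-score. The ParameterMonitor class and its thresholds below are hypothetical and deliberately simplified; a production system would use far richer models spanning whole fleets and operating conditions.

```python
from collections import deque
import statistics

class ParameterMonitor:
    """Flag readings that deviate sharply from a parameter's own recent
    history - a simplified stand-in for comparing live sensor data
    against past performance."""

    def __init__(self, window=50, threshold=4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold  # alert when |z-score| exceeds this

    def update(self, value):
        is_anomaly = False
        if len(self.history) >= 10:  # wait for enough history
            mean = statistics.fmean(self.history)
            stdev = statistics.stdev(self.history)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                is_anomaly = True
        self.history.append(value)
        return is_anomaly

# A slowly drifting temperature series with one sharp spike at the end:
# only the spike is flagged, not the gradual drift.
monitor = ParameterMonitor()
readings = [600.0 + i * 0.1 for i in range(60)] + [640.0]
for reading in readings:
    if monitor.update(reading):
        print(f"anomaly flagged: reading {reading}")
```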
From narrow to general AI in robotics and automation
Just from the few examples above, we can see how current AI technologies offer significant improvements and growth opportunities for the robotics and automation industry across a wide range of fields. However, these are all examples of so-called narrow AI: AI designed to perform a specific task, such as learning how to pick something up, how to navigate obstacles, or how to mimic a human. In contrast, general AI would be vastly more powerful, but it is also vastly more challenging to develop.
John McCarthy, the father of artificial intelligence, theorized that if you can describe everything in the world so that a computer can understand it, then it should be possible to automate any task, provided the physical capabilities of the system are sufficient.6 However, in practice this is extremely challenging since you would need to explain to the algorithm how the world works in terms of physics, societal norms, etc. As Stuart Russell, professor of computer science at the University of California at Berkeley, explains,7 "When you ask a human to fetch you a cup of coffee, you don’t mean this should be their life’s mission, and nothing else in the universe matters. [...] Of course, all the other things that we mutually care about, they should factor into their behavior. [And] the algorithms require us to specify everything in the objective."
Professor Russell also noted that estimates for achieving general AI average around the year 2045, but he believes it more likely to occur towards the end of the century. McCarthy concluded that,8 "Human-level AI [general AI] might require 1.7 Einsteins, 2 Maxwells and 5 Faradays…"
Human-level AI will be achieved, but new ideas are almost certainly needed, so a date cannot be reliably predicted – maybe five years, maybe five hundred years. I’d be inclined to bet on the 21st century.
John McCarthy, former professor of computer science at Stanford University9
Threat to our ability to learn and innovate?
Standing on the shoulders of giants10 – this metaphor represents the idea that we build on the knowledge passed down from our predecessors. We are taught their wisdom, learn from their discoveries, and build on their work. However, AI may threaten this evolutionary process of innovation. The results produced by AI systems are now based on such a vast pool of data, and can involve such complex relationships and interdependencies, that it is becoming increasingly difficult for us to understand the logic behind them. If we cannot understand how the machine thinks, and yet become increasingly reliant on its output, we may lose the ability to learn and innovate.
Entering a golden era of innovation in digital technologies
Even if we are still many decades away from achieving general AI, the advances in narrow AI – as well as the supporting cast of platform technologies and the ever-expanding ocean of data – will likely enable greater autonomy in robotics and automation and create very large market opportunities for the industry. And as progress towards general AI continues, those opportunities will further increase in size.
We believe we are entering a golden era of innovation in robotics, and in digital technologies more broadly. These innovations should enable significant steps forward in economic productivity and sustainability, and provide investment opportunities for the patient investor.
The individual companies mentioned on this page are meant for illustration purposes only and are not intended as a solicitation or an offer to buy or sell any interest or any investment.
Angus Muirhead
Head of Thematic Equities
Angus Muirhead (BA, CFA), Managing Director, is Head of Thematic Equities at UBS Asset Management, and Lead Portfolio Manager for the Robotics strategy. Angus joined the Thematic Equity team in 2016 as a Senior Portfolio Manager. He started his investment career in 1997 as a buy-side equity analyst at Phillips & Drew Fund Management in London before moving to Tokyo in 2000 to focus on the Japanese technology and healthcare sectors. In 2007, he moved to Zurich as a portfolio manager specializing in global technology and healthcare-related thematic equity funds. Angus holds a bachelor’s degree in Modern Japanese Language and Business Studies from Durham University, United Kingdom, including a year of study at Kumamoto University, Japan, and is a CFA charterholder.