Humans and machines make decisions very differently, but we could ultimately work together to overcome the IoT’s data problem and make the most of the wealth of information it generates.
Humans are believed to make around 35,000 decisions every day, and through evolution and practice we have become highly skilled at it. If we hadn’t, we probably wouldn’t have survived for the last 300,000 years. We’re good at decision-making because we’ve learned to compensate for our weaknesses. We use mental shortcuts, or heuristics, to counteract our inability to process lots of information, and we use gut feeling and intuition to make decisions based on unconscious recall of observation and experience. The process is frequently irrational; we are innately biased, emotional, and influenced by fear of loss, but somehow, more often than not, we get things right.
Machines, on the other hand, ‘think’ very differently, relying on their computational superiority to rapidly process hundreds of millions of data points and make decisions based on consequence rather than intuition. As such, computers can now perform some specific tasks very well. They have, for example, comfortably conquered the best human players at games such as chess and Go. More usefully, they can also identify anomalies in thousands of x-rays faster than a radiologist and, if only marginally, more reliably. They can also read your child a book at bedtime, but what they can’t do is understand the story well enough to answer any questions your son or daughter may have about it. For that we still need Mum and Dad.
For now, at least, machines need humans as much as humans need machines, and it is at this intersection that machine learning (ML) offers intriguing possibilities for managing the billions of end devices that comprise the IoT. ML is a practical, mathematical field in which humans create the algorithms and programs that allow computers to follow hundreds of millions of instructions, scanning an almost endless stream of otherwise unremarkable data for occasional deviations, then interpreting patterns in those deviations to solve real-world problems.
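To make that concrete, here is a minimal sketch of the pattern just described: a model scans a stream of mostly routine sensor readings and flags the rare deviations for attention. The data is synthetic, and the choice of scikit-learn’s IsolationForest is an illustrative assumption, not a method named in this article.

```python
# A minimal sketch of the pattern described above: a model scans a stream
# of mostly unremarkable readings and flags the occasional deviation.
# The sensor data is synthetic and all names are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulate an "almost endless stream" of routine temperature readings...
normal = rng.normal(loc=21.0, scale=0.5, size=(10_000, 1))
# ...with a handful of deviations buried inside it.
anomalies = rng.normal(loc=35.0, scale=1.0, size=(5, 1))
stream = np.vstack([normal, anomalies])

# Train on historical data assumed to be mostly normal, then score the stream.
model = IsolationForest(contamination=0.001, random_state=0)
model.fit(normal)

flags = model.predict(stream)  # -1 marks a suspected deviation, 1 is normal
print(f"{np.sum(flags == -1)} readings flagged for review")
```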
ML solving real problems with data
Picture the smart city of the future, for example. Streetlights will carry sensors monitoring traffic flow and air quality; weather stations will measure a host of meteorological variables to predict hazardous weather events; buildings will be embedded with hundreds of sensors to monitor everything from air conditioning and lifts to water and gas pipes. These devices, and millions more besides, will generate trillions of gigabytes of data, and making sense of it all is a major headache that engineers believe ML will help them solve.
Relaying that volume of data to the Cloud so it can be crunched is untenable. The often battery-powered end devices don’t have the energy to do it, the networks can’t carry that kind of load, and even if they could, the cost would be prohibitive. At the same time, full-scale ML isn’t practical on today’s IoT devices because, despite enormous progress, the systems-on-chip (SoCs) and systems-in-package (SiPs) that power these products are still relatively constrained.
However, today’s IoT devices can support TinyML, a scaled-down version of ML that enables edge devices to constantly monitor data and use algorithms to detect deviations. Data gathered from one or more sensors is sent to the Cloud, where powerful servers train an ML model that is in turn delivered over-the-air to the edge device. The end device can then determine for itself which data is worth sending to the Cloud in future; for example, when a deviation suggests an air conditioning unit is going to break down, or a tornado is on its way.
The judgement call on what to do next can then be made by a human, who, unlike their silicon counterpart, can interpret ambiguity, vagueness and incomplete information. In the future the learning will be performed on the device itself, but we aren’t there yet.
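As a rough illustration of the cloud-side half of that loop, the sketch below trains a deliberately tiny model on synthetic sensor data and converts it into a compact, quantized artefact ready to be pushed over-the-air to a constrained edge device. TensorFlow and its TFLite converter are assumed tooling here; the article names no framework, and the air-conditioning scenario is borrowed from the example above.

```python
# Sketch of the cloud-side half of the TinyML loop: train a small model on
# sensor data in the Cloud, then shrink it into a flatbuffer small enough
# to ship over-the-air to a constrained edge device. The data is synthetic.
import numpy as np
import tensorflow as tf

# Pretend these are vibration readings uploaded from an air conditioning
# unit: windows of 32 samples, labelled 0 (healthy) or 1 (failing).
x_train = np.random.rand(1000, 32).astype(np.float32)
y_train = np.random.randint(0, 2, size=(1000,))

# A deliberately tiny network -- the target device has kilobytes, not gigabytes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, verbose=0)

# Convert to TensorFlow Lite with quantization, for the over-the-air update.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("ac_failure_model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Model ready for OTA update: {len(tflite_model)} bytes")
```

Quantization trades a sliver of accuracy for a model measured in kilobytes, which is what makes the over-the-air update viable on battery-powered hardware.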
The possibilities of augmented intelligence
Research on the possibilities of augmented intelligence (man and machine working together) is still in its infancy, as is how it might be applied to the IoT, where large-scale problems cannot always be solved by either computers or humans alone. This collaborative intelligence could allow both parties to play to their strengths. Humans must train machines and their ML algorithms to perform the work they are designed to do. Humans are also essential for interpreting the decisions a machine makes and how it arrived at them, particularly in evidence-based industries like healthcare. And we must work continually to ensure that machines are functioning properly, safely, and responsibly.
A machine, on the other hand, can boost human analytical and decision-making abilities by providing the right information at the right time and filtering out any noise it deems unnecessary to our decision. Machines and their ML algorithms can also serve as instructional tools to improve the skills and performance of humans. Whether human or machine retains ultimate control depends on the application. In automated manufacturing or predictive maintenance IoT scenarios, machines could well be trusted to be entirely autonomous, but in the healthcare sector a good deal more caution would need to be exercised, as getting it wrong is far more serious.
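One way to picture that division of control is as a simple escalation policy: the machine filters the stream, acts alone only where the stakes allow it, and otherwise hands the judgement call to a human. The domains and thresholds below are illustrative assumptions, not figures from this article.

```python
# A sketch of the human/machine division of labour described above: the
# machine filters alerts and only escalates what crosses a confidence
# threshold, with thresholds set by how costly a wrong call would be.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    confidence: float  # model's confidence that something is wrong, 0..1

# In low-stakes settings the machine may act alone; in healthcare, a human
# reviews everything above the floor rather than trusting the model.
ESCALATION_POLICY = {
    "predictive_maintenance": {"act_autonomously": 0.90, "notify_human": 0.60},
    "healthcare":             {"act_autonomously": None, "notify_human": 0.20},
}

def route(alert: Alert, domain: str) -> str:
    policy = ESCALATION_POLICY[domain]
    auto = policy["act_autonomously"]
    if auto is not None and alert.confidence >= auto:
        return "machine acts autonomously"
    if alert.confidence >= policy["notify_human"]:
        return "escalate to human for judgement"
    return "filter out as noise"

print(route(Alert("chiller-7 vibration", 0.95), "predictive_maintenance"))
print(route(Alert("ECG anomaly", 0.95), "healthcare"))
```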