Missy Cummings from Duke on Logistics Automation: We’re Going in Reverse!
I had the opportunity to listen to Missy Cummings, a Professor in the Pratt School of Engineering at Duke University. Missy shared some insights on automation in the logistics sector at a conference that was awash with discussions of blockchain, digitization, and new technology in logistics.
Missy noted: “I was one of the US Navy’s first female fighter pilots. (Yossi Sheffi of MIT, with whom I worked, was also a fighter pilot, in the Israeli air force.) One of the toughest things we did was land a jet fighter on an aircraft carrier – and it opened my eyes to the future. One of the things you learn quickly in the landing is to turn the computer on and keep your hands off the controls. The computer handles it; you take over only if it gets tricky. In fact, on takeoff, the pilot has to hold their hands in the air and show them to everybody, to reassure everyone that you are NOT touching anything – and only then do they fire you off the carrier. If you touch the controls, you can create pilot-induced oscillations off the front of an aircraft carrier, and there have been many fatalities that way. This started me thinking: ‘I am so good I am not allowed to touch anything!’ But where is logistics going, if pilots can’t touch the planes on takeoff? That motivated me to go back to school at UVA; I started at MIT as an assistant professor, and then Duke hired me, where I now run the Humans and Autonomy Lab.”
Professor Cummings then gave the audience a brief history of humans and automation. At the turn of the twentieth century, the horse and buggy was entrenched as the primary form of road transportation, and many people insisted we would never get rid of it. Not only have we moved to cars, but we are now moving to driverless cars. Next, on Wall Street, the human interactions on the trading floor have been replaced by computers; most of the financial industry today (fintech) runs on them. In aviation, old jets are being replaced by UAV drones with no pilots. So we have achieved that level of automation technologically, but we won’t see full replacement of pilots, primarily due to regulatory issues.
Cummings works on human supervisory control. A human operator issues commands to a computer, which mediates those commands and translates them, via actuators, into the behavior of a vehicle such as a drone. So humans sit ON the loop (not IN the loop), as they are not directly controlling the systems; sensors and displays are used to keep the human balanced within the automation. But should humans be given stop buttons, or what should the nature of their interaction be? Automation works best in complex, time-pressured, high-risk domains that require embedded autonomy with human supervision. Think of mining trucks in Australia or nuclear power grids: there is a lot going on and a lot of information being taken in. Humans are increasingly moving from the mechanics of operating equipment to the cognitive interpretation of what is happening.

Every commercial aircraft is basically an automated drone today. The pilot could be doing their job from a Starbucks coffee shop! Pilots touch the stick about 3.5 minutes out of an entire flight, and that is on takeoff only. That is due to the regulatory structure (and it will soon go away). Even NASA let the astronauts have too much control, and it resulted in suboptimal landings on the first few trips to the moon. Today, humans are only involved in case something goes wrong. If I can get rid of the human capital, labor and retirement costs go down. But there must be a balancing act somewhere. In China, labor is cheap, and even so they have decided to start investing in robots. There is a balance between humans and machines, and if we can automate all these jobs, is there a potential human cost?
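The "on the loop, not in the loop" idea can be made concrete with a toy sketch. This is not Cummings' own model – just a minimal illustration, with all function names invented for the example: the autopilot closes the control loop itself, and the human supervisor acts only when the automation flags an anomaly.

```python
# Toy sketch of human-on-the-loop supervisory control (all names hypothetical):
# the autopilot runs the control loop; the human supervisor only monitors
# and intervenes when the automation raises an anomaly.

def autopilot_step(altitude, target):
    """Skills-level automation: simple proportional correction toward target."""
    return 0.5 * (target - altitude)

def human_review(anomaly):
    """The supervisor acts only when asked -- ON the loop, not IN it."""
    return "abort" if anomaly else "continue"

def fly(altitude, target, ticks, sensor_fault_at=None):
    """Run the loop; return final altitude and how often the human stepped in."""
    interventions = 0
    for t in range(ticks):
        anomaly = (t == sensor_fault_at)           # e.g. a failed sensor check
        if anomaly:
            interventions += 1
            if human_review(anomaly) == "abort":
                return altitude, interventions     # human takes over
        altitude += autopilot_step(altitude, target)
    return altitude, interventions

# Nominal flight: the automation converges on its own and the human
# never touches anything -- exactly the carrier-takeoff "hands off" posture.
alt, n = fly(altitude=0.0, target=100.0, ticks=20)
```

In the nominal run the human intervenes zero times; inject a fault (`sensor_fault_at=5`) and the loop hands control back exactly once. The design point is that the human's job is interpretation and exception handling, not moment-to-moment control.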
Cummings explores this in her paper on skills, rules, knowledge, and expertise, which examines the way you perform a complicated task. (A figure from this paper is shown below.) You start with skills-based reasoning – for instance, how to keep the plane balanced. You spend a year as a pilot learning those skills until they become automatic, the same way you learn to drive a car and stay between the white lines. Once you have learned a skill, you move to cognitive reasoning for rules-based procedures: you learn that “if something happens, do this.” Even the medical community uses such procedures. That frees up cognitive capacity for knowledge-based reasoning, because as you get into situations with more uncertainty, you have to make guesses about what to do. The best example of this in aviation is the “Miracle on the Hudson.” Sully had to make a guess about the right place to land: “Can I make it to Teterboro, or do I put it in the water?” His experience allowed his mind to run fast simulations, and he concluded he would not be able to make the runway. The record shows that had he turned the aircraft as soon as the birds struck the engines, he could have made it back. But he would have had to make that decision instantly; instead he had to first consult the procedure manual, which took time and rendered the decision moot.
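The skills/rules/knowledge hierarchy can be read as an escalation ladder, which a small sketch makes explicit. This is only an illustration of the taxonomy, not Cummings' figure; the events and responses are invented for the example.

```python
# Illustrative sketch of the skills -> rules -> knowledge hierarchy:
# automatic skills handle the routine case, memorized rules handle known
# procedures, and anything without a rule escalates to knowledge-based
# judgment (the "Miracle on the Hudson" territory).

def handle(event):
    # Skills-based: automatic, continuous corrections, like staying
    # between the white lines on the road.
    if event == "drifting":
        return "small steering correction"
    # Rules-based: "if X happens, do Y" procedures from the checklist.
    procedures = {
        "engine_fire": "run engine-fire checklist",
        "low_fuel": "divert to nearest airport",
    }
    if event in procedures:
        return procedures[event]
    # Knowledge-based: no rule applies, so the operator must reason
    # about a novel, uncertain situation.
    return "knowledge-based judgment required"
```

Automation today is strong on the first two rungs; the bottom branch is where humans still dominate, which is the point of the next paragraph.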
So how does automation work? Automation has to be able to assess the situation and understand what is going to happen, and it has to master skills-based reasoning first. Over time, as it learns more and more situations, that leads to expertise. But humans are still the only “machines” that truly reach the highest level of expertise.
Of the transportation domains, rail has the lowest uncertainty: you are traveling along what is effectively a one-dimensional track, which is akin to skills-based reasoning. Next up in terms of uncertainty are drones, which are the easiest to automate; drone technologies are mature, have been around for 30+ years, and are rules-based. Airplanes are also rules-based, and because they move in three dimensions, there is more room for error and less likelihood of head-on collisions. But the most dangerous, and the most uncertain, technologies are driverless vehicles. Many, many more uncertain conditions occur, and the trajectory is confined to two dimensions, so there is much less room for error than for aircraft, which have three. Driverless cars require expert knowledge and reasoning: “Are there kids playing soccer? What about animals, and how do I react? That is pure expert-based reasoning, and we are simply not there yet with computers that can handle this level of uncertainty.”
Unfortunately, these technologies are being introduced in reverse! We are primitive in automated rail; there are developing countries with more successful and more advanced automated rail systems. Drones are coming very slowly, with FedEx and UPS moving cautiously, but there are no technological reasons why they will not evolve (there are some sociotechnical concerns, however). But driverless cars! Dr. Cummings is working with groups in Washington, DC on proposed legislation to put one million driverless cars on the roads in the next year. In her opinion, “We are not ready for it, but that is the power of lobbying. Car companies are cutting deals with Uber and Lyft – there is a big business out there, but it is much too early in the evolution of the technology for driverless vehicles.”
Think about the following story. A Navy SEAL loved his Tesla and put 20 YouTube videos about it on the web. He was going down a divided highway when a tractor-trailer turned in front of him; he should have seen it, or the Tesla should have seen it, but in any case he was killed. This happened because he was bored – he held the wheel for only 25 seconds of a 37-minute ride – and had not read the PDF manual that came with the Tesla. If he had, he would have known that this is NOT a driverless car, and that it has problems with its camera vision. The truck was turning broadside, and its color made it unrecognizable to the camera. The technology is still very immature, and complacency, boredom, and a lack of understanding of the autonomy are common reasons why more than one person has died this way.
We have human strengths and limitations we need to understand. AI is not perfect. Computers still cannot do inductive reasoning or understand the context of the world around them. Machine learning and pattern recognition still require a great deal of training and teaching, and extracting higher-level relationships does not work well, because computers reason deductively and make rigid decisions – which makes them very vulnerable.
In one case, a group of students put graffiti on stop signs in the form of four black-and-white stickers. This simple trick fooled a driverless car’s vision system into interpreting the stop sign as a 45-mile-per-hour speed limit sign. If hackers can fool automated cars by putting stickers on stop signs, that is a very serious safety concern.
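Why do a few stickers work at all? A toy example gives the intuition. This is not the actual stop-sign attack or a real vision model – just a made-up nearest-centroid "classifier" over eight fake pixels, showing how a small, targeted change to the input can flip a rigid, distance-based decision even though a human would still see a stop sign.

```python
# Toy illustration (not the real attack): a tiny targeted perturbation
# flips a rigid nearest-centroid "sign classifier". All data is invented.

import math

# Hypothetical 8-pixel "templates" the classifier was trained on.
CENTROIDS = {
    "stop":           [1, 1, 1, 1, 0, 0, 0, 0],
    "speed_limit_45": [0, 0, 0, 0, 1, 1, 1, 1],
}

def classify(pixels):
    """Rigid deductive rule: the label is whichever centroid is nearest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda label: dist(pixels, CENTROIDS[label]))

clean_sign = [1, 1, 1, 1, 0, 0, 0, 0]      # a perfect stop sign: classified "stop"
stickered  = [1, 1, 0, 0, 1, 1, 0.4, 0.4]  # "stickers" on a few pixels:
                                           # now nearer the speed-limit centroid
```

To a human, `stickered` is still obviously closer to a stop sign than to anything else; to the distance rule, the perturbed pixels have tipped it over the decision boundary. Real attacks optimize the perturbation against a deep network, but the brittleness being exploited is the same.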
Let’s focus instead on automating trains! That is an area that would definitely benefit public transportation and help us drive true progress in safe, efficient, automated transport.