
Automation doesn't have to be perfect; it just has to have better judgement than a human pilot.


Automation is incapable of "judgement;" automation consistently and reliably responds to inputs according to pre-defined instructions. That's why automation works really well in some situations but not others. The more complex the task, the harder it is to make a complete instruction set that will result in a satisfactory outcome for every possible situation.

Even human pilots behave somewhat like automatons in some situations: in most situations they follow procedures, which could be described as "responding to inputs according to pre-defined instructions." However, they often encounter situations not covered by the procedures, in which case they must instead exercise their judgement.

The problem with judgement is that it is neither consistent nor reliable. Some humans have better judgement than others. Eventually we will have automation sophisticated enough to handle even the full complexity of aviation, at which point automation will yield safe results more consistently and reliably than human judgement.


I suppose it gets into philosophical debates about AI, but I don't see any reason in principle that we can't describe at least some kinds of computerized systems as exercising something we'd call "judgment". We already have one existence proof, the human brain, of a system that can exercise something we call "judgment", and I don't see a strong reason to believe that it's due to anything magical about the human brain in particular (like a soul or something along those lines), rather than just being a complex reasoning system that's able to balance many contextual factors.


Automated systems as they currently exist have fixed responses to fixed inputs. If an automatic system encounters a set of inputs it wasn't programmed for, it has no capacity to determine a best course of action for those inputs. Depending on how it was programmed, it will either keep doing what it was doing previously, or switch to a pre-programmed "contingency plan" that hopefully will result in a tolerable outcome (but which might result in catastrophe), or possibly execute a random set of instructions (which can happen to poorly-designed state machines, for example).
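A toy sketch of that fixed-response behaviour (the mode names and responses here are invented purely for illustration):

```python
# A fixed-response automated system: known inputs map to pre-determined
# outputs; anything else falls through to a pre-programmed contingency.
RESPONSES = {
    "cruise": "maintain_altitude",
    "climb": "increase_pitch",
    "descend": "decrease_pitch",
}

def automated_response(flight_mode: str) -> str:
    # Known input: fixed, pre-determined output.
    if flight_mode in RESPONSES:
        return RESPONSES[flight_mode]
    # Unprogrammed input: no capacity to devise a new course of action,
    # only a canned fallback that may or may not be tolerable.
    return "contingency_plan"

print(automated_response("cruise"))        # known input
print(automated_response("wing_missing"))  # input it wasn't programmed for
```

The point is that the fallback branch is still pre-defined: the system never reasons about *why* the input was unfamiliar.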

A human being, on the other hand, has the capacity, when faced with unexpected or unfamiliar conditions, to exercise something we call "judgement" in an attempt to develop an appropriate response. I'm not saying that it's impossible for an automated system to have this capacity, I'm saying that no current automated systems have it, and that we're nowhere near developing such a system.


That's definitely the case with deployed civilian aircraft systems, but I'd be surprised if there wasn't some unmanned system somewhere doing more complex reasoning. There was a talk years ago at IJCAI from some people from NASA Ames on a prototype aircraft-control system they'd built that used a reasoning system to assist with performing emergency landings in situations with no preprogrammed contingency, by taking into account some telemetry information (e.g. aircraft damage), map information, an aerodynamic model, and risk models.

I do believe they were planning to deploy it as a suggestion system though, which would suggest a course of action, and then leave it to the pilot to implement it or not. Then the judgment gets more murky; now the system is doing some of the judgment (evaluation of alternatives, etc.) that a human pilot would normally do, but leaving some of the judgment (accept the suggestion, modify it) still to the human.

edit: Here's a more recent paper than the one I'm thinking of, but must be the same project: http://ti.arc.nasa.gov/m/profile/de2smith/publications/IAAI0...


That's not really true. It's often the case that automation is designed to reach a specific goal and then tries to achieve/maintain that state. For example, Segways try to balance, and an F-15's Control Augmentation System (CAS), aka stability assistance system, tries to keep flying even without a wing. (Yes, this worked, and the system was not programmed for it.)

http://www.airliners.net/aviation-forums/tech_ops/print.main...

Plenty of other planes have lost a section of wing and still landed. http://www.airliners.net/aviation-forums/general_aviation/re... Granted, all of these cases had a pilot, but in the F-15 the avionics actually discovered how to maintain level flight after the loss of the wing.


I have a master's in aeronautical engineering, and one of my major areas of study was stability and control, which is what your Segway and F-15 examples fall under.

Automated flight control systems definitely do not exercise "judgement:" they have inputs, a transfer function (typically MIMO, these days), and outputs. It used to be that the transfer function was fixed, but more sophisticated systems (e.g. fighter jets) often have many different transfer functions and switch between them based upon various inputs. They don't "decide" or "discover" anything: for any given set of inputs, they will predictably produce a pre-determined set of outputs.
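That switching between transfer functions is essentially gain scheduling. A minimal sketch, with invented regime boundaries and gain values (real control laws are MIMO and far more involved):

```python
# Hypothetical gain-scheduling sketch: the controller "switches between"
# control laws based on a measured input, but each regime's response is
# still fully pre-determined. All numbers here are made up.
def pitch_gain(mach: float) -> float:
    # Schedule selected purely by the measured input.
    if mach < 0.8:
        return 1.0   # subsonic control law
    elif mach < 1.2:
        return 0.6   # transonic control law
    else:
        return 0.4   # supersonic control law

def elevator_command(pitch_error: float, mach: float) -> float:
    # Same inputs always produce the same output: no "deciding" involved.
    return pitch_gain(mach) * pitch_error

print(elevator_command(2.0, 0.5))  # subsonic regime
print(elevator_command(2.0, 1.5))  # supersonic regime, smaller gain
```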

Aircraft stability and control systems are not programmed to care about, or even know about, the existence of the wings. The closest they get to this is that they know the current states of the control surfaces on the wings. So it doesn't really make sense to say that the F-15 CAS was "not programmed for" the state of missing a wing (although it almost certainly was programmed to respond properly in the situation where it gets no feedback from some of the flight controls). As you say, it's designed to reach a specific goal (keep the plane level) and then maintain that state. If the system detects an uncommanded roll rate, it will move the flight controls to stop that roll rate. It doesn't know or care that the uncommanded roll rate is the result of asymmetric lift due to an (almost completely) missing wing: it's just going to keep moving the flight control surfaces until that roll rate goes away. If the aircraft had been damaged in a slightly different way, it's possible that the CAS would have issued commands to the flight controls that would have departed the plane, but fortunately the handling characteristics of the aircraft remained close enough to normal that the control laws still produced good results.
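The roll-rate behaviour described here can be sketched as a bare proportional feedback loop (the gain and function names are made up; a real CAS is much more sophisticated):

```python
def aileron_command(commanded_roll_rate: float,
                    measured_roll_rate: float,
                    gain: float = 0.5) -> float:
    """Proportional roll-rate loop: drives aileron deflection to null
    the roll-rate error, with no model of *why* the error exists."""
    error = commanded_roll_rate - measured_roll_rate
    return gain * error

# Uncommanded roll (e.g. asymmetric lift from a missing wing):
# the loop simply opposes it until the roll rate goes away.
print(aileron_command(0.0, 10.0))   # -5.0: deflect against the roll

# An erroneous sensor reading produces an equally confident,
# equally "correct-by-its-lights" command in the wrong direction.
print(aileron_command(0.0, -10.0))  # 5.0
```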

Unfortunately, this behavior can result in mishaps when a flight control system gets erroneous inputs: when it believes it is rolling while actually wings-level (or believes it is level while actually rolling). This was a major contributor to the Air France flight 447 crash. In such situations, it takes judgement to realize that something is wrong and to figure out what to do about it.


Ok, it was my understanding that the F-15 adjusted the transfer function based on a feedback cycle rather than simply picking from a list of them. However, thinking back, the conversation was ambiguous and I don't have the clearance required to find out the correct answer.

However, while flight control systems have been responsible for plenty of crashes, pilots have often mistaken level for non-level flight and focused on faulty instruments rather than switching to working backups, etc. An automated system can handle redundancy much more efficiently than people in such situations: there's little value for a person in having, e.g., 7 gyroscopes if they have to pick between them, but autopilots can gain from access to such information.
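One simple way an autopilot can exploit that redundancy is median voting across the sensors; a minimal sketch with invented readings:

```python
import statistics

def fused_roll_rate(gyro_readings: list[float]) -> float:
    """Median-vote across redundant gyros: a single hard-over failure
    cannot drag the fused value far from the healthy majority."""
    return statistics.median(gyro_readings)

# Six healthy gyros roughly agree; one has failed hard-over.
readings = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 180.0]
print(fused_roll_rate(readings))  # 2.1: the outlier is voted out
```

A human scanning seven gauges has to notice and discount the outlier; the voting scheme does it mechanically on every sample.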


I never claimed that human pilots are superior to automated flight control systems in every aspect. In fact, I have stated explicitly that computers perform some tasks better and more safely than humans.

I was simply making the point that when you encounter a situation that isn't covered in the instructions, you need human judgement to figure out the best way to proceed. I also explicitly stated that human judgement is far from flawless, and that if you can develop a sufficiently comprehensive automation program, you can get safer results than you would on average with human judgement.


If you feed completely new information into a system, it's going to do something. Sometimes it's even the correct choice, but really, people also do the same thing in novel situations. I have no problem calling judgement simply deciding what to do based on the current situation, and as soon as you add any form of adaptation, computers can do that. But I am also willing to concede you're probably using a different definition.

PS: IMO, what separates people from machine learning systems is treating everything as training data, a much larger training set, a lot more processing power, and a tendency to explore novel situations. The trade-off is efficiency and reaction times. Still, when you get into thrust vectoring, supersonic flight, high-g turns, and rapidly changing weight/drag/thrust at the same time, trading consistency for improved handling of novel situations is probably worth it, so I expect the air force uses systems that are far more adaptable than the civilian world's.


> Automation is incapable of "judgement;" automation consistently and reliably responds to inputs according to pre-defined instructions.

Where are you getting this definition of "automation" from?

As far as I'm concerned, an automatic system is simply one that requires little or no direct human control.


Before claiming a machine is incapable of judgment, we would have to define what it is. Is a fly capable of judgment? A fish? This is a philosophical question and won't yield any meaningful answers. Call it judgment or feedback loop, the end result is similar.


I feel that the biggest problem with automation and AI-based systems is handling 'Black Swan' events, for lack of a better term.


The reason for accidents is usually an unanticipated combination of events. Unanticipated means an autopilot will not have been programmed to handle it.

There are many, many aviation disasters that were avoided because the pilot got creative.


And there are a couple of disasters where the pilots didn't listen to their mechanical friends.


You would need a better-than-human-pilot AI for that. Until we get there, and are able to reliably demonstrate it, you'll want humans in the loop.


Fortunately for some of us (e.g. me), judging is not something computers are particularly good at right now.




