
Robot Teachers and Robot Soldiers

Robots have long been a favourite of science fiction. Whether it is the amiable android Data from Star Trek: The Next Generation or the dysfunctional HAL 9000 from Arthur C. Clarke's 2001: A Space Odyssey, we are fascinated by the idea of intelligent machines that will perform complex tasks for us.

With the recent advent of autonomous and semi-autonomous devices, we see the possibility of machines that can operate at a distance under remote human control. We already entrust a great many functions to complex machines, whether it is managing our car engines, flying airliners, running power stations or steering enormous ships, so we are comfortable with the idea of machines growing in capability.

The use of robots as either soldiers or teachers throws up a number of interesting moral and ethical issues, as well as some fundamental technical questions. In both applications, we require the machine to work towards certain ends, relating and reacting to its surroundings within a set of programmed responses.

In the case of education, we already have programs that assist with repetitive tasks such as rote learning and vocabulary exercises in language study, so it seems reasonable to extend this activity into areas requiring more interaction with the student.

However, the teacher is responsive to very small changes in student behaviour: their demeanour, their expressions, their physical movements, the sound of their voice, even hesitation. The teacher can read all of these signals as indicating uncertainty, ambiguity or confusion, and can react accordingly, expressing ideas in a different way, repeating the information, illustrating the material with different examples, and so on.

Critical to the process of teaching is this recognition of ambiguity, and even with the use of speech recognition, the ambiguity inherent in natural language remains a significant technical problem for interaction with machines. Faced with uncertainty or ambiguity, the teacher will elicit more information or simply try to perceive more, and there is an in-built understanding of when clarity has been reached. This is still extremely difficult to emulate with computer equipment.
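To make the difficulty concrete, here is a minimal sketch of the kind of clarification loop a teaching machine would need. The toy interpret() function, its keyword cues and the 0.5 confidence threshold are all invented for illustration; a real system would rely on speech recognition and language models, and deciding when clarity has been reached is precisely the hard part.

```python
# Purely illustrative sketch of a clarification loop for a teaching machine.
# The recogniser below keyword-matches a few intents; the threshold is an
# assumed value, not taken from any real system.

def interpret(utterance: str) -> tuple[str, float]:
    """Toy recogniser: return (best_reading, confidence) for an utterance."""
    intents = {
        "repeat the explanation": {"repeat", "again", "lost"},
        "ask for another example": {"example", "instance"},
        "move on": {"next", "understood", "clear"},
    }
    words = {w.strip(".,!?") for w in utterance.lower().split()}
    scored = [(intent, len(words & cues) / len(cues)) for intent, cues in intents.items()]
    return max(scored, key=lambda pair: pair[1])

def understand(utterance: str, ask, threshold: float = 0.5, max_turns: int = 3):
    """Keep eliciting more information until a reading is clear enough, or give up.

    `ask` is a callback that puts a clarifying question to the student and
    returns their reply. A human teacher senses when clarity is reached;
    the machine must fall back on an explicit confidence threshold.
    """
    for _ in range(max_turns):
        reading, confidence = interpret(utterance)
        if confidence >= threshold:
            return reading                       # treated as "clear enough"
        utterance = ask(f"Do you want me to {reading}? Tell me a little more.")
    return None                                  # ambiguity unresolved; refer to a human?

# A vague first answer only becomes interpretable after a clarifying question.
replies = iter(["I'm a bit lost, could you go over it again please"])
print(understand("hmm, not sure", ask=lambda q: next(replies)))
```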

So it is possible that the full range of teaching actions is not within the grasp of computerised robots, at least not yet, and perhaps never will be. On the other hand, there are some tasks which robots can perform in an impersonal way that may make students respond better. Sometimes the personalities of teacher and student are such that learning is not as effective as it otherwise might be.

An impersonal approach to learning and assessment may be beneficial to the learner, though the evidence suggests that the relationship between student and teacher is essential in developing and maintaining motivation. How would this be reproduced with impersonal computer equipment? Some say by developing machines with personalities.

As teaching progresses, the teacher learns a great deal about how students think, what their specific learning blocks are and what their educational needs are. This learning happens through the interaction between teacher and student. For a machine to learn as effectively, it would have to recognise and interpret a range of human expressions, both linguistic and physical. Although artificial intelligence has come a long way, interpreting such ambiguous signals remains a difficult problem.

Once machines are able to learn from their interactions with people, they will also be able to emulate the behaviours that convey personality. This opens the way to developing machines with personalities designed for particular tasks, such as dealing with the public in administrative roles, answering queries, working on help desks, and so on.

If that interaction and learning can be made efficient, then teaching by robot is a distinct possibility. However, there are many situations in which human intervention is required, for example in physical emergencies. The human's range of abilities in dealing with unexpected events still outstrips the robot's, though this may not always be the case.

In the case of soldiers, the range of expected behaviour is much more restricted. Automated aerial drones deliver an explosive payload or fly along a particular flight path taking photographs, all of which is precisely pre-programmed. In those cases where there is no ambiguity, such robotic soldiering has already proved effective.

By removing humans from one side of an armed conflict, the injury and death toll is loaded onto the other side. An operator with a mouse in the USA can control a drone over Afghanistan, causing the deaths of hundreds with a single click. Such alienation from the reality of war makes the comparison with video games more than appropriate.

And this raises again the question of ambiguity. While the operation of such equipment may seem objective and clinical, the programming of these devices relies on judgements made about targets. The issue of ambiguity is hidden by the apparent certainty of the decision.

An aerial photograph may show a suspicious-looking structure on the ground. The ambiguity is not resolved, but the decision is made anyway. Whereas a human at closer range could distinguish between a guerrilla group and a wedding party, it may be impossible to do so from an aerial photograph. So humans make mistakes.

Whether or not robotic soldiers will make mistakes depends on the extent to which they can process signals from their environment. Image processing, movement sensors, and sound detection and interpretation all play their part, but crucially it will be the decision-making software that makes the difference.

If the control systems are geared towards suspicion, then more will be detected as a threat than if there is the opportunity to question and check. Consider what would happen if someone tried to surrender to a robot soldier. Would it treat the surrender as a trick, a hostile attempt to get close, or would it understand the gestures and react accordingly?

And if the strategic position changed dramatically, would it consequently adjust its tactics to preserve life, or would it be programmed to kill?
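The earlier point about control systems geared towards suspicion can be made concrete with a toy sketch. The observations and threat scores below are invented, and the two thresholds are arbitrary assumptions; no real targeting logic is implied. The sketch only shows how moving a single parameter decides whether an ambiguous gesture, such as raised hands, is met with a challenge or treated as hostile.

```python
# Toy illustration of how a single "suspicion" setting changes what an
# automated system classifies as a threat. All scores are invented.

OBSERVATIONS = {
    "vehicle on a known supply route": 0.35,
    "group walking towards a checkpoint": 0.45,
    "person raising both hands (surrender?)": 0.55,   # ambiguous gesture
    "muzzle flash detected": 0.95,
}

def classify(score: float, threshold: float) -> str:
    """Below the threshold: question and check; at or above it: treat as hostile."""
    return "ENGAGE" if score >= threshold else "CHALLENGE / VERIFY"

for label, threshold in [("cautious system", 0.7), ("suspicious system", 0.5)]:
    print(f"\n{label} (threshold = {threshold})")
    for observation, score in OBSERVATIONS.items():
        print(f"  {observation:<42} -> {classify(score, threshold)}")
```

With the cautious threshold, only the muzzle flash is engaged; with the suspicious one, the attempted surrender is engaged as well, purely because a parameter was moved.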

The mechanisation of warfare and the physical removal of control personnel from the battlefield mean that moral and ethical decisions will seem more remote. Clinical tactical strikes insulate army personnel from the consequences of their actions, reducing the moral impact of their decisions. Soldiers may become more like the robots they control.

Whether robots work as teachers or as soldiers, the handling of ambiguity is an important problem, but in both areas the moral and ethical questions also stand out. There is a significant difference between education and training, and it is by no means certain that robots can have the personal and emotional flexibility to provide the former.

Warfare should not be made impersonal, regardless of the military advantage offered by robots and automation, and it is crucially important that soldiers do not become morally inured to the consequences of their actions.
