Should robots be given power to make ethical decisions?
As far as I understand this question, robots are programmed and are therefore provided with a set of yes/no options for ‘decision making.’ We would therefore have to develop a set of ethics for the particular area of work the robot is performing (as Asimov attempted). It seems likely that most, if not all, issues have a yes/no component to them. That doesn’t mean the robots think; it means they make a mechanical choice based on a set of yes/no ‘rules.’ Under those circumstances, it seems possible to do so, as sketched below.
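To make the idea of a “mechanical choice based on yes/no rules” concrete, here is a minimal sketch in Python. The rule names and the fields of the hypothetical Action record are purely illustrative (loosely Asimov-flavoured), not taken from any real robot system; the point is only that each “ethical” check reduces to a yes/no answer.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical yes/no properties of a proposed action.
    harms_human: bool
    disobeys_order: bool
    endangers_self: bool

# Each rule is a yes/no check on the action (Asimov-style, for illustration only).
RULES = [
    ("do not harm a human", lambda a: not a.harms_human),
    ("obey human orders",   lambda a: not a.disobeys_order),
    ("protect yourself",    lambda a: not a.endangers_self),
]

def permitted(action: Action) -> bool:
    """A mechanical choice: the action is allowed only if every rule answers 'yes'."""
    return all(check(action) for _, check in RULES)

# The robot isn't 'thinking' here; it is just evaluating a fixed rule set.
print(permitted(Action(harms_human=True,  disobeys_order=False, endangers_self=False)))  # False
print(permitted(Action(harms_human=False, disobeys_order=False, endangers_self=False)))  # True
```

The hard part, of course, is not this mechanism but deciding what rules belong in the list for a given area of work.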