Robots in Security and Law Enforcement
Back in June we wrote about the implications that changes in technology and progress within the field of Artificial Intelligence may have for the security world.
We also touched briefly on the legal use of force with regards to new technology. Fast forward a month to July, and we witnessed the Dallas Police use a bomb disposal robot to take down Micah Johnson after he shot dead five members of their department.
Whilst this is not the first time a robot has been involved in the killing of a person (think weaponised drones), it is the first time the police have employed such a tactic. The robot in question was a 790 lb Remotec Andros Mark VA-1, normally utilised for bomb disposal duties. Its target on the 8th of July was a former US Army engineer who, whilst he had served in Afghanistan, had never served in a front-line combat role.
After shooting the five officers dead, Johnson took cover in an area that made any attempt to detain him peacefully impossible, and he threatened to detonate explosive devices and kill more officers. The Chief of Police, David Brown, then instructed the SWAT team on the scene to get creative.
Necessity being the mother of invention, the SWAT team modified the robot with a simple explosive charge taped to its arm. The robot was driven to Johnson, the charge was detonated, and the stand-off was ended.
In this situation we can view the robot as just another tool in the police department's arsenal, but the fact that it was a robot and not a human that ended the life of Micah Johnson does not sit easily with many. Perhaps they worry that this is the thin end of the wedge: a police force that is becoming more militarised, in a world where the roles of humans are becoming ever more automated.
Perhaps people are worried about the ethics of a robot carrying out a killing without trial. Some have speculated that the robot could instead have been fitted with a device to incapacitate rather than kill, such as gas or a Taser. Either way, in this situation a shoot-to-kill order was clearly legally justified by the behaviour of the target, and it is likely that in the future we will see more robotic devices carrying out such roles.
The question is: as AI and learning algorithms improve, and these devices begin to make decisions for themselves, where will they see the threat? Where will they see the target? We can only hope that super-intelligent machines don't start to implement their own learned rules of engagement that see us as the bad guys.