OpenAI connected to a rifle
OpenAI has cut off a developer who built a device that could respond to ChatGPT queries to aim and fire an automated rifle. The device went viral after a video on Reddit showed its developer reading firing commands aloud, after which a rifle beside him quickly began aiming and firing at nearby walls.
This kind of robotic automation has been possible for some time, and its components are readily available to hobbyists around the world. The only novel element is the voice control, which isn't even that novel by ChatGPT standards. The reality, as we are seeing in Ukraine, is that drones are already being used in active warfare, and it's only a small stretch further to imagine soldiers building something like this to defend their positions.
This obviously raises a lot of ethical and philosophical questions. Are these weapons, or defenses like barbed wire and electric fences? Are they illegal? What makes them illegal? What makes them a war crime? These sorts of devices even have their own classification, lethal autonomous weapons, and many of them are not actually illegal in war.
In civil law, there is the famous Katko v. Briney case of a booby-trapped shotgun. It isn't the automated, unattended, or indiscriminate nature of such a device that makes it illegal. It's the fact that deadly force can only be used to defend a human life in imminent peril. A robot, or even a homeowner, cannot use deadly force to defend property, even if the intruder is on the property illegally or committing other illegal acts such as theft. But what if the autonomous system could determine when someone was about to kill? What if it's a mob with weapons approaching you?
We're entering a brave new world, one in which our ethics and laws are going to have a lot of catching up to do.
Articles: