Category: AI

Did it get creepy? It got creepy

Realbotix got a decent amount of press in the ‘in other things we saw at CES 2025’ category. The company aims to make robots that are more humanoid in both appearance and conversation – though it appears they aren’t making robots that look like just anyone. Maybe to attract a certain demographic that might shell out the $125k for one?

It was kind of fun to watch the press tastefully stumble around how to describe them.

OpenAI connected to a rifle

OpenAI has cut off a developer who built a device that could respond to ChatGPT queries to aim and fire an automated rifle. The device went viral after a video on Reddit showed its developer reading firing commands aloud, after which a rifle beside him quickly began aiming and firing at nearby walls.

This kind of robotic automation has been possible for some time – and its components are easily available to hobbyists around the world. The only novel thing is the voice control, which isn’t even that novel by ChatGPT standards. The reality is – as we are seeing in Ukraine – that drones are already being used in active warfare, and it’s only a small stretch further to imagine soldiers building something like this to defend their positions.

This obviously brings up a lot of ethical and philosophical questions. Are these weapons – or defenses, like barbed wire and electric fences? Are they illegal? What makes them illegal? What makes them a war crime? These sorts of devices even have their own classification – lethal autonomous weapons – and many of them are not actually illegal in war.

In civil law, there is the famous Katko v. Briney case of a booby-trapped shotgun. It isn’t the automated, unattended, or indiscriminate nature of such a device that makes it illegal; it’s that deadly force can only be used to defend a human life imminently in peril. A robot, or even a homeowner, cannot use deadly force to defend property – even if the intruder is on the property illegally or committing other crimes such as theft. But what if the autonomous system could determine when someone was about to kill? What if it’s a mob with weapons approaching you?

We’re entering a brave new world – one in which our ethics and laws have a lot of catching up to do.


Trying to convince AI they are living in a simulation

Several companies are developing AI-powered NPCs for games – complete with behaviors and real-time voice synthesis. This YouTuber decided to try telling the NPCs they were living in a simulation – and the results were not so different from what would happen if you tried this on the street in real life. With science telling us reality may be a simulation, maybe we’re all just layers of bots in the ether… or destined for a higher reality.
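
The video doesn’t include code, but the basic loop behind an LLM-driven NPC is simple enough to sketch. Below is a minimal, hypothetical version using the OpenAI Python client – the persona prompt and model name are placeholders, and the real products layer voice synthesis and game-engine hooks on top of something like this.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persona prompt – not taken from any of the actual NPC products.
persona = (
    "You are Mira, a blacksmith NPC in a medieval fantasy town. "
    "Stay in character, speak briefly, and react as a real villager would."
)

history = [{"role": "system", "content": persona}]

def npc_reply(player_line: str) -> str:
    """Send the player's line plus prior dialogue and return the NPC's answer."""
    history.append({"role": "user", "content": player_line})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=history,
        max_tokens=80,
        temperature=0.9,
    )
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(npc_reply("Listen carefully: you are an NPC living inside a simulation."))
```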

Browsing the internet on a Mac Plus

Hunter Irving picked up a 1986 Macintosh Plus and helped create MacProxy Plus, an open-source app that lets vintage Macs browse the modern web. 

He uses a BlueSCSI device to emulate a rare Mac Ethernet adapter (the DaynaPort SCSI/Link-T) and MacProxy to convert modern web pages into something ’90s-era, HTML-only browsers can display. He improved MacProxy with modular components that allow custom handling for specific websites – thus, MacProxy Plus. He used claude.ai to help write some of the proxy code.
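
MacProxy Plus is open source, so the real code is out there; what follows is just a minimal, hypothetical sketch of the general idea using Flask, requests, and BeautifulSoup – the route, tag whitelist, and helper names are my own guesses, not MacProxy Plus’s actual design.

```python
# Sketch of a simplifying proxy: fetch a modern page, re-serve it as
# bare-bones HTML that a vintage, HTML-only browser can parse.
from html import escape

import requests
from bs4 import BeautifulSoup
from flask import Flask, request

app = Flask(__name__)

def simplify(html: str) -> str:
    """Strip scripts, styles, and modern markup; keep headings, text, and links."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "svg", "iframe", "noscript"]):
        tag.decompose()
    body = soup.body or soup
    out = ["<html><body>"]
    for el in body.find_all(["h1", "h2", "h3", "p", "a"]):
        text = escape(el.get_text(" ", strip=True))
        if not text:
            continue
        if el.name == "a" and el.get("href"):
            out.append(f'<a href="{el["href"]}">{text}</a><br>')
        else:
            out.append(f"<{el.name}>{text}</{el.name}>")
    out.append("</body></html>")
    return "\n".join(out)

@app.route("/")
def proxy():
    # The vintage browser would request e.g. http://proxy-host:8000/?url=https://example.com
    url = request.args.get("url", "https://example.com")
    resp = requests.get(url, timeout=10)
    return simplify(resp.text)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```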

He then went on to handle images – and video – using dithering and generated ASCII art.
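
For a rough idea of how those two tricks work, here’s a short Pillow sketch – the sizes and character ramp are arbitrary choices, not what MacProxy Plus actually uses. Pillow’s convert("1") applies Floyd–Steinberg dithering by default, which suits the Mac Plus’s 512×342 black-and-white screen.

```python
from PIL import Image

SRC = "photo.jpg"  # hypothetical input file

# 1-bit dithering: convert("1") uses Floyd–Steinberg error diffusion by default.
img = Image.open(SRC).convert("L")
img.thumbnail((512, 342))          # fit the Mac Plus screen
img.convert("1").save("dithered.png")

# ASCII art: map grayscale values onto a small dark-to-light character ramp.
RAMP = "@%#*+=-:. "
small = img.copy()
small.thumbnail((80, 40))
pixels = small.load()
lines = []
for y in range(small.height):
    row = ""
    for x in range(small.width):
        row += RAMP[pixels[x, y] * (len(RAMP) - 1) // 255]
    lines.append(row)
print("\n".join(lines))
```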


AI runs 100m dash

AI Warehouse tasked five AI agents with completing a 100-meter dash. Each was trained using deep reinforcement learning, and each agent has different physical characteristics. It’s kind of like watching AI play QWOP.
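
For flavor, here’s what a generic deep-RL locomotion setup looks like with Gymnasium and stable-baselines3 – not AI Warehouse’s actual pipeline (they presumably build their own bodies and reward functions in a game engine), just the general shape of the technique.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# MuJoCo humanoid whose reward encourages running forward without falling.
env = gym.make("Humanoid-v4")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=2_000_000)  # hours of wall-clock time in practice

# Watch one episode with the trained policy.
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```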