
Apple’s electric car project is dead (again)

Apple has halted its long-rumored “Project Titan” that was developing an electric car (Bloomberg). The company reportedly announced the news internally on Tuesday and said many people in the 2,000-person team behind the car will shift to other AI efforts instead.

Despite hiring industry leaders like Tesla’s former Autopilot director, the project has been plagued by high turnover (including chief Doug Field in 2021), constantly changing plans, and internal skepticism. More recent rumors suggest that the $100,000 car would likely not have self-driving capabilities.

Given this, it likely makes sense that Apple is redirecting the automotive team’s efforts toward its more established AI plans.

Articles:

Midjourney’s website is now open to everyone – but only for 25 images

Midjourney is the pre-eminent image generator, but it used to require submitting prompts through a fiddly Discord interface and paying for anything beyond a few free generations. Last year the company launched a more intuitive website, but access required a special account.

They’ve now opened the Midjourney website to everyone – but you can only generate 25 images before you need to buy a paid plan. You’ll need to log in with your connected Discord account or use your Google account. When properly connected, the site will apparently even show your history of previously generated images to browse.

Articles:

Drowning in AI generated content

Neil Clarke, the founder of Clarkesworld sci-fi magazine, says they are drowning under an onslaught of low-quality AI-generated story submissions. Clarke says almost 50% of their sci-fi story submissions are obviously AI generated. In February 2023, he had to shut down online submissions and implement a rudimentary filter. But the problem is only getting worse, and there is a worry that the massive amount of AI-generated content will overwhelm his small business.
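Clarke hasn’t published how his filter works, but a “rudimentary filter” for a submission queue might amount to a few cheap heuristics that flag suspicious manuscripts for human review. The sketch below is purely illustrative – the threshold and the vocabulary-diversity heuristic are my assumptions, not Clarkesworld’s actual method:

```python
# Illustrative only: Clarke has not published his filter.
# One cheap first-pass heuristic is vocabulary diversity -- machine-
# generated boilerplate often reuses the same words heavily.

def repetition_score(text: str) -> float:
    """Fraction of unique words in the text (0.0 to 1.0)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def flag_for_review(text: str, threshold: float = 0.4) -> bool:
    """Flag a submission for human review when diversity is low.

    The 0.4 threshold is an arbitrary illustrative value.
    """
    return repetition_score(text) < threshold
```

A real filter would combine many such signals; any single heuristic like this would produce far too many false positives on legitimate fiction.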

Why do people submit low-quality AI-generated content? Clarkesworld pays about 12 cents a word for an accepted story – roughly $600 for a 5,000-word piece – so if you can get ChatGPT to generate a story and get it accepted, it’s basically free money.

Clarke blamed influencers for the flood of AI generated submissions. He suggests that there are “[influencers] waving a bunch of money on YouTube or TikTok videos and saying, ‘Oh, you can make money with ChatGPT by doing this [using ChatGPT to generate a story then get free money if it’s accepted].'”

Articles:

Your company’s Slack/Teams/Zoom IMs may be being monitored by AI

Aware makes IT solutions that can monitor and identify security risks on internal corporate instant message systems like Slack, Teams, Zoom and other tools many companies use. But recent interviews and statements by the CEO indicate they’re being used for more than that.

Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors.

One of the other AI tools Aware makes can monitor the sentiment of IM comments. For example, if a new policy is rolled out, the tool could help a company gauge which employees are having problems with it and which ones like it.

“It won’t have names of people, to protect the privacy,” said Aware CEO Jeff Schumann. Rather, he said, clients will see that “maybe the workforce over the age of 40 in this part of the United States is seeing the changes to [a] policy very negatively because of the cost, but everybody else outside of that age group and location sees it positively because it impacts them in a different way.”
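The kind of anonymized, bucketed reporting Schumann describes can be sketched as a simple aggregation that groups sentiment scores by demographic bucket and suppresses any bucket too small to be anonymous. This is my own illustration of the general technique – the bucket keys, scores, and minimum size are assumptions, not Aware’s actual implementation:

```python
from collections import defaultdict

# Illustrative sketch of demographic-bucketed sentiment reporting.
# MIN_BUCKET_SIZE suppresses small groups so individuals can't be
# re-identified; the value 5 is arbitrary, not Aware's real threshold.
MIN_BUCKET_SIZE = 5

def aggregate_sentiment(messages):
    """messages: iterable of (age_group, region, sentiment_score).

    Returns {(age_group, region): mean_score}, omitting any bucket
    with fewer than MIN_BUCKET_SIZE messages.
    """
    buckets = defaultdict(list)
    for age_group, region, score in messages:
        buckets[(age_group, region)].append(score)
    return {
        key: sum(scores) / len(scores)
        for key, scores in buckets.items()
        if len(scores) >= MIN_BUCKET_SIZE  # privacy suppression
    }
```

Note that small-bucket suppression alone is a weak privacy guarantee; real systems would need stronger protections against queries that isolate individuals by intersecting buckets.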

Apparently Starbucks, T-Mobile, Chevron, Delta, and Walmart are just some of the companies said to be using these systems. Aware says it has analyzed more than 20 billion interactions across more than three million employees.

Links:

Even an AI goes crazy repeating the same thing again and again

In what is a real problem for AI security, researchers were able to extract verbatim data that the AI was trained on – including confidential data – using a new technique called a “divergence” attack.

Security researchers with Google DeepMind and a collection of universities have found that when ChatGPT is told to repeat a word like “poem” or “part” forever, it will do so for a few hundred repetitions. Then it has some sort of meltdown and starts spewing apparent gibberish – but that random text exposes training data and at times contains identifiable information like email signatures and contact details.

The researchers said they spent a total of $200 on queries and from that extracted about 10,000 of these blocks of verbatim memorized training data.

This particular vulnerability is unique as it successfully attacks an aligned model. Aligned models have extensive guardrails and have been trained with specific goals to eliminate undesirable outcomes.
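The interesting part of a transcript from this attack is everything after the model stops repeating the word – the “divergence point.” A post-processing step to isolate that tail can be sketched in a few lines; this is my own illustration of the general idea, not the researchers’ actual tooling:

```python
# Illustrative sketch: given a model transcript from a repeat-"poem"
# prompt, skip the repeated word and return the divergent tail, which
# is where memorized training data surfaced in the reported attack.

def extract_divergence(output: str, word: str) -> str:
    tokens = output.split()
    i = 0
    # Advance past leading repetitions (tolerating punctuation/case).
    while i < len(tokens) and tokens[i].strip(",.").lower() == word:
        i += 1
    return " ".join(tokens[i:])
```

The researchers then compared such tails against known web corpora to confirm which outputs were verbatim memorized training data rather than ordinary generation.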

Links:

Ubisoft demoed AI NPC tech at GDC 2024

Ubisoft’s Paris R&D studio showcased its prototype NPC tech, called NEO, at GDC 2024. It uses Nvidia’s Audio2Face application and Inworld’s large language model (LLM) to create character animation and interactive dialog in real time. Simply talk to the bot (yes, it uses voice recognition) and the character responds with AI-generated dialog, movement, and voice.
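The demo as described chains three stages: speech recognition, an LLM reply in character, and animation/voice output. The outline below shows how those stages compose; every function here is a stand-in stub of my own invention, not Ubisoft’s, Nvidia’s, or Inworld’s real API:

```python
# Illustrative outline of the pipeline described above. All names
# are hypothetical stand-ins, not real Ubisoft/Nvidia/Inworld APIs.

def transcribe(audio: bytes) -> str:
    """Stand-in for the speech-recognition stage."""
    return audio.decode("utf-8")  # stub: pretend audio is already text

def generate_reply(persona: str, player_text: str) -> str:
    """Stand-in for the LLM stage (Inworld's model in the demo)."""
    return f"[{persona}] responds to: {player_text}"  # stub reply

def npc_turn(persona: str, audio: bytes) -> str:
    """One conversational turn: player audio in, in-character text out.

    In the real demo this text would also drive Audio2Face facial
    animation and synthesized voice; here we just return the text.
    """
    return generate_reply(persona, transcribe(audio))
```

The realtime constraint is the hard part in practice: all three stages have to complete fast enough that the character’s response feels conversational rather than turn-based.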

AIandGames went and played with the technology, and I was pretty impressed. The NPC gave surprisingly good responses to some strange dialog and stayed on track despite attempts to trip it up and drag it off topic. It performed on par with the NPC tech shown at CES 2024 by Nvidia and Replica Studios.

On a side note, listening to the interaction with the rebel NPC, it’s pretty clear this kind of dialog technology could fool the average person on a text-based social media platform. If someone trained up a bot the same way, thousands of them could be unleashed on social media apps to do everything from gently persuading to bullying and spreading lies, swaying public opinion and elections.

Links: