McDonald's ended AI drive-thru tests

The fast-food giant ended a test run of its AI drive-thru technology partnership with IBM in more than 100 restaurants. The so-called Automated Order Taker will be shut off no later than July 26, 2024.

The global AI partnership began in 2021. The combination of technologies from the two companies aimed to simplify and speed up operations with voice-activated ordering.

Two sources familiar with the technology told CNBC that among its challenges, it had issues interpreting different accents and dialects, which affected order accuracy.

Article:

Zoom in! With AI

Extreme zoom-in videos have gotten a little publicity through videos made by Jesse Martin.

Now Google and the University of Washington have created a text-to-image model for extreme semantic zooms, enabling consistent multi-scale content creation.

Check out the research paper here.

This kind of technique, which carries the generation over from one topic to the next, could be very useful for maintaining continuity and temporal stability.

Articles:

Syntilay 3D-printed shoes

The idea of 3D printing AI generated shoes is not new. Nike has debuted AI generated and 3D printed shoes, and others like Lightspray are also creating completely automated manufacturing methods.

Now enter Syntilay, the world’s first entirely AI-designed and 3D-printed thermoplastic polyurethane shoe. Syntilay used Midjourney to create the image, then ran the image through Vizcom to generate the 3D model data. Generative AI was used one more time to apply patterns to the final design and add some character. Each order is then sent to the printer for production.

You can own a pair for $149.99.

The 89-year-old Joe Foster, who co-founded Reebok 60 years ago, is so interested in the idea that he is now helping to launch Syntilay.

Articles:

Design logos that have words with DALL·E 3 and ChatGPT

While AI may have trouble with the accuracy of its information (AI is the know-it-all neighbor), it is a great way to brainstorm a variety of different ideas quickly. Getting it to generate images that contain correctly spelled words, however, can be hard.

Julian Horsey and Metricsmule give you prompts that demonstrate how to use ChatGPT combined with DALL·E 3 to generate logos for your company – that include correctly spelled words.

Article:

Google report on using AI for internal code migrations

Google published a report on its effort to migrate code to the latest dependencies – an often thankless task fraught with risk. Google’s code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java’s standard java.time package. The 32-bit IDs were particularly rough because they were often generically defined types that were not easily searchable.
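
The ID-widening problem can be pictured with a hypothetical sketch (the `CampaignId` type and the value below are illustrative, not from Google's codebase): an ID wrapped in a 32-bit int silently caps out at 2^31 − 1, so the field has to be widened to a 64-bit long everywhere it appears.

```java
// Hypothetical sketch of a 32-bit to 64-bit ID widening.
public class IdMigrationSketch {
    // Before: the wrapped field was a 32-bit int, capping IDs at 2^31 - 1.
    //   static final class CampaignId { final int value; ... }
    // After: the field is widened to a 64-bit long.
    static final class CampaignId {
        final long value; // widened from int
        CampaignId(long value) { this.value = value; }
    }

    public static void main(String[] args) {
        // 4 billion would overflow a 32-bit int but fits comfortably in a long.
        CampaignId id = new CampaignId(4_000_000_000L);
        System.out.println(id.value); // prints 4000000000
    }
}
```

Because such wrapper types are generically defined, a plain text search finds far too many hits – which is why this kind of change resists grep and calls for AST-aware tooling.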

They used a collection of AI tools as well as manual code reviews and touch-ups to achieve their goal. They emphasize that LLMs should be viewed as complementary to traditional migration techniques that rely on Abstract Syntax Trees (ASTs), grep-like searches, Kythe, and custom scripts because LLMs can be very expensive.

The results?

With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. For the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time.
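
The Joda-to-java.time swap is largely a mechanical type-for-type substitution. A minimal sketch, assuming a simple days-between calculation (the method and values here are illustrative, not taken from Google's report):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;

public class JodaMigrationSketch {
    // Before (Joda-Time):
    //   DateTime start = new DateTime(2024, 1, 1, 0, 0, DateTimeZone.UTC);
    //   int days = Days.daysBetween(start, start.plusDays(30)).getDays();
    // After (java.time):
    public static long daysBetween() {
        ZonedDateTime start = ZonedDateTime.of(2024, 1, 1, 0, 0, 0, 0, ZoneId.of("UTC"));
        ZonedDateTime end = start.plusDays(30);
        return ChronoUnit.DAYS.between(start, end);
    }

    public static void main(String[] args) {
        System.out.println(daysBetween()); // prints 30
    }
}
```

Each individual rewrite is simple, but applying it consistently across hundreds of files is exactly the repetitive, low-risk-per-edit work where LLM assistance plus human review pays off.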

Links:

No more software engineer hiring in 2025?

Generative AI is reaching greater and greater heights. Google has shown that it can migrate software up to 80-90% faster using AI-assisted coding tools. More and more companies are finding that AI-assisted software development can dramatically help with certain tasks, and predict AI-assisted development will soon sweep the industry.

Now the CEOs of Salesforce, Microsoft, Replit, Meta and others are all saying they might not hire many more software engineers very soon.

Salesforce CEO Marc Benioff has gone so far as to say Salesforce is looking at essentially freezing hiring for software engineers in 2025 owing to agentic AI:

I think in engineering this year at Salesforce, we’re seriously debating maybe we’re not gonna hire anybody this year because we’ve seen such incredible productivity gains because of the agents that work side-by-side with our engineers, making them more productive. And we can all agree, software engineering has become a lot more productive in the last two years with these basically new models.

Mark Zuckerberg also piped in with similar sentiments:

In 2025, AI systems at Meta and other companies will be capable of writing code like mid-level engineers. At first, it’s costly, but the systems will become more efficient as time passes. Eventually, AI engineers will build most of the code and AI in apps, replacing human engineers.

Microsoft Copilot users can now create new agents through Copilot Studio and integrate them into Copilot. Microsoft’s CEO said:

The way to conceptualize the world going forward is every one of us doing knowledge work will use Copilot to do our work and we will create a swarm of agents to help us with our work and drive the productivity of our organizations.

Replit’s CEO Amjad Masad went even further, saying that ‘We don’t care about professional coders anymore’. Replit has grown its revenue five-fold in the last six months thanks to artificial-intelligence capabilities that enabled its new product, “Agent,” a tool that can write a working software application from nothing but a natural-language prompt. “It was a huge hit,” Masad said.

So what’s a soon-to-be out-of-work software engineer to do?

A recent report by the World Economic Forum (WEF) and AI expert Tak Lo both estimate that about 92 million workers will be displaced by AI, but claim 170 million new jobs will be created.

I always approach these reports skeptically. Just because a report claims AI will create new jobs doesn’t mean those who lost their jobs have the skills or capability to take one of them – jobs that often require very different skills or appeal to very different people. In fact, the report predicts 39% of current skill sets will become outdated by 2030.

The report indicates that manual labor jobs will be the safest: construction, farming, general labor, medical/nursing, truck driving, etc. It claims that knowledge workers should be largely safe, but that’s clearly not what technology leaders are saying – implying this rosy report is probably not right.

Cognitive decline?

Additionally, a study by Uplevel found that developer productivity hasn’t improved with AI coding assistants, and their code has become buggier.

Research is also starting to show that AI may be contributing to a decline of critical thinking skills:

The effects of AI on cognitive development are already being identified in schools across the United States. In a report titled, “Generative AI Can Harm Learning”, researchers at the University of Pennsylvania found that students who relied on AI for practice problems performed worse on tests compared to students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.

Greg Isenberg had an interesting interaction with a young Stanford grad who claimed he was forgetting words now due to his near-constant use of ChatGPT to finish his thoughts. https://twitter.com/gregisenberg/status/1869202002783207622

The new skills

There’s also a great article by Dev that outlines the mindset and skills developers will need to develop to stay relevant.

Articles:

Did it get creepy? It got creepy

Realbotix got a decent amount of press in the ‘other things we saw at CES 2025’ category. The company aims to make robots more humanoid in both appearance and conversation – though it appears they aren’t making robots that look like just anyone. Maybe to attract a certain demographic that might shell out the $125k for one?

It was kind of fun to watch the press tastefully stumble around how to describe them.

OpenAI connected to a rifle

OpenAI has cut off a developer who built a device that could respond to ChatGPT queries to aim and fire an automated rifle. The device went viral after a video on Reddit showed its developer reading firing commands aloud, after which a rifle beside him quickly began aiming and firing at nearby walls.

This kind of robotic automation has been possible for some time – and its components are easily available to hobbyists around the world. The only novel thing is the voice control, which isn’t even that novel by ChatGPT standards. The reality is – as we are seeing in Ukraine – that drones are being used in active warfare, and it’s only a small stretch further to imagine soldiers building something like this to defend their positions.

This obviously brings up a lot of ethical and philosophical questions. Are these weapons – or defenses, like barbed wire and electric fences? Are they illegal? What makes them illegal? What makes them a war crime? These sorts of devices even have their own classification – lethal autonomous weapons – and many of them are not actually illegal in war.

In civil law, there is the famous Katko v. Briney case of a booby-trapped shotgun. It isn’t the automated, unattended, or indiscriminate nature of such a device that makes it illegal. It’s the fact that deadly force can only be used to defend a human life imminently in peril. A robot, or even a homeowner, cannot use deadly force to defend property – even if the person is on the property illegally or performing other illegal acts (theft). But what if the autonomous system could determine when someone was about to kill? What if it’s a mob with weapons approaching you?

We’re entering a brave new world – one in which our ethics and laws have a lot of catching up to do.

Articles: