The 15-second audio clip sounds like a muffled version of Pink Floyd’s Another Brick in the Wall played underwater. Except Pink Floyd didn’t perform any of the music in the clip. Instead, the track was captured by a team of researchers at the University of California at Berkeley, who looked at the brain activity of more than two dozen people who listened to the song.
That data was then decoded by a machine learning model and reconstructed into audio — marking the first time researchers have been able to re-create a song from neural signals.
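The Berkeley team's actual pipeline is far more sophisticated, but the core idea – fit a regression model that maps recorded neural activity to the bins of an audio spectrogram, then turn the predicted spectrogram back into sound – can be sketched on synthetic data. Everything below (data shapes, the mixing model, the ridge penalty) is illustrative, not the study's method:

```python
# Minimal sketch of spectrogram decoding from neural signals.
# Synthetic data stands in for real brain recordings: we fit one
# ridge-regression mapping from electrodes to spectrogram bins,
# then check how well the decoded spectrogram matches the original.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_electrodes, n_freq_bins = 1000, 64, 32

# Pretend spectrogram (what the listener heard) and neural activity
# that is a noisy linear mixture of it.
spectrogram = rng.random((n_time, n_freq_bins))
mixing = rng.standard_normal((n_freq_bins, n_electrodes))
neural = spectrogram @ mixing + 0.1 * rng.standard_normal((n_time, n_electrodes))

# Ridge regression via the normal equations: W = (X^T X + aI)^-1 X^T Y
alpha = 1.0
XtX = neural.T @ neural + alpha * np.eye(n_electrodes)
W = np.linalg.solve(XtX, neural.T @ spectrogram)

reconstruction = neural @ W

# Correlation between the true and decoded spectrograms
corr = np.corrcoef(spectrogram.ravel(), reconstruction.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

With clean synthetic data the decoded spectrogram correlates almost perfectly; real neural recordings are far noisier, which is why the reconstructed song sounds muffled.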
A University of Montana study had GPT-4 take the Torrance Tests of Creative Thinking, a well-known tool used for decades to assess human creativity. The researchers then submitted the AI’s answers alongside tests taken by 24 UM students in Dr. Erik Guzik’s entrepreneurship and personal finance classes.
These scores were then compared with those of 2,700 college students nationally who took the TTCT in 2016. All submissions were scored by Scholastic Testing Service, which didn’t know one of them came from ChatGPT.
The result? The AI’s answers placed in the top percentile for fluency – the ability to generate a large volume of ideas – and for originality – the ability to come up with new ideas. It slipped a bit – to the 97th percentile – for flexibility, the ability to generate different types and categories of ideas.
“For ChatGPT and GPT-4, we showed for the first time that it performs in the top 1% for originality,” Guzik said. “That was new.”
FoxMaster combines a wide variety of AI tools – vision, object recognition, ChatGPT, and others – to give Lara Croft the AI treatment. She can not only traverse the game, but also has a personality and narrates what is going on.
FoxMaster admits some of this is incomplete and may be stretched a bit, but his breakdown of navigation, identification, and character personality into discrete problems is very interesting.
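For flavor, this style of game agent can be imagined as a perceive → decide → act loop. The version below is a hypothetical sketch with stubbed-out models – the real project wires actual vision models and the ChatGPT API in where the stubs sit:

```python
# Hypothetical sketch of a perceive -> decide -> act game-agent loop.
# The vision and language pieces are stubs; a real version would run
# each frame through vision/object-recognition models and prompt an
# LLM for both an action and an in-character narration.
from dataclasses import dataclass

@dataclass
class Observation:
    objects: list      # labels from an object-recognition model
    description: str   # scene summary from a vision model

def perceive(frame) -> Observation:
    # Stub: pretend the vision models saw a ledge ahead.
    return Observation(objects=["ledge", "medkit"],
                       description="a dark tomb with a ledge ahead")

def decide(obs: Observation) -> tuple:
    # Stub: a real version would prompt an LLM with the observation.
    if "ledge" in obs.objects:
        return "jump", "Lara eyes the ledge and leaps."
    return "walk_forward", "Lara presses on into the dark."

def step(frame):
    obs = perceive(frame)
    action, narration = decide(obs)
    return action, narration

action, narration = step(frame=None)
print(action, "-", narration)
```

Splitting the problem this way is exactly the kind of decomposition FoxMaster describes: navigation and identification feed the loop, while the narration layer supplies the personality.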
Beginning in early July, mirrored spheres began popping up in cities across the world. They’re not public art but 6.2-pound biometric imaging devices designed to scan your eyeballs and capture your irises.
No, it’s not a joke. The company behind this is Worldcoin, founded three years ago by Alex Blania and Sam Altman, CEO of OpenAI. Their intent is to create “a new identity and financial network connecting billions of people in the age of A.I.” via a privacy-preserving digital identity called World ID and a digital currency, WLD. World ID is supposed to be a ‘perfectly safe’ global identity protocol that lets individuals prove their personhood online in an era of rampant A.I. deepfakes. WLD is a tool to build an “A.I.-funded UBI [universal basic income].”
This should all sound familiar, because they’re the same arguments made for digital currencies like Bitcoin. The company stresses that it has security all along the trust chain, but it reads more like a dystopian nightmare in which everyone has been cataloged and identified by an unknown party with unknown motives. This is a third-party agent with unknown motives, unknown technical expertise, and unknown longevity to keep your biometric data safe. You don’t really know which government, people, or company is behind it all, what their values are, or what legal protections you would have. It’s a terrifying black box to hand your biometric data to. I vote a hard no.
In my opinion, this is a violation of privacy and an extremely bad idea. Even if they’re secure today, are they really ready to protect your biometric data – collected surreptitiously, without consent – for all time, and never sell it? It seems pretty unlikely: virtually every country and company that has promised this before hasn’t lasted 10 years without being hacked, leaking, being forced to turn over the data, or flat-out selling you and your data to the highest bidder when bought out or bankrupt. Ready for North Korea, China, or Putin to have access to your World ID and any money you put in WLD?
So, maybe be sure to wear some good sunglasses when you find a shiny orb lying in the street.
At least some folks are starting to investigate – an investigation I suspect will end badly for Worldcoin, considering the EU’s restrictive stance on the collection and storage of biometric data.
AI is here. Netflix put out a short show called Unknown: Killer Robots that is largely on-point. If anything, it’s not as up to date as reality – which is a little scary.
There’s really no putting the AI genie back in the bottle, despite the attempts of artists, politicians, and academic pundits. AI has demonstrated it can teach robots how to walk and fly better than any static system, and it can create art faster – and at nearly the same quality – as real artists. The question is: can we control and limit it in a reasonable way, or will it destroy us?
To that point, the show demonstrates how countries increasingly see AI in military use. We’re already seeing off-the-shelf drones used extensively in the war in Ukraine – from tiny drones scouting and delivering hand grenades to ad-hoc drones with small plane engines delivering bombs. The show also covers the counter-intuitive reality that developing military technology to save lives has almost always led to even greater casualties – the Gatling gun, for example, was designed to put fewer soldiers on the battlefield but led to massive casualties among mismatched soldiers in WW1.
Many of the topics and data covered in the show are actually old news – the reality is that AI-enabled systems are around 5-7 years ahead of what’s shown, which is almost half the length of time the modern field has existed.
Still, they cover most major bases:
AI-flown drones that enter hostile buildings to map and scope them without risking troops in extremely deadly house-clearing fights.
Wildfires and the fog of war – multi-system battle managers
The stakes couldn’t be higher. While politicians argue the ethics, the reality is that when forces are pushed to their breaking point and about to lose, just about nothing is off the table – especially for ad-hoc and terrorist forces, which have perpetrated chemical, biological, and conventional weapons attacks (from bombings to shootings).
Some of the better quotes:
It will be like people on horseback charging tanks. Forces with AI will absolutely dominate forces without AI
There is no prize for second place in war.
I think people have largely underestimated the peace we enjoy today due to overwhelming military dominance we’ve had over the last 70 years.
One of the big reasons Google and other web services develop their own custom chips is that general-purpose CPUs are flexible but typically draw a lot of power. That power costs a lot of money in electricity and cooling in huge data centers. So why buy chips full of stuff you don’t need when you can build your own – and save millions of dollars a year per data center in power and cooling costs?
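A back-of-envelope calculation shows why this matters at scale. All the numbers below are made up for illustration (chip wattage, fleet size, electricity price, PUE) – they are not Google’s figures:

```python
# Illustrative data-center power-cost arithmetic (all inputs hypothetical).
def annual_power_cost(watts_per_chip, n_chips, usd_per_kwh, pue=1.5):
    """Yearly electricity cost; PUE folds cooling overhead into the bill."""
    kw = watts_per_chip * n_chips / 1000
    return kw * 24 * 365 * usd_per_kwh * pue

# Hypothetical fleet: 50,000 chips at $0.08/kWh.
general_purpose = annual_power_cost(watts_per_chip=300, n_chips=50_000,
                                    usd_per_kwh=0.08)
custom_asic = annual_power_cost(watts_per_chip=150, n_chips=50_000,
                                usd_per_kwh=0.08)
print(f"savings: ${general_purpose - custom_asic:,.0f}/year")
```

Even in this toy example, halving per-chip power saves roughly $7.9M a year in one fleet – which is why a ~2-3x efficiency gain, like the one Google claims for TPU v4 below, is worth designing silicon for.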
In just six years, Google has designed and built four generations of increasingly capable AI data center chips. They had somewhat humble beginnings but have grown steadily more powerful. Google has just published information about TPU version 4.
a nearly 10x leap forward in scaling ML system performance over TPU v3
boosting energy efficiency ~2-3x compared to contemporary ML DSAs, and
reducing CO2e as much as ~20x over these DSAs in typical on-premise data centers
Even crazier, it’s the first system to use purely optical switching.
TPU v4 is the first supercomputer to deploy a reconfigurable OCS (optical circuit switch). OCSes dynamically reconfigure their interconnect topology and are much cheaper, lower power, and faster than Infiniband. The figure below shows how an OCS works, using two MEMS arrays. No optical-to-electrical-to-optical conversion or power-hungry network packet switches are required, saving power.
I like his thinking: we already have enough computers – what we need is more personality. Where are the kinds of robots we saw as kids: C-3PO, R2-D2, the robot from Lost in Space? So he hacked an Alexa into an old TV with a set of eyes, giving his robot a little of the personality he was looking for.
The question of copyright, lawsuits, and AI is quickly coming to a head.
Creatives from artists to comedians are staging online ‘protests’ and suing various AI companies for copyright infringement. In 2022, ArtStation members staged an online campaign against AI-generated artwork by posting ‘No AI art’ images in their portfolios.
But it doesn’t stop there. Now we can add game developers to the fray.
Recently, Steam devs were seeing their games with AI-generated content blocked from Steam. Valve responded that it was not able to “ship games for which the developer does not have all the necessary rights” or that were “utilizing AI tech.”
In a statement to IGN, Valve spokesperson Kaci Aitchison Boyle clarified the position: developers can use AI technologies in their work with appropriate commercial licenses, but they cannot infringe on existing copyrights.
Aitchison Boyle emphasized that Valve is not attempting to discourage the use of AI; the confusion arose from Valve’s ongoing efforts to incorporate AI technology into its existing review process while ensuring compliance with copyright law.
The Dream Textures add-on for Blender by Carson Katri uses Stable Diffusion to generate textures for a scene. Below is an example based on the prompt “sci-fi abandoned buildings.” The AI-generated results aren’t always perfect, but the process is pretty amazing – not to mention amazingly fast compared to creating textures from scratch.
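On a related note, texture tools usually need generated images to tile seamlessly (Dream Textures has its own seamless option; the code below is not its implementation). One generic trick is an audio-style crossfade: overlap the left edge onto the right edge and fade between them. A sketch, assuming a NumPy image array:

```python
import numpy as np

def crossfade_tileable(tex, blend=16):
    """Return a horizontally tileable texture by cross-fading the left
    edge over the right edge (overlap-and-fade). The output is `blend`
    columns narrower than the input, and its last column sits adjacent
    to the original content its first column came from, so wrapping
    left-to-right is continuous."""
    h, w = tex.shape[:2]
    ramp = np.linspace(0.0, 1.0, blend)              # 0 -> 1 across the overlap
    r = ramp.reshape((1, blend) + (1,) * (tex.ndim - 2))
    # Fade from right-edge content (r=0) into left-edge content (r=1).
    seam = (1 - r) * tex[:, w - blend:] + r * tex[:, :blend]
    return np.concatenate([seam, tex[:, blend:w - blend]], axis=1)

texture = np.random.default_rng(1).random((64, 64, 3))  # stand-in for an AI texture
tiled = crossfade_tileable(texture)
print(tiled.shape)  # (64, 48, 3)
```

The same crossfade applied along the vertical axis makes the texture tile in both directions – handy when a diffusion model’s output is good but its borders don’t quite wrap.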