I know, they’re probably counting on people not checking. They’re likely banking on the particular model selling out and grabbing a sale from a desperate person who needs one that day. You only need to fool one person to make money, after all…
The scam used digitally recreated versions of the company’s chief financial officer, along with other employees, who appeared in a live deepfaked video conference call instructing an employee to transfer funds. The recreated faces and voices moved and spoke convincingly enough to fool the target.
The scheme started with what seemed to be a phishing attempt: an employee in the finance department of the company’s Hong Kong branch received a message, purportedly from the company’s UK-based chief financial officer, instructing them to execute a secret transaction. Despite initial doubts, the employee was convinced by the presence of the CFO and other colleagues in a group video call and made 15 transfers totaling HK$200 million to five different Hong Kong bank accounts.
I’m not great at cutting onions – but man, this guy sure is.
Senpai Kai walks you through all the different Michelin-star-level cuts and when you might use them. His other videos show you how to make Michelin-star versions of various other dishes. It’s interesting to see how it’s all done.
Adversarial machine learning is the branch of machine learning that studies how to trick AI models by feeding them carefully crafted, deceptive inputs that break their algorithms.
Adversarial attacks are starting to attract more and more research, but they had humble beginnings. The first attempts came from protest activists using very simple defacing or face-painting techniques. Dubbed CV Dazzle, the approach sought to thwart early computer vision detection routines by painting faces and objects with bold geometric patterns.
These worked on very early computer vision algorithms, but they are largely ineffective against modern CV systems. The creators of this kind of face painting were largely artists who now describe the effort more as a political and fashion statement than as something that actually works.
More effective approaches
It turns out that you can often fool algorithms in ways that aren’t even visible to human viewers. This paper shows that you can cause AI models to consistently misclassify adversarially modified images. It does this by applying small but intentionally worst-case perturbations to examples from the dataset. The perturbed input causes the model to output an incorrect answer with high confidence. For example, the panda picture below is combined with a perturbation to produce an image that looks fine visually, but that AI models recognize as something else entirely – and with high confidence.
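The technique behind that panda example is the fast gradient sign method (FGSM): take the gradient of the model’s loss with respect to the input pixels, keep only its sign, and nudge every pixel a tiny amount in that direction. A minimal sketch in PyTorch might look like this (the model, input tensor, label, and epsilon value here are placeholders, not anything from the paper’s code):

```python
# A minimal sketch of the fast gradient sign method (FGSM), assuming a PyTorch
# classifier `model`, an input `image` tensor of shape (1, C, H, W) with values
# in [0, 1], and an integer class `label` tensor of shape (1,).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.007):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the direction that most increases the loss,
    # then clamp back into the valid pixel range.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()
```

Because each pixel moves by at most epsilon, the change is essentially invisible to people, yet the model’s prediction can flip and do so with high confidence.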
This isn’t the only technique – there are many more. One of them, the Generative Adversarial Network (GAN), is actually used to improve current AI models: one network tries to fool another, and the failures are used to train the model to be more robust – like working out at a gym or practicing the same thing with many variations.
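At its core, a GAN is just two networks taking turns: a generator produces fakes and a discriminator tries to tell fakes from real data, with each one’s failures becoming the other’s training signal. A rough sketch of one training step (assuming a generator G that maps latent noise to samples and a discriminator D that ends in a sigmoid and outputs one probability per example – none of this is any specific library’s API) might look like:

```python
# A rough sketch of one GAN training step in PyTorch. It assumes a generator G
# that maps latent noise to samples, a discriminator D that ends in a sigmoid
# and outputs shape (batch, 1), and an optimizer for each network.
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, real_batch, latent_dim=100):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)
    fake_batch = G(torch.randn(batch_size, latent_dim))

    # Discriminator step: learn to tell real samples from generated ones.
    opt_D.zero_grad()
    d_loss = (F.binary_cross_entropy(D(real_batch), real_labels)
              + F.binary_cross_entropy(D(fake_batch.detach()), fake_labels))
    d_loss.backward()
    opt_D.step()

    # Generator step: learn to make samples the discriminator calls "real".
    opt_G.zero_grad()
    g_loss = F.binary_cross_entropy(D(fake_batch), real_labels)
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()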
Nightshade and Glaze
This kind of attack isn’t just academic. Some artists currently see themselves in a battle with generative AI algorithms.
Nightshade is a tool artists can use to alter the pixels of an image in a way that fools AI algorithms and computer vision systems while leaving the image looking unchanged to human eyes. If the altered images are scraped into an AI model’s training data, they can be classified incorrectly, which progressively poisons the model’s training.
Glaze is a tool that prevents style mimicry. It computes a set of minimal changes that appear unchanged to human eyes but look to AI models like a dramatically different art style – for example, a charcoal portrait whose glazed version a model sees as a modern abstract portrait. So when someone then prompts the model to generate art mimicking the charcoal artist, they get something quite different from what they expected.
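Neither tool’s exact recipe is shown here, but the general shape of this kind of “cloaking” is an optimization: nudge the pixels so a model’s feature extractor sees something different (a different subject for Nightshade, a different style for Glaze) while keeping the visible change tiny. A hedged sketch of that general idea – explicitly not Glaze’s or Nightshade’s actual algorithm, with hypothetical `feature_extractor`, `artwork`, and `target_style` placeholders – might look like:

```python
# A hedged sketch of the general "cloaking" idea, NOT Glaze's or Nightshade's
# actual algorithm. `feature_extractor`, `artwork`, and `target_style` are
# hypothetical placeholders: image tensors in [0, 1] and a frozen PyTorch model
# that maps images to feature vectors.
import torch
import torch.nn.functional as F

def cloak(feature_extractor, artwork, target_style, budget=0.03, steps=200, lr=0.01):
    delta = torch.zeros_like(artwork, requires_grad=True)
    target_features = feature_extractor(target_style).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the cloaked image's features toward the target style's features.
        loss = F.mse_loss(feature_extractor(artwork + delta), target_features)
        loss.backward()
        opt.step()
        # Keep the per-pixel change within a small, hard-to-see budget.
        with torch.no_grad():
            delta.clamp_(-budget, budget)
    return (artwork + delta).clamp(0.0, 1.0).detach()
```

The budget clamp is what keeps the result looking unchanged to humans; the feature-space loss is what makes it look like something else to a model.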
Buxton Prep and Boarding School in Massachusetts has adopted a novel experiment: completely banning smartphones on campus (for staff and students alike) and replacing them with very simple Light Phones. Light Phones are a model of “dumb” phone that can make normal calls, but everything else is very limited: they have simple black-and-white screens, send texts only slowly, and can’t load modern applications. This lets parents still contact their children, but in ways that don’t cause distraction.
The interesting part: almost everyone (staff and students alike) agrees that the school is much better for it. There are fewer interruptions during class and more meaningful interactions across the board.
The school cannot comment on academic performance changes because it uses a narrative evaluation system, but Peter Beck, the head of the school, says the move has been transformative for school social life:
“People are engaging in the lounges. They are lingering after class to chat,” said Beck, who estimates that he’s now having more conversations than ever at the school. “All these face-to-face interactions, the frequency has gone through the roof.”
Ian Trombulak, who tried a similar thing at another school, says it’s not easy. It starts with what could almost be described as the five stages of grief. When his students learned that cellphones wouldn’t be allowed on a field trip, the news was ‘apocalyptic’.
“They were so upset. They didn’t know how to handle themselves. I was really nervous,” said Trombulak, reliving the drama. But part way through the trip, the kids largely forgot about their phones. “At the end of the first day, sitting around the campfire, they said, ‘We didn’t think about our phones all day.’ That was really cool.”
As for Buxton students, a similar experience was recounted by one high school senior when he found out he would be losing his smartphone:
“When it was announced I practically had a breakdown,” said then senior Max Weeks. And while he’s still not a fan of what he says was a “unilateral” decision to switch to the Light Phone, he said, overall, the experience “hasn’t been as bad as I expected”.
It’s not just anecdotal either – there’s data behind it.
Contrary to those who hype technology in the classroom, Arnold Glass, a professor of psychology at Rutgers University who has researched the impact of cellphones on student performance, says “[students] lose anywhere between a half and whole letter grade if they are allowed to consult their phones in class.”
Nicholas Carr (the technology writer and Pulitzer Prize finalist) says that peer-reviewed studies show the brain interprets printed and digital text differently. People generally read digital text 20-30% slower, and reading hyperlinked text seems to increase the brain’s “cognitive load,” lowering the ability to process, store, and retain information.
Gregory Cannon likes Tetris. He recently wrote an AI that plays NES Tetris extremely well. It’s not perfect, since it uses a search-and-heuristic approach, but boy can it play some Tetris.
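StackRabbit’s actual evaluation function isn’t reproduced here, but search-and-heuristic Tetris bots generally work the same way: enumerate every placement of the current piece, score the resulting board with a hand-tuned heuristic, and pick the best one (often searching a move or two deeper). A toy scoring function with purely illustrative weights might look like:

```python
# A toy board-evaluation heuristic of the kind search-based Tetris bots use.
# `board` is a list of rows (top to bottom) of 0/1 cells; the weights are
# illustrative guesses, not StackRabbit's actual tuning.
def evaluate_board(board, lines_cleared):
    width, depth = len(board[0]), len(board)
    heights, holes = [], 0
    for col in range(width):
        column = [board[row][col] for row in range(depth)]
        if 1 in column:
            top = column.index(1)
            heights.append(depth - top)
            # A hole is an empty cell buried under the column's topmost block.
            holes += column[top:].count(0)
        else:
            heights.append(0)
    aggregate_height = sum(heights)
    bumpiness = sum(abs(heights[i] - heights[i + 1]) for i in range(width - 1))
    # Reward cleared lines; penalize tall, bumpy stacks and buried holes.
    return (0.76 * lines_cleared - 0.51 * aggregate_height
            - 0.36 * holes - 0.18 * bumpiness)
```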
A 13-year-old boy named Willis Gibson (aka Blue Scuti) just got the first NES Tetris kill screen (game crash) on level 157 – a monumental accomplishment that was considered pretty much impossible. It was originally thought the kill screen was level 29, because the game speeds up to the point where nobody thought actual play was possible. Well, it turns out that by using hyper-tapping (invented by Thor Aackerlund), you can get past level 29 all the way to level 41. Rolling, invented by Cheez, got people up to level 148.
Along comes StackRabbit, an automated Tetris-playing bot created by Gregory Cannon. With the machine playing the game, they discovered there really was a kill screen on level 237. As people (HydrantDude) dug into the crash, they discovered it could happen on different levels because several different factors trigger it. HydrantDude found the earliest it could happen was level 155 – within reach of human players for the first time ever.
That’s what Willis/Blue Scuti just did on level 157. It was considered the very first true kill screen reached by a human.
But, as the video points out, there is always a new set of achievements. Because the kill screen depends on game conditions, it is still possible to go to even higher levels.
It’s been interesting to watch the record chasing in the NES version of Tetris. From new control techniques to reaching levels that people never thought possible.
Dan Foisy built a volumetric display that uses a phased array of ultrasonic transducers to levitate a 1mm foam ball and move it at speeds greater than 1m/s. He even used an FPGA to do the acoustic calculations fast enough.
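The core trick in a phased array like this is timing: each transducer is driven with a phase offset chosen so all the waves arrive in phase at the focal point, and steering that point quickly is what needs the FPGA. A rough sketch of the phase math (the 40 kHz frequency, spacing, and 4x4 array size below are illustrative values, not Foisy’s actual design) could look like:

```python
# A rough sketch of the phase calculation behind an ultrasonic phased array:
# drive each transducer with a phase offset so its wave arrives at the focal
# point in phase with all the others. Frequency, spacing, and array size are
# illustrative values only.
import math

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40_000.0     # Hz, a common ultrasonic transducer frequency
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

def transducer_phases(positions, focal_point):
    """Phase offset (radians) for each transducer at (x, y, z), in meters."""
    phases = []
    for p in positions:
        distance = math.dist(p, focal_point)
        # Farther transducers get a larger phase advance so all waves line up
        # at the focus; only the relative phases matter.
        phases.append((2 * math.pi * distance / WAVELENGTH) % (2 * math.pi))
    return phases

# Example: a 4x4 grid spaced 10 mm apart, focusing 50 mm above its center.
grid = [(i * 0.01, j * 0.01, 0.0) for i in range(4) for j in range(4)]
print(transducer_phases(grid, (0.015, 0.015, 0.05)))
```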
Things are never what they seem – especially in food commercials. This video shows you all the tricks used to make ordinary looking food look like a runway model.