Author: matt

History textbooks about the depression are … depressingly wrong

What caused the Great Depression? When I was in high school, I learned it was primarily due to over-speculation across many areas of the economy – especially the stock market. Combine that over-speculation and over-leveraging with the era’s lack of banking controls tied to the risk profiles of those investments, and you had a house of cards ready to collapse. It’s the same reason we got the 2008 housing bubble – over-speculation (lending to people who should never have qualified for loans) and poor management of risk profiles.

Is that what kids today learn, though? What if I told you a recent review showed that most college-level history texts attribute the Great Depression to the long-debunked New Deal notion of ‘under consumption’? That’s a theory I never even heard mentioned – not even in high school. It had been debunked that long ago.

Perhaps this is one reason college degrees are increasingly not working out for young people – the degree you get these days is simply inferior.

This argument [under consumption] attained popularity in the early 1930s, and was used to justify many of the economic planning and regulatory programs of Franklin Delano Roosevelt’s New Deal. Economists today overwhelmingly reject “under consumption” theory.

The charts show that US history instruction, including at the college level, is badly out of sync with the scholarly literature on the Great Depression.

The resulting treatment of the Great Depression in US history textbooks does little to educate students about the actual causes of the Great Depression. It does, however, privilege obsolete political arguments from the early 1930s that were used to justify the New Deal.

Fully robotic/automated restaurant

“To our knowledge, this is the world’s first operating restaurant where both ordering and every single cooking process are fully automated”

CaliExpress is a burger joint in Pasadena with a unique staff. Instead of people flipping burgers, its burgers and fries are made by a robot called Flippy from Miso Robotics, and orders are placed through PopID, a biometrics-powered technology company. In short, robots run the place. They claim it’s safer since no humans risk oil burns or spills, stressful jobs are reduced, and those running the systems earn much higher wages.

The location also has displays on the Flippy development timeline, including robotic arms used on retired Flippy models, photographic displays, and 3-D printed artifacts from its development. The group encourages local schools to come for tours, with the goal of inspiring future AI and automation development.

Now, if you combine that technology with experiences like Inamo in London…I think you have an amazing restaurant that can almost literally just run itself.

This kind of automation shouldn’t be a surprise. California passed a bill requiring chain restaurants to pay a $20/hour minimum wage starting April 2024, so it’s no wonder restaurants are finding that automating jobs is cheaper than paying employees.

If the goal was to create more living-wage jobs, it’s doing exactly that – by removing low-skilled burger-flipping and hospitality jobs and replacing them with tech jobs. But that means those burger flippers need to be able to take those robot-repair jobs, and not all of them can. It’s another in a long line of examples of how well-meaning but short-sighted legislators may actually be working against their lowest-skill constituents because they don’t think about the long-term consequences.

Or it could be exactly what they intended – widening the divide between rich and poor as low-skill workers’ jobs are replaced with high-skill jobs they may not be able to do.


‘Peaceful’ Protests

This flier was making the rounds on local Portland social media during the 115 straight nights of riots in downtown Portland in 2020, encouraging people to take on these different roles. Video evidence freely available on YouTube, along with countless photos and Instagram accounts, shows that all of these roles were indeed being filled on a nightly basis.

Note that only 1 of the 12 roles is a ‘peaceful protester’. The other 11 roles include multiple levels of criminal assault, arson, social media disinformation, propaganda, and putting lives at risk by blocking medical and rescue vehicles. Most notably, ‘light mages’ directing lasers at people are in direct violation of the international treaty against blinding laser weapons.

These are well-oiled machines; or as one person I know put it: Portland’s Protest-Industrial Complex.

Scalpers and the Apple Vision Pro

Scalping tech items has been going on since people first lined up outside stores for gaming consoles, and it’s not much different from ticket scalping. But having the audacity to scalp items that aren’t even out of stock – and are available for immediate pickup – is pretty out there. Yet here’s exactly one of these Craigslist sales happening here in lowly Portland.

I know – they’re probably counting on people not checking, or banking on that particular model selling out so they can grab a sale from a desperate buyer who needs one that day. You only need one person to make money, after all…

Everyone looked real

How’s this for clever – and frightening? Scammers deepfaked multiple employees on a video conference and managed to steal $25 million from a Hong Kong-based firm.

The scam used digitally recreated versions of the company’s chief financial officer, along with other employees, appearing in a live deepfaked video conference call that instructed an employee to transfer funds. The animated faces and voices were good enough to fool the victim.

The scam was initially uncovered following a phishing attempt, when an employee in the finance department of the company’s Hong Kong branch received what seemed to be a phishing message, purportedly from the company’s UK-based chief financial officer, instructing them to execute a secret transaction. Despite initial doubts, the employee was convinced enough by the presence of the CFO and others in a group video call to make 15 transfers totaling HK$200 million to five different Hong Kong bank accounts. 


How to Cut a Michelin Star Rated Onion

I’m not great at cutting onions – but man, this guy sure is.

Senpai Kai walks you through all the different ways to make Michelin Star-level cuts and when you might use them. His other videos show how to make Michelin Star versions of various dishes. It’s interesting to see how it’s all done.

Attacking AI with Adversarial Machine Learning

Adversarial machine learning is a branch of machine learning that tries to trick AI models by feeding them carefully crafted, deceptive inputs designed to break their algorithms.

Adversarial attacks are now getting more and more research attention, but they had humble beginnings. The first attempts came from protest activists using very simple defacement and face-painting techniques. Dubbed CV Dazzle, the approach sought to thwart early computer vision detection routines by painting faces and objects with geometric patterns.

These worked on very early computer vision algorithms but are largely ineffective against modern CV systems. The creators of this kind of face painting were largely artists, and they now describe the effort more as a political and fashion statement than as an effective countermeasure.

More effective approaches

It turns out you can often fool algorithms in ways that aren’t even visible to the average viewer. This paper shows that you can cause AI models to consistently misclassify adversarially modified images by applying small but intentionally worst-case perturbations to examples from the dataset. The perturbed input causes the model to output an incorrect answer – with high confidence. For example, the panda picture below is combined with a perturbation to produce an output image that looks fine visually but is recognized by AI models as something else entirely, again with high confidence.
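To make the “small worst-case perturbation” idea concrete, here’s a toy sketch of the gradient-sign trick on a made-up linear classifier. This is my own minimal illustration, not the paper’s actual setup: the weights, inputs, and epsilon are all invented, and a real attack targets a deep network’s loss gradient rather than a hand-built linear score.

```python
import numpy as np

# Toy linear "image" classifier: score = w.x + b, class 1 if score > 0.
# (Hypothetical stand-in for a trained model; weights are just random.)
rng = np.random.default_rng(0)
w = rng.normal(size=100)
b = -0.5 * w.sum()  # center the decision boundary around pixels = 0.5

def predict(x):
    return int(w @ x + b > 0)

# A clean "image" with pixels near 0.5, nudged so the model says class 1.
x = 0.5 + 0.1 * np.sign(w)
print(predict(x))  # -> 1

# Gradient-sign attack: for a linear model, the gradient of the score with
# respect to the input is just w, so step every pixel slightly against it.
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)

# Each pixel moved by only 0.15, yet the predicted class flips.
print(predict(x_adv))  # -> 0
```

The point mirrors the panda example: no single pixel changes much, but because every pixel moves in the worst-case direction at once, the tiny changes add up across dimensions and push the input over the decision boundary.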

This isn’t the only technique – there are many more. One related idea, Generative Adversarial Networks (GANs), is actually used to improve current AI models: one network attempts to fool another, and those failures are then used to train the target model to be more robust – like working out at a gym or practicing the same thing with many variations.
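The “fool the model, then train on the failures” idea is usually called adversarial training. Here’s a toy sketch of that loop – my own invented example on a trivial task, not anyone’s production recipe – where each training batch is augmented with gradient-sign-perturbed copies of itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary task: the label is whether the mean of the input is positive.
def make_batch(n=64, d=20):
    x = rng.normal(size=(n, d))
    y = (x.mean(axis=1) > 0).astype(float)
    return x, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weight_grad(w, x, y):
    # Gradient of the average logistic loss with respect to the weights.
    p = sigmoid(x @ w)
    return x.T @ (p - y) / len(y)

w = np.zeros(20)
epsilon, lr = 0.1, 0.5
for step in range(200):
    x, y = make_batch()
    # Craft adversarial copies: for logistic loss, the gradient of the loss
    # with respect to the *input* is (p - y) * w, so step pixels along its sign.
    p = sigmoid(x @ w)
    x_adv = x + epsilon * np.sign((p - y)[:, None] * w)
    # Train on clean and adversarial examples together ("gym practice").
    w -= lr * (weight_grad(w, x, y) + weight_grad(w, x_adv, y)) / 2

x_test, y_test = make_batch(1000)
acc = ((sigmoid(x_test @ w) > 0.5) == y_test).mean()
print(f"accuracy on clean test data: {acc:.2f}")
```

The model ends up accurate on clean inputs while having seen a steady diet of worst-case perturbed ones, which is the robustness benefit the gym analogy is getting at.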

Nightshade and Glaze

This kind of attack isn’t academic. Some artists see themselves currently in a battle with generative AI algorithms.

Nightshade is a tool artists can use to alter the pixels of an image in a way that fools AI computer-vision systems while leaving the image unchanged to human eyes. If such images are scraped into an AI model’s training set, they can be misclassified, resulting in an increasingly incorrectly trained model.

Glaze is a tool that prevents style mimicry. It computes a set of minimal changes that appear invisible to human eyes but look to AI models like a dramatically different art style. For example, a human sees a charcoal portrait, but an AI model might see the glazed version as a modern abstract portrait. When someone then prompts the model to generate art mimicking the charcoal artist, they get something quite different from what they expected.
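The general shape of this kind of cloaking can be sketched as an optimization: nudge pixels within a tiny budget so the image’s *feature-space* representation drifts toward a different style target. To be clear, this is not Nightshade’s or Glaze’s actual algorithm – it’s a hypothetical toy where a random linear map stands in for a real deep feature extractor.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "feature extractor": a random linear map.
# (Real tools use a deep network's embedding, not this.)
F = rng.normal(size=(8, 64))

def features(x):
    return F @ x

img = rng.uniform(size=64)     # the artist's image (flattened pixels)
target = rng.uniform(size=64)  # an image in a very different "style"

# Optimize a perturbation d: pull features(img + d) toward features(target),
# while clamping each pixel change to a small budget so the edit stays
# (in the real tools' case) invisible to human eyes.
d = np.zeros(64)
budget, lr = 0.03, 0.005
for _ in range(500):
    err = features(img + d) - features(target)
    g = F.T @ err                       # gradient of 0.5*||err||^2 w.r.t. d
    d = np.clip(d - lr * g, -budget, budget)

before = np.linalg.norm(features(img) - features(target))
after = np.linalg.norm(features(img + d) - features(target))
print(before, after)  # feature distance shrinks; pixels move by at most 0.03
```

A model training on the cloaked image would see features partway toward the wrong style, even though each pixel barely moved – which is the core tension these tools exploit.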

The AI Arms Race is On

As with anything, we’re now in an arms race, with lots of papers being written about the various kinds of adversarial attacks and how to protect your models and training data from them. Viso.ai has a good overview of the space that will get you started.
