A number of people have reported eye pain, vision problems, and sunburned skin after attending ApeFest, a Bored Ape Yacht Club NFT collection event held November 3-5 in Hong Kong. One attendee posted that he woke up at 4am unable to see; he rushed to the hospital, where he was diagnosed with UV eye damage and is hoping to make a full recovery. It turns out someone was almost certainly using full-spectrum UV-C (often called germicidal) lights instead of black lights.
BigClive on YouTube (who does amazing videos about extremely dangerous and counterfeit electronic devices you can buy and should be cautious of) recently uploaded a video identifying the likely culprit. In a photo of the area near the toilets, there are fluorescent tubes glowing the characteristic teal-blue of a mercury vapor discharge, which emits quite a bit of UV-C (and ozone as well).
Did you know you’re getting about 3.0 mSv of radiation every year if you live in the US? That works out to about 0.0082 mSv per day (3.0 mSv ÷ 365 days).
There’s actually quite a bit of variation within the US depending on where you live. The types of rock in your area, your latitude, your elevation, and a wide variety of other factors can affect your daily background radiation dose.
| Source of exposure | Approximate effective radiation dose | Comparable to natural background radiation for |
| --- | --- | --- |
| One day’s background radiation | 0.0082 mSv | 1 day |
| A year of background radiation (US average) | 3.0 mSv | 1 year (365 days) |
| Cross-country flight from New York to Los Angeles | 0.04 mSv | 4.87 days |
X-rays
X-rays are something most people are familiar with. It turns out, however, that modern x-rays give you a very low dose. The lowest doses are dental x-rays: a full panoramic dental x-ray gives you about 0.007 mSv (0.7 mrem). How little is that? It’s roughly one day’s background radiation of about 0.0082 mSv (depending on where you live).
CT scans are particularly troublesome because they give you a substantially larger dose of radiation. How much? For comparison, a chest X-ray gives you the equivalent of 10 days of natural background radiation (0.1 mSv) – a very low dose that is highly unlikely to cause permanent or long-term damage.
On the other hand, a chest CT scan gives you 2.6 YEARS of background radiation dosage – the equivalent of 77 chest x-rays (77 × 0.1 mSv = 7.7 mSv, or about 2.6 years at 3.0 mSv per year). See some examples below, or click them to see even more dosages for different body parts.
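All of these comparisons boil down to dividing a dose by the daily background rate. Here’s a minimal sketch of that arithmetic in Python (dose values are the ones from this article; the 7.7 mSv chest CT figure is inferred from the 77-chest-x-ray comparison):

```python
US_BACKGROUND_PER_DAY = 3.0 / 365  # ~0.0082 mSv/day, US average

def background_equivalent_days(dose_msv: float) -> float:
    """How many days of average US background radiation equal this dose."""
    return dose_msv / US_BACKGROUND_PER_DAY

doses_msv = {
    "Panoramic dental x-ray": 0.007,
    "NY-to-LA flight": 0.04,
    "Chest x-ray": 0.1,                 # published charts round this to ~10 days
    "Chest CT (77 chest x-rays)": 7.7,  # inferred: 77 x 0.1 mSv
}

for name, dose in doses_msv.items():
    days = background_equivalent_days(dose)
    print(f"{name}: {dose} mSv ~= {days:.1f} days ({days / 365:.2f} years)")
```

Running it reproduces the numbers above: the flight comes out to about 4.9 days and the chest CT to about 2.6 years.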
Etihad Airways started offering something they call The Residence on some of their highest-end flights: private airport entrances with concierge service, plus a private three-room suite on the plane with a living room, double bedroom, and full bathroom with shower. Add large-screen TVs, cognac and turndown service, and private high-end meals, and you have a recipe for luxury. The price? $20,000 per ticket (vs. $5,000 for first class).
2023 Update: Unfortunately, it seems The Residence was largely discontinued when Etihad retired most of its A380s during covid. Nonstop Dan explains why they ran into problems selling it – not because there weren’t customers, but because people who can afford that kind of luxury usually find it’s barely a better deal than just chartering a whole private jet. A one-way private jet from Abu Dhabi to London also costs about $40,000 – about the same as two Residence tickets. So if you’re flying with at least one other person, the private jet lets you take many more friends or business partners, gives you much more flexibility in scheduling, and avoids big airports altogether.
Bonus points for mentioning that Abu Dhabi is a huge hub because its location puts most of the world’s population – from China to Europe – within about a six-hour flight.
freeCodeCamp compares various AI-based image recognizers to see how well they can identify whether a picture is a chihuahua or a muffin. It’s harder than you might think, and the test has a history of being used to gauge the quality of recognizers.
The author compares solutions from Amazon, Microsoft, IBM, Google, Cloudsight, and Clarifai, and discusses per-image cost as well as tag quality and other considerations. Definitely worth a look if you’re trying to choose an image classification service.
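If you want to try the same test locally before paying for a cloud API, here’s a minimal sketch (my own illustration, not from the article) using a pretrained torchvision classifier. One caveat: ImageNet has a Chihuahua class but no muffin class, so the muffin side tends to come back as something like “bagel” or “dough”:

```python
# pip install torch torchvision pillow
from PIL import Image
import torch
from torchvision.models import resnet50, ResNet50_Weights

# Load a pretrained ImageNet classifier and its matching preprocessing.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

def top_labels(path: str, k: int = 3):
    """Return the top-k (label, probability) guesses for an image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(k)
    return [(weights.meta["categories"][i], p.item())
            for p, i in zip(top.values, top.indices)]

# Hypothetical filenames - substitute your own chihuahua/muffin photos.
for f in ["chihuahua.jpg", "muffin.jpg"]:
    print(f, top_labels(f))
```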
Matthew Brennan is not a computer scientist, but he takes 335 frames from a video and processes them three different ways to compare the results: he builds a 3D mesh via photogrammetry, then processes the frames into a NeRF, and finally tries Gaussian splatting.
What’s cool is that he shows how each technique works and how to process the data yourself – and he shares the data so you can try it.
His speculations? Perhaps AI personas will become so realistic and comforting that we’ll stop interacting with each other – and spend our lives conversing and forming relationships with entities that don’t really exist.
Or (as is already happening) governments, extremist groups, media outlets, and intelligence agencies will weaponize AI to flood the internet with manipulated stories, data, and opinions. Finally (as if losing the ability to form real relationships and falling into relationships with AIs weren’t scary enough), he asks what happens if AI itself becomes conscious.
One of the main reasons this would be terrifying is that, right now, we have no way to ensure AI is aligned to any set of values.
ELIZA was an early ‘AI’ created by MIT scientist Joseph Weizenbaum between 1964 and 1967.
He created it to explore communication between humans and machines. ELIZA simulated conversation using very simple pattern matching and substitution, which gave users an illusion of understanding – but it had no internal representation that could be considered real understanding of what either party was saying. That’s something you can easily discover by playing with it for a few minutes.
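To give a sense of just how simple the trick is, here’s a tiny ELIZA-style sketch (my own toy reconstruction of the pattern/substitution idea, not Weizenbaum’s actual script):

```python
import re

# Reflect first-person words back at the user ("my" -> "your", etc.).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

# (pattern, response template) pairs; the capture group is echoed back.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, text.lower().strip(". !?"))
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(respond("I am feeling sad about my job"))
# -> "How long have you been feeling sad about your job?"
```

A few dozen rules like these feel eerily conversational for a minute or two – and then the illusion collapses, exactly as described above.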
Fast forward to 1991, and Creative Labs was having amazing success with its Sound Blaster add-on sound cards. The driver disks that came with the Sound Blaster included programs showing off different capabilities, one of which was voice generation. To show off its ability to synthesize speech from text, Creative Labs included a little program called Dr. Sbaitso (Sound Blaster Acting Intelligent Text-to-Speech Operator).
You interact with it as a kind of pseudo-therapist, and you can clearly see the connections to the same pattern/substitution methods ELIZA used. I remember being wowed when I played with it for the first time – and experimenting with it for hours. It quickly shows its limitations, but the speech synthesis was very good for the time.
It doesn’t stand the test of time, but it’s pretty neat, and you can even check it out here:
The Times interviewed “23 current and former employees, advisers and investors”. The two founders, both former Apple employees, “preferred positivity over criticism, leading them to disregard warnings about the AI Pin’s poor battery life and power consumption. A senior software engineer was dismissed after raising questions about the product, they said, while others left out of frustration.” Another software engineer was fired for questioning whether the AI Pin would be ready for launch; the report describes a staff meeting where the founders “said the employee had violated policy by talking negatively about Humane.”
It looks like the Humane Pin has finally launched at a relatively reasonable cost of $699, and we finally have some details. I’m pretty sure it’s not a smartphone killer – Humane has definitely backed off from that original stance. In fact, it’s turned into something of a disaster.
The translation feature is a really excellent use, and having a simple assistant that lets you check flight times and send text messages without pulling out your phone is pretty slick. But I’m not sure about a lot of the rest. Needing a $20 monthly subscription that doesn’t tether to your existing phone plan is a troubling extra expense.
Having to interact with it by voice will definitely make it a bit awkward in social and public situations – I bet it would struggle at a dinner party or a loud venue. Gesture recognition is a finicky technology (especially in strange lighting conditions, if you’re wearing gloves, etc.), so any issues there could be very frustrating, and you can only do so much with simple gestures.
The screen projection looks limited to high-contrast, basic information. You certainly won’t be reading lots of text – which is a problem if you want to read text messages instead of having them read aloud to you (and everyone around you). I certainly wouldn’t want everyone to hear what people are texting me; maybe they’ll allow Bluetooth headphone tethering.
I think the biggest issue is that it didn’t live up to the hype. Almost all of these things can be done with your average smartphone – albeit with a little more fiddling. The AI just isn’t delivering a unique enough set of features to live up to the promise of the device. It really just gives you a more vocal interface – and I’m not sure that’s enough of a selling point. The reality is that people likely don’t want to talk to their devices in public. I could easily see the Apple Watch or smartphones integrating some of these features, though.
The one thing it does do is make me start thinking very differently about how we interact with our technology. What would a truly smart AI assistant be like to interact with? How would a really functional assistant like this operate? I’m glad someone is trying this out. Even if it’s not successful, it’s going to breed a lot of new ideas.
The Reddit chatter on the device seems to mirror a lot of these concerns. Also, it seems only about 100,000 interested folks signed up to purchase one. I’m one of the people who signed up, but since it required no deposit, it’s uncertain how many actual buyers there will be.
Final thought: The way you tap it makes me think immediately of Star Trek communicator badges. I bet it’s not long before someone mods one.
Boxabl makes folding houses that can be delivered to your location and installed in just a few hours. While they haven’t actually shipped any buildings yet, they’re getting tons of press. The houses look very modern, and the company appears to be getting lots of traction.
How much does it cost? One of their first offerings is the Casita – a 375-square-foot unit. Supposedly, these units will be stackable and connectable to other units.
AIs can be applied to a number of different classes of problems – recognizing and predicting are two of those tasks. But when it comes to generating something, you’re probably using a GAN.
This video is from about three years ago, when GANs were really getting started. If you’re trying to get your feet wet, it’s a great, brief introduction to the history of AI systems like GANs (generative adversarial networks). Or check out some of these other networks.
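For a concrete feel of the adversarial idea, here’s a minimal GAN sketch in PyTorch (my own toy example, not from the video): a generator learns to produce samples from a 1-D Gaussian while a discriminator tries to tell its output from real samples.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # "Real" data: samples from a Gaussian with mean 4.0, std 1.25.
    return torch.randn(n, 1) * 1.25 + 4.0

for step in range(3000):
    # --- Train the discriminator: real -> 1, fake -> 0 ---
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G here
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Train the generator: make D believe its fakes are real ---
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The generator's output distribution should now resemble the real one.
samples = G(torch.randn(1000, 8))
print(f"real mean/std: 4.00/1.25, "
      f"fake mean/std: {samples.mean().item():.2f}/{samples.std().item():.2f}")
```

The same adversarial loop – generator vs. discriminator – is what powers the image generators covered in the video, just with convolutional networks and vastly more compute.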