
Oculus Quest 2 issues from sitting too long


Oculus is doing a Black Friday sale for 2023. I have an Oculus Quest 2 but hadn’t used it in over a year. I plugged it in to charge it back up and browse the store. Unfortunately, the store app screen told me it couldn’t load the store.

I went to Settings->Wifi and manually connected to my home network. Duh! (or so I thought) Even after this, the store app was blank and told me it couldn’t display anything. Time to start debugging.

  • The wifi would connect and reported an excellent signal, but limited connectivity.
    • I tested my other wireless devices and they had no trouble connecting to the wifi and could browse the net normally.
    • I unplugged the mesh network repeaters around my house in case it was picking up a weak signal from one of those. No change.
    • I tried setting up my iPhone as a wireless hotspot and connected to that. I got the same strong signal, but limited connectivity.
    • I checked the IP address of my Oculus in Settings, and I could ping the device just fine. If I turned the headset off, I couldn’t ping it. It seemed like it was connected ok.
    • I tried unplugging other devices from my wifi in case there was an IP conflict anywhere. Same problem.
  • I tried connecting the quest with a USB cable but I still could not get updates nor see anything in the store app or main menu.
  • Despite the headset sitting unused for well over a year, when I went to System->Updates, it showed no updates available. Something was fishy – there had to be updates.
  • I opened the in-headset browser and it would tell me that I could not browse because the date was wrong. It was set to 5:00am Sept 17, 2037. Whoa.
    • There is NO way to change the date/time in the settings or anywhere else I could find.

It turns out others have seen this issue too. Their Oculus mysteriously fast-forwards into the future, and connectivity to the store/web/updates stops working after that. You need to get the date fixed, but there’s no obvious way to do it.

Solution: Factory Reset

In the end, I decided to do a factory reset (hold the power and volume buttons while booting), because it had been well over a year since I last used it and I figured a clean start would be good. However, there is also the option of using side-loaded apps (see below).

Unfortunately, even the factory reset gave me a few headaches. First, it isn’t always obvious whether the headset is asleep or actually powered off. On my first attempt, it hadn’t powered off all the way and just woke from sleep, so holding the power+volume buttons didn’t bring up the reset menu. To be sure, I shut the device down from the in-headset settings menu.

After that, I was able to cold boot and get into the factory reset menu. I selected factory reset and waited while it cleaned the device; the progress bar indicated the reset was complete. The screen went black (but still powered) and didn’t reboot. I let it sit a few minutes, then decided to manually reboot with the power button. Fingers crossed.

On the first reboot I got the Meta logo, but shortly after that the screen went blank (still powered) with no reboot. I let it sit for a few minutes, then manually powered it off with the power button – again.

On the second reboot, I got the Meta logo, and then it started animating. That’s a good sign. Then the welcome page came up, and I could connect to wifi and start updating.

During the update phase (1/2), while the progress bar was moving, I took the headset off to read more instructions. When I put it back on to see how far it had gotten, the display was garbled, patterned static. I took it off and let it sit for a minute, then tried again. The display came back up and update phase 1 of 2 completed normally.

Sidequest

The bad part about a factory reset is that you lose all your installed games. I had to go back in and start installing them all again. What a pain – it wasn’t a very fast process.

Another option is to load an app that can fix your time via an alternate Oculus app store called SideQuest. SideQuest lets you load your own apps – including an ‘Open Settings’ app that allows you to update your date/time.

ADB

The Oculus is really just an Android device underneath. This means that if you have developer mode enabled and the Android developer tools installed, you can use ADB commands. I haven’t tried this, but supposedly this will work:

adb shell am start -a android.settings.SETTINGS

If you have Sidequest loaded, you can use this:

adb shell am start -a android.intent.action.VIEW -d com.oculus.tv -e uri com.android.settings/.DevelopmentSettings com.oculus.vrshell/.MainActivity


Building your own calendar display


Stavros decided to make a little e-ink display device that showed his Outlook calendar and could sit next to his main monitor. He seemed to have a decent, basic understanding of programming, but had some clever ways of getting around things he didn’t know – namely, using Copilot and sample code to hack together what he needed. I think it’s a great read that shows how you can work through problems in a very pragmatic way – without re-inventing the wheel.

In the end, he struggled through finding a good-quality e-ink display, finding an SDK that let him draw to it consistently (running into many bad SDKs and ones that left lots of artifacts), getting calendar graphics onto the device, and 3D-printing the case it was mounted in.

Most interesting to me was that instead of trying to interface with his calendar app and doing the difficult work of re-creating properly formatted, sized, and good-looking calendar graphics, he came up with a much simpler method. He admits he wasn’t very good at C++ programming and had some false starts trying to find a software package that let him render consistently to the display. There were many that didn’t work right, left lots of lines on the screen, etc.

He then took his C++ compiler and a block of framebuffer rendering sample code. With the help of Copilot, he stumbled onto a method that simply displays the calendar in a web browser, captures the screen, downloads the image file over HTTP, and copies the bytes directly onto the framebuffer.

He set up a server-side script to generate the image along with a hash of it, so the device knew when the image had actually changed – he didn’t want the e-ink display constantly flashing when there was no real update to show.
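That update loop is simple enough to sketch in a few lines. This is a minimal Python sketch of the idea, not his actual C++ code: the URLs, the raw-pixel image format, and the framebuffer path are all assumptions made up for illustration.

```python
import urllib.request

IMAGE_URL = "http://calendar.local/calendar.raw"    # hypothetical: raw pixel bytes,
                                                    # pre-rendered server-side at
                                                    # the panel's resolution/format
HASH_URL = "http://calendar.local/calendar.sha256"  # hypothetical hash endpoint
FB_DEV = "/dev/fb0"  # Linux framebuffer device; path varies by board

def needs_refresh(last_hash: str, new_hash: str) -> bool:
    """Only redraw when the server-side image actually changed,
    so the e-ink panel doesn't flash on every poll."""
    return new_hash != last_hash

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def update_display(last_hash: str) -> str:
    """Poll once: fetch the hash, and only if it changed,
    fetch the image and copy its bytes onto the framebuffer."""
    new_hash = fetch(HASH_URL).decode().strip()
    if needs_refresh(last_hash, new_hash):
        raw = fetch(IMAGE_URL)
        with open(FB_DEV, "wb") as fb:
            fb.write(raw)  # bytes go straight onto the framebuffer
    return new_hash
```

The hash check is the key design choice: the cheap request happens every poll, while the expensive fetch-and-redraw only happens on a real change.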

A clever bit of hackery – and it demonstrates how simple things can be if you are creative.


Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields


Neural Radiance Fields (NeRF) produce some pretty beautiful renderings. A little like photogrammetry, NeRF captures a scene (from multiple viewpoints) into a multi-dimensional volume; then, to render it from a particular angle, it shoots rays into the scene from the camera position and queries the volume to get the pixel color at each screen coordinate.
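The query step boils down to the standard volume-rendering quadrature: march along each ray, accumulate opacity, and weight each sample's color by how much light still reaches it. A toy sketch with made-up densities (a real NeRF gets the density and color at each sample by querying its trained network):

```python
import numpy as np

# Eight samples along a single camera ray; sigma (density) and color
# are invented here purely for illustration.
deltas = np.full(8, 0.1)                   # spacing between ray samples
sigmas = np.linspace(0.0, 5.0, 8)          # made-up density at each sample
colors = np.tile([0.2, 0.5, 0.9], (8, 1))  # made-up RGB at each sample

alphas = 1.0 - np.exp(-sigmas * deltas)    # opacity of each ray segment
# Transmittance: fraction of light surviving to reach each sample.
trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
weights = trans * alphas
pixel = (weights[:, None] * colors).sum(axis=0)  # final pixel color
```

The weights sum to at most 1, which is what keeps the composited pixel color physically plausible.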

It does suffer from some shortcomings: it largely only works well on static scenes, it has trouble when portions of the scene are missing or occluded, and most notably it renders objects that lack fine detail or produces the blobby geometry common to volumetric rendering techniques.

But that doesn’t stop people from trying. Zip-NeRF is an example where Google scientists demonstrate how ideas from rendering and signal processing yield better error rates and dramatically faster training than previous techniques.

It’s always interesting to see what new things people are trying out these days.

Gaussian Splatting graphics pipeline


Say hello to Gaussian splatting. It allows high quality, realtime rendering of scenes captured from multiple photos or videos.

Gaussian Splatting is a rendering technique that can produce extremely high-quality renderings at very high frame rates. It uses a novel technique whose closest cousin is probably photogrammetry. Photogrammetry has been around for a while (taking many 2D pictures of an object from many different directions and then re-building a 3D object). 3D Gaussian Splatting takes this much further.

Gaussian Splatting starts with lots of pictures, like photogrammetry, but it then converts the data into a point cloud. The points become gaussians, which are then used by the rendering routine.

  1. Take a collection of photographs or extremely high-quality renderings from a number of different camera positions all around the environment. The individual points recovered from the photos become gaussians in 3D space.
  2. The gaussians are not yet correct for rendering, so you must run a training pass over them, much like a 1-layer neural net – but with special properties like densification and pruning.
  3. From your camera position, project the gaussians onto the 2D image plane.
  4. Sort them by depth.
  5. For each pixel, iterate over the gaussians that cover it and sum their contributions.
  6. The trained set can then be rendered from any angle.
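The projection and compositing steps above can be sketched with a toy two-gaussian scene. This is a deliberately stripped-down illustration: it ignores each gaussian's projected 2D covariance/falloff (a real renderer multiplies the opacity by the gaussian's value at the pixel) and uses a unit-focal pinhole camera at the origin.

```python
import numpy as np

# Toy scene: two gaussians, each with a 3D mean, an opacity, and a color.
means = np.array([[0.0, 0.0, 2.0],   # red gaussian, farther away
                  [0.1, 0.0, 1.0]])  # blue gaussian, closer
opacity = np.array([0.6, 0.5])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])

# Step 3: project the 3D means onto the 2D image plane (pinhole, focal = 1).
depth = means[:, 2]
xy = means[:, :2] / depth[:, None]

# Step 4: sort nearest-first so compositing runs front to back.
order = np.argsort(depth)

# Step 5: sum each gaussian's contribution at a pixel both cover.
pixel = np.zeros(3)
transmittance = 1.0  # how much of the background still shows through
for i in order:
    alpha = opacity[i]  # real renderers also multiply in the gaussian falloff
    pixel += transmittance * alpha * colors[i]
    transmittance *= 1.0 - alpha
```

The front-to-back order matters: the closer blue gaussian contributes at full strength, and the red one behind it is attenuated by what the blue one already absorbed.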

Update 11/2023: There’s also a way of handling animated objects via 4D Gaussian splatting.


Star Trek communicators at the Paris Fashion show


Humane has been showing off its little AI-powered wearable devices – this time at the Paris Fashion show. They paired up with the fashion house Coperni and pinned the devices to their models.

What these devices do is – well – nobody really knows. Officially they are a “screenless, standalone device and software platform built from the ground up for AI. The intelligent clothing-based wearable uses a range of sensors that enable natural and intuitive compute interactions and is designed to weave seamlessly into users’ day-to-day lives.” It appears to be powered by a Snapdragon AR2 Gen 1 processor and have ‘AI-powered optical recognition and a laser-projected display’.

There is some speculation that they are a little like Google Clips, but the AI Pin claims to be completely standalone and not tethered to other devices (like the Apple Watch, etc.).

Right now there is almost no information about what they do or if they even turn on. There’s a lot of speculation that right now they are in ‘hype’ mode to collect investors. But with no actual working prototypes nor information about them – many are wondering if this is a case of the emperor has no clothes.

Still, it is an interesting thought experiment. If you did have one of these devices, what WOULD you want in a little wearable like this? There doesn’t appear to be any screen, so how would it fit into an economy that already has cell phones, digitally connected watches, and other wearables?

Update 11/2023:

There’s a little more information on TIME’s ‘Best Inventions of 2023’; though they do disclose that 2 TIME co-chairs were investors – so maybe a little bit of a thumb on the scale there.

Anti night vision hoodie


This hacker decided to make a hoodie that can blind night vision cameras. It does this with 12 embedded infrared LEDs (light that is invisible to the naked eye) that strobe and blow out night vision cameras.

Obviously, this could equally be used for anonymity or for criminal activity. One thing is for sure: if you’re running away from cops or helicopters using night vision, you’re going to stand out like a Christmas tree.

This is a more active approach than the similar anti-paparazzi clothes that came out a few years back – though those are a little expensive. A trendy anti-paparazzi scarf runs $249.

Near-ultrasonic attacks on any device


Just because you can’t hear it doesn’t mean your smart device can’t.

Researchers have developed a novel attack called “Near-Ultrasound Inaudible Trojan” (NUIT) that can launch silent attacks against devices powered by voice assistants. The main principle that makes NUIT effective and dangerous is that microphones in smart devices can respond to near-ultrasound that the human ear cannot, thus performing the attack with minimal risk of exposure while still using conventional speaker technology.

The team demonstrated NUIT attacks against modern voice assistants found inside millions of devices, including Apple’s Siri, Google’s Assistant, Microsoft’s Cortana, and Amazon’s Alexa, showing the ability to send malicious commands to those devices.

Inaudible attacks

NUIT could be incorporated into websites that play media or YouTube videos, by tricking targets into visiting those sites or by playing the malicious media on trustworthy sites.

The researchers say the NUIT attacks can be conducted using two different methods.

NUIT-1 is when a device is both the source and target of the attack. For example, an attack can be launched on a smartphone by playing an audio file that causes the device to perform an action, such as opening a garage door or sending a text message.

The other method, NUIT-2, is when the attack is launched from a device with a speaker against another device with a microphone, such as from a website or a TV to a smart speaker.
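The underlying physics is simple: a tone near 19 kHz sits comfortably below the Nyquist limit of ordinary 44.1 kHz audio hardware, so consumer speakers can emit it and phone microphones can record it, yet it is above most adults' hearing range. A toy sketch of that gap (all numbers illustrative; this generates an inaudible-but-recordable carrier, not an attack payload):

```python
import numpy as np

sr = 44100           # common device sample rate; Nyquist limit = 22.05 kHz
carrier_hz = 19000   # near-ultrasound: above most adults' hearing range
t = np.arange(sr) / sr  # one second of samples

# A consumer speaker can emit this and a phone mic can sample it,
# yet most listeners hear nothing -- the gap NUIT exploits.
carrier = np.sin(2 * np.pi * carrier_hz * t)

# Amplitude-modulating a slow envelope onto the carrier is one classic
# way to carry a low-frequency signal in that band (illustration only).
envelope = 0.5 * (1.0 + np.sin(2 * np.pi * 2.0 * t))
signal = envelope * carrier
```

The point of the sketch is just the frequency arithmetic: 19 kHz < 22.05 kHz, so nothing exotic is required of the hardware on either end.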

Just one more reason not to have a bunch of smart devices in your house.


Nvidia uses AI for place and route on its chips


Nvidia just published a paper and blog post revealing how its AutoDMP system can accelerate modern chip floor-planning using GPU-accelerated AI/ML optimization, resulting in a 30X speedup over previous methods. Hopefully it doesn’t get the treatment the Google AI place-and-route solution got.

AutoDMP is short for Automated DREAMPlace-based Macro Placement. It is designed to plug into an Electronic Design Automation (EDA) system used by chip designers, to accelerate and optimize the time-consuming process of finding optimal placements for the building blocks of processors. In one of Nvidia’s examples of AutoDMP at work, the tool applied its AI to the problem of determining an optimal layout of 256 RISC-V cores with 2.7 million standard cells and 320 memory macros. AutoDMP took 3.5 hours to come up with an optimal layout on a single Nvidia DGX Station A100.

Initial metrics show it does an amazing job – in a fraction of the time. Definitely worth the read.

AutoDMP is open source, with the code published on GitHub. Cadence’s Cerebrus AI place-and-route solution is also worth a look for comparison.
