
Building your own calendar display

Stavros decided to make a little e-ink display device that shows his Outlook calendar and sits next to his main monitor. He has a decent, basic understanding of programming, but found clever ways around the things he didn't know, namely using Copilot and sample code to hack together what he needed. I think it's a great read that shows how you can work through problems in a very pragmatic way, without reinventing the wheel.

In the end, he struggled through finding a good-quality e-ink display, an SDK that could drive it consistently (he ran into many bad SDKs, and ones that left lots of artifacts), getting calendar graphics onto the device, and 3D printing the case it was mounted in.

Most interesting to me was that instead of interfacing with his calendar app directly and doing the difficult work of re-creating properly formatted, sized, and good-looking calendar graphics, he came up with a much simpler method. He admits he isn't very good at C++ programming and had some false starts trying to find a software package that rendered consistently to the display; many didn't work right, left lots of lines on the screen, etc.

He then took his C++ compiler and a block of framebuffer rendering sample code. With the help of Copilot, he stumbled onto a method that simply displays the calendar in a web browser, screenshots it, downloads the image file over HTTP, and copies the bytes directly onto the framebuffer.

He set up a server-side script to generate the image along with a hash of it, so the device knows when the image has actually changed; he didn't want the e-ink display constantly flashing when there was no real update to show.
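Here's a minimal sketch of what that device-side loop might look like, in Python for brevity (the article's device code was C++). The server hostname, endpoint paths, and the raw framebuffer format are all assumptions for illustration, not the author's actual setup.

```python
# Hypothetical device-side polling loop. Assumes the server publishes a raw
# framebuffer dump at /calendar.raw and its hash at /calendar.raw.sha256
# (both paths invented here), and that /dev/fb0 accepts the raw bytes as-is.
import time
import urllib.request

SERVER = "http://calendar-server.local"   # made-up hostname
last_hash = None

def fetch(path):
    with urllib.request.urlopen(SERVER + path) as resp:
        return resp.read()

while True:
    new_hash = fetch("/calendar.raw.sha256")
    if new_hash != last_hash:                 # only redraw on a real change,
        raw = fetch("/calendar.raw")          # so the e-ink panel isn't
        with open("/dev/fb0", "wb") as fb:    # constantly flashing
            fb.write(raw)
        last_hash = new_hash
    time.sleep(60)                            # poll once a minute
```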

A clever bit of hackery, and it demonstrates how simple things can be made if you are creative.

Article:

Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields

Neural Radiance Fields (NeRF) produce some pretty beautiful renderings. A little like photogrammetry, a NeRF encodes a scene (as captured from multiple viewpoints) in a volumetric representation. To render it from a particular angle, it shoots a ray into the scene for each screen pixel based on the camera location, queries the volume along the ray, and composites those samples into that pixel's color.
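For flavor, here's a minimal Python/numpy sketch of that compositing step: given densities and colors sampled along one ray (which the trained network would actually provide; random stand-ins here), it integrates them into a single pixel color using the standard volume rendering quadrature.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Integrate samples along one ray into a single RGB pixel color.

    densities: (N,)   volume density at each sample along the ray
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,)   spacing between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)            # per-sample opacity
    # Transmittance: fraction of light that survives to reach each sample.
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = transmittance * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage with 64 random samples standing in for network queries.
rng = np.random.default_rng(0)
print(composite_ray(rng.uniform(0.0, 5.0, 64),
                    rng.uniform(0.0, 1.0, (64, 3)),
                    np.full(64, 0.05)))
```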

It does suffer from some shortcomings: it largely only works well on static scenes, has trouble when there are missing or occluded portions, and most notably renders objects that lack fine detail or produces the blobby geometry common to volumetric rendering techniques.

But that doesn't stop people from trying. Zip-NeRF is an example in which Google researchers demonstrate how ideas from rendering and signal processing yield lower error rates and dramatically faster training than previous techniques.

It’s always interesting to see what new things people are trying out these days.

Gaussian Splatting graphics pipeline

Say hello to Gaussian splatting. It allows high-quality, real-time rendering of scenes captured from multiple photos or videos.

Gaussian Splatting is a rendering technique that can produce extremely high-quality rendering at very high frame rates. It uses a novel technique whose closest cousin is probably photogrammetry, which has been around for a while (taking many 2D pictures of an object from different directions and rebuilding a 3D object). 3D Gaussian Splatting takes this much further.

Gaussian Splatting starts with lots of pictures, like photogrammetry, but it then converts the data into a point cloud. The points become gaussians, which are then used by the rendering routine. The basic pipeline is below, with a code sketch of the rasterization step after the list.

  1. Take a collection of photographs, or extremely high-quality renderings, from a number of different camera positions all around the environment. The individual points recovered from those photos become gaussians in 3D space.
  2. The gaussians are not yet correct for rendering, so you must run a training pass over them, much like a one-layer neural net, but with special operations like densification and pruning.
  3. From your camera position, project the gaussians into the 2D image plane.
  4. Sort them by depth.
  5. For each pixel, iterate over the gaussians that cover it and sum their contributions.
  6. The trained set can then be rendered from any angle.
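Here's a toy Python sketch of steps 4 and 5: depth-sorting the splats and alpha-compositing them for a single pixel. It uses simple isotropic gaussians; the real technique uses anisotropic 3D covariances and a tiled GPU rasterizer, so treat this as illustration only.

```python
import numpy as np

def shade_pixel(px, py, centers, radii, colors, opacities, depths):
    """Alpha-composite N projected splats at pixel (px, py), front to back."""
    order = np.argsort(depths)                      # step 4: sort by depth
    color = np.zeros(3)
    transmittance = 1.0
    for i in order:                                 # step 5: sum contributions
        d2 = (px - centers[i, 0])**2 + (py - centers[i, 1])**2
        # Gaussian falloff of this splat's opacity at the pixel.
        alpha = opacities[i] * np.exp(-d2 / (2.0 * radii[i]**2))
        color += transmittance * alpha * colors[i]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                    # early out once opaque
            break
    return color

# Toy usage: 100 random splats already projected into the unit square.
rng = np.random.default_rng(1)
n = 100
print(shade_pixel(0.5, 0.5,
                  rng.uniform(0, 1, (n, 2)),    # 2D centers
                  rng.uniform(0.05, 0.2, n),    # isotropic radii
                  rng.uniform(0, 1, (n, 3)),    # RGB colors
                  rng.uniform(0.2, 1.0, n),     # opacities
                  rng.uniform(1, 10, n)))       # depths
```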

Update 11/2023: There’s also a way of handling animated objects via 4D Gaussian splatting.

Articles:

Star Trek communicators at the Paris Fashion show

Humane has been showing off its little AI-powered wearable devices, this time at the Paris fashion show. They paired up with the fashion house Coperni and pinned the devices to the models.

What these devices do is – well – nobody really knows. Officially they are a “screenless, standalone device and software platform built from the ground up for AI. The intelligent clothing-based wearable uses a range of sensors that enable natural and intuitive compute interactions and is designed to weave seamlessly into users’ day-to-day lives.” It appears to be powered by a Snapdragon AR2 Gen 1 processor and have ‘AI-powered optical recognition and a laser-projected display’.

There is some speculation that they are a little like Google Clips, but the AI Pin claims to be completely standalone, not tethered to other devices (like the Apple Watch, etc.).

Right now there is almost no information about what they do or whether they even turn on. There's a lot of speculation that they are currently in 'hype' mode to attract investors. With no actual working prototypes and no real information about them, many are wondering if this is a case of the emperor having no clothes.

Still, it is an interesting thought experiment. If you did have one of these devices, what WOULD you want from a wearable like this? There doesn't appear to be any screen, so how would it fit into an ecosystem that already has cell phones, digitally connected watches, and possibly other screenless wearables?

Update 11/2023:

There's a little more information in TIME's 'Best Inventions of 2023', though they do disclose that two TIME co-chairs are investors, so maybe a bit of a thumb on the scale there.

Anti night vision hoodie

This hacker decided to make a hoodie that can blind night vision cameras. It does this with 12 embedded infrared LEDs (light that is invisible to the naked eye) that strobe and blow out night vision cameras.
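The article doesn't say what drives the LEDs, but the strobe logic might look something like this, assuming a Raspberry Pi switching the LED array through a transistor on one GPIO pin (the pin number and strobe rate here are made up):

```python
# Hypothetical strobe driver: a Raspberry Pi toggling an IR LED array
# (via a transistor) on GPIO pin 18, a few times per second.
import time
import RPi.GPIO as GPIO

LED_PIN = 18          # assumed pin; use whatever drives your LED transistor
STROBE_HZ = 5         # assumed rate; strobing defeats the camera's exposure

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    period = 1.0 / STROBE_HZ
    while True:
        GPIO.output(LED_PIN, GPIO.HIGH)   # flood the camera with IR light
        time.sleep(period / 2)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(period / 2)
finally:
    GPIO.cleanup()
```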

Obviously, this can be used equally for anonymity or for criminal activity. One thing is for sure: if you're running away from cops or helicopters using night vision, you're going to stand out like a Christmas tree.

This is a more active approach than the similar anti-paparazzi clothes that came out a few years back, though those are a little expensive; a trendy anti-paparazzi scarf runs $249.

Near-ultrasonic attacks on any device

Just because you can’t hear it doesn’t mean your smart device can’t.

Researchers have developed a novel attack called "Near-Ultrasound Inaudible Trojan" (NUIT) that can launch silent attacks against devices powered by voice assistants. The main principle that makes NUIT effective and dangerous is that microphones in smart devices respond to near-ultrasound that the human ear cannot hear, which lets the attack run with minimal risk of exposure while still using conventional speaker technology.
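The post doesn't describe the researchers' exact signal processing, but as a generic illustration of the idea, here's a toy Python snippet that amplitude-modulates a stand-in "command" waveform onto an 18 kHz near-ultrasound carrier; treat it as a sketch, not the NUIT method itself.

```python
# Toy illustration only: AM-modulate a stand-in command signal onto a
# near-ultrasound carrier that most adults cannot hear but mics pick up.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 48_000          # high enough to represent an 18 kHz tone
CARRIER_HZ = 18_000           # near-ultrasound carrier frequency

# Stand-in for a recorded voice command (here just a 1-second 300 Hz tone).
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
command = np.sin(2 * np.pi * 300 * t)

carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
modulated = (0.5 + 0.5 * command) * carrier   # classic amplitude modulation

wavfile.write("nuit_toy.wav", SAMPLE_RATE,
              (modulated * 32767).astype(np.int16))
```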

The team demonstrated NUIT attacks against modern voice assistants found inside millions of devices, including Apple’s Siri, Google’s Assistant, Microsoft’s Cortana, and Amazon’s Alexa, showing the ability to send malicious commands to those devices.

Inaudible attacks

NUIT could be incorporated into websites that play media or YouTube videos; attackers would trick targets into visiting those sites or into playing malicious media on trustworthy sites.

The researchers say the NUIT attacks can be conducted using two different methods.

NUIT-1 is when a device is both the source and the target of the attack. For example, an attack can be launched on a smartphone by playing an audio file that causes the device to perform an action, such as opening a garage door or sending a text message.

The other method, NUIT-2, is when the attack is launched by one device with a speaker against another device with a microphone, such as from a website or a TV to a smart speaker.

Just one more reason not to have a bunch of smart devices in your house.

Article:

Nvidia uses AI for place and route on its chips

Nvidia just published a paper and blog post revealing how its AutoDMP system can accelerate modern chip floor-planning using GPU-accelerated AI/ML optimization, resulting in a 30X speedup over previous methods. Hopefully it doesn't get the treatment Google's AI place-and-route solution got.

AutoDMP is short for Automated DREAMPlace-based Macro Placement. It is designed to plug into an Electronic Design Automation (EDA) system used by chip designers, to accelerate and optimize the time-consuming process of finding optimal placements for the building blocks of processors. In one of Nvidia's examples of AutoDMP at work, the tool applied its AI to the problem of determining an optimal layout of 256 RISC-V cores with 2.7 million standard cells and 320 memory macros. AutoDMP took 3.5 hours to come up with an optimal layout on a single Nvidia DGX Station A100.
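AutoDMP builds on DREAMPlace, which treats placement as a continuous, gradient-based optimization problem. As a toy illustration of that core idea (nothing like production scale, and omitting the density and overlap terms a real placer needs), here's a numpy sketch that nudges cell positions to shrink a smooth wirelength proxy:

```python
# Toy gradient-based placement: cell positions are continuous variables,
# and we run gradient descent on a quadratic wirelength proxy (squared
# distance of each pin to its net's centroid). Illustration only.
import numpy as np

rng = np.random.default_rng(42)
num_cells, num_nets = 50, 30
pos = rng.uniform(0, 100, (num_cells, 2))          # (x, y) per cell
# Each net connects a random handful of cells.
nets = [rng.choice(num_cells, size=4, replace=False) for _ in range(num_nets)]

def wirelength(pos):
    return sum(((pos[n] - pos[n].mean(axis=0))**2).sum() for n in nets)

LR = 0.05
for step in range(200):
    grad = np.zeros_like(pos)
    for n in nets:
        centroid = pos[n].mean(axis=0)
        grad[n] += 2 * (pos[n] - centroid)         # gradient of the quadratic
    pos -= LR * grad
    if step % 50 == 0:
        print(f"step {step:3d}  wirelength {wirelength(pos):.1f}")
```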

Initial metrics show it does an amazing job, in a fraction of the time. Definitely worth the read.

AutoDMP is open source, with the code published on GitHub. Below is a link to an article about Cadence’s Cerebrus AI place-and-route solution.

Article:

Free (trial) Windows development virtual machines

Pre-canned Windows 11 VM development environment

Did you know that Microsoft provides free virtual machine images of the latest version of Windows, with developer tools, SDKs, and samples all pre-installed? Microsoft provides regularly updated virtual machine images for VMware, Hyper-V, VirtualBox, and Parallels.

One important point: the images are not activated and cannot be activated, even with a valid product key.

What about Linux?

If you want to install and run a Linux distro (Ubuntu, for example), you can use VirtualBox/VMware or the built-in Windows Subsystem for Linux (WSL). With WSL, you get a Linux command prompt mounted on your local Windows filesystem and can launch X11 apps that pop up on your Windows desktop in separate windows.

The experience is kind of a weird mash-up of Windows and Linux on the same system at the same time, kind of like a better, embedded version of Cygwin. It's not as contained as a virtual machine host app like VirtualBox/VMware, which keeps all your windows inside the host app, but it might be enough for most people.

I haven't done any experiments, but I would love to test out some OpenGL/Vulkan apps to see if you get fully GPU-accelerated rendering.
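One quick way to check that from the Windows side, assuming the mesa-utils package is installed in the WSL distro, is to run glxinfo through wsl.exe and inspect the renderer string; a real GPU name indicates hardware acceleration, while "llvmpipe" means software rendering.

```python
# Run glxinfo inside WSL from Windows and print the OpenGL renderer line.
# Assumes WSL is installed and the distro has mesa-utils (which provides
# glxinfo); this only probes OpenGL, not Vulkan.
import subprocess

out = subprocess.run(["wsl", "glxinfo", "-B"], capture_output=True, text=True)
for line in out.stdout.splitlines():
    if "renderer" in line.lower():
        print(line.strip())
```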