Author: matt

Google Tensor Processing Unit (TPU)


In 2016, Google announced they had developed their own chip to handle machine learning workloads. They now use these custom-made chips in all their major data centers for their most important daily functions.

Google engineers had identified the need for machine learning hardware as early as 2006, but the need became acute in 2013, when the explosion of data to be mined meant they might have to double the size of their data centers. In just under 2 years – a period one of the chief engineers described as ‘hectic’ – they hired a team, designed the chip, and built the first versions of their TPUs.

The units are marvels of simplicity, born of carefully observing a neural network workload. Machine learning networks consist of the following repeating steps:

  • Multiply the input data (x) with weights (w) to represent the signal strength
  • Add the results to aggregate the neuron’s state into a single value
  • Apply an activation function (f) (such as ReLU, sigmoid, tanh, or others) to modulate the artificial neuron’s activity.
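The three repeating steps above can be sketched in a few lines of NumPy (a toy illustration of a single artificial neuron, not Google's code – the example inputs and weights are made up):

```python
import numpy as np

def relu(z):
    # Activation function: pass positive signals through, clip negatives to zero
    return np.maximum(0.0, z)

def neuron(x, w, f=relu):
    z = np.dot(x, w)   # step 1 and 2: multiply inputs by weights, sum the products
    return f(z)        # step 3: modulate the neuron's activity with f

x = np.array([0.5, -1.0, 2.0])   # hypothetical input data
w = np.array([0.8, 0.2, -0.4])   # hypothetical weights
print(neuron(x, w))              # 0.0 -- ReLU clips the negative weighted sum
```

A real network repeats this pattern across millions of neurons, which is why the multiply-and-add step dominates the workload.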

The chip was then designed to handle the linear algebra steps of this workload, based on an analysis of the key neural net configurations used in Google’s operations. They found they needed to handle anywhere from 5 to 100 million weights at a time – far more than the dozens or even hundreds of multiply units in a CPU or GPU could handle.

They then created a pipelined chip with a massive bank of 65,536 8-bit integer multipliers (compared with just a few thousand multipliers in most GPUs and dozens in a CPU), a unified 24MB SRAM buffer that works as registers, followed by activation units hardwired for neural net tasks.

To reduce design complexity, they realized their algorithms worked fine with quantization, so they could use 8-bit integer multiplication units instead of full floating-point ones.
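The idea behind quantization can be sketched as follows (a simplified symmetric linear scheme for illustration – not Google's actual quantization method, and the values are made up): map floats onto the 8-bit integer range, multiply in integer arithmetic, then rescale.

```python
import numpy as np

def quantize(values, num_bits=8):
    # Map floats onto the signed integer range [-127, 127] with a single scale factor
    scale = np.max(np.abs(values)) / (2 ** (num_bits - 1) - 1)
    q = np.round(values / scale).astype(np.int32)  # int8-range values, wider type for the math
    return q, scale

x = np.array([0.12, -0.48, 0.30])   # hypothetical inputs
w = np.array([0.25, 0.10, -0.33])   # hypothetical weights

qx, sx = quantize(x)
qw, sw = quantize(w)

exact = float(np.dot(x, w))                  # full-precision dot product
approx = float(np.dot(qx, qw)) * sx * sw     # integer multiply-accumulate, rescaled
print(exact, approx)                         # the quantized result closely tracks the exact one
```

An 8-bit multiplier is far smaller and cheaper than a floating-point one, which is how so many of them fit on a single chip.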

Looking further at the workload, they used a ‘systolic’ system in which the results of one step flow into the next without having to be written out to memory – pumping much like the chambers of a heart, where one step feeds into the next.
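A toy model of that systolic data flow (an illustration of the concept, not the actual hardware design): weights stay resident in a grid of multiply-accumulate cells, inputs stream across, and partial sums are pumped from cell to cell, never written back to memory until the final answer emerges.

```python
import numpy as np

def systolic_matmul(A, W):
    # Simulates systolic-style matrix multiply: partial sums stay "inside the
    # array" and flow between steps instead of being stored to memory.
    n, k = A.shape
    k2, m = W.shape
    assert k == k2
    out = np.zeros((n, m))
    for row in range(n):                         # each input row streams through the array
        partial = np.zeros(m)                    # running sums held in the cells themselves
        for step in range(k):                    # one "heartbeat" per cell the data passes
            partial += A[row, step] * W[step, :]  # multiply-accumulate, result flows onward
        out[row] = partial                       # only the finished result leaves the array
    return out

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # example inputs
W = np.array([[5.0, 6.0], [7.0, 8.0]])   # example weights held in the array
print(systolic_matmul(A, W))             # matches A @ W
```

On real hardware the cells all pump in parallel on each clock tick; the loop here just makes the flow of partial sums explicit.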

The results: astounding. The TPU can make 225,000 predictions in the same time it takes a GPU to make 13,000, or a CPU to make only 5,500.

Their simple design and systolic system, which minimizes writes to memory, greatly reduce power usage – something important when you have thousands of these units in every data center.

They have since built two more versions of the TPU, one in 2017 and another in 2018. The second version improved on the first after they realized the original was bandwidth-limited, so they added 16GB of high-bandwidth memory to achieve 45 TFLOPS. They then expanded the design’s reach by combining four chips into a single module; 64 modules combine into a single pod (making 256 individual chips).

I highly suggest giving the links a further read. It’s not only fascinating from an AI perspective, but a wonderful story of looking at a problem with outside eyes and engineering the best solution given today’s technology.

Rent the Italian Villa from Under a Tuscan Sun


Turns out you can rent the villa used in the movie ‘Under the Tuscan Sun’. It’s about $2000-$4000 per night with a minimum 7-night stay, but it does sleep 20 comfortably. Breaking that down, if you found 20 friends for a week, it would only cost about $1000 per person. That’s not a bad deal.

So, who’s with me for a stay at Villa Laura in Cortona?


Colonel Sanders gives Portland commencement address


Yes, this really did happen. George Hamilton, playing the part of the Crispy Colonel, delivered Friday’s commencement address at The Art Institute of Portland.

“Life will take you places you have never imagined,” he told graduates. “Maybe one day you’ll be an actor, or a chicken salesman, or an actor who’s pretending to be a chicken salesman.”

Read more here:
https://www.oregonlive.com/portland/index.ssf/2018/06/colonel_sanders_aka_george_ham.html


Compelling web AR experience


Imagine looking up famous artworks, sculptures, and historical artifacts – then bringing them into your living room to examine as if they were really there.

Google’s Chrome Canary uses the WebXR format to bring an educational AR experience to your browser. You’ll need an ARCore-compatible Android phone running Oreo in addition to Canary, but you’re good to go after that. You can walk around a Mesoamerican sculpture reading annotations as if you were visiting a museum exhibit without the usual cordons and glass cases.

Grim Fandango – performed live with original actors


One of the absolute highlights of E3 has been Double Fine performing the classic adventure game Grim Fandango live on stage, complete with original voice actors and band.

Close your eyes, sit back in your easy chair, grab a balloon animal, sip from your margarita, and give it a listen.

Stanford prison experiment? Mostly a sham


We’ve all heard of the Stanford Prison Experiment – the one that supposedly proved people can turn evil when given authority. There’s only one problem – it was mostly a sham. The original participants have been coming clean: they were hamming it up and were coached. Said one participant, “Anybody who is a clinician would know that I was faking. If you listen to the tape, it’s not subtle.”

Sadly, it looks as if that’s not the only experiment of that era that had a thumb on the scale. Read more about the study from those who were in it here.

Raising kids who WANT to do chores


Mayans and many other indigenous cultures have a lot to teach western parents. Like how to raise kids that WANT to do chores – without even asking.

Turns out, it starts with toddlers who are invited, over and over again, to do chores together with their parents. The research has turned up some interesting facts.

1. Don’t reward your toddlers for doing chores. Rewarding them after they finished produced LESS helpful kids later – researchers don’t know why.

2. Let your toddler help – even if they make bigger messes or it takes longer. Many modern parents tell the kid to go do something else; indigenous parents keep inviting them to help, even if it takes longer or the parent has to do it twice. “How else will they learn?” was one response.

3. Expose kids to chores as much as possible. Let them be part of any chore you’re doing. Especially during the early years, children watch adults and want to be a part of it. Instead of lecturing or explaining, simply give them a part of it to do with you. It shows they are part of the social activity of the family – that they belong and are being integrated – not excluded.

4. Give them tasks appropriate to their skill level: holding measuring cups while you cook, moving a chair, etc. But it has to be a real part of the job. Toddlers given ‘fake’ projects (like re-sweeping a floor that’s already clean) quickly figure out they aren’t being invited to really contribute.

5. Always work together. Motivation is lost if you divide up chores and everyone works solo. If doing laundry, make sure everyone is folding everyone’s clothes – not just their own. Make them part of a common goal together.

6. Don’t force it. Don’t force kids to help, offer them opportunities to be part of the activity and invite them to a task instead. It’s a subtle difference, but a huge one. Forcing or demanding creates resistance.

7. Westerners see children as wanting only to play; indigenous parents see a toddler coming over as an indication they want to help. Be creative and find ways to include them.

Read up about this fascinating difference here.