Month: August 2018

Generating garbled text

Ever seen that strange garbled text on posts? Want to create it yourself? It’s generated using Zalgo (http://eeemo.net/).

Here’s an example of what it can generate:

O͈̬͎̫̦̭̭n̬̦̞̺̼̗̠ce̖ ̷u̙̼͕̩̬̹ͅp̀o͇͎̣̥̼n̡͖̰͍̱͉͚ ͕͜a̵̭ ̭͠m͏̖͇̞̫̗̜i̞͖̺̖͔d̩̹̯n͉͙̞͚̘͟i̸͍̲͙͕g̜͈̩̬ͅht̗ ̥̥͈̺̝̣̰d͚̟͝ͅr̢͕̜̥̫̠̣̤e͇a̲̖̹͞r̜͚̞y̜͕,̱ ̝͖̰͙͎͚͘w̹̥̥͇̹̼͟h͕̰̝i̙̘̯l̦͈̯̯e̮͎ͅ ̧̺̟̘̙͕͕I̼͙ ̥̬̮̝͕͟p̬͔͙o͓͖̩n̴̩̞̬̻d͔͈ḛ̡̝̝̻̥r̞͍̫̲̰ḙ͔̩̲̟́d̨̦̹̳̯̩̗̯,̕ ̷͓̭̪̟̦̻̳w̘̬e̠a͙̹̹̞̺͉͝ḵ̞̹͠ ̘̮̩̫̲ͅa̖̦n̯͔̞̱̬d̮̰ w̸̫̪̦̦͙é͓͖͙̪͍a̛͎̫͓̦̹͙r̹͇͔y͔̹̱̭̱͙,̢͔͖̖̘
̯̰O͍̼̪̗̥̩ͅv͏̱͚̭̠e̵̳r̥̣̠ ̸̻̝̲m̜͍͙ą͍̳̝n͙͇̞y̫͉̳̼̹͓͠ ͍̪͉̗͉̯a̟͍ ̟̗̩̭̺̖q̸̹͚̖̜͙̳u̷̱̝̟̬͎a̳̟̥̱̠͠i̗͖̳͍n̦t ̳͎̠̦a̭̪̣n͙̗̤̯̟̮̩͞d̀ ͓͚̣͘c̴̗u̻͕͟r̸͕̱̪̗͚ì̘o̶̻͕̪̗̫̞u̗̖̻͍̣s̵̫̲̟̟͖̙ ̶̱v̥̩̻̱o̴͍̙̘̻͙l͕͔̮͚͓̱͜u͏̺̜m̮̝̦̠͘ę͉̤͎̹ ̟o̗̦f̧̲͖ ̼̻͚͕̯̞͡f̡̜̻̱̻o̷̮̝̙r̘͖̩̝͉g̘̰͝o̶̺̮̼̝̲̻t͖̗̗̳̼͔ͅte̢̥̻̺͈n̴̙̣ ̠̜̞l͘o͓͈͔̖͚r͓̘͉̖͖͎e̫͕̝͢—͏͎͍̯͕̬̬̬
̸̤͙͔͇ ̰̞͈̼̲̭̬ ҉̠̜͔̙͕͚͕ ͔͉̗̼̫͜ͅW̢̘̱͙̫̭̲h̝̙͎̥̲i̥̙̗l̹̘̖̠̩͖̫e̟ ̖̭̥̩̥̱̫I͍̼̜ ̨͓̪̲̝̰n̩o̵̦ͅd͍̜͚͚ḏ͍͡e̷̪̝̟̺̬d́,̪ ̝̬͓ͅn̝e͟a͡r̝̘̣̳͜l̛̻̠͎͇̣y̡̲̳̪̦ ̛̝̻͔n̤̗̝͚͓̠a̕p̙p̗̘͓̮̦i͏̪n̶g̢̰,̮̩̲̣̮̮ ̗̞̳s͍̭̹̤̪͟u͚̬̙̼͈ḓ̷̭̘͍ͅd̫̙̭̰̕e͖̙̝͕͔ṋ̮̼̱̲̙l͕͙̲͞y̛ ͏̗̟ͅt̢̩͈͕͙̦̼̭h̟̝̠̼̰̀ę̮͚̟͇̳̠̼r̳̭͍͕̤̟e̲̬̝͈̝̻ ͏̪̠̻͇̦̦c̘͎̼̠am͖̣ͅe͖̖̪͜ ̺̗̬͈͎a̵͉̖͚̗͈̥͖ ̣̪̙̟͇͓̕t̖̞͈a͕̲̮̯̦͟p̡̲̻͍p̪̖̞̫͈̝i͎̰̜̭ṋ̺g̩̠͚̘͎͚͟,̬
̶̩͇͎͈̯̫A͕͕̤s҉̘̭̳̮͙ ̛͍̠̘̼̞̣̟o̷f͙ ̘̗͖͖ͅs͢o̷̬m̷̹ͅe͏͙͍̞̬̱ ̦͔͓͙ͅo̹̰͢ͅņ̘e̡̙̣̪͍̻̩ ̤́g͍e͈̹̟n̛ͅt͍̗l̴̜̩̱̖͇ỵ̢͇ ͈̝̠̫͈̹r͖̟a̷p͓̝͚̬p̷͈̖͔͎̗̜ing̶̙̙̺,͏̼̥̥̹̰͈ r̶̼̠̹̺̩a̧̺͇̩pp̘i̜͓̭̜̼n͇̝͡g̲̗ͅ ҉̪̬̝̝̦̬̲a̡̩̦t͇̼̮ ̡̜͎͎̗̤m̫̯ͅy̱̻̯̯̮̮͈ ̪̯̬c̟͖̺͔̦ḩa̖̲̠ͅͅm̼b̵̝̤̙̲̗e̸ͅr̘̥̫̦ ̞d̞͔o̢̙̙o̺̼͘r͔̦͕̜͇̣͡.̝͙͎̼̺͍̰
̩͎̜͈̼͢


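Under the hood, this kind of text is just ordinary characters with random Unicode combining marks (mostly the U+0300–U+036F “combining diacritical marks” block) stacked on top of them. Here’s a minimal Python sketch of the idea – an illustration only, not eeemo.net’s actual code:

import random

# Combining diacritical marks (U+0300-U+036F) attach to the preceding
# character, which is what gives Zalgo text its stacked, "melting" look.
COMBINING_MARKS = [chr(code) for code in range(0x0300, 0x0370)]

def zalgoify(text, intensity=5):
    """Append a random pile of combining marks to each visible character."""
    out = []
    for ch in text:
        out.append(ch)
        if not ch.isspace():
            out.extend(random.choices(COMBINING_MARKS, k=random.randint(1, intensity)))
    return "".join(out)

print(zalgoify("Once upon a midnight dreary"))
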
Light-based vortex

The Haze is a new immersive digital art installation from the Japanese collective teamLab that places guests at the center of a light-based vortex. The work uses light, fog, and sound to wrap visitors in a mesmerizing swirl of spotlights reflected by a mirrored floor.

Everybody Dance now!

More astounding technology. Given a video of a source dancer and a short clip of a target person, the system can make the target appear to perform the same dance.
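Under the hood, this kind of motion transfer typically works in two stages: run a pose estimator over both videos to get stick-figure skeletons, then train an image-to-image network on the target person’s footage so it can render that person from any skeleton. A rough sketch of the inference loop – the function names here are hypothetical placeholders, not the authors’ actual code:

# Hypothetical sketch of pose-transfer inference (not the paper's real code).
# estimate_pose stands in for a pose detector (e.g. something like OpenPose);
# render_target stands in for a pose-to-image network trained on the target's video.
def transfer_dance(source_frames, estimate_pose, render_target):
    """For each source frame: extract the pose skeleton, then ask the
    target-specific model to render the target person in that pose."""
    output = []
    for frame in source_frames:
        skeleton = estimate_pose(frame)        # 2D joint positions / stick figure
        output.append(render_target(skeleton)) # target rendered in that pose
    return output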

This will probably be used very soon to put whole hosts of dancers in movies and music videos in perfect sync, while only paying for one source dancer.

It could also be used to bring back deceased dancers, or to map their dances onto new artists.

Full paper and details here:

Google Tensor Processing Unit (TPU)

In 2016, Google announced they had developed their own chip to handle machine learning workloads. They now use these custom-made chips in all their major data centers for their most important daily functions.

Google engineers had identified the need for machine-learning-specific hardware as early as 2006, but the need became acute in 2013, when the explosion of data to be mined meant they might need to double the size of their data centers. In just under two years they hired a team, designed the chip, and built the first versions of their TPUs – a period one of the chief engineers described as ‘hectic’.

The units were marvels of simplicity, designed around observations of what a neural-network workload actually does. A neural network consists of the following repeating steps (sketched in code after the list):

  • Multiply the input data (x) by the weights (w) to represent the signal strength
  • Add the results to aggregate the neuron’s state into a single value
  • Apply an activation function (f) (such as ReLU, Sigmoid, or tanh) to modulate the artificial neuron’s activity.
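
As a concrete illustration, here is what those three steps look like for a single layer in plain NumPy – just a sketch of the math the TPU accelerates, not TPU code:

import numpy as np

def dense_layer(x, w, b):
    """One layer: multiply inputs by weights, aggregate, then activate."""
    z = x @ w + b                # multiply-and-accumulate: the bulk of the work
    return np.maximum(z, 0.0)    # ReLU activation modulates each neuron's output

x = np.random.rand(1, 256)       # input data
w = np.random.rand(256, 128)     # weights
b = np.zeros(128)                # biases
y = dense_layer(x, w, b)         # output, shape (1, 128)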

The chip was then designed to handle the linear algebra at the heart of this workload, after the team analyzed the key neural-net configurations used in their own operations. They found they needed to handle anywhere from 5 to 100 million weights at a time – far more than could be fed efficiently through the dozens of multiply units in a CPU, or even the few thousand in a GPU.

They then created a pipelined chip with a massive bank of 65,536 8-bit integer multipliers (compared with a few thousand multipliers in most GPUs and only dozens in a CPU), a unified 24 MB buffer of SRAM that works as registers, followed by activation units hardwired for neural-net tasks.

To reduce design complexity, they took advantage of the fact that their algorithms work fine with quantized values, so the chip could use 8-bit integer multipliers instead of full floating-point units.
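
Quantization here just means mapping the network’s floating-point values onto a small integer range. A minimal sketch of symmetric 8-bit quantization – an illustration of the idea, not Google’s exact scheme:

import numpy as np

def quantize_int8(w):
    """Map float values onto signed 8-bit integers with a single scale factor."""
    scale = np.max(np.abs(w)) / 127.0                            # largest magnitude maps to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float values."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 128).astype(np.float32)
q, scale = quantize_int8(w)      # int8 weights ready for integer multipliers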

Looking further at the workload, they used a ‘systolic’ design in which the results of one step flow directly into the next without having to be written out to memory – pumping much like the chambers of a heart, where each step feeds the one after it.
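
As a toy illustration of that flow, here is a matrix multiply written so that each partial sum is accumulated in place and handed straight to the next step, with only the final result written out – nothing like the real hardware’s timing, just the data-flow idea:

import numpy as np

def systolic_style_matmul(x, w):
    """Toy 'weight-stationary' matmul: each cell of the array holds one weight,
    inputs stream across, and partial sums flow from cell to cell down a column.
    Intermediate values never round-trip through memory."""
    n, k = x.shape
    _, m = w.shape
    out = np.zeros((n, m))
    for row in range(n):
        acc = np.zeros(m)                 # partial sums flowing down the column
        for i in range(k):                # one hop per cell in the column
            acc += x[row, i] * w[i, :]    # multiply-accumulate, passed onward
        out[row] = acc                    # only the final result leaves the array
    return out

x = np.random.rand(4, 8)
w = np.random.rand(8, 3)
assert np.allclose(systolic_style_matmul(x, w), x @ w)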

The results are astounding: the TPU can make 225,000 predictions in the same time it takes a GPU to make 13,000, or a CPU to make only 5,500.

This simple design, and the systolic system’s avoidance of memory writes, greatly reduce power usage – something that matters when you have thousands of these units in every data center.

They have since built two more versions of the TPU, one in 2017 and another in 2018. The second version improved on the first after they realized it was memory-bandwidth limited, so they added 16 GB of high-bandwidth memory per chip to reach 45 TFLOPS. They then scaled up further by combining four chips into a single module and 64 modules into a single pod (256 chips in all) – at 45 TFLOPS per chip, roughly 11.5 petaflops per pod.

I highly suggest giving the links a further read. It’s not only fascinating from an AI perspective, but also a wonderful story of looking at a problem with fresh eyes and engineering the best solution with today’s technology.