
Your first game


Time for me to shake my fist and tell you darn kids to get off my lawn. Let’s set the wayback machine to the 1980s…

Raph Koster shares the first video game he ever wrote, as well as a great flashback to what almost everyone who wanted to learn to program did back in the ’80s and ’90s. We typed in long programs by hand from books we got at the library and from computer magazines. We taught ourselves BASIC and smatterings of assembly. If you were really cool, you even tried to sell your games – which meant copying them to a floppy, printing a dot-matrix label for it, and trying to sell it in a ziplock baggie.

I started by fiddling around with the programs I typed in to see if I could change them or make them do different things.

My very first video ‘game’, on my TRS-80, consisted of a bunch of black and white dots that would fall down from the top of the screen, and you moved your dot ‘ship’ back and forth to avoid them as the enemies rained down. They came down one at a time. Ridiculously slowly. But it was probably my very first ‘game’.

My second, more ‘real’ game was a castle adventure game. You were the sole heir of a long-lost uncle and had to search his castle to find the deed within 24 hours. It was a text adventure at its heart, but there were opening graphics. I even wrote my own graphics editor with which I drew those opening screens. I believe I still have the graph paper I used.

Anyway, for anyone who learned to program in the ’80s, Koster’s video will tug some familiar heartstrings. For you younger kids, this is how it was done back in the day…

Generating garbley text


Ever see that strange, garbled text in posts? Want to create it yourself? It’s generated using Zalgo (http://eeemo.net/).
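Under the hood, Zalgo text is just ordinary text with random Unicode combining diacritical marks (the U+0300–U+036F block) stacked after each character. Here’s a minimal sketch of the idea in Python – my own illustration, not eeemo.net’s actual code; the function name and `intensity` parameter are made up:

```python
import random

# Unicode combining diacritical marks (U+0300-U+036F). Each mark
# stacks on top of (or below) the character that precedes it.
COMBINING = [chr(c) for c in range(0x0300, 0x0370)]

def zalgo(text, intensity=3):
    """Append `intensity` random combining marks after each visible character."""
    out = []
    for ch in text:
        out.append(ch)
        if not ch.isspace():
            out.extend(random.choices(COMBINING, k=intensity))
    return "".join(out)

print(zalgo("Once upon a midnight dreary"))
```

Turning `intensity` up makes the text “melt” further; rendering quality depends entirely on the font and browser.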

Here’s an example of what it can generate:

O͈̬͎̫̦̭̭n̬̦̞̺̼̗̠ce̖ ̷u̙̼͕̩̬̹ͅp̀o͇͎̣̥̼n̡͖̰͍̱͉͚ ͕͜a̵̭ ̭͠m͏̖͇̞̫̗̜i̞͖̺̖͔d̩̹̯n͉͙̞͚̘͟i̸͍̲͙͕g̜͈̩̬ͅht̗ ̥̥͈̺̝̣̰d͚̟͝ͅr̢͕̜̥̫̠̣̤e͇a̲̖̹͞r̜͚̞y̜͕,̱ ̝͖̰͙͎͚͘w̹̥̥͇̹̼͟h͕̰̝i̙̘̯l̦͈̯̯e̮͎ͅ ̧̺̟̘̙͕͕I̼͙ ̥̬̮̝͕͟p̬͔͙o͓͖̩n̴̩̞̬̻d͔͈ḛ̡̝̝̻̥r̞͍̫̲̰ḙ͔̩̲̟́d̨̦̹̳̯̩̗̯,̕ ̷͓̭̪̟̦̻̳w̘̬e̠a͙̹̹̞̺͉͝ḵ̞̹͠ ̘̮̩̫̲ͅa̖̦n̯͔̞̱̬d̮̰ w̸̫̪̦̦͙é͓͖͙̪͍a̛͎̫͓̦̹͙r̹͇͔y͔̹̱̭̱͙,̢͔͖̖̘
̯̰O͍̼̪̗̥̩ͅv͏̱͚̭̠e̵̳r̥̣̠ ̸̻̝̲m̜͍͙ą͍̳̝n͙͇̞y̫͉̳̼̹͓͠ ͍̪͉̗͉̯a̟͍ ̟̗̩̭̺̖q̸̹͚̖̜͙̳u̷̱̝̟̬͎a̳̟̥̱̠͠i̗͖̳͍n̦t ̳͎̠̦a̭̪̣n͙̗̤̯̟̮̩͞d̀ ͓͚̣͘c̴̗u̻͕͟r̸͕̱̪̗͚ì̘o̶̻͕̪̗̫̞u̗̖̻͍̣s̵̫̲̟̟͖̙ ̶̱v̥̩̻̱o̴͍̙̘̻͙l͕͔̮͚͓̱͜u͏̺̜m̮̝̦̠͘ę͉̤͎̹ ̟o̗̦f̧̲͖ ̼̻͚͕̯̞͡f̡̜̻̱̻o̷̮̝̙r̘͖̩̝͉g̘̰͝o̶̺̮̼̝̲̻t͖̗̗̳̼͔ͅte̢̥̻̺͈n̴̙̣ ̠̜̞l͘o͓͈͔̖͚r͓̘͉̖͖͎e̫͕̝͢—͏͎͍̯͕̬̬̬
̸̤͙͔͇ ̰̞͈̼̲̭̬ ҉̠̜͔̙͕͚͕ ͔͉̗̼̫͜ͅW̢̘̱͙̫̭̲h̝̙͎̥̲i̥̙̗l̹̘̖̠̩͖̫e̟ ̖̭̥̩̥̱̫I͍̼̜ ̨͓̪̲̝̰n̩o̵̦ͅd͍̜͚͚ḏ͍͡e̷̪̝̟̺̬d́,̪ ̝̬͓ͅn̝e͟a͡r̝̘̣̳͜l̛̻̠͎͇̣y̡̲̳̪̦ ̛̝̻͔n̤̗̝͚͓̠a̕p̙p̗̘͓̮̦i͏̪n̶g̢̰,̮̩̲̣̮̮ ̗̞̳s͍̭̹̤̪͟u͚̬̙̼͈ḓ̷̭̘͍ͅd̫̙̭̰̕e͖̙̝͕͔ṋ̮̼̱̲̙l͕͙̲͞y̛ ͏̗̟ͅt̢̩͈͕͙̦̼̭h̟̝̠̼̰̀ę̮͚̟͇̳̠̼r̳̭͍͕̤̟e̲̬̝͈̝̻ ͏̪̠̻͇̦̦c̘͎̼̠am͖̣ͅe͖̖̪͜ ̺̗̬͈͎a̵͉̖͚̗͈̥͖ ̣̪̙̟͇͓̕t̖̞͈a͕̲̮̯̦͟p̡̲̻͍p̪̖̞̫͈̝i͎̰̜̭ṋ̺g̩̠͚̘͎͚͟,̬
̶̩͇͎͈̯̫A͕͕̤s҉̘̭̳̮͙ ̛͍̠̘̼̞̣̟o̷f͙ ̘̗͖͖ͅs͢o̷̬m̷̹ͅe͏͙͍̞̬̱ ̦͔͓͙ͅo̹̰͢ͅņ̘e̡̙̣̪͍̻̩ ̤́g͍e͈̹̟n̛ͅt͍̗l̴̜̩̱̖͇ỵ̢͇ ͈̝̠̫͈̹r͖̟a̷p͓̝͚̬p̷͈̖͔͎̗̜ing̶̙̙̺,͏̼̥̥̹̰͈ r̶̼̠̹̺̩a̧̺͇̩pp̘i̜͓̭̜̼n͇̝͡g̲̗ͅ ҉̪̬̝̝̦̬̲a̡̩̦t͇̼̮ ̡̜͎͎̗̤m̫̯ͅy̱̻̯̯̮̮͈ ̪̯̬c̟͖̺͔̦ḩa̖̲̠ͅͅm̼b̵̝̤̙̲̗e̸ͅr̘̥̫̦ ̞d̞͔o̢̙̙o̺̼͘r͔̦͕̜͇̣͡.̝͙͎̼̺͍̰
̩͎̜͈̼͢

 

Make anyone dance


More astounding technology. Take any source dancer and make anyone do the same dance.

This will probably be used very soon to make whole hosts of dancers in movies/music videos all in perfect sync while only paying for one source dancer.

It could be used to bring back deceased dancers, or apply the dances of deceased dancers onto new artists.

Full paper here:
https://arxiv.org/pdf/1808.07371.pdf

 

 

Google Tensor Processing Unit (TPU)


In 2016, Google announced they had developed their own chip to handle machine learning workloads. They now use these custom-made chips in all their major datacenters for their most important daily functions.

Even in 2006, Google engineers had identified the need for dedicated machine learning hardware, but the need became acute in 2013, when the explosion of data to be mined meant they might have to double the size of their data centers. In just under two years they hired a team and designed and built the first version of their TPUs – a period one of the chief engineers described as ‘hectic’.

The units are marvels of simplicity, designed around the observed structure of a neural network workload. Machine learning networks consist of the following repeating steps:

  • Multiply the input data (x) with weights (w) to represent the signal strength
  • Add the results to aggregate the neuron’s state into a single value
  • Apply an activation function (f) (such as ReLU, sigmoid, tanh, or others) to modulate the artificial neuron’s activity.
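In code, those three steps boil down to a multiply-accumulate followed by an activation. A toy sketch in Python (illustrative only – the names here are mine, not Google’s):

```python
import math

def neuron(x, w, f):
    # 1) multiply each input by its weight,
    # 2) add the products to aggregate into a single value,
    # 3) apply the activation function f to the aggregate.
    total = sum(xi * wi for xi, wi in zip(x, w))
    return f(total)

def relu(v):
    return max(0.0, v)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

output = neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], relu)
```

A TPU does exactly this, but for millions of weights at once, with the multiplies and adds laid out directly in hardware.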

The chip was then designed to handle the linear algebra steps of this workload after they analyzed the key neural net configurations used in their operations. They found they needed to handle operations with anywhere from 5 to 100 million weights at a time – far more than the dozens or even hundreds of multiply units in a CPU or GPU could handle.

They then created a pipelined chip with a massive bank of 65,536 8-bit integer multipliers (compared with just a few thousand multipliers in most GPUs and dozens in a CPU), a unified 24 MB SRAM buffer that works as registers, followed by activation units hardwired for neural net tasks.

To reduce design complexity, they observed that their algorithms worked fine with quantization, so they could use 8-bit integer multiply units instead of full floating-point ones.
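Quantization here just means mapping floating-point weights onto a small integer range with a shared scale factor. A rough sketch of the idea (my own simplification – real quantization schemes vary):

```python
def quantize(values, bits=8):
    # Map floats onto signed integers (-127..127 for 8 bits) using a
    # single scale factor derived from the largest-magnitude value.
    max_mag = max(abs(v) for v in values)
    scale = max_mag / (2 ** (bits - 1) - 1) if max_mag else 1.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    # Recover approximate floats; the error is at most half a step.
    return [q * scale for q in quantized]

q, scale = quantize([0.12, -0.5, 0.33])
approx = dequantize(q, scale)
```

The small rounding error this introduces is tolerable for neural net inference, and 8-bit integer multipliers are far smaller and cheaper than floating-point units, which is what made the 65,536-multiplier bank feasible.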

Looking further at the workload, they used a ‘systolic’ design in which the results of one step flow directly into the next without having to be written out to memory – pumping much like the chambers of a heart, where one step feeds into the next:
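Software can only loosely imitate this, but the key idea – partial sums flowing from one multiply-add cell to the next without intermediate memory writes – looks roughly like this (a toy analogy of a weight-stationary array, not the TPU’s actual dataflow):

```python
def systolic_matvec(W, x):
    # Weight-stationary analogy: each "cell" holds one weight, and the
    # running partial sum is handed from cell to cell down the column
    # instead of being written out to memory between steps.
    out = []
    for row in W:
        partial = 0.0
        for w, xj in zip(row, x):
            partial += w * xj  # the result of this cell feeds the next
        out.append(partial)
    return out

y = systolic_matvec([[1.0, 2.0], [3.0, 4.0]], [5.0, 6.0])
```

In the real chip every cell computes in parallel each clock cycle; the win is that intermediate sums live in the array itself rather than round-tripping through memory.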

The results are astounding: the TPU can make 225,000 predictions in the same time it takes a GPU to make 13,000, or a CPU to make only 5,500.

Their simple design and systolic dataflow minimize writes to memory and greatly reduce power usage – something important when you have thousands of these units in every data center.

They have since built two more versions of the TPU, one in 2017 and another in 2018. The second version improved on the first after they realized it was bandwidth limited – so they added 16 GB of high-bandwidth memory to reach 45 TFLOPS. They then expanded the scale by combining four chips into a single module; 64 modules combine into a single pod (making 256 individual chips).

I highly suggest giving the links a further read. It’s not only fascinating from an AI perspective, but a wonderful story of looking at a problem with outside eyes and engineering the best solution given today’s technology.

Compelling web AR experience


Imagine looking up famous artworks, sculptures, and historical artifacts – then bringing them to your living room to examine as if it were really there.

Google’s Chrome Canary uses the WebXR API to bring an educational AR experience to your browser. You’ll need an ARCore-compatible Android phone running Oreo in addition to Canary, but after that you’re good to go. You can walk around a Mesoamerican sculpture, reading annotations as if you were visiting a museum exhibit – without the usual cordons and glass cases.

Setting up your own git server with individual user accounts


This is sort of covered in other spots, but not as clearly or from scratch. Here’s a complete guide to setting up your own git server and git clients. I found this setup handy when trying out more complex git merging commands and experimenting with remotes while learning git.

I tested this on Ubuntu 16.04 by creating an Ubuntu virtual machine and then cloning it: one VM for the server and two for clients. I used bridged networking so each would get its own IP address, but that’s not required as long as the VMs can communicate with each other over TCP/IP.

There are two ways to set the git accounts up. The first way shown has all client users accessing the repository through the same server user account. While each user’s submissions will be labeled correctly in the git log, having everyone share one system account isn’t safe computing practice for groups. If you instead follow the instructions in the optional section, you can use individual user accounts and keep things safer.

Client/Server Setup

First, on the server:

  1. Make sure ssh is installed on the server:
    server$ sudo apt-get install openssh-server
  2. Make sure sshd is running/listed when you do a ps. If not, reboot or restart it.
    server$ ps -A | grep sshd
  3. Make sure git is installed:
    server$ sudo apt-get install git-core
  4. Add a user to the server that will hold the git repositories
    server$ sudo adduser git
    server$ sudo passwd git
    server$ su - git
    server$ mkdir -p .ssh

 

Next, on your client:

  1. Make sure git is installed
    client$ sudo apt-get install git-core
  2. Create an ssh key. While not strictly required, it’s a good idea to add a passphrase to the key when prompted during key creation.
    client$ ssh-keygen -t rsa

    This should create a file called id_rsa.pub in your ~/.ssh directory. See documentation on ssh-keygen for full details.

  3. Copy the ssh key to the server’s git directory:
    client$ scp ~/.ssh/id_rsa.pub git@serverurl_or_ip:/home/git/client_id_rsa.pub

 

Back on server:

  1. Create the .ssh directory in /home/git (if it doesn’t already exist) and lock down its permissions – sshd will ignore keys if the directory is too open:
    server$ mkdir -p ~/.ssh
    server$ chmod 700 ~/.ssh
  2. Append the client user key to the list of authorized keys:
    server$ cat ~/client_id_rsa.pub >> ~/.ssh/authorized_keys
    server$ chmod 600 ~/.ssh/authorized_keys
  3. Create a new group called ‘gituser’ that we’ll use to give users access to our repository in /home/git/:
    server$ sudo groupadd gituser
    server$ sudo usermod -a -G gituser git
  4. Log out completely and back in. You MUST do this for the group assignment to take effect, or subsequent chgrp/chmod commands won’t work.
  5. Make the git repository and tell it to share based on the group the user belongs to.
    server$ cd ~git
    server$ mkdir -p mydepot
    server$ cd mydepot
    server$ git init --bare --shared=group
    Initialized empty Git repository in /home/git/mydepot/
  6. Set the permissions on the repository directory so that anyone in the new ‘gituser’ group can access it:
    server$ chgrp -R gituser /home/git/mydepot
    server$ chmod -R g+rw /home/git/mydepot
    server$ find /home/git/mydepot -type d -exec chmod g+s {} +

 

Back on client (if it is a clean client without files for the repo):

  1. Test your ssh connection by trying to ssh into the server (using the git user)
  2. Create the local project:
    client$ mkdir -p depot/project1
    client$ cd depot/project1
    client$ git config --global user.email "you@client.com"
    client$ git config --global user.name "clientUsername"
  3. Clone the remote to your local system
    client$ git clone ssh://git@serverurl_or_ip/home/git/mydepot/ .

Enter the git account password (or your key passphrase, if you set one) and you’re done. The clone and the remote should now be connected; push and fetch as normal. See the optional section below if you don’t want to use a shared git user account on the server.

 

Or – Back on client that HAS existing files you want to get to the server:

Let’s say you have a client that already has a bunch of files, or even a git repository, and you want to start using a remote repository. Here’s how you can add those local files to the remote server repository you just created.

  1. Initialize the repository where your client files are
    client$ git init
      Initialized empty Git repository in <blah>
    client$ git add .
    client$ git commit
      <Write something about this being a commit from the client>
  2. If you are going to use the ‘git’ user account for all users, connect the project to your server this way:
    client$ git remote add origin ssh://git@serverurl_or_ip/home/git/mydepot/

    If you don’t want to use the git account, then you must first create a user account on the server that matches the client userid (making sure to set the group/user properties on the server account), then use this:

    client$ git remote add origin ssh://serverurl_or_ip/home/git/mydepot/

    Enter the password for your username or the ‘git’ server user, depending on which one you used.

  3. Set up git configuration to avoid warnings, then push:
    client$ git config --global push.default simple
    client$ git push --set-upstream origin master

    During the push you will be prompted for the passphrase you used when you created your RSA key. Enter that passphrase (not your git/user account password).

Optional – Using user accounts instead of a global ‘git’ account on the server.

The previous instructions had everyone use the same ‘git’ server user account when checking in – which means everyone must have the ‘git’ server account password. The log will show the right names, but security-wise it isn’t best practice to use one shared account on servers.

If you have more than one user but want everyone to log in separately, simply create a user account on the server like this:

On client for each client user:

  1. Create an ssh key on your client as before.
  2. Copy the public (.pub) key file to the server:
    client$ scp .ssh/myclient_id_rsa.pub git@serverurl_or_ip:/home/git

On server:

  1. Create a user account (with a home directory) that matches the userid on the client:
    server$ sudo useradd -m client_username
    server$ sudo passwd client_username
  2. Append the client’s public key to that user’s authorized keys (not the git user’s) and set safe permissions:
    server$ sudo mkdir -p /home/client_username/.ssh
    server$ sudo sh -c 'cat /home/git/myclient_id_rsa.pub >> /home/client_username/.ssh/authorized_keys'
    server$ sudo chmod 700 /home/client_username/.ssh
    server$ sudo chmod 600 /home/client_username/.ssh/authorized_keys
    server$ sudo chown -R client_username:client_username /home/client_username/.ssh
  3. Make sure the new user account has access to the /home/git/ project directories by adding it to the ‘gituser’ group:
    server$ sudo usermod -a -G gituser client_username

From now on, you don’t need to specify the git user account. Leave the git@ part out of the git clone URL and use that username’s password when asked to log in:

client$ git clone ssh://serverurl_or_ip/home/git/mydepot .

This method works great, but does require that you keep the client and server userid account passwords synced.

Setting up a Windows client:

Once the server is set up, you’re almost there. Microsoft has written a good guide. You’ll need OpenSSH installed (recent Windows 10 builds include it) to generate an ssh key, if you don’t have one already.
https://docs.microsoft.com/en-us/vsts/git/use-ssh-keys-to-authenticate?view=vsts

 


Terry Davis and TempleOS


Terry Davis is a programmer who has had a very difficult life. After a series of manic episodes, Terry, now 44, is homeless; most believe he has schizophrenia or is suffering from some other seriously debilitating mental disorder. He was most recently spotted on the streets here in Portland.

What’s unusual about Terry is that he wrote his own operating system: TempleOS. It’s free to download and is written in HolyC, a language of his own design. He even has his own YouTube channel. There is even video of him debugging the OS while living homeless in his car in the middle of parking lots.

His life currently demonstrates the terrible state of mental health care in the United States, and the difficult reality of many homeless. They are often very intelligent, but have disorders that make them non-functional in society.

Read more about TempleOS and Terry on Wikipedia.

Warning: his more recent videos have racist and inappropriate sexual language.