Category: Technical

Set up VNC on Ubuntu 14.04


Setting up VNC on Ubuntu used to be pretty painless. But recent changes in Ubuntu and X have left it kind of a mess. It took me way longer to set up VNC than it should have, and finding the documentation wasn’t super-easy either. There were lots of broken guides. So, here’s what you need to do:

  • Follow these setup instructions first:
    https://www.howtoforge.com/how-to-install-vnc-server-on-ubuntu-14.04
  • When you finish, however, a known issue means the screen will come up blue-grey with few desktop controls when you connect. As near as I can tell, this is because the desktop manager Ubuntu currently uses no longer works over VNC; you need to configure VNC to use an older desktop manager that still does.
  • To fix that problem, follow this guide:
    http://onkea.com/ubuntu-vnc-grey-screen/
  • Start vncserver on the host, then connect from your client. The VNC port is 5900 plus the display number, so match the end of the port to the :X number you used when creating the server.
    • Example:
      host: vncserver :4 -geometry 800x600 (to create the server)
      client connects to: 10.23.47.150:5904 (port 5900 + display :4)
  • If you get an error starting vncserver, increment the display number (:2 to :3, :4, and so forth) until you find one not already in use by another user on the server.

OpenGL ES 2.0/3.0 Offscreen rendering with glRenderbuffer


Rendering to offscreen surfaces is a key component of any graphics pipeline. Post-processing effects, deferred rendering, and newer global illumination strategies all use it. Unfortunately, offscreen rendering on OpenGL ES is not well documented. OpenGL ES is mostly used on embedded/mobile devices, and until recently those devices haven’t had the graphics bandwidth to keep up with new rendering techniques. Add to that the facts that many mobile games have simple gameplay and small screens that don’t need complex lighting models, that many people now use off-the-shelf engines for their games, and that a good amount of mobile hardware out there still doesn’t support rendering to offscreen surfaces at all, and it’s no surprise the technique is rarely used and rarely discussed.

In implementing offscreen rendering for OpenGL ES, I turned to the very good OpenGL ES Programming book, which has a whole chapter on framebuffer objects. When I tried the samples in the book, however, I had a lot of difficulty getting them working on my Linux-based mobile device. Many implementation examples create framebuffer objects backed by textures, but you can also back them with renderbuffers. That is good to know because many hardware vendors support very few render-to-texture formats, and you can find yourself struggling with an implementation that fails simply because the output format isn’t supported.

Thankfully, I found this article and thought I’d copy the information here, since it’s the only place I’ve seen working code that demonstrates the technique. It also includes the very important step of reading back the output format, and it uses glReadPixels() so you can validate that you actually wrote to the offscreen renderbuffer surface.

In my case, on an Intel graphics part, the format that worked (and the one most commonly recommended) was GL_RGB/GL_UNSIGNED_SHORT_5_6_5. Steps 1-8 are standard OpenGL ES setup code, included so you can verify your setup. Step 9 is where the framebuffer and renderbuffer objects are created.

 

    #include <cstdio>
    #include <cstdlib>
    #include <EGL/egl.h>
    #include <GLES2/gl2.h>
    #include <QDebug>
    #include <QImage>

    #define CONTEXT_ES20

    #ifdef CONTEXT_ES20
        EGLint ai32ContextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    #endif

    // Step 1 - Get the default display.
    EGLDisplay eglDisplay = eglGetDisplay((EGLNativeDisplayType)0);

    // Step 2 - Initialize EGL.
    eglInitialize(eglDisplay, 0, 0);

    #ifdef CONTEXT_ES20
    // Step 3 - Make OpenGL ES the current API.
    eglBindAPI(EGL_OPENGL_ES_API);

    // Step 4 - Specify the required configuration attributes.
    EGLint pi32ConfigAttribs[5];
    pi32ConfigAttribs[0] = EGL_SURFACE_TYPE;
    pi32ConfigAttribs[1] = EGL_WINDOW_BIT;
    pi32ConfigAttribs[2] = EGL_RENDERABLE_TYPE;
    pi32ConfigAttribs[3] = EGL_OPENGL_ES2_BIT;
    pi32ConfigAttribs[4] = EGL_NONE;
    #else
    EGLint pi32ConfigAttribs[3];
    pi32ConfigAttribs[0] = EGL_SURFACE_TYPE;
    pi32ConfigAttribs[1] = EGL_WINDOW_BIT;
    pi32ConfigAttribs[2] = EGL_NONE;
    #endif

    // Step 5 - Find a config that matches all requirements.
    EGLint iConfigs;
    EGLConfig eglConfig;
    eglChooseConfig(eglDisplay, pi32ConfigAttribs, &eglConfig, 1,
                                                    &iConfigs);

    if (iConfigs != 1) {
        printf("Error: eglChooseConfig(): config not found.n");
        exit(-1);
    }

    // Step 6 - Create a surface to draw to.
    EGLSurface eglSurface;
    eglSurface = eglCreateWindowSurface(eglDisplay, eglConfig,
                                  (EGLNativeWindowType)NULL, NULL);

    // Step 7 - Create a context.
    EGLContext eglContext;
    #ifdef CONTEXT_ES20
        eglContext = eglCreateContext(eglDisplay, eglConfig, NULL,
                                               ai32ContextAttribs);
    #else
        eglContext = eglCreateContext(eglDisplay, eglConfig, NULL, NULL);
    #endif

    // Step 8 - Bind the context to the current thread
    eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);
    // end of standard gl context setup

    // Step 9 - create framebuffer object
    GLuint fboId = 0;
    GLuint renderBufferWidth = 1280;
    GLuint renderBufferHeight = 720;

    // create a framebuffer object
    glGenFramebuffers(1, &fboId);
    glBindFramebuffer(GL_FRAMEBUFFER, fboId);

    // create a texture object
    // note that this is commented out/not used in this case but is
    // included for completeness/as example
    /*  GLuint textureId;
     glGenTextures(1, &textureId);
     glBindTexture(GL_TEXTURE_2D, textureId);
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);                             
     //GL_LINEAR_MIPMAP_LINEAR
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
     glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_HINT, GL_TRUE); // automatic mipmap
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, renderBufferWidth, renderBufferHeight, 0,
                  GL_RGB, GL_UNSIGNED_BYTE, 0);
     glBindTexture(GL_TEXTURE_2D, 0);
     // attach the texture to FBO color attachment point
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                         GL_TEXTURE_2D, textureId, 0);
     */
     qDebug() << glGetError();
     GLuint renderBuffer;
     glGenRenderbuffers(1, &renderBuffer);
     glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
     qDebug() << glGetError();
     glRenderbufferStorage(GL_RENDERBUFFER,
                           GL_RGB565,
                           renderBufferWidth,
                           renderBufferHeight);
     qDebug() << glGetError();
     glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                               GL_COLOR_ATTACHMENT0,
                               GL_RENDERBUFFER,
                               renderBuffer);

      qDebug() << glGetError();
      GLuint depthRenderbuffer;
      glGenRenderbuffers(1, &depthRenderbuffer);
      glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
      glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, renderBufferWidth, renderBufferHeight);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);

      // check FBO status
      GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
      if(status != GL_FRAMEBUFFER_COMPLETE) {
          printf("Problem with OpenGL framebuffer after specifying color render buffer: n%xn", status);
      } else {
          printf("FBO creation succeddedn");
  }

  // check the output format
  // This is critical to knowing what surface format just got created
  // ES only supports 5-6-5 and other limited formats and the driver
  // might have picked another format
  GLint format = 0, type = 0;
  glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &format);
  glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);

  // clear the offscreen buffer
  glClearColor(1.0,0.0,1.0,1.0);
  glClear(GL_COLOR_BUFFER_BIT);

  // commit the clear to the offscreen surface
  eglSwapBuffers(eglDisplay, eglSurface);

  // You should put your own calculation code here based on format/type
  // you discovered above
  int size = 2 * renderBufferHeight * renderBufferWidth;
  unsigned char *data = new unsigned char[size];
  printf("size %d", size);

  // in my case, I got back a buffer that was RGB565
  glReadPixels(0, 0, renderBufferWidth, renderBufferHeight, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, data);

  // Check output buffer to make sure you cleared it properly.
  // In 5-6-5 format, clearing to clearcolor=(1, 0, 1, 1)
  // you get 1111100000011111b = 0xF81F in hex
  if( (data[0] != 0x1F) || (data[1] != 0xF8))
      printf("Error rendering to offscreen buffern");

  QImage image(data, renderBufferWidth, renderBufferHeight, renderBufferWidth * 2, QImage::Format_RGB16);
  image.save("result.png");
New AI just ‘decisively’ beat pro poker players in 7 day tourney and demonstrates mastery of imperfect information games


Developed by Carnegie Mellon University, a new AI called Libratus won the “Brains Vs. Artificial Intelligence” tournament against four poker pros by $1,766,250 in chips over 120,000 hands (games). The researchers say the victory margin was large enough to count as a statistically significant win: they can be at least 99.7 percent sure the AI’s victory was not due to chance.

The four human poker pros who participated in the tournament spent many extra hours each day trying to puzzle out Libratus. They teamed up at the start with a collective plan: each would try different ranges of bet sizes to probe for weaknesses in Libratus’s strategy that they could exploit. Each night of the tournament they gathered in their hotel rooms to analyze the day’s play and talk strategy.

The AI took a lead it never lost. The margin see-sawed close to even mid-week and even shrank to $50,000 on the sixth day, but on the seventh day “the wheels came off” for the humans. By the end, Jimmy Chou was convinced that Libratus had tailored its strategy to each individual player. Dong Kim, who performed best among the four by losing only $85,649 in chips to Libratus, believed the humans were playing a slightly different version of the AI each day.

After Kim finished playing on the final day, he helped answer some questions for online viewers watching the poker tournament through the live-streaming service Twitch. He congratulated the Carnegie Mellon researchers on a “decisive victory.” But when asked about what went well for the poker pros, he hesitated: “I think what went well was… shit. It’s hard to say. We took such a beating.”

The victory demonstrates that the AI has likely surpassed the best humans at strategic reasoning in “imperfect information” games such as poker. But more than that, Libratus’s algorithms can take the “rules” of any imperfect-information game or scenario and come up with a strategy of their own. For example, the Carnegie Mellon team hopes its AI could design drugs to counter viruses that evolve resistance to certain treatments, or perform automated business negotiations. It could also power applications in cybersecurity, military robotic systems, or finance.

http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/ai-learns-from-mistakes-to-defeat-human-poker-players

 

Fingerprints are not security


Jan Krissler, known in hacker circles as Starbug, had already made headlines for cracking Apple’s TouchID sensor within 24 hours of the iPhone 5S release. This time, he used several easily taken close-range photos of German defense minister Ursula von der Leyen, including one gleaned from a press release issued by her own office and another he took himself from three meters away, to reverse-engineer her fingerprint and pass biometric scans.

The same conference also demonstrated a “corneal keylogger”. The idea behind the attack is simple: a hacker may have access to a user’s phone camera, but nothing else. How do you go from there to stealing all their passwords?

One way, demonstrated on stage, is to read what they’re typing by analyzing photographs of the reflections in their eyes. Smartphone cameras, even front-facing ones, are now high-resolution enough that such an attack is possible.

“Biometrics are not secrets… Ideally, they’re unique to each individual, but that’s not the same thing as being a secret.”

https://www.theguardian.com/technology/2014/dec/30/hacker-fakes-german-ministers-fingerprints-using-photos-of-her-hands

PIX for Windows is back!


PIX is a performance tuning and debugging tool for game developers that hadn’t been updated for the desktop in years. It lived on through three generations of Xbox consoles, but the desktop got no love. No longer! Microsoft just announced that a PIX beta is now available for analyzing DirectX 12 games on Windows.

PIX on Windows provides five main modes of operation:

  • GPU captures for debugging and analyzing the performance of Direct3D 12 graphics rendering.
  • Timing captures for understanding the performance and threading of all CPU and GPU work carried out by your game.
  • Function Summary captures accumulate information about how long each function runs and how often it is called.
  • Callgraph captures trace the execution of a single function.
  • Memory Allocation captures provide insight into the memory allocations made by your game.

Go to the Microsoft blog to download it for free.

 

scp without entering a password each time


Let’s say you want to copy files between two hosts, host_src and host_dest. host_src is the host where you run scp, ssh, or rsync, irrespective of the direction of the file copy.

  1. On host_src, run this command as the user that runs scp/ssh/rsync:

    $ ssh-keygen -t rsa

    This will prompt for a passphrase. Just press the enter key. (If you assign a passphrase to the key, you’ll need to enter it each time you scp.) It then generates a private key and a public key, and ssh-keygen shows where it saved the public key, which by default is ~/.ssh/id_rsa.pub:

    Your public key has been saved in <your_home_dir>/.ssh/id_rsa.pub

  2. Transfer the id_rsa.pub file to host_dest by ftp, scp, rsync, or any other method.
  3. On host_dest, log in as the remote user you plan to use when you run scp, ssh, or rsync on host_src.
  4. Append the contents of id_rsa.pub to ~/.ssh/authorized_keys:
    $ cat id_rsa.pub >>~/.ssh/authorized_keys
    $ chmod 700 ~/.ssh/authorized_keys

If this file does not exist, the cat command above will create it. Make sure you remove permission for others to read the file via chmod. If it’s a public key, why prevent others from reading it? Probably because the owner of the key has distributed it to only a few trusted users and has no additional security measure in place to check whether a reader really is one of those trusted users.

Optional – allowing root to ssh:

  1. ssh by default does not allow root to log in. This has to be explicitly enabled on host_dest by editing /etc/ssh/sshd_config and changing the PermitRootLogin option from no to yes.
  2. Don’t forget to restart sshd so that it reads the modified config file.
  3. Do this only if you want to use the root login.

That’s it. Now you can run scp, ssh, and rsync on host_src connecting to host_dest, and you won’t be prompted for a password. Note that you will still be prompted if you run the commands on host_dest connecting back to host_src. To cover that direction, reverse the steps above (generate the public key on host_dest and copy it to host_src) and you have a two-way setup.

Connecting iSpy to an Amcrest IP2M-841 webcam


This was more annoying than it should have been. When setting up my Amcrest IP2M-841B camera, I was able to use the Amcrest IP Config tool to log in and watch my camera without issue.

When using iSpy 64, however, the darn thing couldn’t figure out how to connect to it. Here’s how I did it. I left the camera on channel 1, set the encoding to plain H.264, and then did the following.

Test your camera using Amcrest IP config.

The first thing is to make sure your camera is working at all:

  1. Be sure you can open the IP config tool and see your cameras.
  2. Make sure the passwords are correct, that you can get a live view, that the encoding is set to H.264, and that the channels are correct.

 

Test your rtsp line using VLC:

  1. Open VLC (install it if you need)
  2. Media->Open Network Stream
  3. Copy in your rtsp: address
    1. example without the username/password:
      1. rtsp://192.168.1.99:554/cam/realmonitor?channel=1&subtype=0
      2. VLC will ask for your username/password and you can enter it.
    2. example with the username/password:
      1. rtsp://<username>:<password>@<ipaddress>:554/cam/realmonitor?channel=1&subtype=0
      2. I left the arguments as --rtsp-caching=100 (the default)
  4. You should see your stream come up in VLC
  5. NOTE: When setting your password, if it contains any special characters like %!&#$, be sure to convert them to their equivalent hex ASCII codes (percent-encoding). See this chart here; a small sketch of the conversion follows this list.
    1. Example: if your password is ‘cat&dog’, you should use the password: ‘cat%26dog’
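
If you don’t want to work out the hex codes by hand, here is a minimal sketch of the conversion (my own, assuming standard URL percent-encoding is what the camera and iSpy expect; it is not taken from the Amcrest or iSpy documentation):

    #include <cctype>
    #include <cstdio>
    #include <string>

    // Percent-encode everything except unreserved URL characters.
    std::string percentEncode(const std::string &in)
    {
        std::string out;
        for (unsigned char c : in) {
            if (std::isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~') {
                out += static_cast<char>(c);
            } else {
                char buf[4];
                std::snprintf(buf, sizeof(buf), "%%%02X", static_cast<unsigned>(c));
                out += buf;                      // e.g. '&' becomes "%26"
            }
        }
        return out;
    }

    int main()
    {
        // "cat&dog" -> "cat%26dog", matching the example above
        std::printf("%s\n", percentEncode("cat&dog").c_str());
        return 0;
    }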

Connecting to iSpy

If connecting via VLC worked, you’re 75% of the way there.

  1. Start iSpy
  2. Add->IP Camera
  3. Select the VLC Plugin tab (I have VLC installed, not sure if that’s 100% necessary)
  4. Set the VLC URL to what you had above (with the username+password):
    rtsp://<username>:<password>@<ipaddress>:554/cam/realmonitor?channel=1&subtype=0
  5. As above, if the password has any special characters like %!&#$, be sure to convert them to their equivalent hex ASCII codes. See this chart here.
    1. Example: if your password is ‘cat&dog’, you should use the password: ‘cat%26dog’
  6. I left the arguments as --rtsp-caching=100 (the default)

Once you have iSpy connected, you can set up events and connect to the cloud for full web monitoring.

 

Resources:

So, where did I get that rtsp line? Directly from the Amcrest HTTP API SDK Protocol Specification, Section 4.1.1, p. 14, “Get real-time stream.” It’s also a handy guide to all the other parameters you can send the camera.

Lane Finding


My first homework assignment for my self-driving car class was to find lanes in still images, then in captured video. Here was my first attempt, which seems to have come out pretty well. I still want to improve it a bit with some more frame-to-frame smoothing.
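
A classic pipeline for this kind of assignment is grayscale, blur, Canny edge detection, a region-of-interest mask, and a probabilistic Hough transform. Below is a minimal OpenCV sketch of that pipeline (my reconstruction with made-up thresholds, not the code I actually turned in); the frame-to-frame smoothing I mention could then be done by averaging the detected line endpoints over the last few frames.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Annotate one frame with detected lane-line segments.
    cv::Mat findLanes(const cv::Mat &frame)
    {
        cv::Mat gray, blurred, edges;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);
        cv::Canny(blurred, edges, 50, 150);

        // Keep only a rough triangle in front of the car (hypothetical bounds).
        cv::Mat mask = cv::Mat::zeros(edges.size(), edges.type());
        std::vector<cv::Point> roi = {
            cv::Point(0, edges.rows),
            cv::Point(edges.cols / 2, edges.rows / 2),
            cv::Point(edges.cols, edges.rows)
        };
        cv::fillConvexPoly(mask, roi, cv::Scalar(255));
        cv::bitwise_and(edges, mask, edges);

        // Probabilistic Hough transform returns candidate line segments.
        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 40, 30, 100);

        cv::Mat annotated = frame.clone();
        for (const cv::Vec4i &l : lines)
            cv::line(annotated, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                     cv::Scalar(0, 0, 255), 3);
        return annotated;
    }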

Useful links/techniques:

Social Media sites are Highly Manipulable – by just as few as 3-5 accounts


Research is demonstrating that changing the entire narrative on a topic is shockingly easy on social media sites. Even sites such as Reddit, which claim that user-submitted content and up/down vote systems keep them free of such manipulation, are highly susceptible.

In most cases, researchers found it only took one bot casting 3-5 up/down votes at the right time to change the tone of an entire subreddit for or against something. Further experiments showed it took only about $200 to get a completely false story to the front page of Reddit. There is growing evidence that this has already been happening in the Bitcoin subreddit.

This is likely to be used not only by advertisers but also by foreign agencies to change the narrative on just about any topic, all on a budget that is pennies compared to traditional advertising and military efforts.

Perhaps this will cause us to come full circle back to curated media, or prompt a startup to offer a vetting and verification service?

Either way, the old adage is as true as ever: don’t believe it just because you read it on the internet. But now we might add: also don’t believe it just because it gets lots of up/down votes.

Hawking joins voices saying automation and AI are going to decimate the middle class


Stephen Hawking added his voice to a growing chorus of experts concerned that AI and automation are going to decimate middle-class jobs, worsen inequality, and increase the risk of significant political upheaval.

Article here

A report put out in February 2016 by Citibank and the University of Oxford predicted that 47% of US jobs are at risk of automation. In the UK, 35% are; in China, it’s a whopping 77%. Hawking writes that automation will “accelerate the already widening economic inequality around the world. The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive.”

This is what I’ve said for some time: AI lets you replace whole swaths of employees. We can see how this plays out in the economies AirBnB and Uber are setting up. Instead of a nationwide chain of workers facilitating the new industry, the work is largely done by servers and AI on commoditized server farms. Instead of that money flowing into a company of thousands, machine learning and automation let it be handled by only hundreds, concentrated in narrow job titles, with many traditional disciplines no longer needed. Further, it’s not hard to see how this concentrates money that a nationwide chain would once have paid to a host of employees into an incredibly small number of pockets.

Don’t get me wrong, I’m not for putting our heads in the sand and ignoring these new economies or saying we should stop them. That’s impossible. However, as Hawking and economists note,  “We are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent.”

So what will be lost and what will be left? As with the industrial revolution, certain kinds of jobs will be affected and others will not. Creative, supervisory, and health-care roles are likely safe, as are the skilled workers who know how to build and maintain AI server systems. But cashiers, tellers, secretaries, logistics planners, quantitative marketing/planning/strategy roles, financial planners, truck drivers, and possibly even train conductors or airline pilots could all see major parts of their job functions replaced by machine-learning algorithms. We’re already seeing this with automated checkouts, self-driving vehicles, and logistics AIs that out-perform their live counterparts. Even where a job is not replaced outright, there might only need to be one or two people in the cockpit instead of three to five.

This is going to come to a head in our lifetimes, and it’s very important we start talking and thinking about it now.