Creating a GPU-Enhanced Virtual Desktop for Udacity

Will Kessler
Published in Udacity Eng & Data · 4 min read · Mar 15, 2018


We’re thrilled to announce that Udacity has launched one-button access to a full GPU-driven Ubuntu desktop. This opens up our most advanced courses to any student with a Chrome browser and a halfway decent Internet connection.

A year ago, when Udacity purchased my team at Cloudlabs Inc., this kind of access was just a dream. Although we subsequently built environments for several Udacity Nanodegree programs, e.g. IDE environments for Web programming programs such as React and Front End Programming, Jupyter environments for Deep Learning, and SQL environments for Data Analysis, certain programs with 3D simulations, such as Robotics and Autonomous Flight, called for something new. (The Unity WebGL client satisfies some needs, but it still puts a heavy rendering burden on the student’s computer and requires custom middleware to connect applications.)

We asked, what if we could serve a GPU-powered Ubuntu desktop directly to the browser? After many hours of experimentation, we were finally able to create a stable build that works well in the Udacity Classroom.

Now students need only click one button to access a desktop, where they can run 3D tools such as Gazebo, RViz, and Unity right in their Chrome browser. Here are a couple of pictures of the desktop in use:

Running Gazebo in the Desktop
Drone Simulation using Unity in Lubuntu

While it’s not that difficult to set up and use NVIDIA hardware in the cloud, it’s much more difficult to expose that hardware to Docker environments, and most examples out there did not leverage nvidia-docker2. NVIDIA’s engineers provided us with a template Dockerfile that got us going. The desktop leverages excellent open source tools like TurboVNC and noVNC to render 3D desktop graphics smoothly.

To help others trying to accomplish this, I’m sharing a repo and some notes here that show you how to set up a Google Compute host running a Docker container with the Lubuntu desktop on Ubuntu 16.04, with full access to the GPU for both compute and graphics.

I want to thank all involved in making this happen. The support of NVIDIA engineers Jonathan Calmels and Felix Abecassis was crucial, as was the trail blazed by a former Udacity Robotics student, Youcef Rahal. Kyle Stewart-Frantz also pitched in heavily, experimenting with different variations of our installed software base to get this across the finish line.

You can try the desktop out for yourself by enrolling in Udacity’s Robotics Software Engineer or Flying Car Nanodegree programs. Or try building this rig based on the repo. Have fun!

Technical Notes

The GitHub code repo can be found here.

Here are some key technical notes and gotchas to be aware of.

Use Google Compute. The setup has been tested thoroughly on Google Compute. It may work well on AWS, but we haven’t tested it there.

Defeat screen locking and power management. In the cloud, you don’t need or want a screen locker or power management. This took a ton of fiddling, but the key utilities to defeat are light-locker and xfce4-power-manager. See autonomous_sys_build/Dockerfile, lines 68–69.
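The gist is simply removing those two packages during the image build. A minimal sketch, assuming the standard Lubuntu package names and that both packages exist in the base image (the repo’s actual Dockerfile may differ):

    # Remove the screen locker and power manager so the virtual
    # desktop never blanks, locks, or tries to suspend.
    RUN apt-get update && \
        apt-get purge -y light-locker xfce4-power-manager && \
        rm -rf /var/lib/apt/lists/*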

/etc/X11/xorg.conf. You need this file to tell the X server launched by Lubuntu’s display manager (lightdm) about the GPU. See the Device section; the PCI bus ID may vary if you’re not using Google Compute. This file was generated by nvidia-xconfig.
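For illustration, a Device section produced by nvidia-xconfig looks roughly like the following. The BusID shown is an assumption (it is typical for a single-GPU Google Compute instance); check yours with lspci:

    # Assumed excerpt from /etc/X11/xorg.conf, as generated by nvidia-xconfig.
    Section "Device"
        Identifier  "Device0"
        Driver      "nvidia"
        VendorName  "NVIDIA Corporation"
        BusID       "PCI:0:4:0"
    EndSection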

/opt/noVNC/index.html. This is configured to start noVNC without constantly asking for a password and to maximize use of the browser display. It redirects to the same URL with the requisite parameters for this type of setup. (YMMV)
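As a sketch of what that redirect can look like, assuming noVNC’s standard URL options (autoconnect, resize, reconnect); the actual file in the repo may pass different parameters:

    <!-- Assumed sketch: forward the browser to noVNC's client with
         autoconnect enabled and the remote display resized to fit. -->
    <script>
      window.location.replace('vnc.html?autoconnect=true&resize=remote&reconnect=true');
    </script>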

self.pem. We set up a self-signed cert that noVNC can use. However, you can also skip this cert and access noVNC over plain HTTP if you prefer.
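A combined key-and-cert file like this is typically generated with openssl, along the lines of the command in noVNC’s own docs; the subject and validity period here are assumptions:

    # Generate a self-signed cert with the key and cert in one file,
    # which is the layout noVNC expects.
    openssl req -new -x509 -days 3650 -nodes \
        -out self.pem -keyout self.pem \
        -subj "/CN=localhost"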

dot desktop files. We set up a few conveniences for our students: Terminator, Firefox, galculator, htop, and gedit. These are defined in the .desktop files and produce nice icons on the desktop.
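Each launcher is a standard freedesktop.org desktop entry. A minimal sketch for Terminator (field values assumed, not copied from the repo):

    # terminator.desktop
    [Desktop Entry]
    Type=Application
    Name=Terminator
    Comment=Multi-pane terminal emulator
    Exec=terminator
    Icon=terminator
    Terminal=false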

start_desktop.sh. When the container starts, this is the script we run to bring up the desktop. First it makes sure VNC is up, then it fires up noVNC to talk to VNC and serve the desktop to the browser.
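In outline, the script does something like the following. The TurboVNC path, display number, ports, and noVNC launcher flags are assumptions based on the two projects’ defaults, not the repo’s actual script:

    #!/bin/bash
    # Start TurboVNC on display :1 (VNC port 5901)...
    /opt/TurboVNC/bin/vncserver :1 -geometry 1920x1080 -depth 24
    # ...then point noVNC's proxy at it and serve the web client on 6080.
    /opt/noVNC/utils/launch.sh --vnc localhost:5901 --listen 6080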

Mount /tmp/.X11-unix/X0. This is key to exposing the GPU to the container. You don’t want the container setting up its own X socket here. See run.sh.
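The relevant part of the docker run invocation looks roughly like this; the image name and port mapping are placeholders, and the full flag set lives in the repo’s run.sh:

    # Share the host's X socket with the container and enable the
    # NVIDIA runtime provided by nvidia-docker2.
    docker run --runtime=nvidia \
        -v /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 \
        -p 6080:6080 \
        gpu-desktop-image   # hypothetical image name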

add_xhost.sh. To make sure that X in the container can access the GPU, you need to run xhost on the host; this script performs that step. Note that the lightdm user has no shell by default, so we temporarily tweak /etc/passwd in order to run the xhost command as the lightdm user. Running xhost as root will not work, since access must be granted by the same user that runs inside the container.
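A sketch of the idea, assuming lightdm’s shell field in /etc/passwd is /bin/false and that granting access to local connections is sufficient; the repo’s script may differ in both respects:

    #!/bin/bash
    # Temporarily give lightdm a usable shell...
    sed -i 's|^\(lightdm:.*\):/bin/false$|\1:/bin/bash|' /etc/passwd
    # ...run xhost as lightdm to open X access for local connections...
    su lightdm -c "DISPLAY=:0 xhost +local:"
    # ...then put /etc/passwd back the way it was.
    sed -i 's|^\(lightdm:.*\):/bin/bash$|\1:/bin/false|' /etc/passwd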

README. The README tells you basically what you need to know to get going. One note: the preinstall.sh script is optional, but it installs some useful utilities on your host before you run the main build.sh script.
