Thursday, October 20, 2016

On the safety and legality of the comma one

The comma one will not turn your car into an autonomous vehicle. It is an advanced driver assistance system. To put it in traditional auto manufacturer terms, it is "lane keep assist" and "adaptive cruise control".

Our supported car, the 2016/17 Honda Civic with Honda Sensing, already has these features. But as anyone who owns the car will tell you, they aren't very good. Our system is just much better. It provides no new functionality, so it should be legal everywhere the Honda systems are; it is an aftermarket upgrade.

As with all Tesla-Autopilot-like systems, it is very important that you pay attention. This system does not remove any of the driver's responsibilities from the task of driving. We provide two safety guarantees:

1. Enforced disengagements. Step on either pedal or press the cancel button to retake full manual control of the car immediately.

2. Actuation limits. While the system is engaged, the actuators are constrained to operate within reasonable limits; the same limits used by the stock system on the Honda.
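As a minimal sketch of what the two guarantees mean in code, consider the following. The function names and limit values here are made up for illustration; in the real system the actuation envelope matches the stock Honda system's limits.

```python
# Hypothetical sketch of the two safety guarantees; names and numbers
# are illustrative, not the production implementation.
STEER_TORQUE_MAX = 255   # made-up stock steering torque limit
GAS_MAX = 0.6            # made-up fraction of full throttle


def controls_allowed(brake_pressed, gas_pressed, cancel_pressed):
    """Guarantee 1: any pedal or the cancel button forces disengagement."""
    return not (brake_pressed or gas_pressed or cancel_pressed)


def clamp(value, lo, hi):
    return max(lo, min(hi, value))


def limit_actuation(steer_cmd, gas_cmd):
    """Guarantee 2: while engaged, commands stay inside the stock envelope."""
    return (clamp(steer_cmd, -STEER_TORQUE_MAX, STEER_TORQUE_MAX),
            clamp(gas_cmd, 0.0, GAS_MAX))
```

The point of structuring it this way is that disengagement and limiting are enforced unconditionally, regardless of what the driving model outputs.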

At comma.ai, we are working as hard as we can to deliver the best possible user experience. Onward to the launch.

Wednesday, September 14, 2016

comma one Supported Cars

Definitely: 2016/17 Honda Civic with Honda Sensing (all Touring editions)
Probably: all Hondas and Acuras with the Lane Keeping Assist System

== Beta Program Requirements ==

* Have 2000+ comma points
* Have a supported car
* Have a commute

Monday, July 11, 2016

Self coloring books

Machine learning is eating software. Here at comma.ai we want to build the best machine learning. This makes us all work really hard, and sometimes we need some stress relief. Our art therapist suggested we try adult coloring books to relax. They worked so well for us that we decided to share the love with the world and built commacoloring, an adult coloring book.

commacoloring was really well received and made it to the front page of Product Hunt. We got a lot of feedback from our users (we love users!). One requested feature was to automatically color the easy parts of the image, letting the user focus on the details. We used our self-driving car engineering skills to build a self-coloring book.

We call this new feature Suggestions. You can try it right now by clicking the "suggest" button!

The engineering

Note: you can skip this section without affecting your coloring experience, but if you are familiar with deep learning jargon, please read along.

To automate the coloring process we trained a deep neural network for pixel-level semantic parsing, i.e., a network that classifies (colors) each pixel using information from its surroundings. Given the state of the art, we knew the right approach would be a fully convolutional neural network. We started by trying an encoder-decoder-like architecture with 4 convolutions down and 4 deconvolutions up [1], with one output channel per class. This was taking too long to converge, though.
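That first attempt looks roughly like the following sketch, written in modern tf.keras rather than the Keras 1 of the era. The filter counts and input size are illustrative assumptions; only the 4-down/4-up shape and the one-output-channel-per-class head come from the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_encoder_decoder(num_classes=12, input_shape=(224, 224, 3)):
    """Sketch of a 4-down/4-up fully convolutional encoder-decoder.

    Filter counts are illustrative; the output has one channel per class,
    with a softmax over classes at each pixel.
    """
    inp = layers.Input(shape=input_shape)
    x = inp
    # encoder: 4 strided convolutions, each halving spatial resolution
    for filters in (64, 128, 256, 512):
        x = layers.Conv2D(filters, 3, strides=2, padding="same",
                          activation="relu")(x)
    # decoder: 4 transposed convolutions back up to the input resolution
    for filters in (256, 128, 64, 32):
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                                   activation="relu")(x)
    out = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return models.Model(inp, out)
```

Training both halves end to end from scratch is exactly what made convergence slow, which motivates the fixed-encoder variants discussed next.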

We later noticed that [2] claims that retraining the encoder network is not really necessary. They used a pre-trained VGG for dense classification at low resolution, with bilinear interpolation followed by conditional random fields to upscale the image back to its desired size. Also, [3] stated that the job of the decoder/deconvolution network is mainly to upscale and smooth the segmented output image, so it can be a smaller network. Reddit brought our attention to ReSeg [4], which uses only the convolutional layers of VGG as the encoder.

Our final solution combined ideas from [3] and [4]: fixed VGG convolutional layers as the encoder, and a simple, trained deconvolutional network as the decoder. Each layer of our decoder used only 16 filters of 5x5 pixels with an upscaling stride of 2. We tried faster upscaling with stride 4, but the results didn't look sharp enough.
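A sketch of that final design in modern tf.keras follows. The frozen VGG16 encoder and the 16-filter, 5x5, stride-2 deconvolutional decoder come from the description above; the number of decoder layers and the use of `weights=None` (to avoid a download; the post used the pre-trained VGG filters in production) are assumptions of this sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_sugnet_like(num_classes=3, input_shape=(224, 224, 3)):
    """Frozen VGG16 conv encoder + small trained deconv decoder.

    Each decoder layer uses 16 filters of 5x5 with stride-2 upscaling,
    as described in the post; layer count here is an assumption.
    """
    vgg = tf.keras.applications.VGG16(include_top=False, weights=None,
                                      input_shape=input_shape)
    vgg.trainable = False  # encoder stays fixed; only the decoder trains
    x = vgg.output         # 7x7x512 feature map for a 224x224 input
    # five stride-2 deconvolutions recover the full 224x224 resolution
    for _ in range(5):
        x = layers.Conv2DTranspose(16, 5, strides=2, padding="same",
                                   activation="relu")(x)
    out = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return models.Model(vgg.input, out)
```

With the encoder fixed, only the tiny decoder needs gradients, which is what makes this variant so much cheaper to train than the end-to-end version.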

In one of our experiments we reinitialized the VGG weights to random values and were still able to learn a successful decoder. We called this architecture the Extreme Segmentation Network, since it resembles Extreme Learning Machines. Unfortunately, we were aware that the acronym would compete with Echo State Networks', and we decided to use the original VGG filters in production. Our final network is called the Suggestions Network (SugNet). Some results are shown in Figures 1 and 2.

Figure 1. Input image and self colored Suggestions example.

Figure 2. Sample outputs of the segmentation network after 400 training epochs compared to human colored images.

Our method was implemented with Keras using the TensorFlow backend; the VGG image preprocessing used the Theano backend. At test time, using TensorFlow alone, the results didn't match, and we doubted our engineering skills for a while before remembering that TensorFlow implements cross-correlation while Theano implements true convolution. Here is how to convert convolutional weights from Theano to TensorFlow. Keras didn't have a proper deconvolution layer, but we started working on a PR for that.
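The conversion amounts to two steps: Theano stores kernels as (out_channels, in_channels, rows, cols) and flips them spatially during convolution, while TensorFlow expects (rows, cols, in_channels, out_channels) without the flip. So we reverse the spatial axes and transpose the dimension ordering. The function name here is ours, not an official API:

```python
import numpy as np


def theano_to_tensorflow_kernel(w_th):
    """Convert a Theano-style conv kernel to a TensorFlow-style one.

    Theano performs true convolution (kernel implicitly flipped);
    TensorFlow performs cross-correlation, so the spatial axes must be
    reversed. Theano layout: (out_channels, in_channels, rows, cols);
    TensorFlow layout: (rows, cols, in_channels, out_channels).
    """
    w_flipped = w_th[:, :, ::-1, ::-1]            # undo the implicit flip
    return np.transpose(w_flipped, (2, 3, 1, 0))  # reorder axes for TF
```

After this transform, running the same input through both backends produces matching feature maps.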

[1] Vijay Badrinarayanan, Ankur Handa and Roberto Cipolla "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Robust Semantic Pixel-Wise Labelling". arXiv:1505.07293.
[2] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan L. Yuille "Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs". arXiv:1412.7062.
[3] Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello "ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation". arXiv:1606.02147.
[4] Francesco Visin, Marco Ciccone, Adriana Romero, Kyle Kastner, Kyunghyun Cho, Yoshua Bengio, Matteo Matteucci, Aaron Courville "ReSeg: A Recurrent Neural Network-based Model for Semantic Segmentation". arXiv:1511.07053.

We hope that Suggestions will inspire you to build even more fun apps with the open source commacoloring product. Let us know about all the amazing things you build with it.

Monday, May 30, 2016

21.5" Touch Screens

Once you've had one in your car, it's hard to not have one in your car.

Thursday, May 26, 2016

Correction to article: "Have a spin and a chat with founder George Hotz in his $1,000 autonomous car"

The article by Emme Hall did not correctly represent comma points. We think it is extremely unlikely that there will be comma points, but rather Comma Points. Below is the corrected paragraph; be warned, the correction is subtle.

To give people incentive to use the program, drivers earn "Comma Points" for each minute out on the road with the app activated. Hotz was quite cagey when asked what these points could eventually be redeemed for, saying only, "Comma Points are absolutely incredible and you'll wish you had them. You definitely want comma points. In a couple of months you'll be so happy you have Comma Points."

We should also clarify that comma points are amazing, and that this correction is better and more true than other corrections.