Elon Musk responded on Twitter, praising the researchers' "solid work."
The researchers focused on getting into the vehicle's APE, or "Autopilot ECU."
For the steering takeover, they had to "dynamically inject malicious code into cantx service and hook the 'DasSteeringControlMessageEmitter::finalize_message()' function of the cantx service to reuse the DSCM's timestamp and counter to manipulate the DSCM with any value of steering angle."
In other words, they made the processor believe that the injected instructions were legitimate.
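The counter-and-timestamp detail matters: many CAN receivers reject frames whose rolling counter or timestamp breaks the expected sequence, which is why the hook reuses the real emitter's state. Here is a toy sketch of that idea, not Keen's actual code; the message fields, the 16-value counter, and the 10 ms tick are my own assumptions for illustration.

```python
# Toy sketch (hypothetical, not Keen's code): a forged steering message
# only passes a receiver's sanity checks if it inherits the rolling
# counter and timestamp sequence of the legitimate emitter.
from dataclasses import dataclass

@dataclass
class SteeringControlMessage:
    counter: int    # rolling counter incremented by the real emitter (assumed mod 16)
    timestamp: int  # milliseconds, set by the real emitter
    angle: float    # requested steering angle in degrees

def is_plausible(prev: SteeringControlMessage, msg: SteeringControlMessage) -> bool:
    """Toy consistency check a receiver might apply."""
    return msg.counter == (prev.counter + 1) % 16 and msg.timestamp >= prev.timestamp

def forge(last_real: SteeringControlMessage, angle: float) -> SteeringControlMessage:
    """Reuse the hooked emitter's sequence state, swap in an attacker angle."""
    return SteeringControlMessage(
        counter=(last_real.counter + 1) % 16,
        timestamp=last_real.timestamp + 10,  # assumed 10 ms message period
        angle=angle,
    )

last = SteeringControlMessage(counter=3, timestamp=1000, angle=0.0)
evil = forge(last, angle=45.0)
assert is_plausible(last, evil)  # passes the toy receiver check
```

A message built from scratch, with a stale counter or timestamp, would fail the same check, which is why hooking the emitter's `finalize_message()` is the convenient place to tamper.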
Their setup was a gamepad connected to a mobile device, which in turn communicated with the compromised APE over 3G or Wi-Fi.
"Solid work by Keen, as usual" — Elon Musk (@elonmusk), April 1, 2019
As for the wipers, the Keen researchers exploited the fact that, unlike other automakers, which use optical sensors to detect raindrops on the windshield, Tesla feeds images from a 120-degree fisheye camera into a neural network to decide whether it is raining and, if so, to turn on the wipers.
Turns out that neural networks can be tricked by small perturbations in the images they process. So the researchers created an "adversarial" image and put it on a TV screen that the fisheye lens could see, and on went the wipers. (The researchers pointed out, "it is well known that the traditional autowipers solution without neural network does not have such a problem.")
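To see why perturbations work, consider this toy illustration — my own sketch, not Keen's attack. A single linear "rain score" stands in for the neural network, and the weights and threshold are made up; the point is that nudging every pixel slightly in the direction the weights care about (the idea behind fast-gradient-sign attacks) pushes the score past the decision threshold even though the image barely changes.

```python
# Toy sketch (hypothetical): a tiny per-pixel nudge aligned with the
# model's weights flips a "rain" decision. Values are arbitrary.
import numpy as np

w = np.full(64, 0.5)  # stand-in for learned "rain detector" weights
x = np.zeros(64)      # stand-in for a dry-windshield image patch

def says_rain(img):
    return float(w @ img) > 5.0  # arbitrary decision threshold

eps = 0.2                      # small per-pixel change
x_adv = x + eps * np.sign(w)   # nudge each pixel along the weight signs

assert not says_rain(x)    # dry image: no wipers
assert says_rain(x_adv)    # perturbed image: wipers on
```

A real attack optimizes the perturbation against the actual network rather than a linear stand-in, but the mechanism — many tiny pixel changes adding up to a large score change — is the same.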
Finally, lane detection. Or maybe that's lack of lane detection. According to the researchers, "Tesla uses a pure computer vision solution for lane recognition." So they pasted red stickers on the road surface, which caused the Autopilot system to steer out of its lane and into the adjacent one.
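A rough intuition for how a handful of stickers can move a car: a vision-based lane follower fits a curve to the lane markings it detects, and a few bogus detections can drag that fit sideways. The sketch below is my own simplification, not the Autopilot pipeline; the coordinates and the straight-line fit are assumptions for illustration.

```python
# Toy sketch (hypothetical): a few fake "marking" detections poison a
# least-squares lane fit and pull the predicted path toward the next lane.
import numpy as np

# Genuine lane-mark detections: a straight lane at lateral offset 0,
# sampled every 5 m up to 30 m ahead.
y = np.arange(0, 30, 5.0)   # distance ahead (m)
x = np.zeros_like(y)        # lateral offset (m)

# Three sticker "detections" off to the left at mid-range.
y_fake = np.array([15.0, 18.0, 21.0])
x_fake = np.array([-1.5, -1.8, -2.0])

clean_fit = np.polyfit(y, x, deg=1)
poisoned_fit = np.polyfit(np.concatenate([y, y_fake]),
                          np.concatenate([x, x_fake]), deg=1)

# Predicted lateral position 25 m ahead: ~0 for the clean fit, but the
# poisoned fit drifts roughly a meter toward the adjacent lane.
drift = np.polyval(poisoned_fit, 25.0) - np.polyval(clean_fit, 25.0)
```

Real lane-recognition models are far more sophisticated than a line fit, but the attack surface is similar: anything the vision stage accepts as a marking feeds the path the car follows.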
Here's something about this that is certainly disturbing. The Tencent Keen Security Lab researchers write, "Tesla's Autopilot module's lane recognition function has a good robustness in an ordinary external environment (no strong light, rain, snow, sand and dust interference)." Which is good. However, the system was tricked by the red stickers on the road.
They point out, "This kind of attack is simple to deploy, and the materials are easy to obtain."
Which seems to mean that even if you don't know what a DasSteeringControlMessageEmitter or a cantx service is, you can still trick a Tesla.