#autonomousvehicles

waynerad@diasp.org

"Zoox's robotaxi is designed from the ground up just for passengers -- hence the lack of a steering wheel altogether. Next to each seat is a touchscreen for controlling temperature, playing music or looking at a route map. The robotaxi is symmetrical and bidirectional, so it'll never have to reverse out of a parking spot. And like Waymo's and Cruise's fleets, it's all-electric.

"Zoox hopes to make a strong first impression by deploying its purpose-built robotaxi out of the gate, instead of gradually working toward a rider-focused vehicle like its competitors. It plans to launch commercially in the coming months, starting in Las Vegas."

No steering wheel, pedals or driver's seat: Is Zoox the future of robotaxis?

#solidstatelife #ai #robotics #autonomousvehicles #zoox

waynerad@diasp.org

"Inside 'Project Rodeo,' the Tesla effort pushing the limits of self-driving technology."

"Operating on open streets with other vehicles, cyclists, and pedestrians, test drivers on Project Rodeo have tested unreleased software that will be crucial to Tesla's push into autonomous driving."

"Test drivers said they sometimes navigated perilous scenarios, particularly those drivers on Project Rodeo's 'critical intervention' team, who say they're trained to wait as long as possible before taking over the car's controls. Tesla engineers say there's a reason for this: The longer the car continues to drive itself, the more data they have to work with."

Inside 'Project Rodeo,' the Tesla effort pushing the limits of self-driving technology

#solidstatelife #ai #computervision #autonomousvehicles #tesla

waynerad@diasp.org

"Volvo Autonomous Solutions today unveiled Volvo's first-ever production ready autonomous truck at the ACT Expo in Las Vegas. The Volvo VNL Autonomous brings together Volvo's commercial vehicle expertise with industry-leading autonomous driving technology from Aurora Innovation (NASDAQ: AUR)."

"Today" was May 20th.

"This truck is the first of our standardized global autonomous technology platform, which will enable us to introduce additional models in the future, bringing autonomy to all Volvo Group truck brands, and to other geographies and use cases."

Am I the only one who feels a little nervous about the idea of gigantic autonomous trucks? But they say the system is safe.

"The new Volvo VNL Autonomous has been made with safety in mind. The Volvo VNL Autonomous therefore has redundant steering, braking, communication, computation, power management, energy storage and vehicle motion management systems."

"The Aurora Driver consists of powerful AI software, dual computers, proprietary lidar that can detect objects more than 400 meters away, high-resolution cameras, imaging radar, and additional sensors, enabling the Volvo VNL Autonomous to safely navigate the world around it."

"The Aurora Driver has been extensively trained and tested in Aurora's sophisticated virtual suite where it's driven billions of miles. It also has driven 1.5 million commercial miles on public roads, where it deftly navigates end-to-end trucking routes traversing highways, rural roadways, and surface streets day and night and through good and bad weather."

"The Volvo VNL Autonomous will be assembled at Volvo's flagship New River Valley (NRV) plant in Dublin, Virginia."

The Volvo VNL Autonomous -- proving the way forward

#solidstatelife #ai #autonomousvehicles #aurorainnovation #volvo

waynerad@diasp.org

"A Tesla driver was arrested for vehicular homicide after he ran over a motorcyclist while driving using Autopilot without paying attention. The man, 56, had activated Tesla's Autopilot feature. He was using his phone when he heard a bang as his car lurched forward and crashed into the motorcycle in front of him, troopers wrote. The motorcyclist, 28-year-old Jeffrey Nissen, was sadly pronounced dead at the scene."

Tesla driver arrested for homicide after running over motorcyclist on Autopilot

#solidstatelife #autonomousvehicles #tesla

waynerad@diasp.org

"Just days after Cruise won the right to operate completely computer-controlled taxi rides in San Francisco at all hours, one of its units has got stuck in wet cement."

Reports "show the front tires of a Cruise car sunk deeply into a drying part of fresh-patched road at a construction site on Golden Gate Avenue between Fillmore and Steiner Streets in San Francisco."

Cruise self-driving taxi gets wheels stuck in wet cement

#solidstatelife #ai #autonomousvehicles

waynerad@diasp.org

"One day after California green-lighted a massive expansion of driverless robotaxis in San Francisco, the implications became clear."

"At about 11 p.m. Friday, as many as 10 Cruise driverless taxis blocked two narrow streets in the center of the city's lively North Beach bar and restaurant district."

"The cars sat motionless with parking lights flashing for 15 minutes, then woke up and moved on, witnesses said."

San Francisco's North Beach streets clogged as long line of Cruise robotaxis come to a standstill

#solidstatelife #ai #autonomousvehicles

waynerad@diasp.org

"Autonomous trucking upstart Embark goes from $5b valuation to kaput in 16 months."

I told you about this company in 2017 when it launched. The "16 months" they are referring to is the length of time from its SPAC IPO to its kaputness. (Actually, if you really want to sound German, you could say "Kaputtheit", but I don't think that's an actual word in German -- I think they just say kaputt. The word originally comes from French anyway -- capot -- but capot meant "bonnet" or "covered", not "broken.")

I told you about the company again in 2020, in my post about Forbes' AI 50.

It IPO'd at a valuation of $5.16 billion. And as of March 3, it's gone.

Hope none of y'all lost money. My posts are not investment advice.

Autonomous trucking upstart Embark goes from $5b valuation to kaput in 16 months

#solidstatelife #ai #autonomousvehicles #evs

rrm00@diasporabr.com.br

#Zenseact under the #NorthernLights. In our pursuit to train cars to drive #autonomously, we need data from all corners of the world. Collecting this data on a Swedish winter night can be a long, cold, and dark experience. It can also be extraordinary #AutonomousVehicles

https://twitter.com/zenseact/status/1626498082773794816?s=20s

waynerad@diasp.org

Tesla Autopilot fell to number 7 in Consumer Reports' driving assistance systems ranking. The full ranking:

1. Ford BlueCruise
2. Chevrolet/GMC/Cadillac Super Cruise
3. Mercedes-Benz Driver Assistance
4. BMW Driving Assistance Professional
5. Toyota Safety Sense 3.0
6. Volkswagen Travel Assist
7. Tesla Autopilot
8. Rivian Highway Assist
9. Nissan/Infiniti ProPILOT Assist
10. Honda Sensing
11. Volvo Pilot Assist
12. Hyundai/Kia/Genesis Highway Driving Assist

"The 12 active driving assistance systems we tested were put through their paces around the track at our 327-acre Auto Test Center in Connecticut and on a 50-mile loop on public roads between September and December 2022. Each system was rated for its performance in 40 separate tests, such as steering the car, controlling the speed, and keeping the driver safe and engaged with the act of driving. Additional features such as automatic lane changes or reacting for traffic lights were not evaluated in this test."

"Consumer Reports testers evaluated the way each of the 12 systems performed within five specific categories: capability and performance, keeping the driver engaged, ease of use, clear when safe to use, and unresponsive driver."

Ford's BlueCruise ousts GM's Super Cruise as CR's top-rated active driving assistance system

#solidstatelife #ai #autonomousvehicles #tesla #consumerreports

waynerad@pluspora.com

Tesla AI Day. Yeah, I know, lots of you have already seen the video. So I guess this is for the 3 people who haven't yet.

They think of their AI system as being analogous to the "visual cortex" in biological organisms. The problem they have is fusing the input from multiple cameras. A Tesla car has 8 cameras, which are high dynamic range (HDR) cameras with 1280x960 resolution that operate at 36 frames per second.

The solution they opted for is to have the neural networks that process the vision output what they see as 3D vectors, into what they call "vector space", and this 3D "vector space" representation can be visualized on a screen.

The processing first goes through residual networks (resnets), which are convolutional neural networks whose "residual" (skip) connections allow them to go much deeper than traditional convolutional networks. They like the fact that they can make the network deeper or shallower as they please, to trade off vision-processing quality against latency.
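To make that concrete, here's a minimal sketch of a residual block in PyTorch. The layer sizes, normalization choices, and block count are my own illustration, not Tesla's actual configuration:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        # The "residual" trick: add the input back onto the output, so
        # gradients flow through the identity path and very deep stacks
        # stay trainable.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

# Depth is a dial: stack more blocks for better vision, fewer for lower latency.
backbone = nn.Sequential(*[ResidualBlock(64) for _ in range(8)])
```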

After the resnets, the data goes into something called a BiFPN, which stands for Bi-directional Feature Pyramid Network. They don't say much about what this network outputs, other than that it is "features", not images.
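BiFPN comes from the EfficientDet paper: feature maps at different pyramid scales are fused both top-down and bottom-up, with learned non-negative weights deciding how much each scale contributes. Here's a stripped-down sketch assuming three pyramid levels that differ by 2x in resolution; the real network also runs convolutions after each fusion step:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFuse(nn.Module):
    # Fast normalized fusion of two same-shape feature maps: learned
    # weights, kept non-negative, decide how much each input contributes.
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))

    def forward(self, a, b):
        w = F.relu(self.w)
        w = w / (w.sum() + 1e-4)
        return w[0] * a + w[1] * b

class TinyBiFPN(nn.Module):
    # One top-down + bottom-up pass over three pyramid levels
    # (p3 = finest resolution, p5 = coarsest).
    def __init__(self):
        super().__init__()
        self.fuse_td = nn.ModuleList([WeightedFuse() for _ in range(2)])
        self.fuse_bu = nn.ModuleList([WeightedFuse() for _ in range(2)])

    def forward(self, p3, p4, p5):
        # Top-down: carry coarse semantics to the finer levels.
        p4_td = self.fuse_td[0](p4, F.interpolate(p5, scale_factor=2))
        p3_out = self.fuse_td[1](p3, F.interpolate(p4_td, scale_factor=2))
        # Bottom-up: carry fine detail back to the coarser levels.
        p4_out = self.fuse_bu[0](p4_td, F.max_pool2d(p3_out, 2))
        p5_out = self.fuse_bu[1](p5, F.max_pool2d(p4_out, 2))
        return p3_out, p4_out, p5_out
```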

After this, the data branches into multiple "heads". Each branch does something different: object detection, traffic lights, lane prediction, etc.
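Architecturally, I'd guess this looks something like a shared trunk with cheap task-specific heads, so the expensive backbone runs once per frame. The head names and output shapes below are placeholders, not Tesla's actual outputs:

```python
import torch.nn as nn

class MultiHeadPerception(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        # Each head is a small network reading the same shared features.
        self.heads = nn.ModuleDict({
            "objects":        nn.Conv2d(feat_ch, 6, 1),  # e.g. box params + score
            "traffic_lights": nn.Conv2d(feat_ch, 4, 1),  # e.g. state logits
            "lanes":          nn.Conv2d(feat_ch, 2, 1),  # e.g. lane-mask logits
        })

    def forward(self, features):
        # The expensive trunk already ran upstream; only these cheap
        # per-task heads differ.
        return {name: head(features) for name, head in self.heads.items()}
```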

After this, they do something called "rectification", which takes the vector space output, takes into account each camera's position and orientation, and projects its output into the same 3D "vector space". The final fusion process uses a type of neural network called a transformer. Transformers were originally invented for language translation and have an "attention" mechanism that enables the translation system to pay attention to different words in the input as it generates the output. Since then, "vision transformers" have been invented that enable the neural network to focus "attention" on a specific part of a scene. However, Tesla is not using standard vision transformers. They invented their own transformer that operates in "vector space": it doesn't take images as its input, it takes sets of 3D vectors. What it outputs, at the end of the whole process, is a single unified 3D representation of the scene, with curbs, lanes, traffic lights, other cars, pedestrians, and so on, identified.
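Tesla hasn't published this network, but a rough sketch of the idea -- learned "vector space" queries cross-attending to tokens from all 8 cameras, with a per-camera embedding standing in for the rectification step -- might look like this. All dimensions and the encoding scheme are my assumptions:

```python
import torch
import torch.nn as nn

class VectorSpaceFusion(nn.Module):
    def __init__(self, d=128, n_queries=200, n_cameras=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d))    # output slots in "vector space"
        self.cam_embed = nn.Parameter(torch.randn(n_cameras, d))  # stands in for camera pose info
        self.attn = nn.MultiheadAttention(d, num_heads=8, batch_first=True)

    def forward(self, cam_feats):
        # cam_feats: (batch, n_cameras, tokens_per_camera, d) -- the
        # "features, not images" coming out of the per-camera networks.
        b, n_cam, t, d = cam_feats.shape
        # Tag every token with its camera's embedding, then flatten all
        # cameras into one key/value set.
        kv = (cam_feats + self.cam_embed[None, :, None, :]).reshape(b, n_cam * t, d)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        fused, _ = self.attn(q, kv, kv)
        return fused  # (batch, n_queries, d): one unified scene representation
```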

This system has another trick up its sleeve. Everything up to here is just looking at camera input at a single point in time, but they enabled the system to understand motion over time. This is done with two "cache" systems. One is simply time based -- it remembers the last few seconds of whatever the car has seen. The second is space based: if, for example, the Tesla sits at a red light, it can still remember lane markings it saw many seconds ago, because they are in the "space based" cache, which remembers the space it recently drove past or over.

These "caches" are combined with a recurrent neural network. This combination allows the system to keep track of the structure of the road over time, and the system handles remembering cars when they are temporarily occluded very well.

After all this, the data goes into the planning and control system. For this, the presenter shows an example of changing lanes to make a left turn, and says the path planning system does 2,500 path searches in 1.5 milliseconds.

The planning system plans for everything in a scene, including other cars and pedestrians. The presenter shows an example where the car is driving down a narrow street where either we can pull aside and yield to another car, or they can pull aside and yield to us. If the other car yields, our car knows what to do, because it already created that plan for the other car.

The presenter shows a visualization of an A* backtracking algorithm, notes that it is too computationally expensive, and says they are developing a neural network, borrowing the design from AlphaGo, to guide Monte Carlo Tree Search, as AlphaGo does.
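For reference, AlphaGo-style search in a bare-bones sketch: a learned policy proposes promising candidate maneuvers (so far fewer branches get expanded than in exhaustive search), and a learned value function scores leaf states instead of running expensive rollouts. Here policy_fn, value_fn, and the state API (state.apply(), (action, prior) pairs) are hypothetical stand-ins, not Tesla's interfaces:

```python
import math

class Node:
    def __init__(self, state, prior):
        self.state, self.prior = state, prior
        self.children = {}  # action -> Node
        self.visits, self.value_sum = 0, 0.0

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT rule: trade off estimated value against the policy prior,
    # discounting heavily-visited children.
    def score(action, child):
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return child.value() + u
    return max(node.children.items(), key=lambda kv: score(*kv))

def search(root_state, policy_fn, value_fn, n_sim=100):
    root = Node(root_state, prior=1.0)
    for _ in range(n_sim):
        node, path = root, [root]
        while node.children:                  # selection: walk down the tree
            _, node = select_child(node)
            path.append(node)
        for a, p in policy_fn(node.state):    # expansion, guided by the policy net
            node.children[a] = Node(node.state.apply(a), prior=p)
        v = value_fn(node.state)              # learned evaluation, no rollout
        for n in path:                        # backup the value to the root
            n.visits += 1
            n.value_sum += v
    # Act on the most-visited move, as AlphaGo does.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```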

You might be surprised that, up to this point, the planning system does not use neural networks, but rather traditional computer science path planning algorithms. In the Q&A section, Elon Musk reveals that these are written in C++. He says neural networks shouldn't be used unless they have to be; for vision they have to be, but since path planning doesn't have to be, it's written in C++.

I would think this system would have trouble in places with chaotic driving and no clear rules, and indeed the presenter acknowledges the system won't work in places like India, where he himself happens to be from.

Next they talk about data set labeling. Originally they labeled images, but they switched to labeling in 3D vector space. They developed a UI where people can move things in vector space and see the projection in multiple photographs.

He talks about an auto-labeling system, but I didn't really understand how it works. Apparently it can combine video from multiple cars going through the same place to reconstruct the road surface, walls, and other parts of the scene. It also does a good job handling occlusions of moving objects such as cars and pedestrians.

They went to the next level by creating a simulator. It makes pretty realistic video. And since the simulation is computer-generated, the vector-space labels are automatically correct, so it can produce massive amounts of training data. The simulation system even simulates the characteristics of the cameras in the cars, such as adding sensor noise and simulating the effect the sun has on the camera. Neural networks are used to enhance the images and make them look even more realistic.
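As a toy illustration of that camera-modeling step, post-processing a rendered frame might look something like this; the noise level and glare bloom here are made up, whereas the real simulator models the actual cameras' optics:

```python
import torch

def simulate_camera(render, noise_std=0.02, sun_glare=None):
    # render: float image tensor in [0, 1], shape (3, H, W).
    # Add sensor noise so the synthetic frame isn't implausibly clean.
    frame = render + torch.randn_like(render) * noise_std
    if sun_glare is not None:
        # Wash out a soft disc around the sun's position (cx, cy).
        cx, cy, radius = sun_glare
        ys = torch.arange(render.shape[1]).float().view(-1, 1)
        xs = torch.arange(render.shape[2]).float().view(1, -1)
        dist2 = (ys - cy) ** 2 + (xs - cx) ** 2
        bloom = torch.exp(-dist2 / (2 * radius ** 2))
        frame = frame + bloom.unsqueeze(0)
    return frame.clamp(0.0, 1.0)
```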

The main purpose of the simulator, though, isn't just to create massive amounts of training data but to create lots of examples of accidents and other edge cases that occur infrequently in real life. Speeding police cars, and so on. Most of the environments are algorithmically created, not created by human artists, so there is a potentially unlimited amount of roads to train from.

Before putting the models in cars, they do extensive testing, with 1 million evaluations/week on every code change. They developed their own debugging tools so you can see the outputs of multiple different revisions of the software side by side.

The rest of the talk is about Dojo, Tesla's upcoming supercomputer.

Basically what they did is create a supercomputer for learning how to drive. They start the process by designing a training node, which is a CPU combined with dedicated hardware for matrix operations (the core operations in any AI system), hardware for parallel floating point and integer math (similar to a DSP chip), SRAM, and communication hardware. The CPU has 4 threads and an instruction set designed specifically for machine learning (so it's not using a general instruction set such as x86 or ARM). 354 of these "training nodes" are manufactured on a single chip, called the D1 chip, with high-speed communication from each node to its adjacent nodes on 4 sides. It has 50 billion transistors on a single 645 square millimeter chip manufactured at 7 nm.

With these D1 chips, the plan is to take 500,000 of them and connect them with "Dojo interface processors", which in turn connect to outside computers. The D1 chips are organized into "training tiles", with their own custom power supply and cooling systems. The tiles are placed in an "exapod", where 10 cabinets are combined and the walls removed so the tiles can communicate directly with each other without cabinet walls getting in the way.

They made their own compiler to compile PyTorch models and other code for the hardware.

Basically, they created a supercomputer specialized, from the transistors themselves on up, for one specific task, which is training vision neural networks.

Tesla AI Day

#solidstatelife #ai #computervision #autonomousvehicles #tesla

waynerad@pluspora.com

Why Teslas keep striking parked firetrucks and police cars -- in the opinion of an electrical and computer engineering professor at Carnegie Mellon University, that is, not according to Tesla themselves.

"According to the NHTSA, most of these incidents occurred after dark while the first-responder vehicles were flashing lights and had flares and flashing arrow boards around them. Do you think these lights could confuse the cameras?"

"I’m sure that’s part of the problem. When the lights are spinning and flashing, looking at it from the camera image, these are just pixels, meaning that they have numbers, and the numbers basically go up and down, up and down in some regions, when the light flashes. Unless the training phase has been given those images and that kind of modality, it could throw the pattern matching off."

The CMU professor goes on to say Tesla should use radar or lidar, but they're not going to do that.

Why Teslas Keep Striking Parked Firetrucks and Police Cars

#solidstatelife #ai #computervision #autonomousvehicles #tesla