Alphabet-owned Waymo unveiled its sixth-generation Driver system on Monday with a more efficient sensor setup. Despite a reduced camera and LiDAR count compared with the current platform, the company says the new self-driving setup maintains the same safety levels. Once it’s ready for public rides, it will coexist with the current-generation lineup.

CNBC reports that the new system is built into Geely Zeekr electric vehicles. Waymo first said it would work with the Chinese EV maker in late 2021. The new vehicles are boxier than the current-generation fleet, which is built on Jaguar I-PACE SUVs. The Zeekr-built sixth-gen fleet is reportedly better for accessibility, including a lower step, higher ceiling and more legroom — with roughly the same overall footprint as the Jaguar-based lineup.

The sixth-gen Waymo Driver reduces the camera count from 29 to 13 and the LiDAR sensors from five to four. Waymo says the sensors work together with overlapping fields of view and safety-focused redundancies that let the system perform better in various weather conditions. The company claims the new platform’s field of view extends up to 500 meters (1,640 feet) in daytime and nighttime and “a range of” weather conditions.

  • IphtashuFitz@lemmy.world · 3 months ago

    Couldn’t agree more. If all driving were easily predictable, then “just better than humans” would be a reasonable bar. But in my decades of driving I’ve encountered so many edge cases that I seriously doubt true self-driving will exist until we develop true AI (not just the LLM stuff that’s currently all the rage) that can react to events that aren’t pre-programmed.

    Just a few examples of things I’ve encountered:

    • A car fully engulfed in flames in the middle of a busy multi-lane intersection. I had a green light but I could hear (and barely see) emergency vehicles approaching from a different direction, so I had to give way.
    • Trees fallen entirely across the road.
    • I’ve seen a Tesla get confused by a landscaping truck hauling a trailer so overflowing with tree trimmings that it looked like a giant bush. You couldn’t see the trailer, brake lights, license plate, etc. Would a Waymo car be able to tell the difference between a trailer like this and the above-mentioned tree blocking the road?
    • Part of a one-way road near me was closed for a while so a water main could be repaired. People who lived on the street, delivery vehicles, etc. had to drive the wrong way to get out of there. Would a self-driving car recognize when to do this?
    • I once stopped at a red light where a construction crew was working at the corner. I didn’t notice that a cop standing near them was waving me through the red light until he walked up to my car and yelled at me to go. Would a self-driving car recognize when a police officer overrides a traffic light?
    • Driving after heavy rain and encountering flooded roads, where sun glare and other reflections make the water tough to spot at a distance.
    • And many, many more…
    • threelonmusketeers · 3 months ago

      Thanks for sharing your experience. Do you think there are currently more unhandleable edge cases than there are human drivers who are tired, drunk, or distracted?

      My feeling is that autonomous vehicles will only get better from this point onward, whereas I don’t foresee any appreciable improvement in human drivers. At what point do you think these lines will cross? 3 years? 8 years? 20 years?

      • IphtashuFitz@lemmy.world · 3 months ago

        Well, that’s the thing about edge cases — by definition they haven’t been explicitly taken into account by the programming in these cars. It is literally impossible to define them all, program responses for them, and test those responses in real-world conditions. For a self-driving car to handle real-world edge cases, it needs to be able to identify when one is happening and very quickly determine a safe response to it.

        These cars may already be safer than drunk/drowsy drivers in optimal situations, but even a drowsy driver will likely respond safely to an unusual situation they’ve never seen before. At the very least, they’d likely slow down or stop until they can assess the situation and figure out how to proceed. Self-driving cars also need to be able to recognize completely new/unexpected situations and figure out how to proceed safely. I don’t think they will be able to do that without some level of human intervention until true AI exists, and we’re still many decades away from that.