Are self-driving cars smart enough to risk life and limb? Fully autonomous vehicles are one thing, but when you have both a computer and a human behind the wheel at the same time, there are far too many situations where it’s not clear who or what should be in charge, and when. It’s definitely not a no-brainer.

Just last week a self-driving Uber with a safety driver hit and killed a pedestrian in Arizona. Granted, the homeless woman stepped right out in front of the car hauling a bike laden with large black garbage bags, but still, video shows that either the operator or the car should have had enough time to react. Neither did.

I hear that Waymo, a Google spinoff that’s ramping up its own autonomous vehicle service, is fighting the problem of operator distraction by requiring its operators to maintain real-time logs and by adding a second backup driver as well. Sounds like a lot of effort and expense. I’m wondering if Silicon Valley is trying to fix a problem that doesn’t exist.

It’s been ages since I engaged cruise control on a vehicle. Aside from giving your foot a rest during a long road trip, I never really saw the purpose. Besides, I like to drive. It’s fun. And it’s more than a little scary to give up any measure of control of one or two tons of metal barreling down a highway at 65-plus mph.

On the other hand, low-cost sensor technology has been around forever. It’s hard to believe that every car doesn’t have an active warning system for when we get too close to another car, begin to nod off behind the wheel, or have had a little too much to drink. And machines are now smart enough for self-driving systems like Tesla’s Autopilot.

Speaking of Tesla, a Model S on Autopilot drove itself under an 18-wheeler in Florida, killing the driver. Apparently the system couldn’t distinguish the white trailer from the bright sky.

Nobody ever said these systems would be flawless. There will be accidents. There will be fatalities. And proponents say many more lives will be saved.

That said, in many reported accidents where the driver lived to talk about it, there was confusion over whether and when the driver was supposed to take over from the self-driving system.

The Wall Street Journal reported on a non-injury crash between a Model S on Autopilot and a car that was parked on a California Interstate, of all places. The driver said the Autopilot failed to react but, after reviewing the car’s data, Tesla said the driver was at fault for braking, which disengaged the Autopilot.

“So if you don’t brake, it’s your fault because you weren’t paying attention,” said the driver. “And if you do brake, it’s your fault because you were driving.” She also told the Journal that she does not plan to use the automated driving system again. Smart move.

The problem with most auto accidents is that they usually turn into “he said, she said” arguments over who’s at fault. Now you can add “car company said” to the confusion.

Maybe drivers need more education. Maybe self-driving systems need better hardware or a few more software updates. Maybe there’s a learning curve, for us and for the machines. Regardless, it seems clear to me that we’ve traveled too far, too fast into the all-too-uncertain domain of semi-autonomous vehicles.

Don’t get me wrong. There will be self-driving cars in your future. Lots of them, and sooner than you think. The technology is cost-effective, the regulations are being hammered out, the business case is overwhelming, and that makes it inevitable.

In addition to Uber, Waymo, and Tesla, practically every major automaker and a host of startups are racing, on their own or in tandem, to disrupt a global market for vehicles and services valued at around $5 trillion.

While I’m not planning to give up my top-down sports car driving privileges anytime soon, I’m happy to be among the first to hop in a computer-driven Uber after a fun night of dining and drinking – once it can distinguish a pedestrian from thin air. Until then, I’ll stick with man over machine.


A version of this was originally published on FOXBusiness.com.