Autonomous Driving
One Group Says Tap the Brakes

As California, the US, and industrialized nations around the world speed toward robot cars, the Consumer Watchdog group says we need to proceed with caution on autonomous driving.  Hear what specific issues they have with the five different levels of autonomous driving in this episode of iDriveSoCal.

***Transcript***

Recording date – February 20, 2018

John Simpson: Another thing that has to be dealt with is the so-called ethical dilemma, which is okay, if there’s going to be a crash, who does the car kill? So consider a situation where a six-year-old kid comes out into the street, does the car, faced with the choice of swerving into the stone wall and smashing the vehicle and potentially hurting the people in the vehicle, does it do that to save the kid or does it hit the kid? Those kinds of decisions are going to have to be programmed into the software that’s going to run the robot car. And we think that the kinds of decisions that are going to be programmed in that ethical level have to be publicly discussed and understood.

Tom Smith: Welcome to iDriveSoCal, the podcast all about mobility from the automotive capital of these fine United States of America – Southern California. I’m Tom Smith. And with me today is John M. Simpson from the Consumer Watchdog organization. John is the Privacy and Technology Project Director, and we’re going to talk about the topic of autonomous driving and specifically the speed at which our society is moving towards driverless vehicles out on the roads being tested today and soon to come. John, thank you so much for being with me.

John: Pleasure to be here.

Tom: So Consumer Watchdog Group, let’s first get out of the way what you guys are in just a very high level perspective.

John: Well, we’re a nonprofit public interest group that looks after what we perceive to be the consumer’s interest. We’ve been very active in insurance rate regulation, health care, and, of course, now we’re looking at the safety issues with consumer…excuse me, with autonomous vehicles. We also have had a big project looking at privacy concerns with online and Internet companies.

Tom: And both the safety issue of the vehicle and then the privacy of all this data that these autonomous vehicles are going to be consuming and kind of processing and working off of are issues you guys are looking at?

John: Absolutely, and we got into the whole robot car issue, the autonomous vehicle issue, initially precisely because of those privacy concerns about the data. And we then started to realize that there are fundamental safety concerns, and our focus is on that as well.

Tom: Okay. And I want to dive into both of those topics with reckless abandon–pun intended–because we’re talking about safety. But before we do that, the Consumer Watchdog Group let’s just…for listeners that are like, “All right, what’s the agenda behind this organization?” You guys are a nonprofit.

John: That’s correct.

Tom: Right? And funding for this type of work comes from where? Does it come from…?

John: We have a big fundraiser every year. We often get various kinds of grants from foundations when we have a specific project. So our initial Privacy Project was funded by something called the Rose Foundation. Our legal team, when they are involved in legal cases, gets legal fees. So those are the three major… Oh, the other thing, too, we get something called cy pres awards.

Tom: But the consistent kind of big funder sounds like The Rose.

John: Oh, no, no, that’s just one particular project that we were doing in privacy. And I mean it can depend from year to year in different projects. We don’t take any corporate money.

Tom: Okay. And you guys have been around for a really long time.

John: Yeah, well, our founder got started with Proposition 103. That was 1988.

Tom: Let’s talk about what we’re here to talk about and that is autonomous driving. I have next to me the Society of Automotive Engineers automation levels. And zero is no automation, and then, basically, we have five levels: one, two, three, four, five. Driver assistance is one; partial automation, two; conditional automation, three; high automation is four; and full automation is five. Four and five are basically like driver optional, human being optional…

John: Well, driver…Level four would be essentially the car drives itself without any need or even necessarily the ability for a driver to intervene, but it can only do that under certain specified and somewhat limited conditions. So sometimes they talk about it being a geofence, where you drive only inside a carefully marked area. If you try to get out of it, the car shuts down. Level five, and I have serious doubts as to whether we could ever get to true Level five, means that the car can go anywhere and do anything that a human driver can do. And…

Tom: You’re mixing a martini in the back seat.

John: And while you’re mixing a martini or sleeping or whatever. Now, at four you could mix a martini in the back seat, but it wouldn’t take you outside of that geofenced area, or it wouldn’t do it under certain conditions, like maybe it couldn’t do it if it rains. Whatever those conditions are, they’re defined. Level three is, in some ways, the most problematic because it sort of…it can drive itself, supposedly, except when it can’t, and it tells the driver, “Whoops, can’t handle this,” and you have to jump back in and be prepared to do that. And that can be an issue. Now that can also be a serious issue when you slide down to Level two because that, in fact, has gotten people killed. Level two is when you’ve got several different autonomous technologies that work together. So you might have automatic lane keeping and adjustable cruise control, adaptive cruise control they call it. So lane keeping, adaptive cruise control, and automatic braking, that’s kind of what the Tesla has with Autopilot. And Tesla also made the mistake, I think, of overselling that and leaving people with the impression that the car could really handle itself. That’s why people have gotten killed. In Florida, the guy was just not paying attention; the thing ran into the side of a truck.
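As a quick reference, the levels John walks through can be sketched as a small lookup table. This is an informal summary of the conversation, not the official SAE J3016 definitions, and the field names are my own shorthand:

```python
# Informal sketch of the SAE driving-automation levels as described above.
# The second field answers: must a human stay engaged and monitoring?
SAE_LEVELS = {
    0: ("No automation", True),           # the human does everything
    1: ("Driver assistance", True),       # one aid, e.g. cruise control
    2: ("Partial automation", True),      # bundled aids (lane keeping +
                                          # adaptive cruise + auto braking);
                                          # human must monitor constantly
    3: ("Conditional automation", True),  # drives itself until it says
                                          # "can't handle this" and hands back
    4: ("High automation", False),        # no human needed, but only inside
                                          # a defined geofence/ODD
    5: ("Full automation", False),        # anywhere a human could drive
}

def human_must_monitor(level: int) -> bool:
    """Return True if a human driver must stay engaged at this level."""
    return SAE_LEVELS[level][1]
```

The table makes John's point concrete: the line between Levels two/three and Level four is exactly the point where the human-monitoring flag flips.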

Tom: I wanted to get to the headlines. That’s the big headline that we all heard about a few years back.

John: And that’s only at Level two. I mean they’re not fully autonomous, and they were criticized by the NTSB, who investigated the accident, for not doing something that kept the driver constantly involved. At Level two, you have to have the driver constantly monitoring and involved.

Tom: From a driver’s perspective, isn’t a Level two kind of like, “Eh, I need to be paying attention anyway”? So is it really…if I have to be paying attention, I’m not really relaxing. So why not just…?

John: Well, I’m not quite sure where some of these things would be. Technically, Level two is where you’ve got several of these things working together. So I mean if I am going down the road in my Chevrolet Bolt, I’ve got lane keeping, and if I start to drift a little bit, it beeps at me. I like that, you know? It has a little thing where, if somebody is in your blind spot, it flashes a warning light when you look at the mirror. That’s useful stuff. I look at that as just a safety feature.

John: Well yeah, and indeed, if you take a number of safety features and you bundle them together, then you get close to a level, possibly, of some kind of autonomy. And that was what was going on with the Tesla. And unfortunately, it lulled people into believing that they were completely autonomously capable when they are not.

Tom: We were on a good path of going down the five, four, three, two, one. And I just want to get back to that track before we get too far off. Take me back to Level one. It sounds like Level one is what we’ve already talked about, and that’s just kind of like the indicator in my side view mirror…

John: Level one is some kind of assistance. Usually it can be something like cruise control. I mean that’s a Level one.

Tom: Okay, so we’ve had that for years.

John: And a number of these we have. Level two starts to have several of those devices that kind of work together. So lane keeping combined with adaptive cruise control starts to give much more ability to the car to do more, even though you still need to have the human monitoring.

Tom: So level two’s out there. It’s in manufacturers, it’s in showrooms, it’s on the roads. Do we have any vehicles that are full-fledged Level three on the roads right now?

John: Well, I would say that right now all the vehicles that are being tested are essentially Level threes. I mean they may want to get beyond that, but in California, in order to test them legally right now, you’ve got to have a test driver in the car who can take over. And that’s precisely what a Level three would do. At some point it says, “Nope, can’t handle this,” and the driver takes over. So the next one is a Level four, which means that under the circumstances you have designed the car to drive in–which they call the ODD, the operational design domain–in that specified area, you do not need a human to monitor or intervene.
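The Level four “may I drive here?” decision John describes could be sketched like this. The geofence rectangle, coordinates, and weather check below are hypothetical illustrations of an ODD, not any manufacturer’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class ODD:
    """A toy operational design domain: a lat/lon bounding box plus
    the weather conditions the system is rated to handle."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float
    allowed_weather: tuple

    def permits(self, lat: float, lon: float, weather: str) -> bool:
        """True only if the vehicle is inside the geofence AND the
        current conditions are within the domain's defined limits."""
        inside = (self.min_lat <= lat <= self.max_lat and
                  self.min_lon <= lon <= self.max_lon)
        return inside and weather in self.allowed_weather

# Hypothetical geofence roughly around Chandler, Arizona.
odd = ODD(33.20, 33.40, -111.95, -111.75,
          allowed_weather=("clear", "cloudy"))

print(odd.permits(33.30, -111.85, "clear"))  # inside the fence, good weather
print(odd.permits(33.30, -111.85, "rain"))   # same spot, but outside the ODD
```

This captures the key property John notes: the same car at the same location can be fully autonomous or not, depending entirely on whether conditions fall inside the defined domain.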

Tom: So Level four, correct me if I’m wrong: Waymo, Google’s driverless division or autonomous driving division, just received the green light–another pun intended–to have a driverless car-hailing service in Phoenix. That’s going to be Level four, right? And Waymo has responded by saying, “Thank you for that green light. We’re going to have this done by the end of the year.”

John: It’s not clear to me yet whether they will still have a driver in there monitoring them during this phase or not. If they have a driver monitoring, I suppose you could consider it a level four if he never had to intervene. But I mean what they do want to get to is what you’re talking about. Level four, somebody calls it up, and it takes…

Tom: Shows up by itself.

John: Shows up by itself, takes the occupants to where they want to go, they get out, and there’s no human intervention and no human monitoring.

Tom: And now, jumping up to Level five–and I think this is categorically Level five–General Motors has requested approval to begin manufacturing, I believe this year or next, a vehicle based on, I think it’s the Cruise.

John: No, their division is Cruise and they want to use something based on the Chevrolet Bolt which is their electric car, which I happen to drive, by the way. It’s a great car.

Tom: That’s interesting.

John: They want to deploy these things in some kind of a ride-hail service, I think by 2019 is the plan. And you know, that probably would be Level four.

Tom: But what I’m talking about is the car. GM wants to… They’ve requested approval to begin manufacturing a vehicle based on one of those other existing vehicle platforms that has no steering wheel, no pedals, no gear shifter.

John: Correct.

Tom: So are we talking about the same thing?

John: Right now you have the so-called Federal Motor Vehicle Safety Standards, the FMVSS. Many of the FMVSS provisions refer to a human driver and say, you know, the steering wheel must be a certain way, the brake pedal must respond a certain way, and so forth. So if you were going to put a car on the road that doesn’t have those things, you’re going to be violating the Federal Motor Vehicle Safety Standards unless you get an exemption, which they have requested. And you can get an exemption under current law for up to 2,500 vehicles, which is what General Motors says it wants to do. Now, some of the nomenclature is a little bit fuzzy. My belief is that what they want to put out there would be Level fours because, yes, they would not have a steering wheel and all that stuff, but they still probably couldn’t go anywhere and everywhere and do anything that a human driver could do. They would probably be in a certain limited area, geofenced as they call it.

Tom: I get what you’re saying. Level five is perhaps a real long way off because that’s when the machines take over, right?

John: Yes, there you go.

Tom: But three and four are both very real issues that are being dealt with at a number of levels. What are the specific concerns that you guys have? And let’s try to categorize them if we can for… I mean Level two is kind of like that horse is already out of the barn.

John: It’s out of the barn and it’s killed people.

Tom: Now, all you guys can really impact or influence is levels three and four, is that correct? So that’s where your focus areas are now. And what are the concerns, and how are you addressing them?

John: The worst thing could be if you put one of these things out–if General Motors put one of these things out–and you have a crash, and they just kind of keep it quiet. I mean people need to know how safe they are. We think, as demonstrated by the California disengagement reports, that the cars are not yet safe enough to be left to drive on their own. There are too many circumstances where everyday, ordinary driving situations cannot adequately be handled by the vehicle. Another problem that needs to be addressed, and has not been adequately addressed, is the interaction between vehicles with human drivers and the robot cars. But what concerns us is that if you look at the regulations that the federal government is putting out, there are none. Essentially, they’re saying, “Well, you have some voluntary stuff. Would be nice if you gave us a safety self-assessment.” You know, I don’t know how many companies are developing self-driving cars right now in total, but we can get a sense of it. There are 50 companies permitted to do this in California, because they have to get a permit. We know there were 50 of them. Of the 50, how many do you think have filed the voluntary self-assessment? Two: Google’s Waymo and GM’s Cruise. And if you read the safety self-assessments, they look like slick marketing brochures rather than a serious analysis of what the car can and can’t do.

Tom: But then it becomes a political issue, right? I mean you could almost say, just going from stereotypes, like okay, Democrats want to put more regulations and rules on the whole plan of our society moving forward, and Republicans want to let it be the Wild West and yeehaw, let’s go make money.

John: Yeah, but I think when it comes to, you know, people’s safety on the highway, there need to be some serious federal motor vehicle safety standards in place. We have standards that cover a number of things. And I think everyone would agree that, you know, now it’s a requirement to have seatbelts, and you know…

Tom: That’s a good one.

John: What we’re saying is you need the same sorts of enforceable safety standards that would apply to autonomous or self-driving or robot cars or whatever you want to call them. And there aren’t any right now, and that’s a huge problem.

Tom: So what are you doing? What do you do to try and raise these concerns in such a way that your efforts are impactful?

John: Well, I mean talking to people like you is something that, in fact, helps because we get the issue out there. We do a lot of media advocacy. We show up at the various opportunities for public comment in front of the various rulemaking agencies or authorities. One of my colleagues right now is working on comments that we’ll be submitting to NHTSA on what we think should be happening with their next round of guidance on autonomous vehicles. And if you look right now at the most recent opinion polls, I think it’s up around 65% of people who don’t think that we’re there yet, who think these cars are not safe enough.

Tom: I hear what you’re saying. But at the same time, I wonder if that 65% of the people are just kind of not paying attention to the progress that’s being made. I started this podcast because I was so blown away at the progress that we’re making, and I understand where you guys are coming from, too. It’s like, “Hey, let’s do this but let’s do this very carefully.”

John: I’m not sure that we’re going to get to the point where you can have a car that can do anything and everything a human driver can do. We may get to this level four where it’s operating in certain kinds of areas. Another thing that has to be dealt with is the so-called ethical dilemma, which is okay, if there’s going to be a crash, who does the car kill? Well, all right, phrase it another way. Is the car programmed so that it gives priority to the safety of the occupants of the car? So consider a situation where a six-year-old kid comes out into the street, does the car, faced with the choice of swerving into the stone wall and smashing the vehicle and potentially hurting the people in the vehicle, does it do that to save the kid or does it hit the kid? Now, those kinds of decisions are going to have to be programmed into the software that’s going to run the robot car. And we think that the kinds of decisions that are going to be programmed in that ethical level have to be publicly discussed and understood.

Tom: Have you posed that question to developers of software?

John: Yeah, I said, “Who are you going to kill?”

Tom: What is their answer?

John: I mean some people have sort of said, “Well, you know, it’s the safety of the occupants of the vehicle.” But there have also been arguments about, “Well, is that really a real issue?” But we think it is. And that’s something, I think, that policymakers need to address, and it needs to be something, at the very least, that’s out there publicly. So Google, for instance, needs to say, “All right, in these circumstances, this is what we have programmed the car to do.” It has not yet been made public to my knowledge.

Tom: I bet you that we could draw parallels between what we’re experiencing with exactly what you’re pointing out right now and when horses and buggies and cars need to share the road together for the first time.

John: And what I’m saying is that the companies are rushing this too fast without adequate attention being given to proving that what they’re trying to put out there is safe and without adequate public disclosure and discussion of the programming choices that are being made about the vehicles.

Tom: John, I look forward to our future talks. As we wrap this up, you mentioned you drive a Chevy Bolt?

John: I do indeed, and my wife drives a Honda Fit EV, so we have two electric cars.

Tom: And what levels of autonomous features do you guys use?

John: You know, maybe kind of a low Level two, where you’ve got a lot of autonomous technologies that help and assist you. Automatic emergency braking is a wonderful thing. We filed a petition with NHTSA trying to get them to make automatic emergency braking mandatory because it’s a lifesaver. So we’re not Luddites. We do find it troublesome that there is an ongoing revolving door between the developers of these things and the people who are supposed to regulate them, like NHTSA. The guy who is the “safety director” for Waymo, Ron Medford, had a high position at NHTSA. Mark Rosekind just went to Zoox, I guess it’s called. So these were the people who were, on the one hand, charged with enforcement. The top attorney for NHTSA just went to General Motors, and he’s the guy who’s now petitioning to get NHTSA to give them an exemption.

Tom: Prosecutor decides to cash in and go criminal defense attorney, right? I mean…

John: I guess.

Tom: Understood. Let me ask you this final question. Are there any autonomous features on your vehicle or your wife’s vehicle that you don’t use?

John: Not that I can think of. I mean these little helpful assistance kinds of things I find good and useful: lane keeping, all that sort of thing.

Tom: All right very good. John M. Simpson, the Consumer Watchdog Group, thank you so much for being with me. I look forward to our future talks.

John: As do I, thank you.

Tom: For iDriveSoCal, I’m Tom Smith. Thanks for listening.