Internal Naysayers and AI Self-Driving Cars
By Lance Eliot, the AI Trends Insider
On the Friday just before we all moved our clocks to “fall back” at the conclusion of this year’s Daylight Saving Time (it was Friday, November 2, 2018), Uber released its internal safety review report about the Uber self-driving car incident in Arizona that took place in March of this year.
I’d say it is worth a moment to reflect on at least one of the most crucial aspects mentioned in the Uber internal safety review report, which includes this poignant statement (see the Uber website):
“The competitive pressure to build and market self-driving technology may lead developers to stay silent on remaining development challenges.”
It is crucial that those inside a firm consider a personal sense of duty when developing AI systems. Should you do nothing even if you believe that internal approaches are amiss? We often walk a tightrope of wanting to keep a job while also wanting to speak up when matters don’t seem right.
An associated difficulty is becoming labeled as a naysayer. Becoming an internal naysayer can be damaging to one’s ego and career. Even if you aren’t a naysayer, the odds are that during the course of your work career you will encounter one. These internal naysayers can be both a curse and a blessing (or, if you prefer the alternate sequence, both a blessing and a curse), depending upon the circumstances and the nature of the naysaying involved.
The word “naysaying” tends to have a negative connotation to begin with and so I’d like to set the record straight that there can be positive naysaying and there can be negative naysaying.
Positive naysaying is the type that in a sense provides “constructive criticism,” meaning that it may be hard to hear or take, but it has merit and should be given due consideration. I’ll define negative naysaying as naysaying that is done without any merit per se and offers no particular redeeming value (it tends to be purely destructive rather than constructive).
It is difficult to readily know when naysaying is truly positive or negative.
One reason it can be confusing or bewildering to separate positive naysaying from negative naysaying is that the naysaying itself might be delivered in an untoward manner. If someone presents their naysaying in a harsh or demeaning way, those that bear the brunt of it are quite likely to react adversely. Thus, the message itself gets lost in the manner in which it was conveyed. This is unfortunate, since the naysaying then becomes a case of the proverbial throwing the baby out with the bathwater, in that the actually useful parts of the naysaying are bound to be ignored or discarded.
In some companies, once someone starts down the path of being a naysayer they often get labeled as such.
This label then tends to color and permeate whatever they might have to say. In essence, once you’ve been stamped as a naysayer, whatever you do or convey will likely be seen in that light. You could claim the sky is blue and it would be perceived as yet another naysayer kind of remark. It usually doesn’t take much to get labeled as a naysayer, and it is an albatross that is very hard to shed.
In fact, if the naysayer label is so stuck on you that no matter what you do it won’t come unglued, you either need to live with it or consider going elsewhere (or, wait it out and hope that there is so much turnover in the firm that the new entrants won’t know of and won’t be tainted by your alleged naysayer status). What’s tough too is that being branded as a naysayer internally can follow you to other jobs, since in the AI industry everyone often knows everyone else, and the naysayer needs to decide whether they want such a label for the rest of their career.
In rarer cases the naysayer label can become a badge of honor. If you were considered one of the first or one of the brave that came forth and naysaid about something that later was shown to be the case, you might be heralded for having come forward. Partially this depends upon how the initial naysaying took place, and partially it depends on the significance of the naysaid aspects. Unfortunately, the original naysayer is often left out in the cold, having been the one to start the firm toward an awareness that ultimately was good for the company, yet having been punished at the time for the act, with little or no intention of overtly rewarding the naysayer now.
Dicey to Be the Bearer of Bad News
It can nearly always be dicey to be the bearer of bad news. Bad news is bad news. Whoever brings up the bad news will generally get associated with the bad news. It doesn’t particularly matter that you weren’t the “culprit” or that you otherwise had nothing to do with the bad news. Instead, people tend to anchor to the notion that the person reporting the bad news somehow has an afterglow of badness.
This is why trying to convey bad news is a tricky ordeal. Some will even attempt to pass off the bad news to someone else who will then bring it up, in hopes of having the badness stick to that person instead. Or, one might try an anonymous method of communicating the bad news. Or, keep the bad news hidden. Or, attempt to coat the bad news with seemingly “good” news aspects and then hope that others will realize on their own that it is really just sugar-coated bad news (and, maybe the afterglow won’t fall onto the person that brought the news to their attention).
I’d say that for most of the numerous software development teams that I’ve been involved with, there is invariably at least one naysayer on the team. Indeed, studies tend to show that tech specialists are more prone to cynicism and more likely to question orthodoxy than many other types of workers. This makes the chances of having naysayers involved in a systems development project rather likely.
For those that aren’t used to naysayers, or that have had problems in the past with naysayers, it can be a rude awakening to encounter naysayers that might actually be helpful or constructive. The knee-jerk reaction to the naysayer is that any naysaying is verboten and to be stymied. In a sense, some consider naysayers to be troublemakers and malcontents, and assert that those naysayers are disloyal and should be expunged.
My management philosophy is that I want to gauge whether the naysayers are providing value and offering positive naysaying. If the naysaying is solely of a disruptive nature and provides no value, and perhaps is being done out of spite or other non-value reasons, it can certainly be the case that the naysayer is not much more than a troublemaker per se. But this is not an aspect that can be axiomatically assigned to someone.
There are those too that suggest the naysaying propensity is shifting and widening as generational change proceeds. For millennials and Gen Z, it is supposedly the case that they are more likely than prior generations to offer naysaying feedback. In prior generations, it was considered the case that firms are right to do what they do, and if you wanted to keep your job then you needed to remain buttoned up and go with the flow. The newer generations are supposedly more inclined to want to work in a place that is doing the right things, and they will voice their concerns and be willing to take a chance rather than be saddled with something that they believe is wrong, immoral, or otherwise not in keeping with their beliefs.
Many leaders don’t realize that if they simply suppress all naysaying, they will likely lose potential feedback that could aid them in overcoming an issue now, or an issue that will arise further down the road. I’ve worked with numerous CEOs that would lash out at any naysayer, which the rest of the firm came to realize, and as such the CEO became surrounded solely by “yes men” of the type that would never present any negative news whatsoever. This at times led those CEOs toward a rather bleak end that would cause calamity for the firm.
Let’s for the moment differentiate between being a naysayer versus the more legalistic whistleblower moniker. For purposes herein, the naysayer is concerned with actions taking place that seem unwise or misguided, while the whistleblower, we’ll say, is concerned with aspects that are outright illegal. I realize that a naysayer can be or become a whistleblower, depending upon the circumstances and whether the aspects involved would be considered illegal or criminal in nature. For this discussion, the naysayer is someone focused on aspects considered ill-advised, rather than matters that go beyond that and extend into criminality.
What does this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. I also frequently come in contact with AI developers that are doing likewise work at various auto makers and other tech firms.
And, as per the Uber internal safety review, it is crucial that we all keep in mind the importance of not remaining silent when we should be speaking up, plus knowing how to speak up in a manner that will increase the chances of being heard and making a difference.
At times, I’ve had those AI developers from other firms bring up to me concerns that they have about the firm they are presently working at and what the company is undertaking. These AI developers at times perceive there are unwise or misguided actions taking place but are not sure what they should do about it. You might say they are on the verge of being or becoming a naysayer and are unsure of what to do about it.
In some cases, they are already labeled as a naysayer and are unsure of what to do about it.
I’ve also had managers or leaders from such firms that tell me they have naysayers and from a leadership perspective they are unsure of what to do about it.
To provide some indication of what the naysaying is about in the context of AI self-driving cars, let’s first establish some foundational aspects about AI self-driving cars.
There are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
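To make the levels distinction a bit more concrete, here is a minimal sketch in Python, assuming the commonly cited 0-to-5 level numbering; the enum names and the helper function are illustrative placeholders for this discussion, not any standard’s official terminology:

```python
# Hypothetical sketch of the autonomy-level distinction discussed above.
# Level numbering follows the commonly cited 0-to-5 scheme; names are illustrative only.

from enum import IntEnum

class AutonomyLevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5   # true Level 5: no human driver expected in the car

def human_driver_required(level: AutonomyLevel) -> bool:
    """Below Level 5, a human driver must be present and ready to drive at all times."""
    return level < AutonomyLevel.FULL_AUTOMATION

print(human_driver_required(AutonomyLevel.PARTIAL_AUTOMATION))  # True
print(human_driver_required(AutonomyLevel.FULL_AUTOMATION))     # False
```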
Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.
Here are the usual steps involved in the AI driving task (see the sketch that follows the list):
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
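To illustrate how these steps typically chain together on each processing cycle, here is a minimal sketch in Python. All of the class and function names (WorldModel, collect_and_interpret, sensor_fusion, update_world_model, plan_actions, issue_commands) are hypothetical placeholders, not any particular vendor’s API, and the bodies are stubs:

```python
# Minimal sketch of the five-step AI driving cycle described above.
# All names are hypothetical placeholders; real systems are vastly more elaborate.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class WorldModel:
    """Virtual world model: the AI's current view of the roadway scene."""
    objects: Dict[str, dict] = field(default_factory=dict)

def collect_and_interpret(sensors: List[str]) -> Dict[str, dict]:
    """Step 1: gather raw readings from each sensor and interpret them."""
    return {s: {"detections": []} for s in sensors}  # stubbed readings

def sensor_fusion(readings: Dict[str, dict]) -> dict:
    """Step 2: reconcile the separate (possibly conflicting) sensor readings."""
    fused = {"detections": []}
    for r in readings.values():
        fused["detections"].extend(r["detections"])
    return fused

def update_world_model(model: WorldModel, fused: dict) -> None:
    """Step 3: fold the fused detections into the virtual world model."""
    for i, det in enumerate(fused["detections"]):
        model.objects[f"obj_{i}"] = det

def plan_actions(model: WorldModel) -> dict:
    """Step 4: decide what the car should do next (throttle, brake, steering)."""
    return {"throttle": 0.0, "brake": 0.0, "steering_angle": 0.0}

def issue_commands(plan: dict) -> None:
    """Step 5: send the planned commands to the car controls."""
    print(f"Issuing controls: {plan}")

def driving_cycle(model: WorldModel) -> None:
    """One pass through the five-step driving task."""
    readings = collect_and_interpret(["camera", "radar", "LIDAR", "ultrasonic"])
    fused = sensor_fusion(readings)
    update_world_model(model, fused)
    plan = plan_actions(model)
    issue_commands(plan)

if __name__ == "__main__":
    driving_cycle(WorldModel())
```

In a real on-board system each stage would run under tight real-time constraints and involve far more sophistication, but the overall loop structure mirrors the step list above.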
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are over 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.
Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.
Returning to the aspects of naysayers, there are a multitude of ways in which an AI self-driving car might be designed, developed, tested, and fielded, and therefore a lot of opportunity for disagreement about the appropriate way to do so. The auto makers and tech companies that are underway in developing AI self-driving cars are each taking a proprietary approach, involving numerous assumptions about technology and about business and societal matters.
Example Naysayer Interactions
Let’s consider some of the “naysayer” indications that have been brought to my attention.
- Naysayer raising concern that the company is focusing on the less than Level 5 rather than the Level 5 self-driving cars and will be unable or incapable of ultimately achieving a true Level 5 because of the misguided path underway.
- Naysayer upset that the firm is not making use of LIDAR and that this is being done to the detriment of the sensory capabilities.
- Naysayer that finds themselves always being brushed aside during team meetings going over the team’s coding efforts, despite having qualms that the code is overly brittle and will not stand up to the rigors needed to operate in a real-time on-board system.
- Naysayer expressing that there is insufficient time to test the systems being developed, and that the assumption seems to be to let the self-driving car discover any bugs in the wild and deal with them then, rather than asking for more time now to do a more careful job of things.
- Naysayer worried that the Machine Learning (ML) and neural networks are being established with little or no attention to the potential of adversarial attacks and that this could be a potential hole that might be exploited later on by a bad actor.
- Naysayer indicating that some crucial assumptions about how pedestrians will abide by self-driving cars and about how human drivers will provide leeway to self-driving cars is shaping the AI in a very constrained and unrealistic way.
- Naysayer claiming that the AI developers are so overworked that they are tending to cut corners and are being treated like hamsters on a wheel that just need to shut up, do the coding, and work around the clock to meet the stated deadlines.
- Naysayer that says the whole effort underway is a “cluster mess” (actually using a harsher four-letter word), that no one seems to be really in charge, and that the whole AI system seems to be a hodgepodge which will either not work or work in unpredictable and dire ways.
If you simply read each of those naysayer remarks, I’d dare say it is not readily feasible to assess whether or not the naysayer has a valid point. It could be that they are stating exactly what is taking place and that it bodes rather serious and notable problems for the production of a safe and sufficient AI self-driving car. Or, it could be that the naysayer is jaded, perhaps having been passed over for some reason, and they are just lashing out for reasons of their own devising.
When I mention this aspect that without further context we cannot discern for sure what the situation really is, I usually get attacked right away by either the naysayer or those around the naysayer such as their supervisor or manager.
“Lance, how can you possibly give credence to such preposterous claims?” I am at times admonished.
“Lance, you care about AI self-driving cars and how they will impact society and the public, so you must recognize that of course the remarks are true and must be immediately acted upon,” I am sternly told.
There’s no question that all of those naysayer remarks are worthy of getting attention. I am not suggesting that those remarks aren’t worthy. As the proverb indicates, sometimes where there is smoke there is fire. Some of the remarks suggest that there might be really serious and damaging aspects that are not being taken into account for the AI self-driving car effort being undertaken. That’s something to be given due attention.
I also want to mention the other side of the coin. There are some AI developers that came from a university research lab or a governmental AI group and are unfamiliar with the ways of private industry and systems development therein. As such, they can sometimes get upset about practices that are different from what they experienced before. Those practices might or might not be valid. I’m just suggesting that the naysayer can be commingling the fact that things are being done differently than they are used to with the notion that what is being undertaken is “wrong” to do.
As such, I don’t think we can axiomatically declare that any of those naysayer comments are right-or-wrong per se. Instead, it would be important to look into each matter and try to gauge what is taking place and how the comments potentially reveal actual aspects of concern.
If I am otherwise unable to get involved in exploring the naysayer remarks, I usually offer some overall suggestions of what the person might consider doing. Likewise, if a manager asks me what to do about a naysayer, and assuming that I’m not able to further probe into the matter, I offer some overall advice. Let’s get to those general pieces of advice, which hopefully might be of aid to you too.
I usually bring up that getting input from the naysayer’s colleagues can be helpful toward trying to ascertain the validity of the naysaying. Is the naysayer the only one that perceives things the way they do? I’m not suggesting that the naysayer is therefore mistaken if they are the only one, since perhaps they see things more clearly than the rest. Also, the colleagues might share the same beliefs as the naysayer, but be hesitant to say so, due to concerns about losing their jobs or becoming labeled as a naysayer.
As such, it can be difficult to get open feedback from the naysayer’s colleagues. Any tech leader that thinks to just bring them all into a room and ask outright what’s going on, well, it might work, but more likely it would not. There are many tech-oriented managers that are not especially versed in the subtleties of human behavior and are so focused on the tech side that they are unaware of how to actually interact with and manage people.
HR Department Could be Helpful
In theory, firms usually have an internal Human Resources (HR) department and any talent-related matters should involve the HR team. This can be handy since the HR team is presumably skilled in such matters, and also since there are potentially HR-related legal aspects that a tech leader or tech naysayer might unintentionally or intentionally violate in the process of pursuing naysayer claims. I realize that not all firms have an HR group, or they might have an HR group that is seen as not especially capable or even arduous to deal with. Anyway, as mentioned, in theory there should be involvement by the HR team or someone versed in HR matters.
I also bring up that whoever is the immediate supervisor or “boss” of the naysayer should be consulted on the matter. Again, I’m not saying that the supervisor will be fully impartial and necessarily provide a reasoned analysis of the naysayer remarks. It could be that the supervisor and the naysayer are not seeing eye-to-eye, maybe on a lot of things far beyond the naysayer remarks, and so there is potentially a bias about anything the naysayer might have to say.
It is important to try and do some “egoless” analysis of the naysayer claims. For those of you that are developers, you might be aware of the notion of doing egoless reviews of system designs and coding. This involves looking at the design or the code and separating who did it from what it is. It used to be that when someone criticized or verbally “attacked” a design as being poor or the code as being bad, it was simultaneously an attack on the person that rendered the design or the code. The idea is to try and separate the artifact from the person and look at the artifact on its own merits.
This is not easy to do. It takes some very delicate and practiced human behavioral skills to carry out an egoless design review or egoless code review. In any case, I am saying that you should try to separate the claims of the naysayer from the naysayer per se and look at the claims as a kind of artifact, similar to looking at a system design or lines of code. This might give you a chance at more realistically assessing the validity of the claims and whether the naysayer is offering some potential gold nuggets that otherwise might have been obscured by the rancor involved.
If trying to go the internal route is not viable regarding naysayer aspects, another approach can involve going external. Some firms arrange to have an external company that will for example take anonymous complaints on behalf of the firm (or, sometimes non-anonymous but with a promise of confidentiality), and then explore those for the firm. I realize that it is easy to be cynical about such arrangements. Maybe the outside firm has been told to simply bury any complaints. Maybe the outside firm will secretly report the complaints to the firm leadership and do nothing about it other than get you in trouble. Etc.
Another approach involves contacting an outside advisory group. Some firms for example have an alumni group of former employees. This might or might not be a means to try and get some useful advice (you never know and be forewarned that it could also open up a can of worms). There are also various industry professional associations that can sometimes aid in such matters (again, exercise due caution).
Going outside the firm can be problematic.
If you are bound by various confidentiality agreements, you might be violating those and thus taking on more problems than you had anticipated. Sometimes too the act of going outside is instantly considered wrong by the firm, and so no matter whether your naysayer remarks are valid or not, you’ll now be cast as a violator of company policy and procedures, for which your naysayer aspects will no longer count, and the focus will instead become your wrongful action of going outside the firm. If you are considering going outside, it is likely wise to consult a qualified attorney, if you haven’t already done so.
Some naysayers become determined to seek out internal retribution come heck or high water. Even if they have AI skills that are in high-demand and they can step across the street to get a new job, they have become so enmeshed in the matter that they won’t walk away from it. This can be out of a sense of duty, believing that they don’t want to later on in life look back and think that they didn’t do enough to stop what maybe later became a Titanic kind of disaster. Or, it can be that they are so emotionally wrapped up in the situation that they are willing to walk on hot coals to prove their points.
There are some AI developers that are aware of the bystander effect. This relates to the idea that sometimes bystanders will watch as something untoward unfolds and do nothing about it. You might be walking down the street and see a hoodlum rush up to an elderly person and steal their wallet or purse. Suppose that none of the nearby bystanders takes any action to prevent or stop it. Those bystanders that are conscientious will afterward be racked with guilt and anxiety that maybe they should have acted, maybe they should have tried to intervene.
Some naysayers are determined not to be part of a bystander effect. They might view that their cohorts are letting things happen that should not be happening. These motivated naysayers are willing to fight to prevent what they perceive as misguided or ill-advised efforts at developing an AI self-driving car. For those that do indeed prevent some later calamity, we can be thankful that they were willing to dare to speak up. On the other hand, for naysayers that are without merit, and whose efforts merely delay and confound progress toward AI self-driving cars, we would likely look askance at whether they are doing the right thing.
Naysayers, they can be a pain in the neck. Naysayers, they can be a lifesaver. You cannot just out of context declare a naysayer as one or the other of those types. A firm would be wise to put in place internal mechanisms to allow naysayers to share their concerns. These mechanisms should be fair and balanced. Try to get value out of naysaying. Try not to subdue naysaying that could have saved the firm, and saved the lives of those that might someday be a passenger in an AI self-driving car.
Copyright 2018 Dr. Lance Eliot
This content is originally posted on AI Trends.