Artificial Intelligence Technology and the Law Blog

Digital conversational agents, like Amazon’s Alexa and Apple’s Siri, and communications agents, like those found on customer service website pages, seem to be everywhere.  The remarkable increase in the use of these and other artificial intelligence-powered “bots” in everyday customer-facing devices like smartphones, websites, desktop speakers, and toys, has been exceeded only by bots in the background that account for over half of the traffic visiting some websites.  Recently reported harms caused by certain bots have caught the attention of state and federal lawmakers.  This post briefly describes those bots and their uses and suggests reasons why new legislative efforts aimed at reducing harms caused by bad bots have so far been limited to arguably one of the least onerous tools in the lawmaker’s toolbox: transparency.

Bots Explained

Bots are software programmed to receive percepts from their environment, make decisions based on those percepts, and then take (preferably rational) action in their environment.  Social media bots, for example, may use machine learning algorithms to classify and “understand” incoming content, which is subsequently posted and amplified via a social media account.  Companies like Netflix use bots on social media platforms like Facebook and Twitter to automatically communicate information about their products and services.
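To make the percept-decision-action cycle concrete, here is a minimal sketch (in Python) of a generic bot loop.  The `Percept` class, the `decide` rule, and the `act` function are hypothetical placeholders for illustration only, not any particular platform’s API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Percept:
    """A single observation received from the bot's environment."""
    text: str


def decide(percept: Percept) -> Optional[str]:
    """Toy decision rule: reply only to content mentioning a tracked brand."""
    if "netflix" in percept.text.lower():
        return f"Thanks for the mention!  Original post: {percept.text!r}"
    return None  # take no action on irrelevant percepts


def act(reply: str) -> None:
    """Stand-in action: a real bot would call a social media platform API here."""
    print(f"[bot] posting reply: {reply}")


def run_bot(incoming: list) -> None:
    """Sense -> decide -> act loop over a stream of incoming content."""
    for raw in incoming:
        action = decide(Percept(text=raw))
        if action is not None:
            act(action)


if __name__ == "__main__":
    run_bot([
        "Just finished a great series on Netflix!",
        "What's the weather like today?",
    ])
```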

While not all bots use machine learning and other artificial intelligence (AI) technologies, many do, such as digital conversational agents, web crawlers, and website content scrapers, the latter being programmed to “understand” content on websites using semantic natural language processing and image classifiers.  Bots that use complex human behavioral data to identify and influence or manipulate people’s attitudes or behavior (such as clicking on advertisements) often use the latest AI tech.
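As a rough illustration of content scraping with naive “understanding,” the sketch below parses a hard-coded HTML snippet with BeautifulSoup and tags it with keyword-based topic labels.  The sample page, topics, and keyword lists are invented for the example; a production scraper would rely on semantic NLP models and image classifiers rather than keyword matching, as noted above.

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Hypothetical page content standing in for a fetched web page.
SAMPLE_HTML = """
<html><body>
  <h1>Campaign Update</h1>
  <p>Candidate X announces a new policy on healthcare and taxes.</p>
</body></html>
"""

# Naive keyword lists used purely for illustration.
TOPIC_KEYWORDS = {
    "politics": {"candidate", "campaign", "election", "policy"},
    "finance": {"taxes", "market", "stocks"},
}


def classify(text: str) -> list:
    """Return the topics whose keywords appear in the text."""
    words = {word.strip(".,!?") for word in text.lower().split()}
    return [topic for topic, keywords in TOPIC_KEYWORDS.items() if words & keywords]


def scrape(html: str) -> dict:
    """Extract the visible text from HTML and attach naive topic labels."""
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(separator=" ", strip=True)
    return {"text": text, "topics": classify(text)}


if __name__ == "__main__":
    print(scrape(SAMPLE_HTML))
```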

One attribute many bots have in common is that their functionality resides in a black box.  As a result, it can be challenging (if not impossible) for an observer to explain why a bot made a particular decision or took a specific action.  While intuition can be used to infer what happens, secrets inside a black box often remain secret.

Depending on their uses and characteristics, bots are often categorized by type, such as “chatbot,” which generally describes an AI technology that engages with users by replicating natural language conversations, and “helper bot,” which is sometimes used when referring to a bot that performs useful or beneficial tasks.  The term “messenger bot” may refer to a bot that communicates information, while “cyborg” is sometimes used when referring to a person who uses bot technology.

Regardless of their name, complexity, or use of AI, one characteristic common to most bots is their use as agents to accomplish tasks for or on behalf of a real person.  Because the bot, rather than the person behind it, does the interacting, that person can remain anonymous, and this anonymity makes agent bots attractive tools for malicious purposes.

Lawmakers React to Bad Bots

While the spread of beneficial bots has been impressive, bots with questionable purposes have also proliferated, such as those behind disinformation campaigns used during the 2016 presidential election.  Disinformation bots, which operate social media accounts on behalf of a real person or organization, can post content to public-facing accounts.  Used extensively in marketing, these bots can receive content, either automatically or from a principal behind the scenes, related to such things as brands, campaigns, politicians, and trending topics.  When organizations create multiple accounts and use bots across those accounts to amplify each account’s content, the content can appear viral and attract attention, which may be problematic if the content is false, misleading, or biased.

The success of social media bots in spreading disinformation is evident in the degree to which they have proliferated.  Twitter recently produced data showing thousands of bot-run Twitter accounts (“Twitter bots”) were created before and during the 2016 US presidential campaign by foreign actors to amplify and spread disinformation about the campaign, candidates, and related hot-button campaign issues.  Users who received content from one of these bots would have had no apparent reason to know that it came from a foreign actor.

Thus, it’s easy to understand why lawmakers and stakeholders would want to target social media bots and those that use them.  A recent Pew Research Center poll found that most Americans know about social media bots, and of those who have heard of them, an overwhelming majority (80%) believe such bots are used for malicious purposes.  With technologies for detecting fake content at its source, or the bias of a news source, reaching only about 65-70 percent accuracy, politicians have plenty of cover to go after bots and their owners.

Why Use Transparency to Address Bot Harms?

The range of options for regulating disinformation bots to prevent or reduce harm could include any number of traditional legislative approaches.  These include imposing on individuals and organizations various specific criminal and civil liability standards related to the performance and uses of their technologies; establishing requirements for regular recordkeeping and reporting to authorities (which could lead to public summaries); setting thresholds for knowledge, awareness, or intent (or use of strict liability) applied to regulated activities; providing private rights of action to sue for harms caused by a regulated person’s actions, inactions, or omissions; imposing monetary remedies and incarceration for violations; and other often-seen command-and-control style governance approaches.  Transparency, another tool lawmakers could deploy, would instead require certain regulated persons and entities to provide information, publicly or privately, to an organization’s users or customers through mechanisms such as notice, disclosure, and/or disclaimer.

Transparency is a long-used principle of democratic institutions that try to balance open and accountable government action, and the notion of free enterprise, with the public’s right to be informed.  Examples of transparency may be found in the form of information labels on consumer products and services under consumer laws, disclosure of product endorsement interests under FTC rules, notice and disclosures in financial and real estate transactions under various related laws, employee benefits disclosures under labor and tax laws, public review disclosures in connection with laws related to government decision-making, property ownership public records disclosures under various tax and land ownership/use laws, various healthcare disclosures under state and federal health care laws, and laws covering many other areas of public life.  Of particular relevance to the disinformation problem noted above, and to why transparency seems well-suited to social media bots, are current federal campaign finance laws that require those behind political ads to reveal themselves.  See 52 USC §30120 (Federal Campaign Finance Law; publication and distribution of statements and solicitations; disclaimer requirements).

A recent example of a transparency rule affecting certain bot use cases is California’s bot law (SB-1001; signed by Gov. Brown on September 28, 2018).  The law, which goes into effect July 2019, will, with certain exceptions, make it unlawful for any person (including corporations or government agencies) to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity, for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.  A person using a bot will not be liable, however, if the person discloses, using clear, conspicuous, and reasonably designed notice, to persons with whom the bot communicates or interacts that it is a bot.  Similar federal legislation may follow, especially if legislation proposed this summer by Sen. Dianne Feinstein (D-CA) and legislative proposals by Sen. Warner and others gain traction in Congress.
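As a purely hypothetical illustration of the kind of notice contemplated by SB-1001 and the proposals discussed above (not legal advice or an actual compliance implementation), the sketch below wraps a stand-in chatbot so that each conversation opens with a clear, conspicuous statement that the user is interacting with a bot; `generate_reply` and the disclosure wording are invented for the example.

```python
BOT_DISCLOSURE = (
    "Notice: You are interacting with an automated bot, "
    "not a human representative."
)


def generate_reply(user_message: str) -> str:
    """Hypothetical stand-in for whatever model or rules produce the bot's answer."""
    return f"Here is some information about '{user_message}'."


def respond_with_disclosure(user_message: str, first_turn: bool) -> str:
    """Prepend a clear, conspicuous bot disclosure on the first turn of a conversation."""
    reply = generate_reply(user_message)
    if first_turn:
        return f"{BOT_DISCLOSURE}\n{reply}"
    return reply


if __name__ == "__main__":
    print(respond_with_disclosure("store hours", first_turn=True))
    print(respond_with_disclosure("return policy", first_turn=False))
```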

So why would lawmakers choose transparency to regulate malicious bot technology use cases rather than use an approach that is arguably more onerous?  One possibility is that transparency is seen as minimally controversial, and therefore less likely to cause push-back from those with ties to special interests that might respond negatively to lawmakers who advocate for tougher measures.  Or, perhaps lawmakers are choosing a minimalist approach just to demonstrate that they are taking action (versus the optics associated with doing nothing).  Maybe transparency is seen as a shot across the bow, warning industry leaders to police themselves and those who use their platforms by finding technological solutions to prevent the harms caused by bots, or else face a harsher regulatory spotlight.  Whatever the reason(s), even a measure as relatively easy to implement as transparency is not immune from controversy.

Transparency Concerns

The arguments against the use of transparency applied to bots include loss of privacy, unfairness, unnecessary disclosure, and constitutional concerns, among others.

Imposing transparency requirements can potentially infringe upon First Amendment protections if drafted with one-size-fits-all applicability.  Even before California’s bot measure was signed into law, for example, critics warned of the potential chilling effect on protected speech if anonymity is lifted in the case of social media bots.

Moreover, transparency may be seen as unfairly elevating the principles of openness and accountability over notions of secrecy and privacy.  Owners of agent bots, for example, would prefer not to give up anonymity when doing so could expose them to attacks by those with opposing viewpoints and cause more harm than the law prevents.

Both concerns could be addressed by imposing transparency in a narrow set of use cases and, as in California’s bot law, using “intent to mislead” and “knowingly deceiving” thresholds for tailoring the law to specific instances of certain bad behaviors.

Others might argue that transparency places too much of the burden on users to understand the information being disclosed to them and to take appropriate responsive actions.  Just ask someone who’s tried to read a financial transaction disclosure or a complex Federal Register rule-making analysis whether the transparency, openness and accountability actually made a substantive impact on their follow-up actions.  Similarly, it’s questionable whether a recipient of bot-generated content would investigate the ownership and propriety of every new posting before deciding whether to accept the content’s veracity, or whether a person engaging with an AI chatbot would forgo further engagement if he or she were informed of the artificial nature of the engagement.

Conclusion

The likelihood that federal transparency laws will be enacted to address the malicious use of social media bots seems low given the current political situation in the US.  And with California’s bot disclosure requirement not becoming effective until mid-2019, only time will tell whether it will succeed as a legislative tool in addressing existing bot harms or whether the delay will simply give malicious actors time to find alternative technologies to achieve their goals.

Even so, transparency appears to be a leading governance approach, at least in the area of algorithmic harm, and could become a go-to approach to governing harms caused by other AI and non-AI algorithmic technologies due to its relative simplicity and ability to be narrowly tailored.  Transparency might be a suitable approach to regulating certain actions by those who publish face images generated using generative adversarial networks (GANs), those who create and distribute so-called “deep fake” videos, and those who provide humanistic digital communications agents, all of which involve highly realistic content and engagements in which a user could easily be fooled into believing the content/engagement involves a person and not an artificial intelligence.