
AI's Problems Attract More Congressional Attention

As contentious political issues continue to distract Congress before the November midterm elections, federal legislative proposals aimed at governing artificial intelligence (AI) have largely stalled in the Senate and House.  Since December 2017, nine AI-focused bills, such as the AI Reporting Act of 2018 (AIR Act) and the AI in Government Act of 2018, have been awaiting congressional committee attention.  Even so, there has been a noticeable uptick in the number of individual federal lawmakers looking at AI’s problems, a sign that the pendulum may be swinging toward regulation of AI technologies.

Those lawmakers taking a serious look at AI recently include Mark Warner (D-VA) and Kamala Harris (D-CA) in the Senate, and Will Hurd (R-TX) and Robin Kelly (D-IL) in the House.  Along with others in Congress, they are meeting with AI experts, issuing new policy proposals, publishing reports, and pressing federal officials for information about how government agencies are addressing AI problems, especially in hot topic areas like AI model bias, privacy, and malicious uses of AI.

Sen. Warner, for example, Vice Chairman of the Senate Intelligence Committee, is examining how AI technologies power disinformation.  In a draft white paper first obtained by Axios, Warner’s “Potential Policy Proposals for Regulation of Social Media and Technology Firms” raises concerns about machine learning and data collection, citing “deep fake” disinformation tools as one example.  Deep fakes are neural network models that superimpose the likeness of one person onto images or video of another (or the same) person, altering the original’s content and meaning.  To the viewer, the altered images and videos look like the real thing, and many who view them may be fooled into accepting the false content’s message as truth.
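As a rough illustration of the technique (a minimal sketch in PyTorch, not any actual tool’s implementation; the layer sizes and names below are invented for illustration): early face-swap pipelines typically trained a single shared encoder alongside one decoder per identity, so that encoding a frame of person A and decoding it with person B’s decoder renders B’s likeness with A’s pose and expression.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face crop into a shared latent vector.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs a face image from the shared latent space.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct faces of person A
decoder_b = Decoder()  # would be trained to reconstruct faces of person B

# After training each (encoder, decoder) pair on its own identity,
# the "swap" is: encode a frame of A, decode with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(frame_of_a))  # B's likeness, A's pose and expression
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```

The shared encoder is what makes the result coherent: both decoders read from a common latent space that captures pose, expression, and lighting, so the swapped output inherits those attributes from the source frame rather than looking like a crude paste.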

Warner’s “suite of options” for regulating AI includes one that would require platforms to provide notice when users engage with AI-based digital conversational assistants (chatbots) or visit a website that publishes content provided by content-amplification algorithms like those used during the 2016 elections.  Another proposal would modify the Communications Decency Act’s safe harbor provisions, which currently protect social media platforms that publish offending third-party content, including the aforementioned deep fakes.  This proposal would allow private rights of action against platforms that fail, after notice from victims, to take steps preventing offending content from reappearing on their sites.

Another proposal would require certain platforms to make their customers’ activity data (sufficiently anonymized) available to public interest researchers as a way to generate insights that could “inform actions by regulators and Congress.”  An area of concern is private tech companies’ commercial use of their users’ behavior-based data (online habits) without proper research controls.  The suggestion is that public interest researchers would evaluate a platform’s behavioral data in a way that is not driven by an underlying for-profit business model.

Warner’s privacy-centered proposals include granting the Federal Trade Commission rulemaking authority, adopting GDPR-like regulations of the kind recently implemented across European Union member states, and setting mandatory standards for algorithmic transparency (auditability and fairness).

Repeating a theme in Warner’s white paper, Representatives Hurd and Kelly conclude that, even if AI technologies are immature, they have the potential to disrupt every sector of society in both anticipated and unanticipated ways.  In their “Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S. Policy” report, the leaders of the House Oversight and Government Reform Committee’s Information Technology Subcommittee make several observations and recommendations: political leadership from both Congress and the White House to achieve US global dominance in AI; increased federal spending on AI research and development; measures to address algorithmic accountability and transparency and to remove bias from AI models; and an examination of whether existing regulations can address public safety and consumer risks from AI.  The challenges facing society, the lawmakers found, include the potential for job loss due to automation, privacy, model bias, and malicious use of AI technologies.

Separately, in a September 13, 2018, letter, Representatives Adam Schiff (D-CA), Stephanie Murphy (D-FL), and Carlos Curbelo (R-FL) asked the Director of National Intelligence to provide Congress with a report on the spread of deep fakes (aka “hyper-realistic digital forgeries”), which they contend allow “malicious actors” to create depictions of individuals doing or saying things they never did, without those individuals’ consent or knowledge.  They want the intelligence agency’s report to assess how foreign governments could use the technology to harm US national interests, what counter-measures could be deployed to detect and deter actors from disseminating deep fakes, and whether the agency needs additional legal authority to combat the problem.

In a September 17, 2018, letter to the Equal Employment Opportunity Commission, Senators Harris, Patty Murray (D-WA), and Elizabeth Warren (D-MA) ask the EEOC to address the potentially discriminatory impacts of facial analysis technologies in the enforcement of workplace anti-discrimination laws.  As reported on this website and elsewhere, the machine learning models behind facial recognition may perform poorly if they were trained on data unrepresentative of the data the model sees in the wild.  For example, if the training data for a facial recognition model contains primarily white male faces, the model may perform well on new white male faces but poorly on faces that are not white and male.  The Senators want to know whether such technologies amplify bias against racial, gender, disadvantaged, and vulnerable groups, and they have asked the EEOC to develop guidelines for employers concerning fair uses of facial analysis technologies in the workplace.
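The disparity the Senators describe can be checked directly when evaluation data carries subgroup labels: compute the model’s accuracy separately for each group and compare.  A minimal sketch of such a per-group audit (the group labels and numbers below are invented for illustration):

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy for a classifier, plus the worst-case gap."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[g] = float(np.mean(y_true[mask] == y_pred[mask]))
    gap = max(results.values()) - min(results.values())
    return results, gap

# Toy data: 1 = correct identity match, 0 = miss (values are illustrative)
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1, 1, 0, 0, 1])
groups = np.array(["group_a", "group_a", "group_a", "group_a",
                   "group_b", "group_b", "group_b", "group_b"])

per_group, gap = accuracy_by_group(y_true, y_pred, groups)
print(per_group)  # {'group_a': 1.0, 'group_b': 0.5}
print(gap)        # 0.5 -- a large gap flags a potential disparate impact
```

An audit along these lines only works if the evaluation set itself is representative of each subgroup, which is exactly the training-data problem the Senators raise.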

Also on September 17, 2018, Senators Harris, Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR) sent a similar letter to the Federal Trade Commission, expressing concerns that bias in facial analysis technologies could be considered an unfair or deceptive practice under the Federal Trade Commission Act.  Stating that “we cannot wait any longer to have a serious conversation about how we can create sound policy to address these concerns,” the Senators urge the FTC to commit to developing a set of best practices for the lawful, fair, and transparent use of facial analysis.

Senators Harris and Booker, joined by Representative Cedric Richmond (D-LA), also sent a letter on September 17, 2018, to FBI Director Christopher Wray asking for the status of the FBI’s response to a 2016 Government Accountability Office (GAO) comprehensive report detailing the FBI’s use of face recognition technology.

The increasing attention directed toward AI by individual federal lawmakers in 2018 may merely reflect the politics of the moment rather than signal a momentum shift toward substantive federal command-and-control-style regulation.  But as more states, in the absence of federal rules, enact their own laws addressing AI technology use cases, federal action may inevitably follow, especially if more reports of malicious uses of AI, like election disinformation, reach receptive ears in Congress.