Earlier this month, the Federal Trade Commission’s Bureau of Consumer Protection issued new guidance on the use of artificial intelligence (AI) and algorithms, building on its November 2018 hearing and its 2016 report, “Big Data: A Tool for Inclusion or Exclusion?” While the technology behind AI may be complex, Bureau Director Andrew Smith’s message to companies was simple: be transparent, be fair and be secure.

Be Transparent…

  • When using AI to interact with consumers. AI often operates in the background, removed from the consumer experience. But when AI tools interact with consumers, such as via chatbots, companies should take care not to mislead them about the nature of the interaction. 
  • When collecting sensitive data via AI. Secretly collecting audio or visual data—or any sensitive data—to feed into an algorithm could give rise to an FTC action. Companies must be transparent about what information is collected and how, as well as the purposes for which it is used. 
  • About automated decisions. Companies calculating consumer risk scores via algorithms should disclose the key factors that may affect the score. To avoid trouble under the Fair Credit Reporting Act (FCRA), companies using AI to make automated decisions about eligibility for credit, employment, insurance, housing or other similar benefits should ensure any required “adverse action” notices are given after such automated decisions are made.

Be Fair…

  • By not discriminating. Even algorithms designed with the best intentions can result in discrimination against a protected class. For example, if a company made credit decisions based on consumers’ zip codes, resulting in a “disparate impact” on particular ethnic groups, the FTC could challenge that practice under the Equal Credit Opportunity Act (ECOA). Companies should rigorously test their algorithms, both before deployment and periodically afterward, to ensure they do not create a disparate impact on a protected class; one simple screening heuristic is sketched after this list. 
  • By allowing consumers to correct inaccurate information. The FCRA entitles consumers to obtain the information on file about them, learn its source and dispute information they believe to be inaccurate. These rights apply even when the information is gathered using AI, and companies must provide a means for consumers to correct inaccurate information.
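
To make the testing point concrete, below is a minimal sketch, in Python, of the kind of periodic check a company might run: it computes per-group selection rates and the adverse impact ratio associated with the EEOC’s “four-fifths rule.” The data, group labels and the 0.8 flag threshold are illustrative assumptions; the FTC guidance does not prescribe any particular test, and a ratio below 0.8 is a screening signal warranting further statistical and legal review, not a finding of discrimination.

```python
# Minimal sketch of a periodic disparate-impact screen using the
# "four-fifths rule" heuristic. Column layout, group labels and the
# 0.8 threshold are illustrative assumptions, not an FTC-mandated test.
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs.

    Returns {group: selection_rate / highest_selection_rate}.
    Ratios below ~0.8 are commonly flagged for further review.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical example: approval outcomes bucketed by a protected class.
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 55 + [("B", False)] * 45
for group, ratio in adverse_impact_ratios(sample).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, group B’s approval rate is about 69% of group A’s, so the check would flag the model for the kind of deeper testing the guidance contemplates.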

Be Secure…

  • And accountable for upholding compliance, ethics, fairness and nondiscrimination principles. Companies should develop written policies and procedures to ensure the accuracy and integrity of the data used in their AI models. Companies should also consider whether an independent evaluation of their mechanisms would benefit their compliance efforts. 
  • By protecting the AI from unauthorized use. Companies should build in data security testing and protocols to help prevent unauthorized use and access. Additionally, companies should consider protocols to vet users and keep the technology on their own servers and under their own control.

The FTC has addressed automated decision-making and machine-based credit underwriting models for decades under the FCRA and ECOA. The recent guidance reiterates that the agency is positioned to take investigative and enforcement action in this space, given its broad discretion to prohibit unfair and deceptive practices.

Our Privacy, Cybersecurity and Data Management Team will continue to monitor the ever-shifting legal landscape on these issues and to share the latest developments and insights.