FCC Explores Impact of AI on Robocalls and Robotexts
The Federal Communications Commission (FCC) released a notice of inquiry (NOI) seeking comment on the implications of artificial intelligence (AI) technologies for robocalls and robotexts. The inquiry aligns with other AI initiatives at the FCC and at agencies across the federal government.
FCC Chairwoman Jessica Rosenworcel has made reducing robocalls and robotexts a priority throughout her tenure. This NOI is the latest step in the agency’s efforts to identify tools to mitigate robocalls and to identify sources of fraud in telecommunications networks. The information the FCC collects in this proceeding will inform future policy changes. Accordingly, potentially affected parties can shape the FCC’s dialogue on AI going forward by engaging with the agency as part of this proceeding.
In the notice, the FCC explores how new AI developments can and will affect the FCC’s current regulation of automated text and voice messages under the Telephone Consumer Protection Act (TCPA). The FCC solicits public feedback on the benefits and risks associated with emerging AI technologies – including voice cloning – and contemplates including machine learning in the definition of “artificial intelligence.”
The FCC asks wide-ranging questions in the notice, and parties that participate in the proceeding can also raise their own issues on the use of AI for automated calling and texting. Among the many issues raised in the notice, the FCC asks:
- Whether it should define “artificial intelligence” within the proceeding – and, if so, how “artificial intelligence” can be defined in a way that meets the FCC’s responsibilities under the TCPA.
- Whether AI technologies can be used to protect consumers from robocalls and robotexts, help the FCC enforce the TCPA, and/or promote accessibility for individuals with disabilities.
- How AI technologies might be used to harm consumers by facilitating “illegal, fraudulent, or otherwise unwanted robocalls and robotexts?”
- Whether the FCC should consider “ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources, such as through the use of watermarks, certificates, labels, signatures or other forms of labels?”
- What future steps the FCC should take to address AI technologies and further its inquiry.
Comments are due on December 18, 2023, and reply comments are due on January 16, 2024.
This content is provided for general informational purposes only, and your access or use of the content does not create an attorney-client relationship between you or your organization and Cooley LLP, Cooley (UK) LLP, or any other affiliated practice or entity (collectively referred to as “Cooley”). By accessing this content, you agree that the information provided does not constitute legal or other professional advice. This content is not a substitute for obtaining legal advice from a qualified attorney licensed in your jurisdiction and you should not act or refrain from acting based on this content. This content may be changed without notice. It is not guaranteed to be complete, correct or up to date, and it may not reflect the most current legal developments. Prior results do not guarantee a similar outcome. Do not send any confidential information to Cooley, as we do not have any duty to keep any information you provide to us confidential. This content may be considered Attorney Advertising and is subject to our legal notices.