TL;DR:
- DHS updates AI use case inventory, highlighting facial recognition and machine learning tools.
- CBP deploys the Traveler Verification Service, which uses facial comparison for identity verification.
- TSA adopts the same technology to streamline PreCheck procedures.
- FEMA employs machine learning and machine vision for geospatial damage assessments.
- CBP uses AI for port of entry risk assessment decisions.
- Concerns have been raised over delays in publicly listing AI implementations that were already widely known.
- DHS emphasizes the cautious evaluation process for public disclosure.
- Federal agencies are required to disclose AI use in annual inventories, but compliance has been inconsistent.
- Ben Winters expresses concerns about transparency and accountability.
- There is no clear process for agencies to modify their inventories.
- DHS plans to reveal more about its generative AI projects soon.
Main AI News:
The Department of Homeland Security (DHS) has recently updated its AI use case inventory, shedding light on its adoption of advanced technologies. Among the featured applications are facial comparison and machine learning tools, both playing integral roles within the department’s operations.
U.S. Customs and Border Protection (CBP) leads the charge with its Traveler Verification Service, a robust tool employing facial comparison technology to authenticate travelers’ identities. Additionally, the Transportation Security Administration (TSA) uses the same technology to streamline its PreCheck process, enhancing both security and efficiency.
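Neither CBP nor TSA has published implementation details, but one-to-one facial comparison systems of this kind typically reduce each face image to an embedding vector and then threshold a similarity score between the live capture and the document photo. The Python sketch below is purely illustrative: the embedding size, threshold value, and function names are assumptions, not details of the Traveler Verification Service.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(live_capture: np.ndarray,
                   document_photo: np.ndarray,
                   threshold: float = 0.8) -> bool:
    """Declare a match when similarity clears an operator-chosen threshold.

    The threshold trades false matches against false non-matches; real
    deployments tune it against measured error rates. The 0.8 here is
    an arbitrary placeholder.
    """
    return cosine_similarity(live_capture, document_photo) >= threshold

# Demo with random stand-in embeddings; a real system would produce these
# from a trained face-recognition model, which is out of scope here.
rng = np.random.default_rng(0)
gallery = rng.normal(size=512)                      # passport-photo embedding
probe = gallery + rng.normal(scale=0.1, size=512)   # live camera capture
print(is_same_person(probe, gallery))               # True: embeddings align
```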
The Federal Emergency Management Agency (FEMA) makes a substantial contribution with its geospatial damage assessments, harnessing the power of machine learning and machine vision to evaluate disaster-induced destruction. Meanwhile, CBP relies on AI to inform risk assessment decisions at ports of entry.
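FEMA has not published its pipeline in this detail, but a common machine-vision approach to geospatial damage assessment is change detection between co-registered pre- and post-event imagery, scored per tile. The following sketch illustrates that general technique only; the tile size and scoring rule are assumptions, not FEMA’s method.

```python
import numpy as np

def damage_score(pre: np.ndarray, post: np.ndarray) -> float:
    """Fraction of pixels whose intensity changed substantially."""
    changed = np.abs(post.astype(float) - pre.astype(float)) > 40
    return float(changed.mean())

def assess_tiles(pre_img: np.ndarray, post_img: np.ndarray, tile: int = 64):
    """Yield (row, col, score) for each tile of a co-registered image pair."""
    h, w = pre_img.shape
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            yield r, c, damage_score(pre_img[r:r + tile, c:c + tile],
                                     post_img[r:r + tile, c:c + tile])

# Synthetic demo: a 128x128 grayscale scene where one quadrant changes
# drastically between the "before" and "after" images.
rng = np.random.default_rng(1)
pre = rng.integers(0, 255, size=(128, 128), dtype=np.uint8)
post = pre.copy()
post[:64, :64] = rng.integers(0, 255, size=(64, 64), dtype=np.uint8)
flagged = [(r, c, s) for r, c, s in assess_tiles(pre, post) if s > 0.5]
print(flagged)  # tiles most likely to contain damage
```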
It’s worth noting that while these advancements have just made their official debut in the DHS inventory, they’ve been publicly known for some time. This delay in recognition raises concerns about the inventory’s ability to reflect the full spectrum of AI applications already in operation.
When questioned about the timing of these additions, a DHS spokesperson cited the agency’s meticulous process for evaluating public disclosure. “Due to DHS’s sensitive law enforcement and national security missions, we have a rigorous internal process for evaluating whether certain sensitive AI Use Cases are safe to share externally. These use cases have recently been cleared for sharing externally,” the spokesperson explained.
Apart from the Department of Defense, intelligence agencies, and regulatory bodies, federal agencies are required by a Trump-era executive order to publicly disclose their AI implementations in an annual inventory. In practice, however, the inventories have been inconsistent in their categories, formats, and timing, and researchers and advocates have flagged the apparent omission of publicly known AI uses.
For instance, the Traveler Verification Service’s facial comparison technology has been featured on TSA’s website since early 2021 and on CBP’s website since 2019. According to a Government Accountability Office report, the Traveler Verification Service was developed and implemented in 2017. Similarly, AI-powered geospatial damage assessments have been described on FEMA’s website since August 2022.
The spokesperson added that DHS Chief Information Officer and Chief AI Officer Eric Hysen had testified on CBP’s port of entry risk assessment use case during a September hearing before the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation.
Ben Winters, a senior counsel at the Electronic Privacy Information Center, who also leads its AI and Human Rights Project, expressed concerns about the timeliness and completeness of these disclosures. “AI use case inventories are only as valuable as compliance with them is. It illustrates why the government does not have the adequate oversight, transparency, and accountability mechanisms in place to continue using or purchasing sensitive AI tools at this time,” Winters said.
Winters also emphasized the importance of transparency and accountability in the use of AI tools, expressing hope that forthcoming Office of Management and Budget guidance will not broadly exempt “national security” tools.
Presently, there is no established process for agencies to add or remove items from their inventories. The Office of Management and Budget has previously indicated that agencies are responsible for maintaining the accuracy of their inventories.
In a previous update in August, DHS had introduced Immigration and Customs Enforcement’s use of facial recognition technology, along with CBP’s technology for identifying “proof of life” and preventing fraud on an agency app. Notably, a reference to a TSA system described as an algorithm for addressing COVID-19 risks at airports was removed.
The DHS spokesperson also hinted at forthcoming developments, with the agency actively exploring pilot programs involving generative AI technology. More details on this initiative are expected to be shared in the coming weeks.
Conclusion:
The Department of Homeland Security’s adoption of facial recognition and machine learning technologies underscores the growing significance of AI in the public sector and the broader market. However, concerns about the transparency and timeliness of its disclosures point to the need for standardized oversight mechanisms. The market should anticipate increased scrutiny and regulation of AI deployments in sensitive areas such as national security and law enforcement.