In November, the Internal Revenue Service launched an online security system that uses face recognition to confirm a person’s identity. Public attention to the project last week triggered an outcry. The ACLU called the project “deeply troubling,” saying face recognition “has been shown to be less accurate for people of color.”
Some IRS functions, like scheduling payments—but not filing taxes—now require first-time users to verify their identity with Virginia startup ID.me, which also works with 27 state employment agencies and the Department of Veterans Affairs. The process involves photographing a government-issued ID and uploading a video selfie so algorithms can match face and document.
ID.me has said it uses algorithms ranked highly in US government tests of face recognition and offers alternatives for people who can’t get through its automated checks. But the company’s CEO stoked distrust Wednesday when he said the company uses face recognition more widely than previously disclosed.
One certainty amid the dispute: Submitting selfies to access online government services is set to stay—and spread. It’s required by US federal security guidelines from 2017 that aim to prevent fraud.
“Many elements of ID.me’s enrollment process are effectively set in stone,” says Cameron D'Ambrosi, managing director with Liminal, a research firm that helps companies with digital identity projects.
More than 20 federal agencies, including the Social Security Administration, use a digital identity system called Login.gov run by the General Services Administration. For some uses, it too asks for selfies to check against photos of a person’s ID, and is built on services from LexisNexis. The GSA’s administrator said last year that 30 million citizens have Login.gov accounts and that it expects the number to grow significantly as more agencies adopt the system.
“ID.me is supplying something many governments ask for and require companies to do,” says Elizabeth Goodman, who previously worked on Login.gov and is now senior director of design at federal contractor A1M Solutions. Countries including the UK, New Zealand, and Denmark use processes similar to ID.me’s to establish digital identities used to access government services. Many international security standards are broadly in line with those of the US, which are written by the National Institute of Standards and Technology.
Goodman says that such programs need to provide offline options such as visiting a post office for people unable or unwilling to use phone apps or internet services. Making any digital service universally accessible in a large and varied nation like the US is a challenge. An agency like the IRS has to serve a user base similar in scale to that of a large tech company, but unlike a hot startup must also include society’s least connected. “Usable security is really, really hard,” Goodman says. The US government’s track record on digital inclusion is mixed. ID.me says it has 650 locations where people can complete enrollment in person—a small number in a big country.
Services like Login.gov and ID.me are underpinned by NIST Special Publication 800-63-3 from 2017, an overdue overhaul of guidelines for passwords and other digital identity protections, updated for an era of sophisticated computer crime.
That document recommends encouraging people to use long, memorable passwords rather than forcing frequent changes or requiring special characters. It also lays down tougher ground rules for providing remote access to systems like those of the IRS and many other agencies holding sensitive data.
In person, government departments generally ask for a photo ID like a driver’s license. Online or over the phone, many agencies have previously verified identity by asking for information that could be checked against a person’s government file or credit report. But harvesting the personal data needed to spoof that kind of check has become easier in the era of social networks and mass data breaches.
NIST’s 2017 standard says that access to systems that can leak sensitive data or harm public programs should require verifying a person’s identity by comparing them to a photo—either remotely or in person—or using biometrics such as a fingerprint scanner. It says that a remote check can be done either by video with a trained agent, or using software that checks for an ID’s authenticity and the “liveness” of a person’s photo or video.
ID.me was well positioned to take advantage of the new standards, which federal agencies must comply with. The company was founded in 2010 as a deals website for veterans and active military and developed a system for checking military IDs used by the Department of Veterans Affairs. It won millions of dollars in federal grants to explore new approaches to digital identity that helped inform the 2017 standards and became the first company accredited as compliant with them. In 2019, ID.me signed a contract with the VA that has so far paid out more than $30 million.
During the pandemic ID.me has won a surge of new business—and scrutiny. States hired ID.me to screen claims for Covid-19 aid that overwhelmed many employment departments. But nonprofits and lawmakers have complained about its use of face recognition and said some vulnerable citizens can’t get through the company’s checks. California’s Employment Development Department said that ID.me blocked more than 350,000 fraudulent claims in the last three months of 2020. But the state auditor said an estimated 20 percent of legitimate claimants were unable to verify their identities with ID.me.
Caitlin Seeley George, director of campaigns and operations with nonprofit Fight for the Future, says ID.me uses the specter of fraud to sell technology that locks out vulnerable people and creates a stockpile of highly sensitive data that itself will be targeted by criminals. “A tool that creates more problems can’t be hailed as a solution,” she says. “Facial recognition is notorious for misidentifying Black and brown faces, gender-nonconforming people, women, and children.”
In an interview this week, ID.me CEO Blake Hall claimed that his company in fact widens access because its remote ID checking works for people without credit histories, who often fail conventional checks. He said many problems with access to pandemic aid were caused by state agencies failing to provide adequate in-person services, and that ID.me’s in-person locations provide a backstop.
Some of Hall’s claims have proven slippery. Bloomberg News questioned his estimate that $400 billion of federal pandemic relief was stolen; Hall says a detailed report on ID.me’s experience fighting unemployment fraud is coming soon. On Wednesday, he reversed his earlier statements that ID.me used face recognition only to compare a person’s face to the ID they provided.
Hall told WIRED that ID.me retains images and videos uploaded during its verification process only to protect accounts from being taken over by fraudsters. He said the company used face recognition technology from Paravision, which is among the most accurate ever tested by NIST—although algorithms can perform very differently depending on how they are deployed. A 2019 NIST report on demographic bias in face recognition concluded that while many algorithms show different performance for different demographics, the most accurate can be equitable.
In his call with WIRED, Hall didn’t mention a second face recognition system that he revealed in a LinkedIn post this week. To prevent fraud, it checks whether a new applicant’s face matches one already in ID.me’s collection. Company spokesperson Madison Pappas says that when the system finds matches, users are referred to a video chat session for help. This function is powered by face recognition technology from Amazon, which has not been submitted to NIST testing and has been accused of showing racial bias by the ACLU. Amazon stopped selling its service to law enforcement in 2020, citing the need for federal regulation.
Jay Stanley, senior policy analyst with the ACLU’s Speech, Privacy, and Technology Project, says that Hall’s surprise disclosure illustrates the hazards of governments outsourcing critical functions to private corporations, which are less accountable and transparent to citizens.
What’s a government agency to do, given the requirements of today’s security standards? Stanley says online services should not be offered if there aren’t appropriate safeguards and accommodations. “Rushing to put things online before the security infrastructure is there can’t become a rationale for creating fairness and equity problems,” he says.