The Donald Trump administration has said it will require short-term visitors to submit records of their social media activity from the past five years when applying for the Electronic System for Travel Authorization, or ESTA. The move increases the likelihood that artificial intelligence-based analysis programs will be used in the screening process.
U.S. Customs and Border Protection, or CBP, under the Department of Homeland Security, announced on Dec. 10 that it would tighten ESTA application procedures. Under the revised policy, applicants will be required to provide information on social media accounts they have used over the past five years. CBP said it may also require, when deemed necessary, the submission of phone numbers used during the same period, email addresses from the past 10 years, personal information on family members, and biometric data, including facial images, fingerprints, DNA and iris information.
Cases have already emerged in which visas for foreign students were revoked after their social media records were analyzed using artificial intelligence. The developments followed the U.S. State Department’s decision in early March to formalize the use of an AI-based program known as “Catch and Revoke.” The program conducts comprehensive reviews of international students’ social media activity to identify expressions of sympathy for the Palestinian militant group Hamas or indications of antisemitic sentiment. According to the State Department, about 300 visas were revoked through the process in roughly one month.
Experts say it is not realistic to review vast amounts of personal data on an individual basis and warn that AI-based programs could be expanded beyond screening foreign students to broader entry inspections. Kim Myung Joo, a professor of information security at Seoul Women’s University, said the trend reflects the Trump administration’s emphasis on national security. He added that it is difficult to rule out the possibility that AI-based programs currently applied to international students could eventually be extended to short-term visitors.
As a result, calls are growing for stronger procedural safeguards in the visa screening process. Choi Kyung Jin, a professor of law at Gachon University, said problems can arise if authorities rely too heavily on automated AI judgments. He said that because a visa denial, once made, is effectively irreversible, procedures must be guaranteed to explain the reasons for denial, allow clarification of potential errors and provide an avenue to request reconsideration.
The European Union has moved to ensure that AI decisions are not left entirely to machines. Under the Artificial Intelligence Act, AI systems that significantly affect fundamental rights, including those used in biometric identification, migration, judicial processes and hiring, are classified as high-risk. The law requires clear human oversight responsibilities, including the designation of supervisors and mandatory logging of decisions.
Choi Hyo Jung hyoehyoe22@donga.com