Denmark’s Welfare Algorithms Face Backlash for Discrimination and Surveillance Risks
Welfare recipients in Denmark face heightened monitoring from algorithms designed to detect fraud, according to a recent report by Amnesty International. Hellen Mukiri-Smith, an AI researcher and co-author of the report, warned that this mass surveillance risks harming the very people the system is meant to help.
Amnesty reviewed four of the roughly 60 algorithms used by Udbetaling Danmark, the agency that administers social benefits in Denmark. These algorithms analyze personal data drawn from public databases, as permitted by Danish law, and cover a range of benefits, including pensions, parental leave, sick leave, and student grants.
The data examined includes sensitive information such as residence, travel history, citizenship, and income. Amnesty noted that such data can serve as a proxy for a person's race, ethnicity, or sexual orientation.
One notable algorithm, called "Model Abroad," uses beneficiaries' nationality to flag people who may have moved overseas while continuing to receive benefits. David Nolan, another co-author of the report, argued that using citizenship in this way violates the right to non-discrimination. According to the report, 90% of the cases opened on the basis of this algorithm turn out not to involve fraud.
Amnesty recommends greater transparency from Danish authorities and calls for audits of the algorithms. The organization also urges a ban on the use of citizenship and nationality data, which it says can lead to discrimination.
The reliance on digital services may also exclude vulnerable groups, such as the elderly and some foreign residents, from accessing benefits. Rights groups have criticized the use of AI and algorithms in social services across Western countries.
In October, 15 organizations, including Amnesty, filed a complaint with French courts over an algorithm used by CNAF, the French social benefits agency, to detect improper payments.
