
Dutch anti-fraud system violates human rights, court rules

By Don Jacobson
The Dutch court ruled that the Netherlands' risk assessment system violates a European treaty that safeguards privacy rights. File Photo by Miroslav110/Shutterstock/UPI

Feb. 5 (UPI) -- A Dutch court ruled Wednesday that a government system that uses artificial intelligence to identify potential welfare fraudsters is illegal because it violates laws that shield human rights and privacy.

A court in The Hague found that the Dutch government's "SyRI" risk-profiling program, used by authorities to predict which citizens are likely to commit some form of housing or welfare fraud, amounts to a surveillance system that violates the European Convention on Human Rights.


The program uses an algorithm to predict a citizen's likelihood of committing fraud by tapping vast pools of personal data held by the Dutch government, such as employment records, personal debt reports and education and housing histories -- information that was previously kept in separate databases.

Privacy groups, the Netherlands' largest trade union federation and several Dutch citizens sued the government after SyRI was introduced in 2014 in a package of welfare reforms. They argued the system violates human rights because it was selectively used in predominantly low-income neighborhoods and created a "surveillance regime" that disproportionately targeted poorer citizens.


The suit also alleged secrecy and a lack of oversight in the system's operation. The Dutch court agreed in its decision, ruling that SyRI violates the privacy protections of the European Convention on Human Rights.


"This decision sets a strong legal precedent for other courts to follow," said Philip Alston, United Nations special rapporteur on extreme poverty and human rights. "It is one of the first times a court anywhere has stopped the use of digital technologies by welfare authorities on human rights grounds."

Alston said last year that the world is moving further into a "digital welfare dystopia" in which governments use artificial intelligence to cut welfare spending, deploy intrusive surveillance systems and generate profits for private corporate interests.


Opponents hailed Wednesday's ruling as a groundbreaking decision that could slow a global trend of governments using AI to "spy on the poor."

