Bias found in AI system used to detect UK benefits fraud

In an ‘Exclusive’ report published on 6 December, the Guardian reports that

An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality …

In other words,

Age, disability, marital status and nationality influence decisions to investigate claims.

The report goes on:

“An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.”

Only this summer, the DWP (Department for Work and Pensions) claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers” because the final decision on whether a person gets a welfare payment is still made by a human.

No fairness analysis has yet been undertaken regarding potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status.

The Just Algorithms Action Group (JAAG) was established by people concerned at the unjust treatment of some people by automated decision-making systems that use algorithms.

JAAG is closely examining the Government’s Data Use and Access Bill, which contains clauses it fears could further weaken individuals’ protection against such unfair decisions.

The full Guardian report is here.
