Retailers are increasingly turning to facial recognition technology (FRT) in response to rising levels of theft and abuse of staff. High‑profile trials, such as Asda’s use of live facial recognition in selected stores, have brought the debate into sharp focus.

The discussion is no longer confined to whether the technology is effective, say Foot Anstey’s Nathan Peacey and Kristina Holt.

Now it extends to the practicalities – and to questions of legality, privacy and public trust – particularly where systems operate autonomously or influence decisions in real time.

Meeting high thresholds for deployment

From a legal perspective, the use of FRT is already heavily regulated. In the UK, facial images capable of uniquely identifying individuals constitute biometric data and are treated as special category personal data under UK GDPR.

This creates a high threshold for lawful deployment – one that becomes more exacting where AI‑driven systems identify individuals or generate alerts with limited human intervention.

“Retailers must be able to establish a lawful basis for processing, meet additional conditions for special category data, and comply with core principles such as necessity, proportionality, transparency and accountability.”

Kristina Holt, Managing Associate, Foot Anstey LLP

Where FRT involves automated decision‑making, these principles require closer scrutiny of how systems operate, what triggers alerts, and how outcomes are reviewed or challenged.

Under the EU AI Act, certain biometric identification systems are classified as high‑risk, triggering requirements around risk management, data quality, human oversight and ongoing monitoring.

Although the UK has adopted a more principles‑based approach, this still emphasises accountability, explainability and contestability. Retailers deploying autonomous security systems should therefore expect regulators to test not just compliance on paper, but control in practice.

Privacy & cybersecurity sit at the heart of the debate

Biometric data is uniquely sensitive: unlike passwords, it cannot be reset if compromised. Retailers must therefore adopt robust technical and organisational safeguards.

Retailers often rely on third‑party vendors to supply and manage FRT systems, creating supply‑chain exposure. Contracts and due diligence should address not only security and breach notification, but transparency around how algorithms function, how models are updated, and whether data is reused.

Autonomous, real‑time systems present specific challenges. Automated matching may increase efficiency, but it magnifies the consequences of error. A false positive generated at speed can lead to confrontation before human judgment is exercised.

Effective governance requires human‑in‑the‑loop controls, clear escalation pathways, auditability, and the ability to pause or override automated outputs.

A roadmap for responsible adoption

Responsible deployment of FRT is not solely a legal exercise – it is also a question of ethics, governance and culture.

“To make this vital tool work, retailers must consider how autonomous technologies affect customers and frontline staff, not only those suspected of wrongdoing.”

Nathan Peacey, Partner, Foot Anstey LLP

FRT should be deployed for defined security objectives and not allowed to expand into broader monitoring or profiling. Transparency matters, particularly where individuals may be subject to automated analysis without direct interaction. Clear signage and accessible privacy information help mitigate perceptions of covert surveillance.

Staff training is equally vital. Employees need to understand how systems work, their limitations, and when human judgment should override automated alerts. Over‑reliance on technology risks misidentification, escalation and harm – undermining safety and trust.

A question of effectiveness

Debate around recent FRT trials reflects a pragmatic question: does the technology reduce crime?

Early indications suggest it may deter repeat offenders, though there remains a risk of displacement rather than prevention. And where automated outputs are treated as determinative rather than advisory, the consequences of error are heightened.

Evidence suggests FRT works best as part of a broader strategy incorporating human judgment, staff training, store design and engagement with law enforcement.

More regulatory scrutiny & experimentation

The future of facial recognition is likely to involve closer regulatory scrutiny alongside continued experimentation.

Retailers see clear potential value, but long‑term adoption will depend on whether autonomous systems can be shown to operate lawfully, proportionately and accountably.

Emerging AI governance frameworks provide that technology must remain subject to human control. Retailers that treat FRT as a technical fix may struggle; those that recognise it as a trust‑sensitive technology requiring ongoing oversight, transparency and restraint are better placed to succeed.

Kristina Holt (Managing Associate) and Nathan Peacey (Partner) are from Foot Anstey LLP.

Foot Anstey is a UK law firm which specialises in sectors including Retail & Consumer.
