Ensuring the lawfulness of automated facial recognition surveillance in the UK

by Martin Kwan | Sep 3, 2020

About Martin Kwan

Martin Kwan is a legal researcher currently focusing on public, human rights and electoral laws.

Citations


Martin Kwan, “Ensuring the lawfulness of automated facial recognition surveillance in the UK” (OxHRH Blog, September 2020) <https://ohrh.law.ox.ac.uk/ensuring-the-lawfulness-of-automated-facial-recognition-surveillance-in-the-uk//> [Date of access].

In R (Bridges) v South Wales Police, the England and Wales Court of Appeal reviewed the lawfulness of the use of live automated facial recognition technology (‘AFR’) by the South Wales Police Force. CCTV cameras capture images of the public, which are then compared with digital images of persons on a watchlist.
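For readers unfamiliar with the mechanics, the comparison step can be pictured in a few lines of code. The sketch below is purely illustrative: it assumes a generic embedding-based matcher with fabricated data, and the embedding size and threshold value are invented for the example, not a description of the system actually deployed in Bridges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated "embeddings": real AFR systems map each face image to a
# numeric feature vector via a proprietary model; random vectors are
# used here purely to show the comparison logic.
watchlist = rng.normal(size=(100, 128))   # 100 watchlist entries
captured = rng.normal(size=128)           # one face from the CCTV feed

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two face embeddings (higher = more alike)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([cosine_similarity(captured, w) for w in watchlist])
THRESHOLD = 0.6                           # arbitrary, illustrative value

if scores.max() >= THRESHOLD:
    print(f"Candidate match: watchlist entry {scores.argmax()}")
else:
    print("No match: the captured biometric data can be discarded")
```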

The use of AFR was held unlawful because, firstly, the relevant laws and policies failed to normatively specify (1) who can be placed on the watchlist and (2) where the AFR can be deployed. Instead, these matters were left to the discretion of individual police officers ([91]). The absence of guidelines on the exercise of that discretion means that the current legal framework lacks the ‘quality of law’ required for the purposes of Art. 8(2) of the European Convention on Human Rights (ECHR) ([90], [94]).

Secondly, the above failure also means that there was a failure to address the risk to the rights and freedoms of data subjects, as required by s. 64(3)(b) and (c) of the Data Protection Act 2018 ([153]).

Thirdly, public authorities are required by s. 149(1) Equality Act 2010 to have due regard to the need to eliminate discrimination and advance equality. However, the police failed to ‘satisfy themselves, either directly or by way of independent verification, that the software program in this case does not have an unacceptable bias on grounds of race or sex’ ([199]).

Important check against bias/discrimination

The Court’s expectation of ‘independent’ verification is welcome and very important: it means that even if the supplier confirms the absence of bias, the police must still verify this for themselves. This ensures that a public authority ‘does not inadvertently overlook information which it should take into account’ to advance equality and prevent discrimination on grounds such as sex and race ([182]).

Arguably, this decision also sends an important message to AFR suppliers: the legal duty on the police to check for bias would indirectly incentivize suppliers to ensure that their products are compliant.
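To make the idea of independent verification more concrete, a bias audit might compare false match rates across demographic groups at the system’s operating threshold. The sketch below uses fabricated scores and invented group labels; a real audit would run the vendor’s matcher over an independently assembled, demographically labelled test set.

```python
import numpy as np

def false_match_rate(impostor_scores: np.ndarray, threshold: float) -> float:
    """Share of different-person ('impostor') comparisons wrongly
    scored as matches at the given threshold."""
    return float(np.mean(impostor_scores >= threshold))

# Fabricated impostor scores per group, purely illustrative: a real
# audit would derive these from the actual matcher and labelled data.
rng = np.random.default_rng(1)
impostor_scores = {
    "group_A": rng.normal(0.30, 0.10, 10_000),
    "group_B": rng.normal(0.38, 0.10, 10_000),  # simulated skew
}

THRESHOLD = 0.6
for group, scores in impostor_scores.items():
    print(f"{group}: false match rate {false_match_rate(scores, THRESHOLD):.4%}")

# A materially higher false match rate for one group at the operating
# threshold is the kind of unacceptable bias the Court expects the
# police to check for, rather than taking the supplier's word for it.
```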

‘Who’ and ‘where’ norms unlikely to improve privacy

However, whilst requiring guidelines/norms on ‘who’ and ‘where’ to deploy AFR is sensible and will improve transparency, these procedural safeguards are unlikely to dramatically improve respect for the right to privacy.

First, given that the AFR scheme was already deployed ‘in an overt manner’ ([1]), there is no room for a dramatic improvement in privacy through a switch from covert to open use.

Second, a ‘where’ norm is, in reality, most likely to be formulated based on the effectiveness of a location in combating crime. The Court arguably suggested the same: ‘it is not said, for example, that the location must be one at which it is thought on reasonable grounds that people on the watchlist will be present’ ([130], emphasis added). For one thing, such a norm only improves privacy if the police irrationally choose ineffective places; even without it, an officer would presumably choose effective places where suspects may appear. For another, when suspects can hide and travel anywhere, and on occasions where the police have no clue (i.e. no reasonable grounds for a specific location, or no location that is particularly effective), the ‘where’ norm may have the counterproductive effect of precluding the use of AFR as a means of searching.

Third, adding a ‘who’ norm providing, for example, that only suspects can be included on the watchlist ([124]-[125]) will not improve privacy dramatically. Again, it only guards against irrational police who would arbitrarily include non-suspects on the watchlist in the absence of such a norm. Any rule or standard of care requiring the police to act proportionately, rationally and diligently when carrying out their duties would provide the same safeguard.

Fourth, even without a ‘who’ norm, the proportionality of any inclusion can only be comprehensively and accurately checked by reference to the actual watchlist, and by privacy/data protection impact assessments which record the categories of persons actually included ([13], [123]). In Bridges, the Court found it wrong for the police to have included the over-broad category of ‘persons where intelligence is required’ ([124], [152]). A pre-deployment norm therefore allows a pre-emptive check on whether the police will target an impermissible (e.g. over-broad) category of persons. However, it is vital not to develop a false sense of security, because mistaken, misjudged or abusive inclusion of irrelevant persons may still occur, contrary to the norms.
