State Sanctioned Automated Decision Making: Is Public Sector Culture Violating Privacy Rights? (Part 1)

Emma Cooper, LLM Information Rights Law and Practice

Lead Consultant at Kafico Ltd

Chapters

1. Introduction

2. State of Technology

3. The Risks of State Sanctioned ADM

4. Public Sector and Article 8 Right to Privacy

4.1 Informational Privacy

4.2 Identity Management

4.3 Self-Determination and Decisional Privacy

4.4 Intrusion and Family Life

4.5 Algorithmic Bias

1. Introduction

This paper will explore the deployment of Automated Decision-Making systems (ADM) by public authorities, predominantly in the UK, within the context of the European Convention on Human Rights Article 8: The Right to Privacy[1].

Having explained profiling, algorithms, automated decision making and machine learning, the paper will provide a brief historical context for the desire of the state, first to profile its citizens and then to automate decisions made about them.

The ubiquitous nature of this technology across the public sector in Europe will be illustrated, as will the vulnerability of the citizen, particularly within the context of the power dynamic between the citizen and a paternalistic democracy that deploys automated decision-making or decision support systems authorised by law.

Whilst there is prolific debate about the legal and regulatory framework for the ethical, lawful and consistent deployment and use of ADM systems, this paper will focus on data protection and equalities legislation in the UK and the potential for discretion rather than justification, contrary to the Rule of Law.

This work does not seek to propose any particular remedy in the form of regulations, codes or impact assessments. Rather, it serves to highlight the cultural predisposition of the public sector to ‘indifference’, such that any proposed framework might prove ineffectual without cultural change or imposed accountability.

2. State of Technology

The supposed ‘insatiable need of the modern state for the registration of its citizens’[2] has its roots in ancient history, most notably in the form of the Domesday Book[3] and before that, the Roman Census. In these examples, authorities sought to survey the population to create a record of citizens and assets for the purposes of tax collection[4].

Large data sets of this kind eventually lend themselves to ‘profiling’, and it is on such data sets that modern profiling systems rely[5]. The data collections are often formed from multiple contributors[6], such as welfare, health and social care services, and can be drawn from multiple sources such as personal devices and surveillance systems[7]. The data can then be ‘mined’ to establish ‘categories’ that are linked with particular ‘attributes’, resulting in what is referred to as a ‘group profile’[8].

As an abstract example, a profiler might mine a bank database to sort the data into gender categories and then identify financial debt as an attribute. This could create group profiles for men and women with regards to their level of financial debt.
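
Purely as illustration, the sketch below (in Python, with invented records and hypothetical field names such as "gender" and "debt_gbp") shows the mechanics described above: sorting a data set into categories and attaching an attribute to each category to form a rudimentary ‘group profile’.

```python
# A minimal sketch of the abstract profiling example above, using invented data.
import pandas as pd

# Hypothetical bank records; no real data set is implied.
records = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "gender": ["F", "M", "F", "M", "F", "M"],
    "debt_gbp": [12000, 4000, 9500, 7000, 15000, 3000],
})

# 'Mine' the data: sort into gender categories and summarise the debt attribute,
# producing a simple group profile per category.
group_profile = records.groupby("gender")["debt_gbp"].agg(["mean", "median", "count"])
print(group_profile)
```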

Profiling can give rise to the discovery of patterns or correlations in the population. This ‘social sorting’[9] allows observers to make judgements or predictions about an individual’s personality, preferences and lifestyle[10]. The observer may conclude from the correlations detected, for example, that women generally tend to have higher levels of debt than men.

Since personality is considered a key driver behind people’s behaviours[11], profiling often underpins decision-making, for example when credit scoring or identifying security risks[12].

The use of algorithms reaches as far back as the Babylonian tablets[13]; an algorithm is defined as ‘a set of mathematical instructions or rules that … will help to calculate an answer to a problem’[14]. Applying these instructions to large data sets allows data to be mined chronologically and consistently[15]. In 1936, the mathematician Alan Turing’s description of the computing machine laid the foundations for the hardware platform that would eventually facilitate the automation of both profiling and algorithmic decision making[16].

Increasingly, artificial intelligence (AI) and machine learning are employed in the development and application of algorithms[17]. The term AI generally refers to a range of technologies: ‘software, algorithms, processes, and robots that – contrary to machines only acting on human command – are able to acquire analytical capabilities and to perform tasks’[18].

In 1959, Arthur Samuel’s journal article illustrated ‘the programming of a digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning’[19]. Modern software programs are now ‘trained’ to identify unforeseen correlations in vast, aggregated data sets[20], and statistical approaches can ‘inform how machine learning systems deal with probabilities or uncertainty in decision-making’[21].

For example, a healthcare algorithmic system might be ‘trained’ to produce a ‘desired outcome’, such as categorising patients with blood pressure exceeding a certain value as ‘high risk’ and those below it as ‘low risk’[22]. Here, the system learns how the ‘training data’[23] provided by the designer is structured and may learn correlations between those identified as ‘high risk’ and their age or existing diagnoses[24].
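
The following minimal sketch, using invented patient data and the scikit-learn library, illustrates the kind of ‘training’ described above: records are labelled against the designer’s chosen threshold and a simple model is then fitted, allowing it to pick up correlated features such as age. It is a toy example under stated assumptions, not a depiction of any real risk stratification product.

```python
# A minimal sketch of threshold-labelled training data and a simple learned model.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training examples: [systolic_blood_pressure, age]
X = [[118, 34], [150, 71], [165, 68], [122, 29], [141, 63], [110, 25]]

THRESHOLD = 140  # the designer's cut-off defining the 'desired outcome'
y = ["high risk" if bp > THRESHOLD else "low risk" for bp, _ in X]

# The fitted model may also learn features correlated with the label (here, age).
model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[155, 66]]))  # classify a new, unseen patient
```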

Learning ability clearly expands the range and ‘complexity of functions’ that automated systems can fulfil[25]. This paper heeds the warning of AlgorithmWatch that the notion of ‘human-like autonomy and intentionality’ in machine learning and solely automated decision making (where the decision is ‘given effect without human intervention’[26]) is a distraction. It can divert attention from scrutinising relatively simple systems, including those that retain human intervention, which also have the propensity to bear notably on citizens’ fundamental rights[27] and yet would not usually be classed as AI[28].

The UK Information Commissioner’s Office (ICO) offers the example of an automated pass or fail attributed by a programmed system that marks exam papers, demonstrating how innocuous ADM can be. Such a system might comprise a basic human-created algorithm applied to an existing data set[29]. Yet, if not operating correctly, the system could impact significantly on the lives of those subject to the resulting decisions, perhaps by affecting their opportunity to study at their university of choice or to enter a particular career.
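
A minimal sketch of such a system might look like the following; the pass mark, candidate names and scores are all invented, and the point is simply that a trivially simple, human-written rule can still produce consequential automated decisions.

```python
# A minimal sketch of a simple, human-created marking rule applied to existing data.
PASS_MARK = 50  # hypothetical pass mark; a wrongly set value would mis-decide borderline cases

def mark_paper(score: int) -> str:
    """Return an automated pass/fail decision for a single exam score."""
    return "pass" if score >= PASS_MARK else "fail"

scores = {"candidate_a": 72, "candidate_b": 49, "candidate_c": 50}
print({candidate: mark_paper(score) for candidate, score in scores.items()})
```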

AlgorithmWatch helpfully distinguishes these ‘simple rule-based analysis systems’ from the sprawling topic of Artificial Intelligence, defining them as ‘Algorithmically controlled, automated decision-making or decision support systems (ADM)’[30]. Here, decisions are partially or wholly delegated to a third party, who may then use ‘automatically executed decision-making models to perform an action’[31]. It is this delegation of the execution involved in automated decision making that draws the attention of analysts, civil society groups and regulators[32]. The chapters below will explore how this partial or complete delegation of decision making has the potential to dilute accountability and impact on the fundamental rights of those subject to its decisions.

Conversely, an ADM system could involve multi-layered, complex AI with machine learning. The system may demonstrate an element of autonomy, responding to the environment and taking actions[33]. In their joint opinion, ‘In the Matter of Automated Data Processing in Government Decision Making’ (Cloisters)[34], the authors use Durham Constabulary’s Harm Assessment Risk Tool (HART) as an exemplar of such a system. It applies machine learning to profile citizens according to their ‘risk’ of committing serious crime and supports custody officers in deciding whether an individual is permitted access to an out-of-court disposal programme.

HART applies what is described as an ‘eye wateringly complex’[35] algorithm, with some 4.2 million decision points underpinning its machine learning. Burrell explains that ‘While datasets may be extremely large but possible to comprehend and code may be written with clarity, the interplay between the two in the mechanism of the algorithm is what yields the complexity (and thus opacity)’[36]. This ‘black box’ environment can make it impossible to understand the detail surrounding the output of an ADM system[37].

In 2017, Lilian Edwards and Michael Veale suggested, perhaps rather conservatively, that algorithmic profiling and decision making are ‘increasingly familiar, even vital, in private, public and domestic sectors of life’[38]. Widespread digital transformation programmes in the public sector seek to use technology to shape new services or enhance existing ones[39], covering areas such as healthcare, education, transport, welfare and justice[40].

Reflecting on the scale and pace of deployment, by 2019 the UK Secretary of State for Business, Energy and Industrial Strategy put forward that ‘Technological breakthroughs in areas from artificial intelligence to biotechnologies … with the power to reshape almost every sector in every country’[41] were heralding the ‘Fourth Industrial Revolution’.

Given the complex and ubiquitous nature of the technology, it is little wonder that a superfluity of commercial interest groups[42], civil society organisations[43] and think tanks[44] are lobbying to shape the policy landscape. In Europe alone, a Declaration of Cooperation on AI[45], Commission Communications[46], High-Level Expert Groups[47] and specialist group opinions[48] all ostensibly seek to serve the public interest, cultivate business competition and inspire innovation. The EU’s algo:aware project, for example, seeks to analyse ‘opportunities and challenges emerging where algorithmic decisions have a significant bearing on citizens, where they produce societal or economic effects which need public attention’[49].

Responding effectively to this multi-stakeholder attention and scrutiny is at the core of much international policy discourse, with stakeholders aiming to embed ethical principles and explainability into the design and deployment of AI-enabled systems[50]. The EU’s High-Level Expert Group on AI highlights the need for a defined approach to achieving explainability[51] and, in the United States, the Defense Advanced Research Projects Agency is investing in the development of explainable AI[52].

In the UK, the House of Lords AI Committee asserted that ‘the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in our society’[53].

Interestingly, calls for a structured and ethical approach do not come solely from those naturally predisposed to consider the public interest and ethics, i.e. the regulators and social justice groups, but also from innovators and technology think tanks themselves. AlgorithmWatch highlighted, for example, that the EU Robotics Committee has developed a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics. These recommend that safeguards should be built into automation to allow for human oversight and verification, particularly because of the impact that algorithmic decision making has on the decisions of citizens and authorities[54].

3. The Risks of State Sanctioned ADM

Systems that use technology to automatically profile and / or generate decisions or decision support data are pervasive in modern society and the public sector is no exception[55]. This chapter will explore the strategic drivers for governmental deployment of ADM internationally. The diversity of its application will be drawn out, illustrating the ‘socially consequential’ nature of its deployment[56].

The United Nations E-Government Survey ranks its 193 member states in terms of their digital government, assessing the capacity and quality of their online services, telecommunication infrastructure and human resources[57]. Naturally, the digitisation strategies of the majority of these nations, if not all, aspire to streamline governmental services, improve health, reduce crime and manage welfare, aiming to achieve these goals in a less costly, more efficient and more accessible way, with varying degrees of success and acclaim[58].

Profiling and ADM can provide authorities with new information and predictions[59] that can be used to simulate public policy outcomes before implementation[60], personalise public services and automate simple activities to liberate employees for more interesting tasks[61]. The systems are deployed in cyber-defence to protect infrastructure, to support soldiers on the battlefield[62], on power grids to manage electricity flows[63], across clinical databases to spot rare diseases[64] and in the assessment of welfare applications[65].

The UK Government Digital Service (GDS), formed in 2011, eventually replaced the majority of department and agency websites. The subsequent years saw the digitalisation of transactional services including Her Majesty’s Passport services, the management of business tax and Universal Credit[66].

By 2014, a Local Government Association (LGA) report[67] highlighted over 50 examples of digital transformation projects across health and social care, Troubled Families, Public Health and Welfare Reform. The Digital Transformation Programme was established to assist local councils in fostering digital solutions to support their work on national programmes.

As an example, although not mentioned in the LGA report, in April 2012 the UK Department for Work and Pensions (DWP) extended its ‘Risk Based Verification’ (RBV) system for all local authorities to adopt voluntarily[68]. RBV is algorithmically controlled and uses the resulting group profiles to apply a risk rating to housing benefit or council tax benefit claims, such that the level of verification required to scrutinise those claims is proportionate to the identified risk of fraud. It largely follows, although authorities can determine their own classification arrangements, that claims assessed as low risk require only ‘essential checks’, such as proof of identity, while applicants that the system determines to be higher risk trigger a more rigorous verification process that includes credit reference agency searches and interviews[69].
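
As a rough illustration only, the sketch below shows the general shape of such risk-based verification; the risk ratings and checks listed are invented stand-ins, since the circular leaves the precise classification arrangements to each authority.

```python
# A minimal, hypothetical sketch of risk-based verification: the checks applied
# to a claim are proportionate to the risk rating the system has assigned.
def verification_steps(risk_rating: str) -> list:
    """Return the checks applied to a claim for a given (invented) risk rating."""
    if risk_rating == "low":
        return ["proof of identity"]                          # 'essential checks' only
    if risk_rating == "medium":
        return ["proof of identity", "proof of income"]
    return ["proof of identity", "proof of income",
            "credit reference agency search", "interview"]    # heightened scrutiny

print(verification_steps("high"))
```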

The technology behind South Wales Police’s AFR Locate programme functions like a CCTV camera, scanning crowds, locating faces and comparing facial measurements with images of known suspects or persons of interest on a ‘watchlist’[70]. Where no match is found, the collected biometric data is instantly erased[71]. A match, however, alerts the nominated police operator to undertake a review and potentially arrange an intervention, such as making verbal enquiries or placing the individual under arrest[72].
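
The matching logic might be sketched roughly as follows; the similarity measure, threshold and watchlist entries are invented, and the point illustrated is simply that a non-match is discarded immediately while a match is escalated to a human operator.

```python
# A minimal sketch of watchlist matching with immediate discard of non-matches.
def process_face(face_vector, watchlist, similarity, threshold=0.9):
    """Compare one captured face against a watchlist; return an alert or discard."""
    for person, reference in watchlist.items():
        if similarity(face_vector, reference) >= threshold:
            return ("alert_operator", person)   # escalate to the nominated operator for review
    return ("no_match_erased", None)            # nothing is retained for a non-match

# Tiny usage example with invented measurements and a dummy similarity score.
watchlist = {"person_of_interest_1": [0.10, 0.20, 0.30]}
similarity = lambda a, b: 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)
print(process_face([0.11, 0.20, 0.29], watchlist, similarity))
```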

The Danish social welfare model ‘Gladsaxe’ was designed to trace and profile children that are deemed ‘vulnerable due to their social circumstances’, initially through their neighbourhood being categorised as a ‘ghetto’ and then the combination of various other risk indicators which include mental illness, missing child health appointments and parental unemployment or divorce[73].

Clearly, there is huge scope for discussion around ADM but, from a privacy or human rights perspective, the contemporary and widespread use of ADM by the state or public bodies certainly seems most pressing. As noted by AlgorithmWatch, of more concern than ADM used to forecast issues on food production lines[74] are the systems described above: systems like the aforementioned HART, which supports custody officers in deciding whether an individual is permitted access to an out-of-court disposal programme, thereby affecting their personal liberty[75].

In 2009, the Grand Chamber of the Strasbourg Court made it clear that the state has a particular responsibility to strike an appropriate balance between taking advantage of new technologies and the impact that doing so has on the rights of citizens[76].

4. Public Sector and Article 8 Right to Privacy

Many ADM initiatives implemented by public authorities have suffered criticism, not least from academics and social justice groups[77]. The arguments have centred around an absence of transparency and ineffective legal protections[78], which result in human rights implications.

Article 8 of the European Convention on Human Rights provides a ‘Right to respect for private and family life, home and correspondence’[79] (Article 8) and is a right whose application extends well beyond informational privacy.

The following chapter will explore how Article 8 encapsulates what has been categorised as ‘psychological or physical integrity’, incorporating, inter alia, personal liberty, autonomy, self-determination, identity, societal participation and anonymity[80].

This chapter will explore a selection of these themes and contextualise them using some of the models described in the previous chapter. The discussion will seek to illustrate how the application of ADM technology and systems by public authorities has the potential to engage Article 8 on multiple fronts.

This paper recognises, as Herman T. Tavani explains in his exploration of informational privacy and the internet, the abundance of models and theories surrounding privacy[81]: that some regard privacy as ‘all or nothing’, enjoyed or lost in totality, whilst others regard privacy as something that can be diminished, or a spatial zone that may be intruded upon to varying degrees[82]. It is also recognised that where an interference with privacy rights occurs, it can be accepted as legitimate and lawful in accordance with Article 8(2) (see Intrusion and Family Life).

To that end, in order to narrow the focus, this chapter will mirror Tavani’s approach, seeking to highlight where ADM raises privacy issues more broadly and potentially engages Article 8, without labouring on the extent of such interference nor whether it could be argued to be in accordance with Article 8(2). The intention is to support the assertion that the risks of interference with fundamental rights are inherently greater when the subjects of the systems are citizens. Later chapters will explore some of the weaknesses found in the available legal frameworks.

4.1 Informational Privacy

The collection and use of personal data to underpin public sector ADM poses risks due to the ‘pivotal role’ that ADM plays in societal infrastructure[83]. Systems used by public authorities are understandably a target, with cyber-attackers seeking to pollute training data sets, attack the algorithm or exploit the model itself[84]. The attraction for malicious actors of interfering with or disrupting the ‘nuclear power stations, smart grids, hospitals and cars’[85] of a nation puts at risk the confidentiality and integrity of vast databases and, ipso facto, the data and profiles of individual citizens.

The need to control personal information was described by Westin in 1967 as ‘the claim for individuals, groups or institutions to determine for themselves when, how and to what extent information about themselves is communicated to others’[86]. The concept emerged following the advent of computers and the internet, giving rise to the need for particular rules to govern how personal data is collected and used, or ‘informational privacy’[87].

In 2008, Hildebrandt lamented the reduction of Informational Privacy to a ‘private interest in the hiding of personal data’, arguing that ‘privacy is not only about personal data … privacy is also a public good that concerns a citizen’s freedom from unreasonable constraints on the construction of her identity’. In CTB v News Group Newspapers and Imogen Thomas, Justice Eady explained that ‘the modern law of privacy is not concerned solely with information or ‘secrets’: it is also concerned importantly with intrusion’[88].

It therefore seems pertinent to highlight the impact that the use of personal information can have on the ‘psychological integrity’ of an individual and ensure that consideration of ‘informational privacy’ is not simply limited to data protection, regarded as something distinct from wider Article 8 considerations.

The provisions of the General Data Protection Regulation[89] (EU) 2016/679 (GDPR) (and its predecessors) are ‘broader’ than the right to privacy because there need not be an interference with privacy for the regulation to apply[90]. For example, limiting the purposes for which personal data is processed by a Controller is provided for under the GDPR but may not, in and of itself, represent a privacy interference.

Viewing personal information only through the lens of protection can itself present Article 8 privacy concerns, as demonstrated by the deletion of data collected by South Wales Police in their deployment of Facial Recognition Technology[91].

Civil rights campaigner Ed Bridges’ image was likely captured[92] and analysed by the aforementioned AFR Locate programme in South Wales.

In R v The Chief Constable of South Wales Police ex parte Bridges[93], which is covered in more detail in later chapters, the Divisional Court sought, inter alia, to explore whether the system was biased or discriminatory. Expert witnesses were unable to make a determination[94] because the data for images for which there had been no match to the police watchlist had been automatically deleted. Prima facie, this complies with the purpose limitation principle of data protection law, under which data should only be processed for the purpose for which it was collected[95]. Cloisters understandably raises concerns about the opacity arising from this approach: that it creates a barrier to scrutiny and accountability for broader human rights or public duty violations[96].

This paper considers privacy as an almost wholly psychological concept. It labours under the assertion that the collection, creation or use of information related to oneself by another party invariably has the potential to engage at least one of the ‘psychological’ components of privacy, as explored in the chapters to follow.

4.2 Identity Management

In their presentation of a model for assessing the intrusiveness of data surveillance[97], Thommesen and Boje Andersen draw on the work of the sociologist Erving Goffman, who described a person’s need to manage the impression that others have of them. Goffman suggests that having access to isolation and anonymity, in order to manage the different faces we present to different audiences, is a necessary quality of privacy. He likens it to being ‘onstage’ and describes a consequent need for private space where we might ‘relax … and prepare our onstage performance’[98].

Thommesen and Boje Andersen describe our behaviour, activities and decisions as a privacy ‘domain’. A person may use these domains to construct their social identity by adopting behaviour, activities or decisions according to the social context[99].

As a result, they suggest that privacy is not about being merely able to protect ourselves through isolation or anonymity, but rather to selectively share[100]. If we are afforded privacy, we can ‘indulge in habits or pleasures that do not match comfortably with our self-presentation in some social circles’[101]. This can be the case, whether alone or enjoying anonymity in public – through being an indistinguishable member of a crowd, for example[102].

It follows, then, that ADM systems that result in altering our behaviour and interfere with our anonymity, even in public, engage our Article 8 privacy rights. This is manifest in the ‘chilling effect’ that follows ‘governmental investigative and data-gathering activity’[103], whether in public places or online[104]. When people are aware that personal data is being collected and analysed, they amend or limit their behaviour[105], forcing them to manage impressions and identity in scenarios where they might otherwise have enjoyed anonymity.

In the case against South Wales Police, Bridges claimed that the deployment of AFR without his consent engaged and unlawfully interfered with his Article 8 rights to privacy since he was in a public space, not a suspect and not engaging in unlawful activity[106].

Haddon-Cave LJ and Swift J discussed South Wales Police’s contention that the surveillance is activity that the public would consider normal and expected, such that it would not represent a prima facie violation[107]. This argument was rejected on the basis that biometric data[108] derived from AFR is information of an intrinsically private character[109].

The arguments make much mention of the effort made by South Wales Police to be transparent and whether Mr Bridges was aware of the presence of the cameras[110] and was therefore party to what the Surveillance Camera Code of Practice, in place at the time, described as ‘surveillance by consent’[111].

The discussion did not appear to give much consideration to the reduced anonymity arising from the use of FRT[112]. Whilst arguably the public might ‘uncomplainingly submit’[113] to the operation of traditional CCTV, the discussions may have benefitted from the acknowledgement that even overt surveillance engages privacy rights for both the individual and the wider public as a ‘collateral intrusion’[114] that potentially results in chilling effects. The notion that impression management and behaviour modification might result is notably absent.

4.3 Self-Determination and Decisional Privacy

In the commercial field, the use of ‘cookies’ to track the behaviour of individuals online can render personality profiles and inferences about individual preferences[115]. The ICO cautions against this type of ‘observed, invisible processing’, as it poses risks to the interests of individuals, who are unable to exercise any control over the use of their data[116].

Data subjects are often oblivious to the sensitive information that can be invisibly derived from seemingly innocuous behavioural patterns or information collected[117] from multiple sources. Even when non-sensitive personal data has been readily provided by the data subject, sensitive or intimate information can be inferred, derived or predicted. Privacy International provides an example: when someone contacts their best friend, visits the website of the National Unplanned Pregnancy Advisory Service, and then contacts their doctor, an observer seeking to profile the individual can assume that this person could be contemplating or planning a pregnancy termination[118].

Decisions may then be made on the basis of the information that has been inferred, for example, the production of tailored search results and targeted advertisements[119] or price discrimination[120]. Whether or not the inferences are accurate, this narrows the individual’s world view, carefully curating the options available to them which can manipulate the individual’s decisions and undermine their autonomy[121].

As American Civil Liberties Union Senior Staff Attorney Galen Sherwin is reported to have said of Facebook’s gender targeting for job advertisements, ‘You don’t know … what you’re not seeing’[122].

Mireille Hildebrandt discusses the importance of exercising ‘deliberate reflection on our choices of action’ as a key part of autonomy[123]. Where the opportunity for reflection and understanding is absent, this manifestly impacts on our autonomy and self-determination.

When created by public authorities, profiles may be drawn from data held within multiple integrated systems, such as those held by NHS providers and Local Authorities[124], or perhaps collected across multiple devices as part of an Internet of Things (IoT)[125].

In the case of criminal justice ADMs like HART and, to a lesser extent, RBV (where the decision may be impacted by previous criminality), it is key to consider the importance of understanding how one’s situation has arisen and why decisions concerning us have been made.

In considering the effectiveness of criminal punishment, Hampton stresses the importance of the human ability to understand, reflect and revise our attitude[126]. She expresses the importance of reflection on the impact of our past behaviours such that we can decide to alter our behaviour – preferably for moral reasons rather than simply to avoid the pain of punishment[127].

The circular issued by the DWP to Local Authorities in relation to the deployment of RBV included the direction to withhold the policies relating to the system[128].

This directive claimed that the risk categories upon which automated decisions were based, as detailed within the policies, were ‘sensitive’. The directive was intended to deter individuals wishing to manipulate the system[129] and, as Hampton might consider it, to avoid the ‘punishment’ of additional scrutiny.

Arguably, however, this opacity also deprives individuals of the ability to understand the factors that influence decisions made about them and therefore to use that insight to alter their lives ‘for moral reasons’. Without knowledge of the causal factors behind the decisions made about us, how can we change our fate for the future?

The decisions of citizens subject to state sanctioned ADM can also be influenced by outside actors[130]. When automation is deployed by public authorities to administer the democratic process[131], there is a risk that ‘interest groups or states … tempted to use these technologies to control and influence citizen behaviour’ can ‘damage the integrity of democratic discourse and the reputation of the government or political leaders’[132].

This ‘coercive interference’ constitutes an infringement of ‘decisional privacy’, understood as the making of ‘private choices’ concerning relationships, friendship, religion and political association[133], and compromises citizens’ opportunity for meaningful participation in democratic society[134].

The impact of profiling and ADM in relation to Article 11 of the European Convention on Human Rights, the right to freedom of assembly and association[135], has been the subject of debate for many years[136]. In recent decades, it has been discussed in connection with the ‘real name system’[137], which requires internet users to provide evidence of their true identity when creating online accounts. Predictably, it is reported to have ‘a severe chilling effect’, particularly in China, where it impacted political opposition to the importation of US meat[138].

The Council of Europe Study on Algorithms and Human Rights[139] limited its discussion of surveillance and the human right to assembly to its impact in the online domain. But, as Christopher Slobogin suggested in his advocacy for anonymity in public places, ‘surveillance can chill conduct, even though it takes place in public and is meant to be seen by others’[140].

4.4 Intrusion and Family Life

Along with correspondence, which arguably translates to ‘informational privacy’, ‘private life’, ‘family life’ and ‘home life’ are the three other interests specifically provided for under Article 8, protecting citizens from a more tangible type of intrusion. Article 8(2) justifies state interference with an individual’s private life where it is ‘in accordance with the law and … necessary in a democratic society in the interests of national security, public safety or the economic wellbeing of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others’.

As an example, the DWP circular proposed that the reduction of labour (and therefore costs) and the management of fraud were the motivators for the introduction of the RBV system[141]. The manifest intrusion into the ‘high risk’ citizen’s private life, provided the system is accurate and the response appropriate, is therefore conceivably justified under Article 8(2).

Beyond the intrusion suffered through undergoing additional checks, in the most serious of circumstances a delay in processing the claim (as a result of triggering further checks) could conceivably result in homelessness[142], such that this would significantly interrupt the quiet enjoyment of their home and family life.

Intrusion was also a consideration in the Bridges judgement. In defence, South Wales Police argued that the claimant ought not to have a ‘reasonable expectation’ of privacy in a public space[143] and that the ‘near-instantaneous’ capture and deletion of the biometric data, without its being available to a human operator, does not meet a threshold of ‘seriousness’ that would engage Article 8[144].

The court dismissed this argument, saying that ‘Article 8 is triggered by the initial gathering of the information and that it is sufficient if biometric data is captured, stored and processed, even momentarily’[145].

Both arguments could be seen as somewhat reductive, as Thommesen and Boje Andersen articulate:

‘observation is not simply invasive in terms of acquiring information … E.g., somebody spying on us in our homes through a telescope, may be regarded as invading our privacy – even if they fail to acquire new information (Gavison, 1980)’[146].

The judgement appears to fall short of recognising that traditional CCTV largely operates on group identification and that a person would be simply a face in the crowd. AFR systems are fundamentally more invasive whether biometrics are stored or not, because the observation is individualised and undermines an individual’s enjoyment of public anonymity[147].

4.5 Algorithmic Bias

This chapter seeks to draw out the Article 8 privacy implications of ADM systems deployed by public authorities that are affected by algorithmic bias. Having provided a definition of algorithmic bias, it will consider the impact of these systems through the lens of Article 8 and discuss identity, stereotyping, social deviance and discrimination.

ADM systems resulting in discrimination undoubtedly engage ECHR Article 14 which provides for ‘The enjoyment of … rights and freedoms … without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.’[148].

In Aksu v Turkey[149], the court found that negative group stereotyping, in that case of the Roma community, constituted an interference with a group member’s private life, affecting their sense of identity. The court acknowledged that an individual’s ethnic identity should be regarded as an aspect of their physical and social identity and, as such, is captured by the personal autonomy principles of Article 8[150].

Recognising that algorithmic bias can encompass both the production of results that are skewed or inaccurate in relation to their context and a subsequent methodical discrimination on the basis of those results, the Centre for Data Ethics and Innovation describes it summarily as the ‘unfair treatment of a group (e.g. an ethnic minority, gender or type of worker) that can result from the use of an algorithm to support decision-making’[151]. This is the definition applied in this paper.

Algorithmic bias may be intentional, whereby systems are biased to serve the interests of the operator[152], as was clearly the case when airlines manipulated search functionality to display their own flights first.

The president of American Airlines, responding to criticism in Congress in 1983, described the preferential treatment of the airline’s own flights as the ‘raison d’etre’ for having the system. AlgorithmWatch coined this ‘Crandall’s complaint: Why would you build and operate an expensive algorithm if you can’t bias it in your favor?’[153]

In some cases, the bias may be accidental: perhaps the developer could not have anticipated the resulting discrimination, or it is the consequence of mistakes or poor data quality[154]. As an example, individuals can be ‘wrongly included in blacklists or “no fly” lists due to homonyms or inaccurate inferences’[155].

Conversely, accuracy brings its own issues. Privacy International warns that profiling can ‘create uncannily personal insights, which may be used to the detriment of those who are already discriminated and marginalised’. The training data that underpins profiling and automated decision making can involve numerous ‘sources of unfairness’[156], including having its foundations in existing ‘structural discrimination’[157].

The ICO provides the example of loans offered by a bank[158]. If loans have historically been granted to men more than to women, the training data will include less information about women. The bias is conceivable if demonstrable good payment history is a scoring factor in the training data: having not been offered loans, women would not have had the opportunity to demonstrate that factor, and so the ADM output may skew towards men.

Even in scenarios where structural discrimination is ostensibly absent, the data itself may simply be unbalanced. In the bank loan scenario, there may simply have been more men applying for loans, leaving less training data available for women and rendering fewer favourable outcomes for women seeking loans.
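
A toy illustration of this imbalance, using invented lending records, is set out below: both groups repay at the same rate, but a cautious rule requiring a well-evidenced history produces favourable outcomes only for the better-represented group.

```python
# Invented historical lending records: many decisions about men, few about women.
history = [("M", "repaid")] * 80 + [("M", "defaulted")] * 20 + \
          [("F", "repaid")] * 8 + [("F", "defaulted")] * 2

def repayment_evidence(group):
    """Return the observed repayment rate and the number of records for a group."""
    outcomes = [outcome for g, outcome in history if g == group]
    repaid_rate = sum(o == "repaid" for o in outcomes) / len(outcomes)
    return repaid_rate, len(outcomes)

MIN_EVIDENCE = 50  # a cautious, hypothetical rule demanding a well-evidenced history

def would_approve(group):
    rate, n = repayment_evidence(group)
    return rate > 0.7 and n >= MIN_EVIDENCE

print(repayment_evidence("M"), would_approve("M"))  # (0.8, 100) True
print(repayment_evidence("F"), would_approve("F"))  # (0.8, 10)  False, purely for lack of data
```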

If profiling itself risks dehumanisation, where the ‘complexity of human personality’ has been reduced to simply their digital profile[159], it follows then, that a human personality reduced to a digital version of a cultural, racial or gender stereotype is further dehumanised.

Even where decisions are not wholly automated, the humans developing, deploying and interpreting ADM can have their own sources of bias that affect the decisions they make, and so profiling and ADM can perpetuate existing discrimination based on entrenched stereotypes[160].

Of course, there are arguments that, even if not part of the wider intention of the application context, the intrinsic purpose of the ADM itself might be to remove, detect or obviate human preconception[161]. There is also the view that the consistent application of mathematical formulae to any context provides a platform for fairness[162]. Nevertheless, developers and decision makers can be ‘consciously or unconsciously influenced by’ certain characteristics, some of which are obviously protected by equalities law (race, gender, etc.)[163]. Other data fields may, less obviously, act as a proxy for those characteristics, such as postcodes where they are representative of racially segregated cities[164], and result in discriminatory outcomes.
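
The proxy problem can be sketched in a few lines; in the invented example below the protected characteristic is excluded entirely, yet a correlated area code allows a simple model to reproduce the historic pattern of decisions.

```python
# A minimal sketch of proxy discrimination: the protected attribute is removed,
# but a correlated (invented) postcode area remains as a feature.
from sklearn.tree import DecisionTreeClassifier

X = [[1], [1], [1], [1], [2], [2], [2], [2]]                 # postcode area only
past_decisions = ["reject", "reject", "reject", "reject",
                  "grant", "grant", "grant", "grant"]        # historically skewed outcomes

model = DecisionTreeClassifier().fit(X, past_decisions)
print(model.predict([[1]]))  # the proxy alone reproduces the historic pattern
```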

Drawing on the identity management theories of Goffman[165] and Becker’s[166] definition of deviance as non-conformance to social norms, Christy Halbert discusses how bias and discrimination affect women boxers. She describes how female participation in a violent or ‘masculine’ sport is considered contrary to social norms and therefore deviant. As a result, women boxers are subject to ‘discrimination, stereotypes, and labelling’, prompting attempts to manage their identity, for example by introducing pink into their uniforms to create an impression of femininity[167].

In 2019, over 85 social justice groups penned an open letter to Amazon imploring the company not to provide FRT to the United States government[168], on the basis that ‘[s]ystems built on face surveillance will amplify and exacerbate historical and existing bias’[169]. These same concerns appeared to be the impetus for the high-profile departure of Google’s Ethical AI Co-Lead in December 2020[170]. Scholars observe the ‘demonization and criminalization’ of individuals arising from an entrenched deviant stereotype applied to people of colour[171] which, of course, may exist within training data[172].

It is clear, then, that ADM predictions or inferences may be systematically biased or inaccurate, leading to individuals being misclassified or misjudged.

In Rustad and Kulevska’s reconceptualization of the ‘Right to be Forgotten’, the authors identify the ability to ‘revise’ one’s identity as an important facet of the right. To be ‘connected to current information and delinked from outdated information’[173] is core to this concept, and so the right to be forgotten is less concerned with the protection or control of data and more with privacy rights[174], in particular ‘the freedom from unreasonable constraints on the construction of one’s identity’[175].

Koops’ contention that people must have the ability to define their own lives, and not be ‘fixed in the perception of others by their past’[176], bears directly on the use of ADM systems in which historic biases are ‘baked into’ the systems without the subjects having the opportunity to contest the data.

Algorithmic discrimination or bias introduced or perpetuated by ADM systems creates issues of lost control over one’s identity, forced identity management, group stereotyping and discrimination. Such systems have the potential to single out and intrude into the lives of individuals, particularly those who are part of ‘over-policed and over-surveilled communities’[177].

Continued..

References

[1] Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR). [2] Mireille Hildebrandt, ‘Profiling and the rule of law’ (2008) 1, IDIS <https://link.springer.com/article/10.1007%2Fs12394-008-0003-1> accessed 12 December 2020. [3] ‘Domesday Book: Britain’s Finest Treasure | The National Archives’ (Nationalarchives.gov.uk) <https://www.nationalarchives.gov.uk/domesday/> accessed 5 January 2021. [4] ‘Census-Taking In The Ancient World – Office For National Statistics’ (Ons.gov.uk, 2021) <https://www.ons.gov.uk/census/2011census/howourcensusworks/aboutcensuses/censushistory/censustakingintheancientworld> accessed 5 January 2021. [5]The Royal Society, ‘Explainable AI: The Basics’ (Policy Briefing, 2019) <https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf>accessed 19 May 2021 [6] Ibid. [7]Data is Power: Profiling and Automated Decision-Making in GDPR (Privacy International, 2017) 9 < https://privacyinternational.org/report/1718/data-power-profiling-and-automated-decision-making-gdpr> accessed 6 January 2021. [8] Ibid 4 para 4. [9] Monique Mann and Tobias Matzner, ‘Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination’ [2019] Big Data and Society 1 para 1< https://doi.org/10.1177/2053951719895805> accessed 22 May 2021. [10] Data is Power: Profiling and Automated Decision-Making in GDPR(n 6) 5. [11] Wu Youyou, Michal Kosinski, and David Stillwell, ‘Computer-based personality judgments are more accurate than those made by humans’ [2015] 112 (4) 1036-1040 PNAS < https://doi.org/10.1073/pnas.1418680112> accessed 15 February 2021. [12] Hildebrandt (n 1). [13] Donald E. Knuth, ‘Ancient Babylonian Algorithms’ [1972] Commun. ACM 671 < https://doi.org/10.1145/361454.361514> accessed 14 January 2021. [14] (Cambridge Dictionary) < https://dictionary.cambridge.org/dictionary/english/algorithm > accessed 18 February 2021. [15] Hildebrandt and Gutwirth (n 7) 22. [16] B.J. Copeland, ‘Alan Turing: British mathematician and logician’ [2021] Encyclopedia Britannica <https://www.britannica.com/biography/Alan-Turing> accessed 23 May 2021. [17] What is automated individual decision-making and profiling?(Guidance, Information Commissioner’s Office 2018) 4 <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/ > accessed 11 March 2021. [18] European Parliament, Connected Digital Single Market: White Paper on Artificial intelligence Including Follow Up(White Paper, Cm 2018-5) 1 para 1 < https://www.europarl.europa.eu/legislative-train/theme-connected-digital-single-market/file-white-paper-artificial-intelligence-and-follow-up/07-2019> accessed 20 February 2021. [19] A. L. Samuel, ‘Some Studies In Machine Learning Using The Game Of Checkers’ (1959) 3 IBM Journal of Research and Development 1 para 1 <https://ieeexplore.ieee.org/document/5392560> accessed 11 February 2021. [20] Hildebrandt (n 1) 4. [21] Explainable AI: the basics (n 1)7 para 3. [22] This type of technology is known as ‘Risk Stratification’ Software and is procured by National Health Service providers to identify risks to individual patients or to plan services across geographical areas, based on patient profiling and risk categories <https://www.digitalmarketplace.service.gov.uk/g-cloud/services/198405605081528> accessed 23 May 2021. 
[23] Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke L. & Tech. Rev. 18 <https://dltr.law.duke.edu/2017/12/04/slave-to-the-algorithm-why-a-right-to-an-explanation-is-probably-not-the-remedy-you-are-looking-for/> accessed 25 January 2021. [24] Explainable AI: the basics (n 1)6. [25] Ibid 7 para 1. [26] Jennifer Cobbe, ‘Administrative law and the machines of government: judicial review of automated public-sector decision making’ (2019) 39 Legal Studies 363 ch 1 para 2 <https://www.cambridge.org/core/journals/legal-studies/article/abs/administrative-law-and-the-machines-of-government-judicial-review-of-automated-publicsector-decisionmaking/09CD6B470DE4ADCE3EE8C94B33F46FCD> accessed 3 April 2021. [27] See Chapter 4. [28] Automating Society: Taking Stock of Automated Decision Making in the EU (A report by AlgorithmWatch in cooperation with Bertelsmann Stiftung, supported by the Open Society Foundations 2019) 9 para 2 < https://algorithmwatch.org/de/wp-content/uploads/2019/02/Automating_Society_Report_2019.pdf> accessed 4 February 2021. [29] What is automated individual decision-making and profiling?(Guidance, Information Commissioner’s Office 2018) ch 2 <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling/what-is-automated-individual-decision-making-and-profiling/> accessed 11 March 2021. [30] AlgorithmWatch (n 28) 9 para 2. [31] ibid [32] AlgorithmWatch (n 28) 19 – 35 [33] In the Matter of Automated Data Processing in Government Decision Making(Joint Opinion, AI Law and Cloisters) 4 Para 8 < https://www.cloisters.com/wp-content/uploads/2019/10/Open-opinion-pdf-version-1.pdf> accessed 24 April 2021 [34] AI Law and Cloisters (n 33) [35] Ibid5 para 10 [36]Jenna Burrell, ‘How The Machine ‘Thinks’: Understanding Opacity In Machine Learning Algorithms’ (2016) 3 Big Data & Society 5 para 3 <https://journals.sagepub.com/doi/full/10.1177/2053951715622512> accessed 10 May 2021. [37] Ibid 47 [38] Edwards and Veale (n 23) 3 para 1. [39] European Parliament: Panel for the Future of Science and Technology, ‘Understanding algorithmic decision-making: Opportunities and challenges (Study) PE 624.261 (2019) 2 para 2 < https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf> accessed 15 April 2021. [40] AlgorithmWatch (n 28) 35 – 141. [41] HM Government, Regulation for the Fourth Industrial Revolution(Cp 111, 2019) 6 para 3 <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/807792/regulation-fourth-industrial-strategy-white-paper-web.pdf> accessed 21 February 2021 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/807792/regulation-fourth-industrial-strategy-white-paper-web.pdf . [42] AlgorithmWatch (n 28) 26 para 2. [43] Ibid 137 – 138. [44] Ibid 48. [45] EU Declaration on Cooperation on Artificial Intelligence [2018] < https://ec.europa.eu/jrc/communities/en/community/digitranscope/document/eu-declaration-cooperation-artificial-intelligence> accessed 3 May 2021. 
[46] European Commission, ‘Communication from the Commission to the European Parliament, The European Council, The Council, The European Economic and Social Committee and the Committee of the Regions: Artificial Intelligence for Europe’ COM (2018) 237 final < https://digital-strategy.ec.europa.eu/en/library/communication-artificial-intelligence-europe> accessed 20 April 2021. [47] The European Commission appointed a group of experts to provide advice on its AI Strategy. Members include representatives from academia, civil society and industry. ‘Expert Group On AI’ (Shaping Europe’s digital future) <https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai#:~:text=on%20artificial%20intelligence-,High%2Dlevel%20expert%20group%20on%20artificial%20intelligence,academia%2C%20civil%20society%20and%20industry> accessed 20 April 2021. [48]Artificial Intelligence For Europe(European Economic and Social Committee 2019). [49] Raising awareness on algorithms(State-of-the-art Report, Algo:Aware 2018) 121 <https://actuary.eu/wp-content/uploads/2019/02/AlgoAware-State-of-the-Art-Report.pdf> accessed 1 May 2021. [50] Explainable AI: the basics (n 1)8 para 2. [51] Ibid. [52] Dr. Matt Turek, ‘Explainable Artificial Intelligence: XAI’ (DARPA) <https://www.darpa.mil/program/explainable-artificial-intelligence> accessed 23 May 2021. [53] HL Deb 13 March 2018, Paper 100 <https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf> accessed 23 May 2021. [54] AlgorithmWatch (n 28) 22 para 4. [55] Text to n 39 in ch 2. [56] AlgorithmWatch (n 28) [57] Digital Government in the Decade of Action for Sustainable Development(E-Government Survey 2020, Department of Economic and Social Affairs. 2020)<https://www.un.org/development/desa/publications/publication/2020-united-nations-e-government-survey> accessed 2 May 2021. [58] Ibid. [59] Big data, artificial intelligence, machine learning and data protection(Guidance, Information Commissioner’s Office 2017) 11 para 18 <https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf> accessed 18 March 2021. [60] Stephan Zheng and others, The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Harvard University 2020) <https://arxiv.org/abs/2004.13332> accessed 23 May 2021. [61] AI Law and Cloisters (n 33) para 57 . [62] Karel van den Bosch, ‘Human-AI Cooperation To Benefit Military Decision Making’ (The North Atlantic Treaty Organization (NATO) and S&T Organisation 2018) <https://www.researchgate.net/publication/325718292_Human-AI_Cooperation_to_Benefit_Military_Decision_Making> accessed 23 May 2021. [63] AlgorithmWatch (n 28) 81. [64] Dr Indra Joshi and Jessica Morley (eds) Artificial Intelligence: How to get it right: Putting policy into practice for safe data-driven innovation in health and care(Report, NHS X 2019) 75 para 3 <https://www.nhsx.nhs.uk/media/documents/NHSX_AI_report.pdf> accessed 18 April 2021 [65] Department of Work and Pensions, ‘Housing Benefit And Council Tax Benefit Circular, HB/CTB S11/2011’ (2011) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/633018/s11-2011.pdf> accessed 10 April 2021. [66] ‘Government Digital Service’ (GOV.UK) <https://www.gov.uk/government/organisations/government-digital-service> accessed 27 May 2021. 
[67] Transforming local public services: using technology and digital tools and approaches(Report, Local Government Association 2014) < https://www.local.gov.uk/digital-transformation-programme> accessed 23 May 2021 [68] Housing Benefit and Council Tax Benefit Circular (n 64). [69] Swee Leng Harris, ‘Data Protection Impact Assessments as rule of law governance mechanisms’ (2020) 2 Data & Policy 1 <https://www.cambridge.org/core/journals/data-and-policy/article/data-protection-impact-assessments-as-rule-of-law-governance-mechanisms/3968B2FBFE796AA4DB0F886D0DBC165D> accessed 5 April 2021. [70] ‘What Is AFR?’ (What is AFR? South Wales Police) <https://afr.south-wales.police.uk/> accessed 15 May 2021. [71] R (Bridges) v The Chief Constable of South Wales Police[2019] EWHC 2341 (Admin) [17]. [72] Bridges(n 71)[15]. [73] AlgorithmWatch (n 28) 50 para 5. [74] AlgorithmWatch (n 28) 14 para 1. [75] AI Law and Cloisters (n 33) . [76] S and Marper v United Kingdom (2009) 48 EHRR 50 [112]. [77] AlgorithmWatch (n 28). [78] Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation’ (2017) 7 [2] International Data Privacy Law 76 <https://doi.org/10.1093/idpl/ipx005> accessed 10 May 2021. [79] Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights, as amended) (ECHR) art 8(1). [80] Guide on Article 8 of the European Convention on Human Rights: Right to respect for private and family life, home and correspondence(Guidance, Council of Europe 2020) < https://www.echr.coe.int/documents/guide_art_8_eng.pdf> accessed 1 April 2021. [81] Herman T. Tavani, ‘Informational privacy, data mining, and the Internet’ (1999) 1 Ethics and Information Technology 137 <https://link.springer.com/content/pdf/10.1023/A:1010063528863.pdf> accessed 3 May 2021. [82] Ibid 138 para 2. [83] European Parliament: Panel for the Future of Science and Technology (n 38) 20. [84] Ibid. [85] Ibid 20 para 3. [86] Alan F. Westin, Privacy and Freedom (New York: Atheneum 1967) 7. [87] Handbook on European data protection law(Handbook, Publications Office of the European Union 2018) 18 < https://fra.europa.eu/sites/default/files/fra_uploads/fra-coe-edps-2018-handbook-data-protection_en.pdf> accessed 11 February 2021. [88] [2011] EWHC 1326 (QB). [89] Council Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1. [90] Handbook on European data protection law(n 87) 20 para 1. [91] ‘What Is AFR?’ (n 71). [92] Bridges(n 71) [51]. [93]Bridges(n 71). [94]Ibid[155]. [95] General Data Protection Regulation (n 89), Art 5 (1) (b). [96] AI Law and Cloisters (n 33) para 159. [97] Jacob Thommesen and Henning Boje Andersen, ‘Privacy Implications of Surveillance Systems’ [2009] Mobile communication and social policy <https://orbit.dtu.dk/en/publications/privacy-implications-of-surveillance-systems> accessed 21 May 2021. [98] Erving Goffman, The Presentation of Self in Everyday Life, (Doubleday 1959). [99] Jacob Thommesen and Henning Boje Andersen (n 95) 3 para 7. [100] Ibid 3. [101] Ibid 3 para 1. [102] Ibid 3. [103] Laird v. Tatum, 408 U.S. 1 (1972) [2]. [104] Jonathon W. 
Penney, ‘Chilling effects: online surveillance and Wikipedia use’ (2016) 31 Berkeley Technology Law Journal 117 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2769645> accessed 24 May 2021. [104] Jacob Thommesen and Henning Boje Andersen, ‘Privacy Implications of Surveillance Systems’ [2009] Mobile communication and social policy <https://orbit.dtu.dk/en/publications/privacy-implications-of-surveillance-systems> accessed 21 May 2021. [105] Understanding algorithmic decision-making: Opportunities and challenges (n 38) 13 para 1. [106] Bridges(n 71) [53]. [107] Bridges(n 71) [53]. [108] ‘personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data’ (General Data Protection Regulation, Art 4 (4)). [109] Bridges(n 71) [57]. [110] Ibid [39]. [111] Surveillance Camera Code of Practice(Code of Practice, Home Office 2013) < https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/282774/SurveillanceCameraCodePractice.pdf> accessed 2 April 2021. [112] Text to n 103 in ch 4.2. [113] R (Gillan) v Commissioner of Police for the Metropolis [2006] UKHL 12 AC 307 [35]. [114] Facing the Camera: Good Practice and Guidance for the Police Use of Overt Surveillance Camera Systems Incorporating Facial Recognition Technology to Locate Persons on a Watchlist in Public Places in England & Wales(Guidance, Surveillance Camera Commissioner 2020) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/940386/6.7024_SCC_Facial_recognition_report_v3_WEB.pdf> accessed 3 April 2021. [115] Data is Power: Profiling and Automated Decision-Making in GDPR(n 6). [116] When do we need to do a DPIA?(Guidance, Information Commissioner’s Office) < https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/when-do-we-need-to-do-a-dpia/#when10> accessed 10 March 2021. [117] Data is Power: Profiling and Automated Decision-Making in GDPR(n 6). [118] Ibid 2 para 1. [119] Blasé Ur and others, ‘Smart, useful, scary, creepy: perceptions of online behavioral advertising’ (SOUPS ’12: Proceedings of the Eighth Symposium on Usable Privacy and Security, July 2012) < https://doi.org/10.1145/2335356.2335362> accessed 9 March 2021. [120] Jakub Mikians and others, ‘Detecting price and search discrimination on the internet’ (Conference paper prepared for Proceedings of the 11th ACM Workshop on Hot Topics in Networks, October 2012) 79, 84 <https://dl.acm.org/doi/10.1145/2390231.2390245> accessed 10 May 2021. [121] Understanding algorithmic decision-making: Opportunities and challenges (n 38) 14. [122] Nitasha Tiku, ‘ACLU Says Facebook Ads Let Employers Favor Men Over Women’ Wired (18 September 2018) <https://www.wired.com/story/aclu-says-facebook-ads-let-employers-favor-men-over-women/> accessed 20 April 2021. [123] Hildebrandt (n 1) 61 para 1. [124] National Health Service, ‘Joining up health and care data’ (NHS) <https://www.england.nhs.uk/digitaltechnology/connecteddigitalsystems/health-and-care-data/joining-up-health-and-care-data/> accessed 20 May 2021. [125] ‘THIM Monitoring Service’ (Surrey and Borders Partnership NHS Foundation Trust) <https://www.sabp.nhs.uk/tihm> accessed 20 May 2021. 
[127] Jean Hampton, ‘The Moral Education Theory of Punishment’ (1984) 13 Philosophy & Public Affairs 208 <https://www.jstor.org/stable/pdf/2265412.pdf?refreqid=excelsior%3A54672fe4976a6ace3faeda0ba77354dd> accessed 24 May 2021.
[128] Ibid 212.
[129] Housing Benefit and Council Tax Benefit Circular (n 64).
[130] Harris (n 69).
[131] Elena L. Boldyreva, Natalia Y. Grishina and Yekaterina Duisembina, ‘Cambridge Analytica: Ethics and Online Manipulation with Decision-Making Process’ (18th PCSF 2018) <https://www.researchgate.net/publication/330032180_Cambridge_Analytica_Ethics_And_Online_Manipulation_With_Decision-Making_Process> accessed 3 March 2021.
[132] ‘Summary of Proceedings: Automated Voting and Election Observation’ (The Carter Center 2005) <https://www.cartercenter.org/documents/nondatabase/automatedsummary.pdf> accessed 3 March 2021.
[133] Understanding algorithmic decision-making: Opportunities and challenges (n 38) 14.
[134] Anita L. Allen, ‘Taking Liberties: Privacy, Private Choice, and Social Contract Theory’ [1987] Faculty Scholarship at Penn Law 1337 <https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=2337&context=faculty_scholarship> accessed 25 May 2021.
[135] Understanding algorithmic decision-making: Opportunities and challenges (n 38) 22 para 5.
[136] Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR).
[137] Laird v. Tatum, 408 U.S. 1 (1972); Council of Europe, Algorithms and Human Rights <https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5>.
[138] Guangxing Song and Pingfang Yang, ‘The Influence of Network Real-name System on the Management of Internet Public Opinion’ [2013] Advances in Intelligent Systems Research <https://doi.org/10.2991/icpm.2013.9> accessed 20 May 2021.
[139] Artificial Intelligence: The Rights to Freedom of Peaceful Assembly and Association and the Internet: Submission to the United Nations Special Rapporteur on the Rights to Freedom of Peaceful Assembly and Association by Association for Progressive Communication (Report, APC 2019) para 35 <https://www.apc.org/sites/default/files/APC_Submission_FoA_Online.pdf> accessed 2 May 2021.
[140] Council of Europe, ‘Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights)’ (Council of Europe 2020) <https://www.echr.coe.int/documents/guide_art_8_eng.pdf> accessed 1 April 2021.
[141] Harris (n 69) 4.
[142] AI Law and Cloisters (n 33) para 118.
[143] Bridges (n 71) [51].
[144] Ibid.
[145] Bridges (n 71) [59].
[146] Jacob Thommesen and Henning Boje Andersen (n 97) 5 para 7.
[147] Ibid 7 para 5.
[148] Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR).
[149] Aksu v. Turkey (Application Nos. 4149/04 & 41029/04) (2012) ECHR.
[150] Ibid [58].
[151] Michael Rovatsos, Brent Mittelstadt and Ansgar Koene, ‘Landscape Summary: Bias in Algorithmic Decision-Making: What is bias in algorithmic decision-making, how can we identify it, and how can we mitigate it?’ (UK Government 2019) <https://www.research.ed.ac.uk/en/publications/landscape-summary-bias-in-algorithmic-decision-making-what-is-bia> accessed 24 May 2021.
[152] Understanding algorithmic decision-making: Opportunities and challenges (n 38) 1 para 3.
[153] AlgorithmWatch (n 28) 28 para 2.
[154] Understanding algorithmic decision-making: Opportunities and challenges (n 38) 41.
[155] Ibid 1 para 3.
[156] Understanding algorithmic decision-making: Opportunities and challenges (n 38) iii para 3.
[157] Tarleton Gillespie, ‘The Relevance of Algorithms’ in Tarleton Gillespie, Pablo J. Boczkowski and Kirsten A. Foot (eds), Media Technologies: Essays on Communication, Materiality, and Society (MIT Press 2014) 11 para 3 <https://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262525374.001.0001/upso-9780262525374-chapter-9> accessed 18 May 2021.
[158] Reuben Binns and Valeria Gallo, ‘Human Bias and Discrimination in AI Systems’ (Information Commissioner’s Office) <https://ico.org.uk/about-the-ico/news-and-events/ai-blog-human-bias-and-discrimination-in-ai-systems/> accessed 18 March 2021.
[159] Understanding algorithmic decision-making: Opportunities and challenges (n 38) 1 para 3.
[160] Understanding algorithmic decision-making: Opportunities and challenges (n 38) 9.
[161] Understanding algorithmic decision-making: Opportunities and challenges (n 38) 25 para 4.
[162] Ibid.
[163] Equality Act 2010, s 4.
[164] Data is Power: Profiling and Automated Decision-Making in GDPR (n 6) 8 para 3.
[165] Text in ch 4.2.
[166] H. S. Becker, Outsiders: Studies in the Sociology of Deviance (Free Press 1973).
[167] Christy Halbert, ‘Tough Enough and Woman Enough’ (1997) 21 Journal of Sport and Social Issues <https://journals.sagepub.com/doi/10.1177/019372397021001002> accessed 1 May 2021.
[168] ‘Coalition Letter to Amazon Urging Company Commit Not to Release Face Surveillance Product’ (American Civil Liberties Union, 2021) para 3 <https://www.aclu.org/coalition-letter-amazon-urging-company-commit-not-release-face-surveillance-product> accessed 25 May 2021.
[169] Ibid.
[170] Kimberly White, ‘A Leading AI Ethics Researcher Says She’s Been Fired from Google’ (MIT Technology Review, 2020) <https://www.technologyreview.com/2020/12/03/1013065/google-ai-ethics-lead-timnit-gebru-fired/> accessed 27 May 2021.
[171] CalvinJohn Smiley and David Fakunle, ‘From “Brute” to “Thug”: The Demonization and Criminalization of Unarmed Black Male Victims in America’ (2016) 26 Journal of Human Behavior in the Social Environment <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5004736/pdf/nihms779615.pdf> accessed 3 May 2021.
[172] AI Law and Cloisters (n 33) 44 para 1.
[173] Michael L. Rustad and Sanna Kulevska, ‘Reconceptualizing the Right to Be Forgotten to Enable Transatlantic Data Flow’ (2015) 28 Harvard Journal of Law & Technology 349, 367 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2627383> accessed 10 May 2021.
[174] Ibid.
[175] Philip E. Agre and Marc Rotenberg, Technology and Privacy (MIT Press 1997) 6 <https://jolt.law.harvard.edu/articles/pdf/v11/11HarvJLTech871.pdf> accessed 3 May 2021.
[176] Bert-Jaap Koops, ‘Forgetting Footprints, Shunning Shadows. A Critical Analysis of the “Right to Be Forgotten” in Big Data Practice’ (Script-ed.org, 2021) ch 4.2 <https://script-ed.org/?p=43> accessed 25 May 2021.
[177] American Civil Liberties Union (n 168).
