Colliding Concepts and an Immigration Case Study: Lessons on Accountability for Canadian Administrative Law from Computer Systems (Op-Ed 1 for Law 432.D Course)

I wrote this Op-Ed for my Law 432.D course titled ‘Accountable Computer Systems.’ This blog will likely be posted on the course website, but as I am presenting on a few related topics, I wanted it to be available to the general public in advance. I do note that after writing this blog, my more in-depth literature review uncovered many more administrative lawyers talking about accountability. However, I still believe we need to properly define accountability and can take lessons from Joshua Kroll’s work to do so.

 

Introduction

Canadian administrative law, through judicial review, examines whether decisions made by Government decision-makers (e.g. government officials, tribunals, and regulators) are reasonable, fair, and lawful.(i)

Administrative law governs the Federal Court’s review of whether an Officer has acted in a reasonable(ii) or procedurally fair(iii) way, for example in the context of Canadian immigration and citizenship law, where an Officer has decided to deny a Jamaican mother’s permanent residence application on humanitarian and compassionate grounds(iv) or strip Canadian citizenship away from a Canadian born to Russian foreign intelligence operatives charged with espionage in the United States.(v)

Through judicial review and subsequent appellate Court processes, the term accountability has yet to be meaningfully engaged with in Canadian administrative case law.(vi) On the contrary, in computer science accountability is quickly becoming a central organizing principle and governance mechanism.(vii) Technical and computer science specialists are designing technological tools based on accountability principles that justify their use and perceived sociolegal impacts.

Accountability will need to be better interrogated within the Canadian administrative law context, especially as Government bodies increasingly render decisions utilizing computer systems (such as AI-driven decision-making systems)(viii) that are becoming subject to judicial review.(ix)

An example of this is the growing litigation around Immigration, Refugees and Citizenship Canada’s (“IRCC”) use of decision-making systems utilizing machine learning and advanced analytics.(x)

Legal scholarship is just starting to scratch the surface of exploring administrative and judicial accountability and has done so largely as a reaction to AI systems challenging traditional human decision-making processes. In the Canadian administrative law literature I reviewed, the discussion of accountability has not involved defining the term beyond stating it is a desirable system aim.(xi)

So, how will Canadian courts perform judicial review and engage with a principle (accountability) that they hardly know?

There are a few takeaways from Joshua Kroll’s 2020 article, “Accountability in Computer Systems,” that might be good starting points for this collaboration and conversation.

 

Defining Accountability – and the Need to Broaden Judicial Review’s Considerations

Kroll defines “accountability” as “a relationship that involves reporting information to that entity and in exchange receiving praise, disapproval, or consequences when appropriate.”(xii)

Kroll’s definition is important as it goes beyond thinking of accountability only as a check-and-balance oversight and review system,(xiii) framing it also as one that requires mutual reporting in a variety of positive and negative situations. His definition embraces, rather than sidesteps, the role of normative standards and responsibility.(xiv)

This contrasts with administrative judicial review, a process that is usually only engaged when an individual or party is subject to a negative Government decision (often a refusal or denial of a benefit or service, or the finding of wrongdoing against an individual).(xv)

As a general principle that is subject to a few exceptions, judicial review limits the Court’s examination to the ‘application’ record that was before the final human officer when rendering their negative decision.(xvi) This makes judicial review a poor vehicle for seeking clarity from the Government about the underlying data, triaging systems, and biases that may form the context for the record itself.

I argue that Kroll’s definition of accountability provides room for this missing context and extends accountability to the reporting of the experiences of groups or individuals who receive the positive benefits of Government decisions when others do not. The Government currently holds this information as private institutional knowledge, with fear that broader disclosure could lead to scrutiny that might expose fault-lines such as discrimination and Charter(xvii) breaches/non-compliance.(xviii)

Consequently, I do not see accountability’s language fitting perfectly into our currently existing administrative law context, judicial review processes, and legal tests. Indeed, even the process of engaging with accountability’s definition in law and tools for implementation will challenge the starting point of judicial review’s deference and culture of reasons-based justification(xix) as being sufficient to hold Government to account.

 

Rethinking Transparency in Canadian Administrative Law

Transparency is a cornerstone concept in Canadian administrative law. Like accountability, this term is also not well-defined in operation, beyond the often-repeated phrase of a reasonable decision needing to be “justified, intelligible, and transparent.”(xx) Kroll challenges the equivalency of transparency with accountability. He defines transparency as “the concept that systems and processes should be accessible to those affected either through an understanding of their function, through input into their structure, or both.”(xxi) Kroll argues that transparency is a possible vehicle or instrument for achieving accountability but also one that can be both insufficient and undesirable,(xxii) especially where it can still lead to illegitimate participants or lead actors to alter their behaviour to violate an operative norm.(xxiii)

The shortcomings of transparency as a reviewing criterion in Canadian administrative law are becoming apparent in IRCC’s use of automated decision-making (“ADM”) systems. Judicial reviews to the Federal Court are asking judges to consider the reasonableness, and by extension the transparency, of decisions made by systems that are non-transparent – such as security screening automation(xxiv) and advanced analytics-based immigration application triaging tools.(xxv)

Consequently, IRCC and the Federal Court have instead defended and deconstructed pro forma template decisions generated by computer systems(xxvi) while ignoring the role of concepts such as bias, itself a concept under-explored and under-theorized in administrative law.(xxvii) Meanwhile, IRCC has denied applicants and Courts access to mechanisms of accountability such as audit trails and the results of reviews by the technical and equity experts who are required to assess these systems for gender and equity-based bias considerations.(xxviii)
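For readers outside computer science, an “audit trail” can be made concrete with a small sketch. The Python below is purely my own illustration – the field names, model version label, and triage outcomes are hypothetical and do not describe any actual IRCC system – showing the kind of record an ADM could log for each triage decision, so that a reviewing court could later ask which version of a system acted, on what inputs, and on what machine-readable rationale:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class AuditRecord:
    """One hypothetical entry in an ADM audit trail."""
    application_id: str
    model_version: str    # which version of the triage model acted
    inputs_digest: str    # hash of the inputs used, not the raw personal data
    triage_outcome: str   # e.g. "routed-to-officer" or "eligibility-approved"
    rationale_codes: list # machine-readable reasons behind the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def digest(inputs: dict) -> str:
    """Stable hash so a reviewer can verify which inputs were used
    without the log itself storing personal information."""
    return hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()

# A single illustrative triage decision being logged:
record = AuditRecord(
    application_id="TR-0001",
    model_version="triage-model-v2",
    inputs_digest=digest({"stream": "temporary-resident", "flags": 1}),
    triage_outcome="routed-to-officer",
    rationale_codes=["complexity-flag"],
)
print(asdict(record)["triage_outcome"])  # prints "routed-to-officer"
```

The point of such a record is not technical sophistication but answerability in Kroll’s sense: it creates the reporting relationship that later review depends on, which is precisely what is lost when such trails are withheld from applicants and Courts.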

One therefore must ask – even if full technical system transparency were available, would it be desirable for Government decision-makers to be transparent about their ADM systems,(xxix) particularly with outstanding fears of individuals gaming the system,(xxx) or, worse yet, perceived external threats to infrastructure or national security in certain applications?(xxxi) Where Baker viscerally exposed an Officer’s discrimination and racism in transparent written text, ADM systems threaten to erase the words from the page and provide only a non-transparent result.

 

Accountability as Destabilizing Canadian Administrative Law

Adding the language of accountability will be destabilizing for administrative judicial review.

Courts often repeat in Federal Court cases that it is “not the role of the Court to make its own determinations of fact, to substitute its view of the evidence or the appropriate outcome, or to reweigh the evidence.”(xxxii) The seeking of accountability may ask Courts to go behind and beyond an administrative decision, to function in ways, and to ask questions, they may not feel comfortable with, possibly out of fear of overstepping the legislation’s intent.

A liberal conception of the law seeks and gravitates towards taxonomies, neat boxes, clean definitions, and coherent rules for consistency.(xxxiii) On the contrary, accountability acknowledges the existence of essentially contested concepts,(xxxiv) the layers of interpretation needed to parse out various accountability types,(xxxv) and the consensus-building this requires. Adding accountability to administrative law will inevitably make law-making more complex. It may also suggest that judicial review may not be as effective as an ex-ante tool,(xxxvi) and that a more robust, frontline, regulatory regime may be needed for ADMs.

 

Conclusion: The Need for Administrative Law to Develop Accountability Airbags

The use of computer systems to render administrative decisions, more specifically the use of AI, which Kroll highlights as engaging many types of accountability,(xxxvii) puts accountability and Canadian administrative law on an inevitable collision course. Much like the design of airbags for a vehicle, there needs to be both technical/legal expertise and public education/awareness of what accountability is and how it works in practice.

It is also becoming clearer that those impacted by and engaging with legal systems want the same answerability that Kroll speaks to for computer systems, such as ADMs used in Canadian immigration.(xxxviii) As such, multi-disciplinary experts will need to examine computer science concepts and accountable AI terminology such as explainability(xxxix) or interpretability(xl) alongside their administrative law conceptual counterparts, such as intelligibility(xli) and justification.(xlii)

As this op-ed suggests, there are already points of contention (but also likely underexplored synergies) around the definition of accountability, the role of transparency, and whether the normative or multi-faceted considerations of computer systems are even desirable in Canadian administrative law.

 

References

(i) Government of Canada, “Definitions” in Canada’s System of Justice. Last Modified: 01 September 2021. Accessible online: <https://www.justice.gc.ca/eng/csj-sjc/ccs-ajc/06.html>. See also: Legal Aid Ontario, “Judicial Review” (undated). Accessible online: <https://www.legalaid.on.ca/faq/judicial-review/>

(ii) The Supreme Court of Canada in Canada (Minister of Citizenship and Immigration) v. Vavilov, 2019 SCC 65 (CanLII), [2019] 4 SCR 653, <https://canlii.ca/t/j46kb> (“Vavilov”) set out the following about reasonableness review:

(15) In conducting a reasonableness review, a court must consider the outcome of the administrative decision in light of its underlying rationale in order to ensure that the decision as a w،le is transparent, intelligible and justified. What distinguishes reasonableness review from correctness review is that the court conducting a reasonableness review must focus on the decision the administrative decision maker actually made, including the justification offered for it, and not on the conclusion the court itself would have reached in the administrative decision maker’s place.

(iii) The question for the Court to determine is whether “the procedure was fair having regard to all of the circumstances” and “whether the applicant knew the case to meet and had a full and fair chance to respond”. See: Ahmed v. Canada (Citizenship and Immigration), 2023 FC 72 at para 5; Canadian Pacific Railway Company v. Canada (Attorney General), 2018 FCA 69 at paras 54-56.

(iv) Baker v. Canada (Minister of Citizenship and Immigration), 1999 CanLII 699 (SCC), [1999] 2 SCR 817, <https://canlii.ca/t/1fqlk>. In Baker, a Canadian immigration officer refused Ms. Mavis Baker, a Jamaican citizen and mother of eight children, for permanent residence on humanitarian and compassionate grounds. The Officer’s notes contained inappropriate comments relating to the Applicant’s attempts to stay in Canada and her personal circumstances as a mother with mental health challenges. Among other important findings, the Court found that the Officer had acted with a reasonable apprehension of bias and contrary to the duty of procedural fairness. Justice L’Heureux-Dubé formulated a non-exhaustive five-part test for procedural fairness at paras 23-27:

  1. Nature of decision made and the process followed in making it;
  2. Nature of the statutory scheme;
  3. Importance of the decision to the individual or individuals affected;
  4. The legitimate expectations of the person challenging the decision; and
  5. Deference to the decision-maker’s c،ice of procedures.

(v) In Vavilov, the Supreme Court of Canada heard the appeal of Alexander Vavilov, born in Canada to foreign nationals who were working on assignment in Canada as Russian intelligence agents. The Canadian Registrar of Citizenship cancelled his citizenship certificate, finding that he was the child of representatives of the Russian government. As such, the Registrar found that Mr. Vavilov was exempt from the general rule that individuals born in Canada would be automatically granted Canadian citizenship. The Supreme Court of Canada found the Registrar’s interpretation unreasonable and ruled that Vavilov is a Canadian citizen. The Supreme Court of Canada heard this case as part of a trilogy of cases which re-examined the nature and scope of administrative judicial review. The decision focused on developing a revised framework for a presumptive reasonableness review of administrative decisions.

(vi) In the Supreme Court of Canada’s leading administrative law decision, Vavilov, there is only one mention of the word “accountability”, at para 13, which cautions decision-makers to be accountable for their analysis. From an operationalization perspective this tells us little about how accountability is applied or analyzed in a legal context. There is no mention of accountability in the previous leading precedential case, Dunsmuir v. New Brunswick, 2008 SCC 9 (CanLII), [2008] 1 SCR 190, <https://canlii.ca/t/1vxsm> (“Dunsmuir”), nor in the leading case on procedural fairness, Baker.

(vii) Joshua A. Kroll, “Accountability in Computer Systems” in The Oxford Handbook of Ethics of AI (2020): 181-196. Accessed online: <https://academic.oup.com/edited-volume/34287/chapter-abstract/290661049?redirectedFrom=fulltext&login=false>

(viii) Various Canadian Government agencies have published Algorithmic Impact Assessments (“AIA”) for their implementation of algorithmic decision-making systems in areas such as social benefits and immigration. See: Government of Canada, Open Government Portal. Accessed online: <https://search.open.canada.ca/opendata/?sort=metadata_modified+desc&search_text=Algorithmic+Impact+Assessment&page=1>

(ix) Kiss v. Canada (Citizenship and Immigration), 2023 FC 1147 (CanLII), <https://canlii.ca/t/jzwtx>

(x) IRCC is using these AI-based ADM systems to aid the automation of positive eligibility findings for certain temporary and permanent resident applicants, and to flag high-risk files.

(xi) Paul Daly, “Artificial Administration: Administrative Law, Administrative Justice and Accountability in the Age of Machines”, (Source not specified), 2023 CanLIIDocs 1258, Accessed online: <https://canlii.ca/t/7n4jw> at 18-26. See also, in the Australian context on judicial accountability: Felicity Bell, Lyria Bennett Moses, et al, “AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators”, The Australian Institute of Judicial Administration Inc., at 46-49.

(xii) Kroll at 184.

(xiii) Kroll discusses these concepts in his piece at 184, 186-187.

(xiv) Ibid at 184 and 192.

(xv) In addition to matters of judicial review in Federal jurisdiction (e.g., immigration, Indigenous, tax, and intellectual property decisions), there are also judicial reviews before provincial superior courts. The Supreme Court of B.C. reviews, for example, residential tenancy, motor vehicle, and workers’ compensation issues, among others. See: “What is Judicial Review”, Supreme Court BC Online Help Guide, Accessed online: <https://supremecourtbc.ca/civil-law/getting-started/what-is-jr>

(xvi) For an explanation of this rule and the context, see Stratas J.A.’s decision in Bernard v. Canada (Revenue Agency), 2015 FCA 263 (CanLII), <https://canlii.ca/t/gmb0m> at paras 13-28.

(xvii) Canadian Charter of Rights and Freedoms, s 7, Part 1 of the Constitution Act, 1982, being Schedule B to the Canada Act 1982 (UK), 1982, c 11.

(xviii) Immigration, Refugees and Citizenship Canada, “Guide de politique sur le soutien automatisé à la prise de décision version de 2021/Policy Playbook on Automated Support for Decision-making 2021 edition (Bilingual)” as made available on Will Tao, Vancouver Immigration Blog, (11 May 2023) (“Policy Playbook”), Accessed online: <https://vancouverimmigrationblog.com/guide-de-politique-sur-le-soutien-automatise-a-la-prise-de-decision-version-de-2021-policy-playbook-on-automated-support-for-decision-making-2021-edition-bilingual/> at 5.

(xix) Vavilov at paras 2 and 26.

(xx) Vavilov at paras 15, 95-96; Dunsmuir at para 47.

(xxi) Kroll at 193-194.

(xxii) Ibid.

(xxiii) Ibid.

(xxiv) Canada Border Services Agency, “Algorithmic Impact Assessment for Security Screening Automation,” Interim Release of Access to Information Act Request A-2023-18296.

(xxv) See, e.g.: Kiss.

(xxvi) See e.g. Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464 (CanLII), <https://canlii.ca/t/jwhkd>

(xxvii) One of the challenges that has arisen in the context of Baker, supra, is Officers being careful not to make explicitly biased statements and to stick to template reasons that become difficult to challenge for bias. Furthermore, there has been little exploration of the definition of bias, other than providing for a high threshold test for reasonable apprehension of bias.

(xxviii) Bias is a consideration in the Directive on Automated Decision-Making (“DADM”). See: Government of Canada, Treasury Board Secretariat, Directive on Automated Decision-Making (Ottawa: Treasury Board Secretariat, 2019), Accessible online: <https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592> (Last modified: 25 April 2023) (“DADM”). The DADM also asks a question about data in the AIA questionnaire.

(xxix) A reviewer asked a very engaging question about whether ADM systems inherently lack transparency or whether there is a lack of a mandate for ADMs to be transparent. This piece does not purport to answer this question but highlights an example of IRCC not wanting their ADM system to be transparent.

(xxx) IRCC raises this concern in their Policy Playbook at page 12.

(xxxi) National security concerns are raised in the Policy Playbook (at page 32). They are also cited in redactions to the Algorithmic Impact Assessment for Security Screening Automation. This has also led to the Government’s motions to redact portions of the Certified Tribunal Records (see e.g. Kiss at paras 25-30).

(xxxii) See e.g. Li v. Canada (Citizenship and Immigration), 2023 FC 1753 (CanLII), <https://canlii.ca/t/k2123> at para 28.

(xxxiii) See a similar critique in Patricia J. Williams, The Alchemy of Race and Rights (Cambridge MA: Harvard University Press, 1998), at 6.

(xxxiv) Kroll at 16.

(xxxv) Ibid at 184.

(xxxvi) Kroll discusses the ex-ante approach at 186.

(xxxvii) Ibid at 184.

(xxxviii) Ibid at 185-186, 189-192.

(xxxix) A helpful reviewer recommended looking at different perspectives of accountability. They specifically recommended Finale Doshi-Velez, Mason Kortz, et al, “Accountability of AI Under the Law: The Role of Explanation”, 3 November 2017, Berkman Center Research Publication, Forthcoming, Available at SSRN: <https://ssrn.com/abstract=3064761> or <http://dx.doi.org/10.2139/ssrn.3064761>. While it is beyond the scope of this paper to do a full comparison between Kroll’s and Doshi-Velez et al.’s perspectives, I note that Doshi-Velez et al. consider explanation “but one tool” to hold AI systems to account (at page 10). Doshi-Velez et al. define explanation as “a human-interpretable description of the process by which a decision-maker took a particular set of inputs and reached a particular conclusion,” and note that “an explanation should permit an observer to determine the extent to which a particular input was determinative of or influential on the output.” Similarly, Kroll discusses some of the challenges of demanding full causal explanations of human functionaries within a system (at page 187) as well as the scientific approach’s focus on full, mechanistic explanations (at page 189). I have decided to centre Kroll’s discussion, both because it was mandatory course reading material and, more importantly, because it focused on attempting to define accountability. I note here, however, that Kroll’s discussion of explanations and of answerability (at page 184) shares attributes with the way Doshi-Velez et al. discuss explanation in the societal, legal, and technical contexts (at pages 4-8). There may also be other definitions of accountability in computer science, other areas of the law (such as tort and contract), and other disciplines that should be engaged in a longer, more thorough study of accountability.

(xl) The same reviewer who recommended that I consider transparency also recommended a reading on interpretable machine learning systems. See: Carnegie Mellon University, CMU ML Blog, Accessed online: <https://blog.ml.cmu.edu/2020/08/31/6-interpretability/>. Kroll does not specifically discuss or use the word interpretability. This goes to highlight, again, the lack of definitional and terminology alignment, possibly not only between law and computer science, but within computer science itself. We see similar issues in law, with the way Courts interchange terminology and descriptors, particularly as it pertains to the reasonableness standard discussed earlier.

(xli) Intelligibility is another term that I would argue is not well-defined in Canadian administrative law but has become a commonly used moniker for a reasonable decision (alongside transparency and justification). It appears six times in Vavilov. This term may be related to the “intelligible standard” that legislatures are held to; see: Canadian Foundation for Children, Youth and the Law v. Canada (Attorney General), 2004 SCC 4 (CanLII), [2004] 1 SCR 76, <https://canlii.ca/t/1g990> at para 16, where the Court states:

A law must set an intelligible standard both for the citizens it governs and the officials who must enforce it. The two are interconnected. A vague law prevents the citizen from realizing when he or she is entering an area of risk for criminal sanction. It similarly makes it difficult for law enforcement officers and judges to determine whether a crime has been committed. This invokes the further concern of putting too much discretion in the hands of law enforcement officials, and violates the precept that individuals should be governed by the rule of law, not the rule of persons. The doctrine of vagueness is directed generally at the evil of leaving “basic policy matters to policemen, judges, and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application”: Grayned v. City of Rockford, 408 U.S. 104 (1972), at p. 109.

(xlii) Justification is a common theme in Vavilov, especially around the “culture of justification” discussed by the majority. This manifests in focusing on the decision and reasons actually made by the decision-maker (Vavilov at paras 14-15). The majority also highlighted the concept of responsive justification, whereby if a decision has a particularly harsh consequence for the affected individual, the decision-maker must explain why the decision best reflects the legislature’s intent (Vavilov at para 133).

Source: https://vancouverimmigrationblog.com/colliding-concepts-and-an-immigration-case-study-lessons-on-accountability-for-canadian-administrative-law-from-computer-systems-op-ed-1-for-law-432-d-course/
