An ethical framework for AI and humanity?

By Siani Morris

Ethics

Where does ethics come from? It’s made up by people in a cultural context; there is no standard upon which everyone would agree.

The ethics of decision-making systems is a live topic, and well-funded institutions are already conducting projects in this area.

A great deal of background information about ethical AI has already been produced[i], with sets of broadly common principles that include privacy, transparency, accountability, sustainability, etc.

Broadly speaking, these ethical frameworks, designed to influence company behaviour, can be seen as a broadening of existing privacy accountability and governance frameworks, although there are additional aspects to be considered, such as technical means to address bias.

But there is also currently quite a lot of ‘ethics washing’ going on, especially from large companies trying to water down ethical AI principles and regulation.[ii]

So some ethical frameworks will be more trustworthy than others, depending on an individual’s viewpoint and the background of the people producing them: for example, EU citizens might rate more highly the outputs of the EU ethics expert group influencing the upcoming EU AI Act.[iii]

Key elements of an ethical process

In order to devise an ethics for ‘doing the right thing’ with respect to the behaviour of an accountor (such as the developer or deployer of an AI system), criteria need to be specified for judging ‘good’, ‘bad’, ‘right’ and so on; the ethical quality of the accountor’s actions then needs to be judged against these criteria; and, where there are shortcomings, reasons need to be provided.

A core part of the accountability involved is to determine and clarify the rights and obligations of actors, including clearly allocating privacy and security responsibilities across supply chain actors and other parties involved in providing the service.

It is useful to distinguish between:

  • ethics inside AI-based systems,

  • the ethics of businesses or people developing AI-based systems, and

  • the ethics of AI deployments.

Although it is important for all three aspects to be ethical, the ethical problems involved in these three aspects may be solved using different techniques. For instance, the question of ethics inside AI seems more abstract, and current efforts fall short of expectations[iv], especially as the newer types of AI system no longer directly encode heuristics or semantics. On the other hand, the ethics of people developing AI and the ethics of deploying AI in a particular context may be more amenable to being solved using currently available techniques. There are already some techniques that have been recommended to develop AI in an ethical manner.[v][vi]

The deployment of AI-based systems requires the analysis of development techniques and social contexts, as well as the deploying organisation’s own practices.

There are different approaches to ethics that may be taken within each of these:

  • rules-based versus outcome-based,

  • virtue ethics, and

  • cultural sensitivities to be taken into account.[vii]

For example, Aristotelian human well-being / flourishing could be made central to an ethical approach when developing AI systems (as with IEEE[viii], where associated metrics are also provided).

Ethical decision making

In every aspect of human activity, different ethical approaches can be taken. Broadly speaking, these divide into

  • teleological approaches (an ethics of what is good – for example, utilitarianism) and

  • deontological approaches (an ethics of what is right – for example, using Kant’s categorical imperative).

Depending on which ethical approach is taken, the answer about the right course of action might be different.

  • A teleological decision looks at the rightness or wrongness based on the results or the outcomes of the decision.

  • A deontological decision instead considers the moral obligations and/or duties of the decision maker based on principles and rules of behaviour[ix].

Ethics in business

The ethical dimensions of productive organisations and commercial activities have been studied since the 1970s within the field of business ethics, and a number of different approaches can be taken[x], ranging from Milton Friedman’s view that corporate executives’ responsibility is generally to maximise profits while conforming to the basic rules of society, to the opposing idea of corporate social responsibility (actions by businesses that are not legally required and are intended to benefit other parties).

In order to put these ethical approaches to practical use by embedding ethics within business operations, one method is to apply alternative approaches and see the extent to which they agree. This may sound simple, but it is not necessarily an easy process.

Let us consider comparing just one form of deontological judgment with one form of teleological judgment. If the comparison shows you would be doing the wrong thing and getting poor results, it seems fairly obvious that a project in that category should not go ahead, just as it needs little thought that a project doing the right thing and achieving a good outcome is fine to proceed. However, if you regularly score highly on the deontological axis but poorly on the teleological axis, you may well go out of business, as it might simply not be financially sustainable to continue. Conversely, if you score highly on the teleological axis but low on the deontological axis, the drive for profit is taking precedence over consideration of what is (or is not) the right thing to do. There is also a zone of ethical nuance (especially along the boundaries between these categories) where the conclusion is not clear. Furthermore, during an economic slump, things can be perceived as ethically questionable that would not have been before, so this zone of nuance can shift.
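
To make the comparison concrete, here is a minimal Python sketch, purely illustrative and not part of any published framework, of the two-axis screening described above: a project is scored on a deontological axis and a teleological axis, and the clear-cut quadrants are separated from the zone of ethical nuance. All names and thresholds are assumptions for illustration.

```python
# Illustrative sketch only: screening a project on a deontological axis
# ("are we doing the right thing?") and a teleological axis
# ("are we getting a good outcome?"). Scores and thresholds are assumed.

from dataclasses import dataclass

@dataclass
class ProjectAssessment:
    name: str
    deontological_score: float  # 0.0 (clearly wrong thing) .. 1.0 (clearly right thing)
    teleological_score: float   # 0.0 (clearly poor outcome) .. 1.0 (clearly good outcome)

def screen(project: ProjectAssessment, low: float = 0.3, high: float = 0.7) -> str:
    """Return a coarse verdict; the band between `low` and `high` is the
    zone of nuance where human deliberation is needed."""
    d, t = project.deontological_score, project.teleological_score
    if d >= high and t >= high:
        return "proceed: right thing, good outcome"
    if d <= low and t <= low:
        return "do not proceed: wrong thing, poor outcome"
    if d >= high and t <= low:
        return "review viability: principled but may not be financially sustainable"
    if d <= low and t >= high:
        return "review principles: profit is taking precedence over what is right"
    return "zone of ethical nuance: requires stakeholder deliberation"

if __name__ == "__main__":
    print(screen(ProjectAssessment("automated triage pilot", 0.8, 0.2)))
```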

Moreover, there tend to be different kinds of ethical perspectives in different types of organisation. For instance, guardian roles (such as regulators) seem to favour a deontological culture, whereas commercial institutions seem to favour a teleological culture, and other actors (such as activists and technologists) may favour virtue ethics. Broadly speaking, governmental policy makers have outcome-based ethics, like commercial organisations, but are interested in economic and developmental outcomes at the national or regional level rather than the organisational level. Citizens have their own ethical frameworks.

These different ethical frameworks and potentially conflicting objectives can make designing ethical codes of practice for the configuration and commercial use of new technologies difficult. A code of practice could fail if it is unacceptable to any of these types of stakeholder. It must provide adequate protection of individuals’ rights and interests. It must also give guidelines and assist with compliance with laws and regulations, be practical for information technologists to follow, and allow innovative new mechanisms to achieve their potential for driving socially and economically beneficial applications.

In addition, it is beneficial to take into account a more nuanced understanding of “harm”, including risk, potential harm, and forms of harm other than just physical and financial. In the data protection sphere, this is partly accounted for within the notion of Data Protection Impact Assessments (DPIAs), which extend standard security risk analysis to also examine harms to the data subject arising from a proposed activity. In carrying out such an assessment, the summation of harm across society needs to be properly evaluated and justified; otherwise there is a risk that the potential harm to a single individual, as measured by an organisation with a particular activity in mind, will tend to be overridden by other concerns.
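
As a purely illustrative sketch of the aggregation point above (not an actual DPIA methodology), the following Python fragment sums expected harm across all affected groups rather than considering only the risk to a single individual; all class and field names are hypothetical.

```python
# Illustrative only: aggregate expected harm across society, so that many
# small per-person risks are not overlooked. Names and figures are assumed.

from dataclasses import dataclass

@dataclass
class AffectedGroup:
    description: str
    size: int                 # number of data subjects in the group
    harm_likelihood: float    # 0.0 .. 1.0, probability of the harm occurring
    harm_severity: float      # 0.0 .. 1.0, severity if it occurs (any form of harm)

def per_person_risk(group: AffectedGroup) -> float:
    return group.harm_likelihood * group.harm_severity

def societal_harm(groups: list[AffectedGroup]) -> float:
    # Summation across society: small per-person risks can still add up
    # to a large aggregate harm when many people are affected.
    return sum(per_person_risk(g) * g.size for g in groups)

if __name__ == "__main__":
    groups = [
        AffectedGroup("users of the service", 500_000, 0.01, 0.2),
        AffectedGroup("members of a vulnerable minority", 5_000, 0.10, 0.8),
    ]
    print(f"aggregate expected harm: {societal_harm(groups):.1f}")
    print(f"max per-person risk: {max(map(per_person_risk, groups)):.3f}")
```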

Different countries offer examples of ethical frameworks for decision making in contexts particularly influenced by recent technological developments.

But many initiatives that purport to define principles and frameworks for the ethical development and deployment of AI are not truly independent, being sponsored by corporations.

Whose ethics?

But whose ethics would be used in an AI system? There is no universally accepted ethical framework. Even if different nations or parties were to agree on one, it would not be enforceable, and people could gain a false sense of security, thinking things were fine. The issue of the companies that ultimately control the usage of AI, and their self-interested motives, would remain.

How can we account for the subjectivity of ethical judgment? Solutions might include allowing customisation or tailoring, taking inputs from the different parties involved, or a process protecting the rights and interests of the less powerful (as suggested for algorithmic impact assessment procedures[xi]). Another approach that has been tried is the BCS framework[xii], a meta-framework that allowed different ethics approaches to be slotted in, although the realisation of this system was never completed. Discussion amongst affected and involved parties should be part of the process: there are some promising current initiatives, such as the Open Data Institute’s Data Ethics Canvas[xiii] and others of a similar nature, which provide a framework for stakeholder discussion and information capture.
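
The following Python sketch illustrates, in general terms, how a meta-framework can allow different ethical approaches to be ‘slotted in’: each approach implements a common interface, and the same proposal is reviewed through several lenses whose verdicts can then be compared. It assumes nothing about the actual design of the BCS framework; all names are hypothetical.

```python
# Illustrative sketch of a plug-in meta-framework: each ethical approach
# implements the same interface, so verdicts can be compared side by side.

from typing import Protocol

class EthicalLens(Protocol):
    name: str
    def assess(self, proposal: dict) -> str: ...

class UtilitarianLens:
    name = "teleological (utilitarian)"
    def assess(self, proposal: dict) -> str:
        return "acceptable" if proposal.get("expected_net_benefit", 0) > 0 else "questionable"

class DutyBasedLens:
    name = "deontological (duty-based)"
    def assess(self, proposal: dict) -> str:
        return "questionable" if proposal.get("breaks_stated_rules", False) else "acceptable"

def multi_lens_review(proposal: dict, lenses: list[EthicalLens]) -> dict[str, str]:
    # One verdict per slotted-in approach; disagreement signals the need
    # for stakeholder discussion rather than an automatic answer.
    return {lens.name: lens.assess(proposal) for lens in lenses}

if __name__ == "__main__":
    proposal = {"expected_net_benefit": 12.5, "breaks_stated_rules": True}
    for name, verdict in multi_lens_review(proposal, [UtilitarianLens(), DutyBasedLens()]).items():
        print(f"{name}: {verdict}")
```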

Some of these aspects and principles are easier to embed into AI systems than others, depending for example on the nature of the AI system (rules-based, machine learning (ML), etc.). Virtue ethics may be used as a regulating force as goal-directed behaviour becomes internally set. Techniques influenced by ethical considerations include providing flowcharts to encourage developers to think about ethical aspects at appropriate points, providing a common vocabulary for the different parties involved, educating developers, giving the people affected a choice and a say, banning certain types of development absolutely, and so on.

We should also put in place mechanisms to address the consequences of AI in a more ethical way, such as introducing a Universal Basic Income at a decent level as jobs come increasingly under threat, and making sure that the money to pay for it comes from those benefitting most from the introduction of the new technology.

Substantive Basis for Ethics

How can we provide a substantive basis for ethical judgment? Most ethical frameworks do not provide one; instead they offer a list of values for discussion, which can prove inadequate for independent assessment of a proposal. Domain by domain, it might be possible to take an approach analogous to that described by Gary Marx for the surveillance domain (as an example of the new types of activity around enhanced surveillance made possible by new technology)[xiv]; he shows how societal norms can inform an ethics for surveillance. To do this, he asserts that the ethics of a surveillance activity must be judged according to the means, the context and conditions of data collection, and the uses or goals, and he suggests 29 questions related to these. The more one can answer these questions in a way that affirms the underlying principle (or a condition supportive of it), the more ethical the proposed activity is deemed to be. In this approach, four conditions are identified that, when breached, are likely to violate an individual's reasonable expectation of privacy; respect for the dignity of the person is a central factor; and emphasis is put on the avoidance of harm, validity, trust, notice, and permission when crossing personal borders.
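
As a small illustration of this scoring idea, the sketch below tallies how many affirmative answers a proposed activity receives; the questions shown are paraphrased placeholders, not Marx’s actual 29.

```python
# Illustrative sketch of question-based ethical scoring: the larger the
# share of answers that affirm the underlying principle, the more ethical
# the proposed activity is deemed to be. Questions are placeholders.

QUESTIONS = [
    "Are the means of data collection proportionate to the goal?",
    "Has the person been given notice of the collection?",
    "Has permission been obtained before crossing personal borders?",
    "Is the data valid for the stated purpose?",
    "Is harm to the person avoided?",
]

def ethics_score(answers: dict[str, bool]) -> float:
    """Fraction of questions answered in a way that affirms the principle."""
    affirmed = sum(1 for q in QUESTIONS if answers.get(q, False))
    return affirmed / len(QUESTIONS)

if __name__ == "__main__":
    answers = {q: True for q in QUESTIONS[:3]}  # only three principles affirmed
    print(f"proportion of principles affirmed: {ethics_score(answers):.2f}")
```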

We need ethical frameworks for AI

Nevertheless, despite these problems, and despite the risk that any framework becomes a kind of straitjacket to be kicked against, if we want AI to exist in our world then we do need principles and frameworks that try to control its usage, as the alternative is even worse.

Overall, the recent document produced by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems on Ethically Aligned Design provides a very good consideration from a ‘neoclassical’ engineering perspective of how both classical Western and non-Western ethical approaches may be taken into account to benefit AI development and deployment.[xv]

One example solution: Quaker Ethics

As an organisation influenced by (although not centrally tied to) Quaker principles, JAAG considers a good starting point for an ethical framework to be Quaker business ethics[xvi], whose central principles are truth and integrity (including speaking truth to power); justice, equality and community; simplicity; peace; the light within/individual moral compass.

With regard to AI ethics, there are several aspects that should be considered in a given context; these include the truth of the outputs of the system (including replication of bias and ability to detect and correct inaccurate information); social justice and effects on vulnerable groups; sustainability; and not contributing to excessive inequalities of wealth distribution. In addition, certain activities should just not be allowed, including those that threaten peace and community.

Oversight

And who will enforce and judge this properly? Much can happen ‘under the radar’, regulators are typically very under-resourced, and there will be some bad (or otherwise-focused) actors not willing to engage at all. Licensing and trust marks issued by trusted third parties that check what is actually happening inside the AI black boxes and beyond could be useful. But ultimately, who funds, and who guards the guardians?

Brent Mittelstadt’s research[xvii] on “ethical AI” has shown that principled self-regulation, via agreed ethical principles, cannot be relied upon to be effective. These findings match our own experience.

Based on long experience with assessing safety-related systems, we are proposing not only standardisation and independent certification but also the use of protections at each relevant stage of a system’s lifecycle.

A future possibility: Automating techniques for ethical checking

It is interesting to speculate how the cause of a problem may also be used to help with its solution. At present, assessing AI-based systems and their deployment relies on exhaustive investigation of physical and digital artefacts by human beings. However, it is well within the realm of possibility that techniques to analyse digital artefacts and draw conclusions about the AI creation and deployment process will soon be available.[xviii] In our opinion, there will be increasing efforts to automatically identify digital artefacts that could be used to infer ethically problematic practices. Initially, these might be limited to generating reports that are then evaluated by a human. However, as with automated testing, natural language generation and theorem proving, computer science has a history of automating small tasks that build on each other to perform a complex operation. This may result in more sophisticated tools that can eventually perform deeper analysis of a company’s artefacts to infer ethical values, just as the ISO 9000 series of certifications allows us to infer product quality from process quality. We would speculate that simpler tasks, such as checking whether datasets have been de-biased, would be the first milestone in this endeavour, with other practices following suit. Automation thus functions as a marker of human ingenuity, where we dissociate the task from the intelligence required to do it by breaking it down into simpler, objectively evaluated parts.
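
As a hedged illustration of such a ‘first milestone’ check, the sketch below flags a dataset for human review when the positive-outcome rate differs markedly between groups (a demographic parity gap). This is one simple fairness heuristic among many, not a complete de-biasing audit; the column names and threshold are assumptions.

```python
# Illustrative sketch: a simple automated check that generates a signal
# for human review, rather than making the ethical decision itself.

from collections import defaultdict

def demographic_parity_gap(rows: list[dict], group_key: str, label_key: str) -> float:
    """Largest difference in positive-label rate between any two groups."""
    positives, totals = defaultdict(int), defaultdict(int)
    for row in rows:
        g = row[group_key]
        totals[g] += 1
        positives[g] += 1 if row[label_key] else 0
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

def flag_for_human_review(rows: list[dict], threshold: float = 0.1) -> bool:
    """Report-first stage: flag the dataset rather than decide for us."""
    return demographic_parity_gap(rows, "group", "hired") > threshold

if __name__ == "__main__":
    dataset = [
        {"group": "A", "hired": True}, {"group": "A", "hired": True},
        {"group": "A", "hired": False}, {"group": "B", "hired": False},
        {"group": "B", "hired": False}, {"group": "B", "hired": True},
    ]
    print("gap:", round(demographic_parity_gap(dataset, "group", "hired"), 2))
    print("needs review:", flag_for_human_review(dataset))
```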

See also the companion blog post “The Ethical dilemmas posed by AI”

Notes

[i] See for example the list available from the Institute for Ethical AI and Machine Learning: EthicalML/awesome-artificial-intelligence-guidelines. This repository aims to map the ecosystem of artificial intelligence guidelines, principles, codes of ethics, standards and regulation.

[ii] The Ethics of AI Ethics: An Evaluation of Guidelines - Thilo Hagendorff’s research paper analysing multiple ethics principles proposed by different parties.

[iii] European Commission's Guidelines for Trustworthy AI - The Ethics Guidelines for Trustworthy Artificial Intelligence (AI) is a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This independent expert group was set up by the European Commission in June 2018, as part of the AI strategy announced earlier that year.

[iv] Nallur, V., 2020. Landscape of Machine Implemented Ethics. Science and Engineering Ethics, Vol. 26, pp. 2381–2399. https://arxiv.org/abs/2009.00335

[v] Heidari, H. et al., 2018. A moral framework for understanding of fair ML through economic models of equality of opportunity. arXiv:1809.03400.

[vi] Anderson, M. et al., 2019. A value-driven eldercare robot: Virtual and physical instantiations of a case-supported principle-based behavior paradigm. Proceedings of the IEEE, Vol. 107, No. 3, pp. 526–540; Gebru, T. et al., 2020. Datasheets for Datasets. arXiv:1803.09010 [cs]. Available at: http://arxiv.org/abs/1803.09010

[vii] Harris, I.: Commercial Ethics: Process or Outcome? Gresham Lecture, London, 6 Nov (2008).

[viii] IEEE's Ethically Aligned Design - A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, which encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. See for example ead1e.pdf (ieee.org).

[ix] More information about the various sub-approaches and philosophers in this taxonomy of commercial ethics is given in Harris’s lecture.

[x] Moriarty, J.: Business Ethics. Stanford Encyclopedia of Philosophy, November (2016).

[xi] https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/

[xii] Harris, I., Jennings, R.C., Pullinger, D., Rogerson, S., Duquenoy, P.: Ethical assessment of new technologies: a meta-methodology. J. Inf. Commun. Ethics Soc. 9(1), 49–64 (2010). Emerald Group Publishing.

[xiii] https://theodi.org/article/data-ethics-canvas/

[xiv] Marx, G.T.: An ethics for the new surveillance. Inf. Soc. 14(3), 171–186 (1998).

[xv] ead1e.pdf (ieee.org)

[xvi] Quaker Business Ethics Principles: The Quakers and Business Group (hubble-live-assets.s3.amazonaws.com)

[xvii] Principles alone cannot guarantee ethical AI, by Brent Mittelstadt, Oxford Internet Institute and Turing Institute, 2019; and Counterfactual explanations without opening the black box: automated decisions and the GDPR, by Sandra Wachter, Brent Mittelstadt and Chris Russell.

[xviii] Nallur, V., Lloyd, M. and Pearson, S.: Automation: An Essential Component Of Ethical AI? In ICT, Society and Human Beings, Vol. 15, pp. 229–232, July 2021.
