Securing an Ethical Protection of Data Privacy

Lauren Keegan
Apr 6, 2021

Preface: This paper was the culmination of my summer 2019 research grant on AI ethics from the Lewis & Clark Dean of Students’ Office, and the term paper for the fall 2019 philosophy course based on my research with my professor, Joel Martinez. I consider it my senior thesis even though it was not formally listed as such on my transcript. Though some of the sources may be outdated, I wanted to share it as one of my favorite papers of my undergraduate academic career.

INTRODUCTION

As the abilities of computing machinery, artificial intelligence, and machine learning become more advanced, our relationship with data and the effects of its processing change. Given the power of this technology, it is important to pay attention to what is at stake, what can be achieved, and what is to be avoided at all costs. We can accomplish amazing feats, or commit heinous crimes. Among the most important of these issues are concerns about personal data privacy and its use. In developed nations, it is almost mandatory, if not entirely inevitable, to give up certain pieces of personal information. But how much ought we to give up? What is at stake when we give out personal information, or when it is processed in ways that we don’t know about or don’t agree with? What ethical recourse do we have in cases where this processing reaches a problematic point?

My concern is with attempts to secure an ethical right to data privacy. Situated within a modern context, the protection that we seek must be as universal as possible. It must be viscerally apparent, and it must address all the technologies and all the agents that occur within the ecosystem of personal data processing, which reaches new ground every minute. I will analyze the work of Luciano Floridi, who offers an ontological grounding for his theory of information ethics that is meant to secure the right to data privacy. I will explore his arguments in depth and examine what, if anything, is relevant and helpful in our quest to understand the ethics of data privacy. I argue that his position is insufficient to deal with the complexities of modern technology, and that his arguments against both traditional normative ethical theories and existing applied ethical theories cut against his own position. I theorize that a pluralistic applied approach to the protection of data privacy is the best way to understand the ethical issues that we face and to protect against possible ethical harm.

LUCIANO FLORIDI’S INFORMATION ETHICS

Luciano Floridi, professor of philosophy and ethics of information at the Oxford Internet Institute, has been working on this problem for over 20 years. He argues that the three main normative ethical theories, Consequentialism, Deontology, and Virtue Ethics, are insufficient to deal with the problems that arise from our age of enhanced data processing and personal data use by information and communication technologies. Prior to 1999, the traditional means of dealing with ethical issues in computing was Computer Ethics, which was viewed as a form of casuistry, a professional ethics that was not well grounded in theory. It was criticized for being too field-specific and not applicable to wider contexts. Computer Ethics dealt mainly with how humans use computers, and did not quite capture the complexities of ethical issues that stem from non-human things, like complex computing technology.

For example, if an artificially intelligent agent were to commit cybercrime against a bank, how would we deal with this? How would we even assign responsibility in the first place, when human actors are only indirectly involved? Floridi’s principal criticism is that the main three normative ethical theories (Consequentialism, Deontology, and Virtue Ethics) are too anthropocentric to deal with the novel ethical concerns that arise when such data processing is introduced. They also tend to focus on the actor in an ethical situation, rather than the recipient of that action, or other involved parties.

Other forms of applied ethics are addressed as well. Bioethics, medical ethics, environmental ethics, and land use ethics fall prey to the same problems as the three main normative ethical theories, but to a lesser degree. They focus more on the recipient of the action than on the actor, but they still show anthropocentric tendencies through their bias toward living things. The only exception is land use ethics, which comes closest to the object-oriented ontology that Floridi proposes, though he categorizes these more as theories of nature and space than as ethical theories. They still embody a dichotomy of having only an agent and a patient in every ethical situation; the problems that arise in the ethics of information privacy may not have these bounds, so these theories cannot address the full range of questions. Floridi emphasizes that a proper theory must be both universal and impartial, and holds that information ethics, through its ontological grounding, avoids all the problems into which the three main normative ethical theories and the main applied ethical theories fall.

ONTOLOGY

Floridi adopts an object-oriented ontology, a term taken directly from computer science. Object-oriented programming is a style of writing code that organizes a program's functionality around objects. His ontology is analogous in that everything that exists is an object, the simplest form an existing thing can take. Objects have states, qualities that they possess, as well as behaviors, methods or actions that they can perform. For example, my water bottle is an object that has the state of being gray and the behavior of holding water. Every object has the bare minimum status of being an informational entity: even at its lowest possible state of being an object, that object has information as part of its state. A speck of dust has the bare minimum informational state of being a speck of dust.
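
To make the computing analogy concrete, here is a minimal sketch in Python of the water-bottle example above. The class and method names are my own illustration, not anything Floridi defines; the point is only that an object bundles states (attributes) with behaviors (methods):

```python
# Illustrative only: an "object" with states (attributes) and
# behaviors (methods), mirroring the water-bottle example.

class WaterBottle:
    def __init__(self, color: str, capacity_ml: int):
        # States: qualities the object possesses.
        self.color = color
        self.capacity_ml = capacity_ml
        self.contents_ml = 0

    def fill(self, amount_ml: int) -> None:
        # Behavior: an action the object can perform (holding water).
        self.contents_ml = min(self.capacity_ml, self.contents_ml + amount_ml)

    def describe(self) -> str:
        # Even this description is information: the object's states
        # are its bare-minimum informational content.
        return f"a {self.color} bottle holding {self.contents_ml} ml"

bottle = WaterBottle(color="gray", capacity_ml=750)
bottle.fill(500)
print(bottle.describe())  # "a gray bottle holding 500 ml"
```

What the class analogy captures is that, on Floridi's picture, to be an object at all is to have states that carry information.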

Each object’s states and behaviors make up something called an informational envelope, which can be conceptualized as a filter around the object through which it interacts with other objects. For a person, this envelope might be made of skills, beliefs, attitudes, learning styles, and experiences: anything you would draw on, upon encountering new information, to process that information. This envelope is constantly changing and evolving. If I learn to ice skate, my envelope now includes ice skating, and that is a way that I interact with the world. If I want to learn hockey, I use what is in my envelope, namely ice skating, to be able to play hockey. My envelope will continue to grow and evolve as I have these informational interactions. The envelope can deteriorate as well as grow: if I don’t practice my Spanish, I will no longer be able to interact in the same way with information encoded in Spanish.
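
The envelope's dynamics can be sketched the same way. Below is a hypothetical illustration (again my own construction, not Floridi's formalism) of an envelope as a mutable collection of capacities that grows through informational interactions, builds new skills on existing ones, and deteriorates without practice:

```python
# Hypothetical sketch: an informational envelope as a mutable set of
# capacities that grows, builds on itself, and can deteriorate.

class Envelope:
    def __init__(self, skills: set):
        self.skills = set(skills)

    def learn(self, skill: str, prerequisites: frozenset = frozenset()) -> bool:
        # New capacities build on what the envelope already contains:
        # ice skating is what lets me take up hockey.
        if prerequisites <= self.skills:
            self.skills.add(skill)
            return True
        return False

    def forget(self, skill: str) -> None:
        # Without practice, part of the envelope deteriorates.
        self.skills.discard(skill)

me = Envelope({"ice skating", "Spanish"})
me.learn("hockey", prerequisites=frozenset({"ice skating"}))  # succeeds
me.forget("Spanish")  # can no longer process Spanish-encoded information
print(me.skills)      # now holds ice skating and hockey
```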

Whatever reaches an object is something relevant to that object; it has to be informationally near enough for its relevance to register. The information that flows between objects is part of the infosphere, a term encompassing all objects and informational entities in existence. The relevant information that reaches an object belongs to its info region, the area of the infosphere that the object occupies. For example, information about the fishing habits of a penguin in Antarctica usually does not reach a redwood tree in California, because they occupy such different regions of the infosphere, with so little informational connection between them.

The benefit of this object-oriented ontology is that it does not treat humans as the be-all and end-all of ethical consideration. A computer can be just as much an informational agent as a human, without our ascribing human qualities to the computer. Because informationality is placed in direct correspondence with ethics, the treatment of information becomes an inherently ethical matter. Floridi constructs the theory this way precisely to avoid the pitfalls he identified in all the other forms of ethics: the resulting theory is meant to be universalizable and impartial.

ETHICS

As informational entities, agents have a duty, or goal, to contribute to the improvement of the infosphere. They also have a duty to prevent entropy in the infosphere. Entropy is described as “…a semantic, not a syntactic concept, and, as the opposite of information capacity, it indicates the decrease or decay of information leading to absence of form, pattern, differentiation or content in the infosphere.” Entropy is characterized by confusion, or loss of information. It is most akin to entropy in computer science, also called noise: any data that is irrelevant to, or obstructs, information processing.
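
For contrast, the syntactic notion that the quote distinguishes Floridi's concept from is Shannon's, where the entropy of a source X with outcome probabilities p(x) measures average statistical uncertainty. I add the formula here only for comparison; it is not Floridi's definition:

```latex
% Shannon (syntactic) entropy of a discrete source X:
H(X) = -\sum_{x} p(x) \log_2 p(x)
```

High Shannon entropy means unpredictability in a signal; Floridi's semantic entropy instead tracks the decay of form, pattern, and content in the infosphere.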

There are four basic moral laws that outline what it is to be a caring and responsible moral agent in the infosphere:

0. entropy ought not to be caused in the infosphere (null law)

1. entropy ought to be prevented in the infosphere

2. entropy ought to be removed from the infosphere

3. information welfare ought to be promoted by extending (information quantity), improving (information quality) and enriching (information variety) the infosphere.

These laws are listed in ascending order of moral importance. An action is commendable if it is in accordance with the null law, that is, if it does not cause entropy. Good is non-monotonic and resilient: it exhibits fault tolerance and error recovery. Evil is monotonic and causes entropy. When we think of an evil action, such as torturing an innocent child, no subsequent action can reverse the damage done, just as entropy cannot be reversed, since it represents a loss of order in a system that cannot be regained.

Each of the three main normative ethical theories can be used in conjunction with information ethics, but information ethics is distinct from all three. There is a resemblance between information ethics and deontology, but Floridi argues that they are distinct enough to be separate. He asserts that the Kantian categorical imperative does not hold in a computer ethics context: it cannot account for non-human agents, or for ethical issues concerning more than two agents.

PRIVACY

There are four types of privacy: physical, mental, decisional, and informational; we focus on the fourth. Given how information is treated in our extremely informational age, and the impact of Information and Communication Technologies (ICTs), agents are constituted by their information, by the envelope that we use to interpret the information that comes at us from the infosphere. We are this packet of information that encapsulates us. A change in this packet is a change in us, because we are made of it. It is private, we are responsible for it, and it is under our control. When privacy is violated, a piece of that packet is taken and copied; it is no longer unique, or under our control. This alienates that part of ourselves from ourselves, and that part “dies.” It is the same “moral death” experienced in tragic dilemmas, where we have little to no freedom and limited information, such as the Trolley Problem, Sophie’s Choice, or Jim and the 20 Indians. Acts violating informational privacy are then viewed as the destruction of part of an agent’s identity, an increase in entropy and therefore an act of evil, effectively akin to a sort of underhanded murder. We pay a certain price to society by having some of our information made public, but anything taken beyond this price is subject to this interpretation.

In his 2005 paper, “The Ontological Interpretation of Informational Privacy,” Floridi describes the reductionist and ownership-based theories of information privacy. The reductionist theory is utility-based and holds that respecting privacy matters because of what a violation can be reduced to: a breach of privacy causes discomfort, and therefore disutility. The ownership-based theory says that rights to property must be respected, and that treating information as owned property is how we will secure privacy. The reductionist theory is consequentialist and has difficulty dealing with public-good concerns; the ownership-based theory is deontological and has difficulty dealing with information contamination, as well as with public versus private cases. Because each theory is so firmly grounded in its parent framework, the structural concerns of consequentialism or deontology can override the central goal of protecting information for its own sake. The ontological interpretation does not have these problems, and it skirts the ownership problems by holding that information has more to do with belonging than with ownership: the “my” in “my information” is the same “my” as in “my hand,” and different from the “my” in “my car.” This we already understand from his earlier ontological grounding and the object-oriented theory of information ethics.

Rafael Capurro also advances an ontological theory of information privacy. He responds directly to Floridi’s ontology, arguing that an onto-centric account is incorrect and leaves Floridi vulnerable to problems concerning the infosphere; Capurro advances a being-centric approach instead. Daniel Susser takes a similar departure from Floridi’s stance, arguing that a control-based theory of informational privacy is misguided and that people instead produce and manage public identities through ‘social self-authorship’. The problem created by information technology is that it renders social self-authorship both unnecessary and invisible: it obviates the need to author our public identities, and it blocks our ability to exercise this authorship.

PROBLEMS

There are unanswered questions in the Information Ethics account and its ontological grounding. It is unclear in exactly what sense information constitutes an object. Whether this constitution is full or partial is never made clear, and if it is partial, the non-informational components of an object’s constitution are never articulated. If an entity’s status as an informational object is what allows us to conceive of information in such a way as to interpret privacy violations as acts of aggression toward one’s identity, it is important to be able to fully account for this status. Otherwise, a loophole could be exploited and privacy violated, even under a perfect application of the theory.

Secondly, the theory’s claim to stand apart from the three normative ethical theories seems to disintegrate when we break it down into its components. The account of minimizing entropy bears a striking resemblance to minimizing disutility. The four basic moral laws of the infosphere, and an agent’s duty to follow them, are highly deontological. Being a good agent in the infosphere requires that the states and behaviors of the agent be ethically good, which amounts to developing the character traits of a “caring and responsible moral agent,” as virtue ethics would have it. Each component corresponds to a cherry-picked aspect of one of the three main normative ethical theories, and the same result could have been achieved with a pluralistic application.

One of Floridi’s main criticisms of the three normative ethical theories is their anthropocentrism, yet his ensuing works use his own position to secure special rights to privacy for humans. The clearest evidence lies in a paper titled “On Human Dignity as a Foundation for the Right to Privacy.” The focus there is solely on how humans ought to be a special case in information-ethical concerns, since we process and possess information in unique ways. This does not appear to be in line with his stated goals of universality and impartiality.

Most of the explanation of the ethical considerations relies heavily on the terms “respect” and “dignity.” The word choice is never fully qualified. Semantically, these terms are too thick to be brushed off as mere placeholders; they carry weight, and are used as though they do. Where does Floridi derive them from? We might interpret “dignity” to mean “worthy of the highest moral status,” and “respect” to mean “worthy of moral consideration.” But these notions are never worked into his ethical theory, nor is the reason why one ought to be a “caring and responsible moral agent” in the infosphere in the first place.

If we refer back to the initial account of why current ethical theories are unequipped to deal with the problems that information and communication technologies present, they were portrayed as anthropocentric and narrowly agent-focused. From this, we can glean that the main challenge is applying a human-centric theory to a non-human context. But Information Ethics merely flips the script: it applies a nonhuman-centric theory to a human context. The evidence lies in the cascade of Floridi’s work over the last two decades, in which humans still stand in the spotlight.

SOLUTIONS

There are two directions we can go in. The first is creative application of existing theories; the second is a more intuitive ontological appeal that may work in a situated context.

If the difficulty is in the application of our current theories, why not get creative with the application? When the conceptual challenges of the ethics of data privacy are this great, we need to be comfortable with our weapons and know how to use them. Requiring every instance of privacy protection to rest on an understanding of an object-oriented ontology as well as a normative ethical theory seems too great a hurdle in an already complex situation.

There are two well-developed theoretical tools that would greatly aid this creative application, accomplishing everything that Information Ethics sets out to solve, but with more established foundations. The first is information theory. Already well developed in epistemological contexts, information theory can help account for the informational nature of things without grounding it in a complex ontology. Coupling something like Shannon’s Mathematical Theory of Communication with a normative ethical analysis would secure the informational character Floridi wants. This has been done in other areas, as Fred Dretske did in securing an externalist theory of justification in Knowledge and the Flow of Information.
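
As a rough illustration of what such a coupling might look like (my own hedged sketch, not a proposal from Dretske or Floridi), Shannon entropy can put a number on how identifying a given attribute of personal data is, which a normative analysis could then weigh:

```python
# Hedged sketch: Shannon entropy as a measure of how identifying an
# attribute of personal data is across a toy population. Higher
# entropy means the attribute narrows down identity more sharply.

import math
from collections import Counter

def shannon_entropy(values: list) -> float:
    """Entropy in bits of the empirical distribution of `values`."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy data: a common attribute versus a nearly unique one.
blood_types = ["O+", "A+", "O+", "B+", "O+", "A+", "O+", "AB-"]
zip_codes = ["97219", "97202", "10027", "60615", "97219", "02139", "94110", "73301"]

print(f"blood type entropy: {shannon_entropy(blood_types):.2f} bits")  # 1.75
print(f"ZIP code entropy:   {shannon_entropy(zip_codes):.2f} bits")    # 2.75
# A normative ethical analysis could treat disclosures of
# higher-entropy attributes as weightier privacy harms.
```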

The second is the notion of informed consent. Bioethics and medical ethics have both had great success in relying on informed consent to supplement ethical analyses. It may even secure the wrongness of taking my information in the “my hand” sense: if you take my hand without my informed consent, you wrong me. This can be applied to all forms of privacy, not just informational privacy, and would also help treat privacy issues that arise in medical contexts.

The biggest difficulty is acceptance of the ontological foundation of Information Ethics in our current technological context. Thinking long and hard about the ontology enables a reader to adopt the perspective that everything is constituted by information, but if we are to apply it in the contexts we wish to, it needs to be viscerally apparent to shareholders and stakeholders alike, as well as to experts. Media coverage of the existential threat of AI and the rapid proliferation of ICTs makes these concerns visible, but not visible enough to support so strong a theoretical approach. Floridi triumphs by avoiding casuistry, but this deficit remains. We need to bring theory and practice closer together without falling too hard into either extreme. This is not to say that the ethical foundation itself is inapplicable; I think Floridi succeeds in conceiving of ethical entities in an informational way.

Floridi’s theory of informational privacy accomplishes a great feat: it shows the gravity of the price we pay to technologies that use our data in unknown ways, and on an unprecedented scale. The price we pay to society in having our information taken or made public is becoming too high, like runaway inflation; the system demands parts of ourselves and sustains itself through these taken parts.

Artificial intelligence creates an opacity problem concerning privacy. The way our information is being treated is opaque to us, and there is conceptual and practical difficulty in dispelling this opacity. We do not even know which parts of ourselves are being lost to entropy, and there is no real way to find out. Complete transparency would render useless many of the systems we rely on, and even if it were a viable solution, we would not know how to implement it. Floridi has written papers on transparency, in which he may deal with this concern; I will be analyzing them soon to understand his position.

Information ethics is clearly well established and relevant in the literature, and it makes privacy violations a clear moral wrong, yet there are still rampant violations of privacy that seem impervious to the theory, violations that have not been around as long as the theory has. Is it that no one has dealt with them in this context? It is clear that Floridi has been popularizing his approach in recent years: his articles grow briefer and use less specialized terminology as the years go on. This may reflect his own recognition that a more case-based application requires greater popularity and public exposure, since we need to understand the importance of information in order to conceptualize privacy this way in a situated modern context. It calls into question whether the breadth of ontocentrism is actually accomplishing what we need.

CONCLUDING REMARKS

We are sometimes uncertain about whether certain uses of personal information in large data processing systems, such as those aided by artificial intelligence and machine learning, commit some form of ethical wrong. There is a visibly growing trend in the media voicing concern over this, as technology asks for more and more information from us, and gets better and better at using it. These concerns reach from personal recommendation systems to financial, medical, navigation, and transportation systems. The media has focused on issues like the Cambridge Analytica scandal involving Facebook, uncanny targeted online advertisements, the Amazon Echo’s access not only to voice commands but to entire home systems, and a boom in privacy and protection software alongside the growth of the cybersecurity industry. The New York Times even has an entire series on privacy, called The Privacy Project.

These all reflect that people care about what happens to their data. There’s something unsettling about one’s personal data being treated in unknown or undesired ways. This may be a mere intuition, but it is widely shared and ought to be explored further as a potential issue. Articulating what exactly is going on here and what exactly needs to be done is a challenging battle that cannot be won in a single treatment of the issue. However, we must chip away at the unknown in order to ensure that no massive ethical wrongs are being committed. As we seek to increase our understanding, we must dig deep into the components of this conundrum and how to treat them using the best tools we have.

________________________________________________________________

WORKS CITED

Capurro, Rafael. “Towards an Ontological Foundation of Information Ethics.” Ethics and Information Technology 8(4): 175–186 (2006).

Capurro, Rafael. “On Floridi’s Metaphysical Foundation of Information Ecology.” Ethics and Information Technology 10(2–3): 167–173 (2008).

Drake, Peter. Computer Science II Lecture. Lecture, Lewis & Clark College, Fall 2019.

Dreisbach, Sandra. Bioethics Lecture. Lecture, University of California, Santa Cruz, Fall 2016.

Dretske, Fred. “Précis of Knowledge and the Flow of Information.” Behavioral and Brain Sciences 6(1): 55–90 (1983).

Floridi, Luciano. “Information Ethics: On the Philosophical Foundation of Computer Ethics.” Ethics and Information Technology 1: 37–56 (1999a).

Floridi, Luciano. “Does Information Have a Moral Worth in Itself?” (1999b).

Floridi, Luciano. “The Ontological Interpretation of Informational Privacy.” Ethics and Information Technology 7(4): 185–200 (2005).

Susser, Daniel. “Information Privacy and Social Self-Authorship,” forthcoming in Techné: Research in Philosophy & Technology.

The Privacy Project. New York Times, 2019–. URL: https://www.nytimes.com/series/new-york-times-privacy-project
