Applied ethics - a real-life product scenario
The world of ethics seems a long way away from the day-to-day practice of product management. There are certainly plenty of troublesome ethical considerations to think about in the technology world, but they often feel abstract and removed from anything we have immediate decision-making power over. I’m not in a position to decide what role Generative AI should play in our domain, or whether the efficiency offered by LLMs outweighs the potential risks of using them.
However, ethical decisions creep into the everyday world of product development, even in small ways. We all have the power to influence people’s experience of technology, through design, implementation, and decision-making, for better or for worse. In his brilliant book, Future Ethics, the technology ethicist and product designer Cennydd Bowles looks at two perspectives: how current technology workers might use ethics to develop products which are better for the world, and what emerging ethical trends we need to equip ourselves to consider. For me, it starts with the former; if I can apply ethical theory to the small-scale decisions I make as a product leader, I will have more practice at dealing with tricky decisions, should I ever encounter a significant ethical dilemma.
I’ve recently had the opportunity to think through a product decision using an ethical framework. It can be hard to identify when to step back and consider something more carefully; at the moment, I have no better guide than ‘when something just doesn’t feel right’. The product decision was, on the surface, fairly straightforward: we wanted to collect some information about our users, and we had a choice between letting them skip the flow or blocking access to our application until they filled it in. Nevertheless, I didn’t feel confident making this choice based on our combination of revenue extrapolations and leading metrics. Here’s how it breaks down, using some of the ethical frameworks discussed in Future Ethics.
The deontological approach
Deontologists believe that an ethical approach is one governed by rules and principles. The school does not specify what those rules are; the rules which guide ethical decision-making will vary depending on your situation. Bowles suggests two questions, derived from Immanuel Kant, which can be used to generalise ethical rules: what if everybody did what I am about to do? And am I treating people as ends or as means?
Applying these questions to my particular scenario yields some interesting results. It’s actually not that useful to ask what would happen if everybody made it difficult for users to progress through a digital product without giving up personal information, because this happens all the time. In fact, this has been used to argue in favour of introducing this type of flow, with the unconvincing cry of ‘everyone else is doing it, why shouldn’t we?’ The second question is slightly more helpful. In order to answer whether we are treating people as ends or means, we need to examine intent. There are two acceptable, sanitised reasons behind capturing this information: to improve our understanding of our customer base and sharpen our marketing efforts, and to suggest the most useful first actions users might take to understand the product. The ultimate aim, of course, is to increase company profits through the use of this data. However, there is a ‘double effect’ behind this intent: as shown by our testing, this design pattern frustrates users and makes their experience of our product worse.
This leads us to the answer to our second question. By shipping a dark design pattern, we are treating our users as a means to our end (profit) rather than as ends in themselves, something a deontologist would reject. We are putting our requirement for company growth ahead of their need for a positive experience of a product, one we will ultimately be asking them to pay for.
The utilitarian approach
This brings us to utilitarianism as an ethical methodology, whose core principle is to act in a way that brings the greatest good to the greatest number of people. At first, there seems to be a fairly straightforward utilitarian equation to balance in our example. On one hand, we have a large number of users who will experience a small increase in frustration in their use of the internet. On the other, we have a small number of executives who may be enriched as a result, as well as another small group who may improve their KPIs, earning the approval of their superiors and a precious hit of dopamine. This type of equation plays out across almost every technology product, layering the minor stresses of the many to produce the power and wealth of the few.
In this instance, the equation may not even be so clear-cut. It’s entirely possible that using this information won’t result in a meaningful increase in profits. Perhaps an experience built on other signals, besides personal information, will support users just as well. Maybe our understanding of key market segments is already sufficient. We don’t know the answers to these questions; ironically, allowing users to choose whether or not to provide this data would give us a natural, opt-in testing cohort. In addition, by refusing to let users skip this flow, we are adding to the pile of evidence, already visible to other technologists, which justifies the use of dark patterns and makes them seem more acceptable. Given that the business performance improvement is far from certain, and the potential for harm greater than it first seems, the utilitarian approach suggests that an unskippable flow is not ethically viable.
The virtue ethics approach
Whilst a deontologist focuses on duty, and a utilitarian focuses on consequences, a virtue ethicist focuses on overall moral character. A virtuous person reflects on the kind of virtues they want to hold, and attempts to live up to them. This is a little difficult to apply to a company or team, as there might be many different virtues held across the various members. However, Bowles offers a useful ethical test which is easier to apply above the individual level: would I be happy for my decision to appear on the front page of tomorrow’s news? As with Bowles’ own example, there might be a few positive comments directed our way if this were to happen: we’re keeping up with the growing trend of ‘personalisation’, and pursuing growth at any cost shows dedication to company performance. However, we don’t even need to imagine the negative feedback, as it has been given many times over to many different companies, in the form of support queries, Trustpilot reviews and articles. It’s frustrating, exploitative, and self-centred. It doesn’t put user experience at the core of our design decision-making. It’s short-sighted; we’re sacrificing long-term trust for short-term profit. Again, from a virtue ethics perspective, rolling out this design choice is not supported.
You might be thinking that this is a clear decision, one which need have caused me no ethical dilemma whatsoever. The choice is obvious: this is not a good experience, therefore it should not be considered. However, whilst those working from a perspective of user-centred design find this choice easy to make, those who are profit-driven find it equally easy, but in the opposite direction. In order to find middle ground in these decisions, it’s helpful to have frameworks in place to break them down and find some common understanding. By simply saying ‘no, this isn’t right’, you end up seeming as though you are trying to position yourself as morally superior. Whether or not this is the case, it’s more likely to result in resistance than in an open-minded, critical appraisal of your position. For once, I am happy to say that this decision is still not made, and we are thinking through the various implications with more nuance than before. If you are interested in learning more about how ethical frameworks can help you with development and design decision-making, I can’t recommend Future Ethics enough. I’ll be using more of its ideas to think through my own product decisions over the coming months.