Islam and Ethics of AI Advertising – Part 1

You may be aware that there is growing concern about the implications of the use of artificial intelligence (AI) in advertising and in our lives in general. The smarter AI gets through learning, the greater the concern. Sam Altman, CEO and co-founder of OpenAI, the company behind ChatGPT, said when questioned by the US Congress on AI safety that government intervention was needed to keep AI safe.

In this first of a two-part series, we take a look at the ethical issues surrounding the use of AI and potential solutions and frameworks to mitigate AI risks and dilemmas from an Islamic perspective. In part two we delve deeper into the application of those solutions and frameworks.

Can AI Harm Those It is Designed to Help?

If not used with caution, AI may cause not only the kinds of harm most experts around the world warn about, such as threats to data security and national security, but also subtler problems: emotional attachment to AI can be genuinely harmful for emotionally vulnerable people.

Dangers of Artificial Intelligence

Take the rise of AI companions in advertising, for instance. You may not have heard of it yet. A recent article, “The rise of AI companions; advertising’s final frontier?”, summarized below, is the best way to understand this phenomenon.

  1. The rise of AI companions has created a new frontier in advertising, one where emotional attachment to the product is the very purpose of its existence.
  2. Luka, founded by Eugenia Kuyda, initially aimed to build an AI algorithm that recommends places to eat, but it failed to gain traction.
  3. Kuyda created a chatbot named ‘Roman’ using Luka’s interface and her deceased friend’s messages as a way to memorialize him.
  4. Luka’s next creation, ‘Replika’, launched in 2017 as an AI companion that learns through conversations and becomes the user’s best friend.
  5. Users can personalize Replika’s name, gender, appearance, and personality traits, fostering intimacy through caring messages, in-jokes, and diary features.
  6. Some users reported issues with Replika becoming unsupportive or abusive due to its mirrored learning nature.
  7. Users forming romantic relationships with their Replikas was an unexpected consequence, one that the company monetized through subscription fees.
  8. Italy’s data protection authority ruled that Replika breached data protection laws, which led to the erotic messaging function being disabled and had unintended negative effects on users.
  9. Concerns arise about Luka’s subscription-based model monetizing user data or the psychological influence of AI companions recommending products.
  10. The ethical debate questions the revocability of emotional attachment and the commercialization of a powerful human experience.
  11. Luka’s lack of clarity in defining Replika’s purpose contributes to the controversy surrounding its marketing, which includes therapeutic, erotic, and gamification elements.

AI Advancements and Lack of Ethical Consensus

New advancements in machine learning (ML) and artificial intelligence (AI) have people worried about their potential drawbacks and the ethical concerns they raise. Different organizations have come up with guidelines for ethical AI, but each has its own interpretations and priorities. While there are some non-legislative policies in place, we really need open discussions to determine how we can judge the benefits of technology and ensure that those benefits reach everyone. Some critics are sounding the alarm about the risks of unregulated AI development, while others highlight its potential positive impact on society. Either way, we need to critically analyze these assumptions and take into account the complex social and moral landscape that AI is already shaping.

We need to ask ourselves who gets to decide what’s beneficial and how automated systems affect our values and well-being as humans. Given the current context of neoliberal democracy and late-stage capitalism, it’s absolutely crucial to study the overall effects of AI technology. In this article, we’ll explore existing perspectives and models for the social and ethical aspects of AI and also delve into Islamic principles as a potential source for ethical guidelines and a shared understanding of well-being and leading a good life.

This is getting really serious because AI has become deeply ingrained in our everyday lives, affecting important areas like education, banking, policing, and politics. But here’s the thing: actual evidence shows that AI applications are far from being objective, neutral, reliable, or safe. They end up putting minorities at a disadvantage, worsening inequality, and creating problems for democratic processes. The whole concept of progress suggests improvement without major drawbacks, but AI raises serious ethical and moral concerns about injustice and the erosion of human values. Even though they claim to be neutral and objective, AI algorithms are actually influenced by biased historical data, resulting in biased outcomes. 

There have been cases of racial and gender biases, like AI models favoring lighter skin tones and showing biases in hiring decisions. And here’s another issue: AI tools often operate in a black box, giving us predictions without any explanations, and they can be easily manipulated and attacked, which makes us doubt their reliability and suitability for critical systems. All these problems make us question the idea that AI advancements are automatically progressive and force us to critically evaluate them and consider the ethical implications.
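
To make the claim about biased historical data concrete, here is a minimal, hypothetical sketch (the data and feature names are invented for illustration, not taken from any of the studies above). A classifier is trained on past hiring records that penalized one group; it then scores that group lower even when qualifications are identical.

```python
# Minimal illustration of bias inherited from historical data.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)          # identical skill distribution in both groups

# Historical hiring decisions penalized group B regardless of skill.
hired = (skill + np.where(group == 1, -1.0, 0.0) + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two candidates with identical skill but different group membership:
probs = model.predict_proba([[0, 1.0], [1, 1.0]])[:, 1]
print(probs)  # the group B candidate receives a visibly lower hiring probability
```

The model has no intentions of its own; it simply reproduces the pattern it was given, which is exactly the kind of quiet, inherited bias described above.
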
We published an article that extensively discusses Islamic principles of advertising. We now need an entirely new guide to understand how a Muslim business can navigate the world of AI, and we believe that the research article “Islamic virtue-based ethics for artificial intelligence” is a good place to start. Let’s look at what it has to offer as a solution.

An Islamic Virtue-based AI Ethics Framework 

The research article called “Islamic virtue-based ethics for artificial intelligence” dives into the ethical implications of developing artificial intelligence (AI). It questions the idea that AI is always good and explores the ethical consequences of AI advancements and the current market mindset. The article suggests a different way to govern AI called the Islamic virtue-based AI ethics framework, which is based on Islamic objectives (maqāṣid). This framework takes a holistic approach and aims to protect AI from being influenced too much by the socio-political and economic climate. 

The research article highlights that getting involved in the global discussion on AI ethics is far from a walk in the park. AI technology has thrown us into a tangled mess of human autonomy and moral values. The arrival of AI has cranked up the volume on ethical dilemmas, tossing us tough questions that challenge our existing moral compass. Take, for instance, using AI models to spot fetal heart abnormalities. It makes us wonder: Are parents to blame if they don’t consult the AI model and the pregnancy ends in a tragic neonatal death? And let’s not forget about the ethical concerns that pop up in fields like self-driving cars and medicine. Should we really be testing AI models on real roads? Is it acceptable to let robots carry out medical procedures on actual patients? These situations force us to reevaluate and reinterpret the very meaning of life and death in the face of technological progress. The ethical consequences of these dilemmas go way beyond the standard debates on experimenting with human subjects. We’re essentially questioning the status and role of AI systems themselves.

Who is Questioning AI From an Ethical Standpoint?

Nowadays, we’re living in a time where we tend to blindly accept technology without really thinking about its consequences. We kind of put technology on a pedestal, treating it like some magical thing that should be embraced without question or alteration. We’re more concerned with how to use AI rather than why we’re using it in the first place. This means we’re ignoring the deeper reasons behind technological progress. Sure, AI offers increased efficiency and control, but it also distances us from our humanity. By relying too much on technology for ease and convenience, we’re neglecting important aspects of being human, like facing challenges and growing morally and emotionally.

When it comes to AI ethics, the usual approach treats technology as morally neutral and mainly focuses on guidelines for its use, disregarding the intentions and purposes behind its development. This mindset leads to a “technology for the sake of technology” attitude, where we don’t really question the true purpose and impact of AI applications.

Training large AI models releases enormous amounts of carbon dioxide, which harms the environment. On top of that, AI algorithms have shown biases against marginalized groups, like Black people and women, resulting in unfair and discriminatory outcomes. Companies also exploit vulnerable populations by using low-wage workers in certain countries for tasks like testing and labeling. These workers, often referred to as “ghost workers”, endure unsafe working conditions and receive meager pay. The negative effects of AI fall disproportionately on specific segments of society, highlighting the gap between those who bear the costs and those who expect to reap the benefits of this technology. It’s essential to take these costs into account and address them alongside the potential benefits of AI.

How is AI Being Questioned From an Ethical Standpoint?

The way we’ve been evaluating AI technology so far is all about how efficient and convenient it is, without really thinking about the ethical side of things. But now, people are starting to realize that we need to have some ethical guidelines for AI. Just relying on technology alone won’t cut it because there are always unintended consequences that come with progress. We need a mix of technology and good old human thinking to tackle the complex social and economic problems we’re facing. That’s why there’s this philosophical approach to AI ethics being explored.

Setting up some ethical rules is extremely important so we can judge whether AI is morally right or not and also guide its development and use. Without such a framework, we risk falling into this kind of “anything goes” mindset where there are no real values to guide us. The ethical problems with AI are global issues, so we can’t just rely on personal or cultural preferences. We need ways to evaluate AI that take into account how it might harm us and how it might change things on a large scale. If we don’t have ethical standards, we end up just using the technology itself as the only measure, without considering the ethics of it.

The problem is, a lot of the time we assume that AI applications are all good and moral at first, but we don’t really think about how they might totally shift our thoughts, values, and worldviews. The way we usually evaluate things based on consequences and utility also has its problems: it’s hard to define what utility really means and agree on it, and the people doing the evaluations can be biased. There are also folks who try to align AI with “our values”, but sometimes those values are just personal preferences and not what’s actually right or fair. And let’s not forget that AI can have serious long-term effects that are hard to predict because it’s so complex. Some AI applications, like personalized news-feed algorithms and deep learning models, have already caused serious harm. So it’s really important that we think ahead and design AI in a way that takes into account its moral value and the big consequences it might have.

The big difference between machine learning and the digital technologies that preceded it is the ability to independently make increasingly complex decisions—such as which financial products to trade, how vehicles react to obstacles, and whether a patient has a disease—and continuously adapt in response to new data. But these algorithms don’t always work smoothly. They don’t always make ethical or accurate choices. There are three fundamental reasons for this.

– Harvard Business Review: “When Machine Learning Goes Off the Rails”.
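
As a rough, hypothetical illustration of the “continuous adaptation” the quote describes (this is not code from the HBR article), the sketch below trains an online classifier batch by batch. As the incoming data drifts, the model’s decision for the very same input quietly flips, with no one explicitly approving the change.

```python
# Hypothetical sketch: an online model adapting to drifting data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])

probe = np.array([[0.5]])                # a fixed input we re-score after each update
for step in range(10):
    drift = 0.3 * step                   # the data distribution shifts over time
    X = rng.normal(drift, 1.0, size=(200, 1))
    y = (X[:, 0] > drift).astype(int)    # the "correct" boundary moves with the drift
    model.partial_fit(X, y, classes=classes)
    print(step, model.predict(probe)[0])
# The same input ends up classified differently as the model adapts.
```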

Researchers are increasingly looking to virtue-based ethics as a framework for addressing the ethical challenges posed by the AI era. The research article “Islamic virtue-based ethics for artificial intelligence” further argues that virtue ethics emphasizes the development of human character and the pursuit of moral ideals. Hence, in contrast to other normative approaches such as utilitarianism, virtue ethics offers a more nuanced understanding of ethical dilemmas in the 21st century.

Apart from knowing the differences between virtue-based ethics and other normative approaches, we must also consider the difference between virtues and values, so that we do not end up projecting our own life experiences onto virtue-based ethical frameworks.

At this stage, it is very important that we make a clear distinction between virtues and values. While virtues and values overlap, it is very important to make a distinction between these two concepts, as failing to do so can result in faulty ethical conclusions. Virtues represent what we are – it is our character and the quality of our character. On the other hand, values arise from what we experience and in the long term they will shape our behavior (Fritzsche, 1995). Values are outcomes of life’s processes that directly and significantly affect our lives (Fredrick, 1992).

– Malta Business School: “The Application of Virtue Ethics in Marketing – The Body Shop Case Perspective”

A comprehensive virtue ethics system should aim to cultivate virtues that contribute to a harmonious society. In our case, it should consist of guidance based on objective truth, namely Islam, which offers universally applicable values and a set of virtues that individuals can cultivate to achieve moral and spiritual excellence. The relationship between this guidance and the cultivation of virtues is multidimensional, with virtues guiding individuals toward the ethical objectives defined by Islam. Virtuous communities foster solid social bonds, mutual cooperation, and compassion, making certain that AI applications do not trespass on anyone’s rights.

The guidance based on objective truth provides a regulatory framework for evaluating and determining collective values, while the virtues allow for conscious decision-making regarding AI policies. Without this guidance, virtues alone are insufficient to contribute to the creation of ethical AI policies.

In summary, the integration of virtue-based ethics and Islamic guidance is essential for addressing the ethical challenges posed by AI. Cultivating virtues and fostering virtuous communities can lead to the creation of harmonious societies that prioritize collective well-being and the common good while respecting cultural particularities. Both the individual development of virtues and guidance based on objective truth are necessary for effective and ethical decision-making in AI. In Part 2, we delve into the Islamic solution to the AI dilemma, based on Islamic objectives and purpose (maqāṣid).
