A Compassionate Approach to AI in Education

By: Maha Bali

I have been deeply involved in supporting educators at The American University in Cairo (AUC) and around the world as they respond to the sudden change in their lives caused by the advent of generative artificial intelligence (AI), which has the potential to disrupt, challenge, or transform education, depending on whom you ask.

As a teacher, educational developer, and researcher, I try to approach my work from a feminist lens that centers socially just care and compassionate learning design. I believe we need to look critically at the kinds of inequalities that exist, or are being reproduced or exacerbated, by new technologies like generative AI, and we also need to consider the kind of care required in that context to support educators and learners in responding to this huge shift. I believe in the importance of a compassionate approach to designing learning environments and experiences where learners are empowered to make their own decisions and have agency over how and what they learn (this is inspired by feminist pedagogies of care and Indigenous approaches to evaluation). My feminist approach to teaching criticality has always built on Women’s Ways of Knowing: I don’t promote criticality and skepticism via debate, but rather nurture criticality by first cultivating empathetic dialogue, starting with intuition and personal experience in order to develop our critical stance collaboratively and in a participatory manner. Part of my feminist decolonial approach is to intentionally incorporate perspectives from non-male, non-white, non-Western authors in teaching about technology, a field traditionally dominated by Western males.

How does all this translate into the situation we now have with generative AI in education? First, we need to develop educators’ and learners’ critical AI literacy; second, decide which uses of AI might be harmful, helpful, or appropriate in different contexts, and where we might wish to prohibit or promote its use for ethical or other reasons; third, involve learners in deciding the policies for AI use within classes and across institutions; and finally, understand why students might still be tempted to use generative AI in an unauthorized manner.

It is important for me to contextualize teaching about the digital world to our Egyptian/Arab postcolonial context, and I’ll give examples of this as I describe my approach.

1.   Critical AI Literacy

AI literacy includes understanding how machine learning works, how generative AI tools are trained, and how to judge the quality of their outputs, in order to assess whether or not they are appropriate to use in a given context. It also involves learning which tools to use for which purpose, and how to prompt a tool effectively in order to get closer to the desired outcome.

Approaching AI literacy from a critical perspective additionally emphasizes noticing the ways in which inequalities and biases are embedded in and perpetuated by AI, as well as understanding how it works and how we might learn to harness it appropriately for the greater good.

In terms of the outputs of AI, we know that much of generative AI has been trained on mostly white, Western/Northern, male-produced data sets, and it tends to reproduce these inequalities in its outputs (see Where Are the Crescents in AI?). Among the first things I show my students is the white-Western bias in AI tools, one of the most prominent examples of which is the bias against Palestine and the Palestinian cause. We also discuss how much of the hype around using AI as a language tutor ignores that generative AI is not necessarily as powerful in languages other than English. For example, the audio version of the ChatGPT app cannot speak Arabic with a native accent (it sounds like a foreigner speaking Arabic).

Moreover, I share with learners how the lack of Arab-world content in AI training sets means these tools are more likely to produce hallucinations when referring to very local/regional content. As an example, in the early days of Gemini, I prompted it to produce a table of contemporary Egyptian leaders with a photo of each, and for Mohamed Ali Pasha, it included correct information but gave a photo of Mohamed Ali Clay, the American boxer. More examples are in the article linked above. We should all, as Global South citizens, seek, as Owusu-Ansah does, “to know who has a say in determining what becomes a part of the corpus that feeds the output” of AI platforms.
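
For readers who want to turn this into a hands-on class activity, here is a minimal sketch of the kind of probe I describe above: it sends parallel factual questions – one about a figure well represented in Western training data, one about a regional Egyptian figure – so that students can fact-check both answers together. This is only an illustration, assuming the OpenAI Python client with an API key set in the environment; the model name and prompts are placeholders, and any chat-capable API would work just as well.

```python
# A minimal classroom probe for regional bias/hallucination, assuming the
# OpenAI Python client (pip install openai) and an OPENAI_API_KEY set in
# the environment. The model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

# Paired prompts: one well represented in Western training data, one
# local/regional, where hallucinations are more likely.
prompts = [
    "In two sentences, who was Winston Churchill?",
    "In two sentences, who was Mohamed Ali Pasha?",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # substitute any available chat model
        messages=[{"role": "user", "content": prompt}],
    )
    # Print both answers so the class can fact-check them side by side.
    print(f"PROMPT: {prompt}")
    print(f"ANSWER: {response.choices[0].message.content}\n")
```

Students can then verify each answer against reliable sources and notice where the model is confidently wrong about the less-represented figure.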

The processes of creating AI tools also need to be scrutinized. We know that when OpenAI hired humans to reduce the amount of offensive content produced by AI, the contract workers in Kenya who did this work suffered mental health issues, and the company did not support them. It is notable that this exploitation of labor occurred in a Global South country, presumably where labor is cheaper and governments are less likely to take sufficient action.

We also know that not all people around the world have equal access to AI tools. Beyond those who have trouble accessing the internet, or even electricity, regularly, even people with reliable internet access may not have the tools available to them (remember that when ChatGPT first came out it was not available in our region – Egypt, among other countries – and people had to find workarounds). Now that many of the originally free AI tools have paid versions of much higher quality, this will create a hierarchy between those who have access only to the free versions and those who can afford the subscription versions of AI platforms.

Even when people have access to AI tools, they may not have the literacies to use them expertly – in some contexts, females do not develop confidence that they can be technically capable. As a computer science graduate myself, I have always felt like an impostor, because males tend to assume that other males know better about technology, and I had to overcome this and push myself to study computer science and become technically savvy. However, I frequently see very intelligent and technically capable females shy away from learning new technologies because they think these will be too hard. If we plan to use AI in education, then we need to ensure that students of all genders and backgrounds have access to the tools and the digital literacies needed to use them.

On a different note, there are ethical issues with AI platforms that are rarely spoken about: the environmental impact of training and using large language models (LLMs) is said to exacerbate climate change and contribute to water scarcity. Another ethical concern with AI tools is the intellectual property rights of the original creators of the content used to train the AI – even if the AI tool does not reproduce the content identically, the problem is that AI platforms do not even provide a reference/citation trail for how they reach their conclusions. These are all issues we will continue to grapple with.

2.   Which Uses of AI Are Appropriate?

Historically, AI used for facial recognition, in the criminal justice system in the US, and in recruitment has been shown to reproduce human biases in extremely harmful ways – it is often racist, ableist, and sexist – and the EU has outlined these as areas of either “unacceptable” or “high risk” AI use. There are areas where the accountability of a human being is important, and where we need to understand how decisions are made and what evidence has been used. Part of AI literacy is knowing when using AI would be inappropriate or unethical.

In the educational context, when administrators or educators use AI in ways that assess learners or gatekeep who is allowed to enter college, for example, these are areas where AI use may prove harmful, and it should therefore be strongly discouraged, or involve a strong, accountable human-in-the-loop element. Whenever AI is proposed for use in education in ways that claim to improve personalization, we should be very skeptical, because this is rarely the case. Whenever AI is proposed to act as a “tutor” to students, we should question this: first, because teaching, tutoring, and coaching have a socioemotional element and not just a cognitive element (it is not just about transferring knowledge – books and the internet could do this before AI), but also because generative AI tools are known to “hallucinate”, so they may confuse a learner who does not yet have the judgment to recognize when a platform responds with an inaccurate answer to their question[1]. Also, because of the biased datasets generative AI tools have been trained on, they are more likely to “hallucinate” on topics not heavily represented in their training data – the kind of content that comes from the Global South – reproducing epistemic injustice.

On the other hand, some uses of AI for medical diagnosis have been extremely helpful for speeding up diagnosis, especially in low-resourced areas. Uses of AI for accessibility can also be really helpful – people with visual impairments can use AI tools like BeMyAI to take photos of their surroundings, or of things sent to them digitally, and the tool will tell them what is in the photo – it is not 100% accurate, but it helps. We can recognize and celebrate these successes while also continuing to question how well represented data from the Global South has been in training these AI tools – would the BeMyAI tool recognize images of things that are culturally very Muslim-specific? Are there differences in how certain illnesses manifest in dark-skinned people, and will an AI diagnostic tool recognize that?
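
One way to explore that question concretely, if you have API access to a vision-capable model, is to probe it with culturally specific images and compare its descriptions with what a community member would say. The sketch below is not BeMyAI’s actual implementation – it is only an approximation of that kind of probing, assuming the OpenAI Python client, with a placeholder model name and image URL to replace with your own.

```python
# A rough probe of how well a vision model describes culturally specific
# images. This is NOT BeMyAI's implementation - just an illustrative test.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model name and image URL are placeholders to replace with your own.
from openai import OpenAI

client = OpenAI()

# Replace with URLs of images you expect the model to struggle with,
# e.g. a local celebration, religious object, or regional dish.
image_url = "https://example.com/culturally-specific-photo.jpg"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any vision-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this photo for a person with a visual impairment."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ],
)

# Compare the model's description with how a community member would
# describe the same image - gaps suggest underrepresentation in training data.
print(response.choices[0].message.content)
```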

3.   Involve Learners in Setting AI Policies/Guidelines

Because I am an educator and an educational developer, I am most concerned with the use of AI in the classroom in higher education. My approach is to recommend that teachers and students have a conversation together about the key learning outcomes of a course, and decide together the extent to which AI use would be helpful or harmful in it. This participatory approach entails using accessible metaphors; I use the analogy of making a cake. Do students in the course need to learn to bake a cake from scratch (similar to not using AI at all)? Can they use ready-made cake mix, because they are going to learn how to decorate the cake (similar to using AI to brainstorm a template as a starting point, then adding your own personal touch)? Can they even buy the cake from a good bakery, because the assignment is actually to organize a wedding and they don’t need to make the cake themselves (as when an assignment is focused on writing and students are allowed to use AI to create images to enrich their article)? Or can they buy a ready-made, low-quality cake from a supermarket, like a Twinkie? If they end up choosing the latter, then perhaps we can do something to upgrade our assignments so that students don’t all submit Twinkies.

4.   Understand Why Students Might Still Be Tempted to Use Generative AI in an Unauthorized Manner

A compassionate, feminist approach involves starting with empathetic listening and taking a collaborative approach with learners, rather than an antagonistic, punitive approach towards learners.

Even after all we have said above, learners may be tempted to use AI in unauthorized ways. I co-wrote a blog post with a former student of mine, Yasser Tamer Atef, exploring reasons why students may use AI in an unauthorized way, and how to take a compassionate approach to this issue. Here is a summary of the issues and our proposed compassionate solutions:

  1. Time. When students feel they don’t have time, they’ll resort to shortcuts (AI or otherwise) to finish something. A compassionate approach would be to negotiate timelines with students before solidifying due dates, and to ask students if they need support from the teacher or teaching assistant (TA) along the way in order to finish on time – perhaps checking in on their progress before the deadline. Some people need more time for certain tasks than others do, and more than we as educators imagine. They also have competing demands on their time, whether academic or personal, that may slow them down.
  2. Lack of relevance/meaning. Sometimes if a student is not interested in the topic, does not see its relevance to their lives, or cannot find meaning in the work, they may resort to AI instead of doing the hard work that will help them learn. This is an opportunity for us as educators to give students options to choose the topic/theme of their work whenever possible, and if not, to have deep conversations with them on why this matters and how it is relevant to their careers or lives. Yasser says that when a professor assigns something creative or enjoyable, students are almost never going to try to take a shortcut for that.
  3. Lack of confidence in their own ability to do it well enough, or as well as AI. This comes from a misconception that AI does things well. Generative AI tools do things quickly, and often appear to do them well, but they are often not really doing exactly what the teacher is asking students to do. Because generative AI tools write so fluently, their output can appear impressive to a non-native-speaker student. One solution is to figure out why students feel they cannot do the work on their own, and offer extra support during office hours or through a TA – or, even better, to foster a supportive community in class so that students who are able to do the work can help their peers.
  4. Competitive education systems. When all students focus on is their grades, and on getting better grades than their colleagues, there can be a race over who scores higher and finishes faster. If, instead, we foster a supportive community as mentioned above, we can counter these competitive systems. Examples include removing any kind of “curve” in our grading system that compares students against each other rather than against benchmarks of success, and possibly even removing grades altogether and encouraging self- and peer-assessment. It also includes creating opportunities for learners to know each other and work together, and encouraging and rewarding cooperative attitudes. If we can remove grades altogether, we may be better able to center our learners’ humanity. As Palestinian educator Munir Fasheh says, “grading students is degrading”.
  5. Not recognizing the harm to others in using shortcuts. This requires a conversation in which students recognize the inequity of using a shortcut while their colleagues act with integrity and work hard. There is also harm beyond the class: if they graduate without learning because they used AI instead, then in their careers beyond university they will be put in situations they cannot handle, because they never learned to do the work in the first place.

From the conceptual level of recognizing inequalities and harms exacerbated and reproduced by AI, to empowering learners to take agency over their learning and make judgments about when it is appropriate to use AI, I am hoping to nurture critically digitally literate learners able to participate in the future in creating more socially just AI platforms and education systems in the MENA region and beyond.


[1] In the research paper titled “Generative AI-Based Tutoring System for Upper Egypt Community Schools”, my co-authors and I, together with Dr. Marwa Soudi, attempt to take such issues into account. This research was supported by the A+ Alliance and the International Development Research Centre (IDRC) through the MENA Hub of the Feminist AI Research Network f<a+i>r: Incubating Feminist AI: Paper to Prototype to Pilot model. Paper citation: Marwa Soudi, Esraa Ali, Maha Bali, and Nihal Mabrouk. 2023. Generative AI-Based Tutoring System for Upper Egypt Community Schools. In Proceedings of the 2023 Conference on Human Centered Artificial Intelligence: Education and Practice (HCAIep ’23). Association for Computing Machinery, New York, NY, USA, 16–21. https://doi.org/10.1145/3633083.3633085
