AI for People: An Event Co-organized with Kasr El-Dobara Evangelical Church (KDEC)
2025-05-07
On April 26th, Kasr El-Dobara Evangelical Church (KDEC) organized an event exploring the risks and opportunities of Artificial Intelligence (AI). The event was part of an ongoing initiative by the church called “Khatwa”—a service launched to deliver educational sessions aimed at helping individuals make more informed decisions in various areas of life. The event was divided into two main sessions: an educational session and a practical workshop.
The educational session was led by Marwa Soudi, Co-founder and CIO of Ideasgym, and Consultant and Gender Specialist for the MENA Observatory on Responsible AI. Marwa provided an in-depth overview of AI—its definition, its potential risks, and the opportunities it offers. She began by referencing the updated definition of AI by the AI High-Level Expert Group (AI-HLEG, 2018, p.6), which defines AI systems as:
“Software (and possibly hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting structured or unstructured data, reasoning on the knowledge or processing the information derived from this data, and deciding the best actions to achieve the given goal. These systems may use symbolic rules, learn numeric models, and adapt behavior by analyzing how the environment responds to their previous actions.”
Marwa stressed the importance of keeping AI human-centered, as it processes human data and directly impacts people’s lives. She explained that AI is generally categorized into two main types: general and narrow AI. General AI would match human-level capacity and capabilities and could operate across multiple environments at once, whereas narrow AI operates in only one environment and lacks that general intelligence. Marwa emphasized that AI is particularly helpful for tasks involving long sequences of steps that human memory cannot easily keep track of.
As AI continues to evolve, incidents of bias have become more frequent, often due to poor data quality. For instance, Judith Sullivan’s Medicare Advantage plan denied coverage for nursing home care despite her ongoing needs. United Healthcare uses naviHealth’s nH Predict tool to make coverage decisions based on data analysis, and the tool often predicts discharge dates that align with coverage cutoffs, even when further treatment is necessary; Medicare Advantage plans have a history of denying nursing home care that original Medicare would cover. Another example is the National Eating Disorders Association’s (NEDA) chatbot, Tessa, which was discontinued after it provided weight-loss advice to users seeking help for eating disorders. The incident highlighted the potential dangers of relying on chatbots and AI assistants in healthcare, especially for sensitive issues like eating disorders; NEDA is currently investigating the situation, stressing the need for caution and accuracy when employing technology for mental health support. Additionally, Marwa emphasized the anticipated risks of deepfake AI generators, which can be used for harassment and revenge porn, enabling sexual violence and privacy infringement.
To counter such bias and anticipated risks, Marwa highlighted the importance of adopting responsible AI, which has come in three waves: the first involves understanding the science behind AI tools and how they arrive at their final output; the second involves identifying and combating AI bias; and the third addresses the environmental risks arising from AI’s energy consumption. All of these should be accompanied by multi-stakeholder efforts from government and the private sector to establish data-governance laws and policies that ensure transparency and fairness. The General Data Protection Regulation (GDPR) is a good example of such policies: under the GDPR, individuals have the right to access their data, to have it erased, to data portability, to object to processing, and to be informed when their data is used for automated decision-making, including profiling.
Despite these risks, Marwa encouraged attendees to embrace AI tools in their personal and professional lives, as long as they do so with careful attention to data privacy. For instance, she recommended using DeepSeek for answering questions or completing tasks, but advised against sharing sensitive information. Similarly, she suggested using Magic Studio for photo editing, while avoiding uploading sensitive images or giving the app unrestricted access to one’s photo gallery.
The second half of the event featured a practical session led by Bishoy Nabil, Digital Marketing Manager at Circles Digital Marketing Agency. Bishoy engaged participants in a hands-on workshop, where they were divided into groups and asked to brainstorm and develop new product ideas along with their marketing strategies using AI tools such as Poe, DeepSeek, and Magic Studio.
He emphasized the importance of prompt engineering, explaining that the key to leveraging AI effectively lies in providing the right input to generate the desired output. The participants demonstrated impressive creativity and innovation. Their ideas included:
• A dieting app that suggests personalized meal prep and recipes.
• A sunglasses brand with a unique marketing concept.
• An instant mental health support app offering real-time assistance.
The event successfully equipped attendees with a foundational understanding of AI, its benefits, and the ethical considerations surrounding its use. It also gave them a chance to apply what they learned in a collaborative and creative environment.