A Deep Dive into the Future of AI Technologies: Risks, Opportunities, and Regulatory Challenges
2025-11-30
Jessica Lamie
Earlier this month, I had the privilege of attending the Second Cairo Forum 2025, an influential gathering that brought together experts, policymakers, and industry leaders to examine the challenges and opportunities shaping our world today. The Forum focused on issues such as geopolitical shifts, economic transformations, and the rise of disruptive technologies, providing a platform to explore their implications for the Middle East and North Africa.
The Forum was held in Cairo, Egypt, on 3rd and 4th November 2025, and was organized by the Egyptian Center for Economic Studies (ECES). I represented the Access to Knowledge for Development Center (A2K4D) at the Onsi Sawiris School of Business at the American University in Cairo (AUC) and our flagship initiative, the MENA Observatory on Responsible AI, supported by the International Development Research Centre (IDRC). I had the opportunity to participate in a VIP lunch session on November 4th, titled “A Deep Dive into the Future of AI Technologies: Risks, Opportunities, and Regulatory Challenges”. The session brought together thought leaders from government, international organizations, and the private sector to discuss how the MENA region can harness AI responsibly while navigating its risks.
The session was opened by Mr. Ziad Aly, Chief Executive Officer at Peacock Investment Holding, who highlighted the massive AI-driven investments currently unfolding across the MENA region. He emphasized that AI represents not just technological progress, but a revolution with its own set of risks. Yet, he noted, the greater risk lies in failing to adopt AI and participate in this transformation, as countries and institutions that lag behind may face significant competitive and developmental disadvantages.
He then initiated the discussion with a key question: Will the MENA region be merely a consumer of AI and this major technological transformation, or will it take an active role in shaping it?
Mr. Anirban Sarma, Director at the Centre for Digital Societies at Observer Research Foundation, opened his remarks by providing a global perspective on AI trends and risks, emphasizing the growing divide between the Global North and South. He began by highlighting projections that AI could add $15.7 trillion to global GDP by 2030. At the same time, he cautioned about a “staggering number of job losses,” noting that early signs are already visible in Silicon Valley.
Mr. Sarma identified three major trends and risks:
Data Privacy: He highlighted the non-consensual way in which personal data is often being used to train AI models, stating that scraping data from the web without consent is eroding public trust in AI.
Big Tech Dominance: He argued that AI development is heavily dominated by big tech, leading to closed and proprietary systems. He pointed out that research labs and startups depend on the computing infrastructure of companies like Microsoft, Amazon, or Google, giving those firms immense control.
The Global North-South Divide: Mr. Sarma expressed concern that most of the biggest decisions about AI's purpose, development, functionality, and safeguards are centralized in the Global North. He warned that the impacts are felt worldwide, yet the needs of the Global South are not being adequately served. He proposed a “Coalition of AI middle powers,” including countries like Egypt, India, and Brazil, to collaborate, pool resources, and develop standards together.
Mr. Sarma concluded by advocating for India's approach of “enable first and regulate later” to avoid stifling innovation. At the same time, he stressed the need for risk-based rules, evaluation protocols, and international cooperation between national AI safety institutes to ensure responsible AI deployment.
Shifting focus to Egypt, Dr. Hoda Baraka, ICT Minister Advisor for Technology Talents Development & AI National Lead, outlined Egypt’s structured and forward-looking national strategy for artificial intelligence. She explained that Egypt's formal AI journey began in 2019 with the establishment of the National Council for AI to set governance structures. A major milestone was the launch of the Applied Innovation Center (AIC), designed to “transform technology into a reality” and create tangible impact rather than remain theoretical.
On the governance front, Egypt introduced its Ethical Charter for AI in 2023, aligning with global principles from the OECD and UNESCO while integrating local and Arabic cultural values. This was later expanded into an Arab Ethical Charter through the Arab League.
Dr. Hoda also highlighted the launch of the new National AI Strategy in January 2025, which responds to recent technological shifts with a strong focus on Generative AI. The strategy prioritizes key sectors for AI implementation, including health, agriculture, education, and justice. She concluded by acknowledging a persistent challenge: “technology is much, much faster than whatever regulation we are trying to work on.”
Dr. Hoda also expressed appreciation for India's philosophy behind the phrase “enable first and then regulate after,” noting its emphasis on fostering innovation. However, she added an important warning: AI is moving so rapidly that the “regulate later” component may never be effectively implemented. By the time regulatory frameworks are in place, the technology could have advanced so far that oversight becomes significantly more difficult, or even too late.
Dr. Ahmed N. Tantawy, Founding Director of the Applied Innovation Center (AIC), offered a hands-on perspective on implementing AI in Egypt. He described the center’s philosophy as not to “preach technology” but to actively “do it,” developing solutions that deliver tangible, business-relevant results. Acting as the government’s R&D arm, the AIC has spearheaded practical applications such as AI tools for screening diabetic retinopathy and breast cancer in healthcare and systems for processing and translating Egyptian colloquial Arabic for the judiciary.
Dr. Tantawy also highlighted key challenges:
Data Access: He described data availability as a hurdle, noting that it is often difficult for institutions, even those working closely with the government, to obtain the information they need. This is largely due to the absence of clear, standardized procedures for data sharing. He emphasized that without accessible and usable data, meaningful analysis becomes very limited.
Talent Retention: The second major challenge is retaining high-caliber talent. He noted that the average tenure at the AIC is about two years before skilled individuals leave for opportunities elsewhere, creating a constant need to re-hire and retrain.
On a positive note, he challenged the common notion that AI only eliminates jobs, emphasizing that it can create employment opportunities. For instance, when AI detects more diseases, additional doctors and medical staff are needed to treat patients, generating new roles across sectors.
Mr. Hyun Goo Kang, Director of the Korea–Egypt Digital Government Cooperation Center (DGCC), explored the dual nature of AI and compared the strategic approaches of major global players. He framed AI as a technology that is rapidly becoming a “colleague,” offering benefits such as automation and personalized services, while also cautioning against risks including misinformation, fake news, algorithmic bias, and job displacement.
Mr. Kang contrasted different national approaches to AI: the United States, where innovation is largely driven by the private sector; China, which emphasizes state-led self-sufficiency; and South Korea, which focuses on applying AI strategically to its industrial base. He also highlighted key challenges in regulating AI: the speed of technological advancement often outpaces legislation, the “black box” nature of complex AI makes it difficult to assign responsibility for errors, and the cross-border operation of AI complicates enforcement.
Ms. Marwa Abbas, General Manager for IBM Egypt & East Africa, shared IBM’s corporate perspective, emphasizing the importance of co-creation, governance, and skill-building in AI adoption. She framed AI as an enabler, a tool to help governments and institutions become more competitive.
Ms. Marwa stressed that technology must be co-created locally, taking into account regional languages and cultural contexts. This approach, she argued, allows countries like Egypt to become exporters of AI talent, rather than merely consumers of technology. Her central message was the idea that trust is the currency for AI adoption; without trust, deployment and uptake are unlikely to succeed.
She also highlighted that governance is not a one-time checkbox, but a continuous process throughout the AI lifecycle, ensuring transparency, explainability, and fairness. Interoperable governance frameworks between countries, she noted, are critical to fostering global collaboration. On the human capital side, Ms. Abbas showcased IBM’s work with partners like MCIT on upskilling initiatives, including the free IBM SkillsBuild platform, aimed at preparing the workforce for the jobs of the future.
Mr. Saad Sabrah, IFC Country Head for Egypt at the World Bank Group, provided a development finance perspective, emphasizing AI’s transformative potential in emerging markets. He likened AI’s impact to that of electricity and the internet, framing it as a critical driver for development.
Mr. Saad highlighted three main opportunities for countries like Egypt:
Enhancing service delivery in health and education.
Promoting financial inclusion by using AI for credit scoring to close the $17 billion SME finance gap in Egypt.
Innovating in human capital and creating new jobs in the AI value chain, such as data annotation.
On governance, he advocated for a “co-governance” approach, involving collaboration between the public sector, private sector, and civil society, rather than relying solely on a top-down regulatory model.
H.E. Dr. Mohamed Salem, Former Minister of Communications and Information Technology, Egypt, framed the global AI landscape as a “battleground” driven by a few American and Chinese tech giants. He highlighted the staggering scale of investment, noting that Microsoft (with OpenAI), Google, and Meta alone are collectively spending around $270 billion on AI infrastructure and development. This rapid, largely unchecked competition, he warned, is occurring with minimal oversight.
Dr. Salem transitioned to the potential consequences of this race, referring to it as an existential risk for humanity. He highlighted the warnings of Professor Geoffrey Hinton, the “godfather of AI,” who resigned from Google to freely voice concerns about the dangers of AI. He pointed out an imbalance in investment: over 95% of global AI funding goes to development, while less than 5% is dedicated to safety and risk mitigation, a gap he described as catastrophic.
He also highlighted the threat of military AI, including autonomous weapons and neural agents capable of making life-or-death battlefield decisions without human oversight. Dr. Salem warned that such systems could eventually exit human control, potentially triggering catastrophic outcomes, including nuclear-level consequences.
Finally, Dr. Salem commented on current global regulatory initiatives, such as President Biden’s executive order and the Bletchley Declaration, noting that many of these efforts remain non-binding or at an early stage. He suggested a modern approach to data governance that emphasizes an “open cloud stack,” strong encryption, and clear management frameworks. This, he explained, could help countries benefit from global technological infrastructure while maintaining effective oversight of their own data.
Attending this session at the Second Cairo Forum gave me a lot to think about. Hearing from experts, policymakers, and industry leaders made it clear that AI has incredible potential, but it also comes with serious risks. From the massive investments and global competition highlighted by H.E. Dr. Mohamed Salem, to the practical work happening at Egypt’s Applied Innovation Center, I realized that the MENA region is at an important turning point. One thing that stood out across all speakers was the importance of trust, governance, local relevance, and skills development in making AI work for people.
For me, this session connected directly to the work we do at the MENA Observatory on Responsible AI. Our mission is to help the region not just consume AI, but shape it in ways that are ethical, inclusive, and useful for society. By learning from global trends, supporting local innovation, and promoting responsible practices, we can help ensure that AI becomes a tool for sustainable development, better services, and new opportunities for people across the region.
This session reinforced why the Observatory’s work matters: AI is moving fast, and the choices we make now will shape the future of the MENA region.