A new study examines the current and potential use of AI by start-ups and small and medium enterprises (SMEs) in low- and middle-income countries (LMICs) across four regions: Sub-Saharan Africa, North Africa, and South and Southeast Asia.
The report, which was produced by the GSM Association (GSMA), a UK-based organization representing the interests of mobile operators worldwide, asserts that "AI can radically alter and improve the way governments, organizations and individuals provide services, access information and improve their planning and operations."
Mapping "a sample of 450 start-ups by sector in alignment with the UN Sustainable Development Goals (SDGs) and, based on interviews with AI experts in LMICs," the report explores "trends and challenges in business models, barriers to innovation and the ethical and responsible use of AI." In doing so, the study answers the following research questions:
- What is the status of AI use in LMICs?
- Which sectors, geographies and business models are showing the most promise, and why?
- What are some of the barriers to implementing AI solutions in LMICs?
- How can AI be used ethically to accelerate the achievement of the SDGs?
Top AI use cases in LMICs include agriculture, administration and business processes, cities and infrastructure, climate change, disaster management, education, finance and microlending, government and public services, healthcare, and identity. In explaining use cases by sector vertical, the report says: "Business intelligence and analytics had the highest number of use cases as it captures a wide range of business-to-business (B2B) solutions, from enhanced retail market analysis and predictive decision making to customer service. Customer service chatbots, automated IT consulting, big data analytics and automated records are some examples of AI use cases."
As for healthcare, this rapidly growing sector "had the second highest number of use cases and clearly benefits from AI solutions, including sophisticated diagnosis and treatment options, hospital management systems, lifestyle change recommendations and healthy eating habits." The report further notes that "[f]ood and agriculture, financial services, education and retail and consumer goods followed these sectors. Food and agriculture employs a range of AI-based services, including services for identifying and remedying crop diseases, linking producers more effectively to buyers and markets and helping farmers maximize crop yields based on climatic and soil conditions."
As for use cases by country and region, the report identified a few AI innovation hotspots (see map above). "India," for example, "was the most represented country in our sample. The country accounted for over 40 percent of the sample (180 use cases), indicating a high level of innovation and AI uptake in the country. Far more cases were identified in India, but were excluded due to limited alignment with development outcomes, apart from general economic development. Nigeria and South Africa were the next two most represented nations in the sample with 42 and 38 use cases, respectively. China was excluded from the study, along with the rest of East Asia, which are emerging as centers of AI innovation and investment. For example, in 2017, China submitted approximately 1,300 AI and deep learning-related patents, compared to 220 by the United States."
Data, ICT infrastructure and hardware challenges create barriers to implementing AI in LMICs. Such challenges include:
- the availability, accessibility and quality of data;
- access to reliable and affordable internet;
- lack of access to sufficient computing power;
- limited digital inclusion and connectivity, including device access, ownership and capability; and
- unreliable power infrastructure.
Gaps in human capital and a lack of funding present additional barriers. According to the GSMA, "While there is growing access to upskilling and training in AI, many countries still lack a steady pipeline of home-grown talent and skilled AI development talent. [...] The lack of mentorship available to start-ups developing AI-based solutions is also a constraint in many countries."
Regarding the lack of investment, the report explains that "AI-based solutions typically need a lot of investment. Unlike countries such as China and the United States, investment and funding are extremely limited in most LMICs. Countries in Africa and South and Southeast Asia that appear to have higher levels of investment include India, Kenya, Malaysia, Thailand and South Africa."
On the topic of the ethical use of AI in LMICs, the report points out that "To genuinely contribute to the SDGs, AI innovators need to eliminate the potential negative impacts of their AI processes. AI applications should be ethical by design to prevent and mitigate any potential negative impacts on users, workers, communities and the environment."
Moreover, "The application of existing laws, regulations and privacy principles, such as the GSMA Mobile Privacy Principles, can help mitigate privacy and ethics risks associated with AI. In addition to these frameworks, the GSMA recommends the adoption of the following principles by all stakeholders using AI for social good."
- "Do no harm: Development and deployment of AI systems should respect human rights and should not cause human rights harm to individuals or groups. Particular care should be given to preventing harm to vulnerable individuals or groups."
- "Be inclusive: AI stakeholders should support inclusion and equity, and should strive to ensure that the benefits of their AI-based technologies are broadly accessible.
- "Be fair: AI systems should incorporate human oversight. All stakeholders should strive to ensure that the data used in AI is accurate and not unfairly biased. AI should not be used to make decisions that may affect any group or individual in an unfair or discriminatory way (e.g. discrimination based on protected characteristics such as race, gender, etc.).
- "Ensure transparency: Individuals should be informed when they are communicating with AI-powered systems instead of a human (e.g. conversational AI). Decisions made with AI should be clearly explained to the individuals affected.
- "Embed accountability: All AI stakeholders should be accountable for their use of AI and should promote these principles with the third parties they engage for social good purposes.
- "Adopt privacy and ethics by design: AI systems should be designed and deployed according to privacy and ethics by design ethos or methodology at each stage of the life cycle, with input from relevant teams.
- "Advance security and safety: Access to AI systems and their underlying data should be controlled and subject to audits or other accountability measures. State-of-the-art security measures should be used wherever possible. All AI experts and practitioners should implement best practices in security.
- "Support sustainability and societal well-being: Sustainability and societal well-being should be considered in the development and deployment of AI systems."
As someone who maintains business interests in many of the countries covered in this report, I concur with the GSMA that AI can have a transformative impact on LMICs. Such transformation, however, will require investments from the private sector and governments to overcome the aforementioned barriers to implementing AI. Moreover, media outlets are beginning to report on misuse of the technology. It is imperative that all stakeholders using AI adopt the GSMA's recommendations for using AI for social good.
Do you agree with the report's findings? How are you engaging in the development of AI solutions in LMICs?
Aaron Rose is a board member, corporate advisor, and co-founder of great companies. He also serves as the editor of GT Perspectives, an online forum focused on turning perspective into opportunity.