

Paper Presentation by Dr Chimezie Amadi on Fair and Responsible Use of AI for Consumers


I thank the Director, Consumer Protection Council of Imo State, Honorable Mrs. Obioma Okafor, for considering me worthy as the Guest Speaker for this year’s edition of the World Consumer Rights Day celebration with the theme: Fair and Responsible Artificial Intelligence for Consumers.

This theme is, indeed, topical and relevant within the context of our current socio-economic reality, where AI is already working its way into the fabric of our daily lives, wielding its transformative influence across diverse domains, including healthcare, finance, education, and entertainment. Its evolution from mere concept to practical application has heralded a new era of innovation and opportunity, promising unparalleled efficiencies and advancements.

Artificial Intelligence (AI), defined as the simulation of human intelligence processes by machines, encompasses a broad spectrum of capabilities, from problem-solving and learning to perception and decision-making. Its genesis can be traced back to the mid-20th century, when pioneers like Alan Turing laid the theoretical groundwork, envisioning machines capable of intelligent behavior.

We are already seeing the next generation of AI as more and more objects we interface with become imbued with some form of AI-related functionality. For example, Alexa, Siri, Cortana, and Google Assistant are AI-powered digital assistants mainstreaming voice as the next interface over touch. Instead of typing a search query on your favorite connected device’s screen or keyboard, you can simply speak it.

According to recent reports by PwC and Accenture, AI will usher in a new era of technological and economic transformation over the next two decades. The global AI technologies market is vast and promising, amounting to around 200 billion U.S. dollars in 2023, and is expected to grow well beyond that to over 1.8 trillion U.S. dollars by 2030. Everything from supply chain, marketing, research, analysis, and more are fields that in some aspect adopt artificial intelligence within their business structure. Chatbots, image-generating AI, and mobile applications are all among the major trends driving AI in the coming years.

Where is Nigeria in the global AI discussion? Is she at the dining table, or on the menu?

To avoid FOMO (fear of missing out), Nigeria has formulated a strategic policy to harness the potential of AI inclusively and responsibly. According to the Nigerian strategic plan titled “Accelerating our Collective Prosperity through Technical Efficiency,” the goal of the AI Strategy is to elevate Nigeria into the top 10 locations for AI model training and talent globally, besides positioning Nigeria as a global leader in accelerating inclusivity in AI datasets.

Going by this strategic plan, Nigeria wants to achieve a top-50 global ranking (currently 96th) in AI readiness and adoption across key metrics (computing power, skills, data availability, ethics, and governance) by 2030, and, in addition, to create over 50,000 jobs in Nigeria’s AI industry by 2030.

However, as AI continues its relentless march forward, the imperative of ensuring its fair and responsible deployment becomes increasingly paramount. This paper looks comprehensively at AI ethics, elucidating the critical roles played by Innovators, Regulators, and Users in shaping the ethical landscape of AI adoption. From the drawing board to the boardroom, from policymaking chambers to the fingertips of end-users, each stakeholder bears a distinct responsibility in fostering an ecosystem where AI thrives equitably and ethically.

Today we can see AI that can mimic human speech patterns, tones, moods, personalities, and behaviors. This next generation of AI will also be able to mimic subtle human emotions, communicating empathy, sympathy, humor, and care to users when they most need it.

Google’s Duplex project, begun in 2016, set out to develop an AI that sounds lifelike, with the capability of analyzing a user’s mood and state of mind in real time, remembering previous conversations in order to reference them in the future, and remembering where a previous conversation stopped so it can pick it up where it left off at a later time, just like a human being can.

AI-powered CRM (customer relationship management) systems are used to drive customer loyalty, purchases, and recommendations among customers. These algorithms can identify opportunities for companies to optimize their signal-to-noise ratio by engaging their customers with the right offer at the right time.

AI can identify which customer is losing interest in a product or service and target them with special offers or rewards before they are too far gone.
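A minimal sketch of how such churn-risk scoring might work is shown below. The features (days idle, recent logins, support tickets) and the hand-set weights are purely illustrative assumptions, not any vendor’s actual model; a real CRM would learn these weights from historical customer behavior.

```python
import math

def churn_risk(days_idle, logins_last_30d, support_tickets):
    # Hypothetical hand-tuned weights for illustration only.
    score = (0.05 * days_idle
             - 0.10 * logins_last_30d
             + 0.30 * support_tickets
             - 2.0)
    # Logistic squash maps the raw score into a [0, 1] risk probability.
    return 1.0 / (1.0 + math.exp(-score))

def customers_to_retarget(customers, threshold=0.5):
    # Flag customers whose churn risk exceeds the threshold, so they can
    # be offered rewards before they are "too far gone".
    return [c["id"] for c in customers
            if churn_risk(c["days_idle"], c["logins"], c["tickets"]) > threshold]

customers = [
    {"id": "A", "days_idle": 90, "logins": 1,  "tickets": 4},  # disengaged
    {"id": "B", "days_idle": 3,  "logins": 25, "tickets": 0},  # active
]
print(customers_to_retarget(customers))  # → ['A']
```

The threshold is the business lever here: lowering it widens the retention campaign at the cost of contacting customers who were never at risk.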

AI can play what Daniel Newman and Olivier Blanchard described as the “Big Butler/Big Mother” role in their book Human/Machine: The Future of Our Partnership with Machines, by suggesting certain products and services and providing unprompted advice when the occasion presents itself.

AI-powered CRM can also build or optimize non-transactional relationships with customers. For example, in the aviation industry, AI may send you a text with friendly packing and travel tips once an airline knows that you are 48 hours from a trip. Twenty-four hours before departure, it may send you reminders about what documents to bring, along with links to useful information about your departure airport and terminal. While in transit, AI can text you your next gate number and boarding information.

How could AI be used irresponsibly and unfairly in Different Sectors and the likely unintended consequences?

1. Education: Cognitive Dependence: (impedes critical thinking and cognitive skills development)

In some educational settings, overreliance on AI tools can lead to students losing essential cognitive reasoning skills. If AI becomes the primary source of information and problem-solving, students may struggle to develop critical thinking abilities, hindering their overall cognitive development.

2. Finance: Algorithmic Bias: (especially in the loan approval process, by excluding certain groups and reinforcing societal inequality)

The financial sector has witnessed instances where AI algorithms perpetuated biases, leading to unfair outcomes. For instance, biased algorithms in loan approval processes can result in certain demographic groups facing discrimination, reinforcing societal inequalities rather than mitigating them.
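One simple way regulators or auditors can surface such bias is to compare approval rates across demographic groups, a check known as the demographic parity gap. The sketch below assumes toy data with hypothetical group labels "X" and "Y"; real audits would use far richer fairness metrics.

```python
def approval_rates(decisions):
    # decisions: list of (group, approved) pairs from a loan-approval model.
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    # Largest difference in approval rates between any two groups;
    # a large gap is a red flag worth investigating.
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates)

decisions = [("X", True), ("X", True), ("X", True), ("X", False),
             ("Y", True), ("Y", False), ("Y", False), ("Y", False)]
print(demographic_parity_gap(decisions))  # 0.75 - 0.25 = 0.5
```

A gap alone does not prove discrimination, but it tells an auditor exactly where to look next.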

3. Healthcare: Misdiagnosis and Lack of Human Oversight: (over-reliance on AI tools may lead to wrong diagnoses and incorrect treatment plans if not properly monitored by qualified medical professionals)

Overemphasis on AI in healthcare diagnostic tools, without adequate human oversight, can lead to serious consequences. Instances of misdiagnosis or incorrect treatment plans may occur if AI algorithms are not thoroughly validated and continuously monitored by qualified medical professionals.

4. Criminal Justice: Biased Predictive Policing: (AI tools may exacerbate social disparities through biased predictive policing, e.g., the New York experience)

In criminal justice, the deployment of predictive policing algorithms has raised concerns about perpetuating biases in law enforcement. If these algorithms are trained on biased historical data, they may disproportionately target certain communities, exacerbating existing social disparities.

5. Employment: Discriminatory Hiring Practices:

AI-powered hiring tools may inadvertently perpetuate bias in the recruitment process, leading to discriminatory practices in candidate selection, and undermining diversity and inclusion efforts.

6. Social Media: Amplification of Misinformation:

AI algorithms on social media platforms can unintentionally contribute to the spread of misinformation. If algorithms prioritize sensational content for engagement without adequately fact-checking, false information may gain traction, eroding the quality of public discourse.

7. Customer Service: Dehumanization and Lack of Empathy:

Overreliance on AI in customer service may result in dehumanized interactions. If AI chatbots lack empathy or fail to understand nuanced human emotions, customers may feel frustrated and underserved, negatively impacting the customer experience.


The task of ensuring the fair and responsible development, deployment, and usage of AI technology requires a multi-stakeholder approach. Critical players in the AI-inspired transformation of our ecosystem must play their respective roles conscientiously, balancing the full development of the technology with human safety.


All partnerships are built on trust, and any that is not will not survive. Therefore, Innovators and Developers must ensure that trust is at the heart of every platform, technology, and use case to drive adoption and ubiquitous deployment of such technology. We must be able to trust that our self-driving cars will not crash into homes, that our smart homes are not used to spy on us and invade our privacy, that our robot caretakers will not accidentally administer the wrong medication, and that our AI assistants will not accidentally share our financial, medical, and personal records with unauthorized third parties. We must trust that the algorithms that analyze our online and offline activities will not be used against us by hostile third parties.

Trust must be at the heart of every consumer-facing platform, app, and technology for it to reach its full potential. Every technology company that understands this will thrive.

Other roles of Innovators and Developers include:

Ethical Design- Innovators play a pivotal role in ensuring AI systems are ethically designed. This involves considering potential biases, transparency, and accountability during the development process.

Bias Mitigation-Addressing biases in AI algorithms is crucial to prevent discriminatory outcomes. Innovators must actively work to identify and rectify biases to ensure AI systems treat all individuals fairly.

Transparency and Explainability- AI systems should not be black boxes. Innovators need to prioritize transparency and explainability, allowing users and regulators to understand how AI decisions are made.
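For simple models, explainability can be as direct as reporting each input’s contribution to the decision. The sketch below assumes an illustrative linear loan-scoring model; the feature names and weights are hypothetical, and real systems would need far more sophisticated explanation techniques.

```python
def explain_decision(weights, features):
    # Per-feature contribution to a linear score, so a user or regulator
    # can see exactly which inputs drove the decision.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    verdict = "approve" if total >= 0 else "decline"
    # Sort reasons by absolute impact, largest first.
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return verdict, reasons

# Hypothetical model weights and applicant data, for illustration only.
weights = {"income": 0.6, "existing_debt": -0.9, "years_employed": 0.3}
applicant = {"income": 2.0, "existing_debt": 1.5, "years_employed": 4.0}
verdict, reasons = explain_decision(weights, applicant)
print(verdict, reasons)
```

Even this toy version lets an affected consumer ask the right follow-up question: which factor, concretely, cost me the approval?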

Regulators’ Role:

1. Policy Frameworks:

Regulators must establish comprehensive policies and frameworks that guide the ethical development and deployment of AI. These should include guidelines for transparency, accountability, and fairness.

2. Oversight and Auditing:

Regulators should implement mechanisms for oversight and auditing of AI systems to ensure compliance with ethical standards. This includes periodic reviews of algorithms and systems to identify and rectify potential issues.

3. Collaboration with Innovators:

Collaborative efforts between regulators and innovators are essential. This partnership can foster the creation of effective regulations that balance innovation with ethical considerations.

Users’ Responsibilities: Consumers must realize that the responsibility to determine whether an AI platform they opt into is toxic or harmful falls partly on them. Even if regulatory bodies impose warning labels and disclaimers on these tools, consumers must be vigilant, aware, and proactive in protecting themselves from the weaponization of AI tools, no matter where or how they are used.

 Consumers must also ensure the following:

Informed Engagement– Users should actively educate themselves about AI technologies, their capabilities, and potential ethical concerns. Informed users are better equipped to demand transparency and fairness from AI systems.

Feedback Mechanisms-Users should provide feedback on AI systems, reporting instances of bias or unfair treatment. This active engagement helps developers improve their algorithms and address unintended consequences.

Advocacy for Ethical AI- Users, as a collective force, can advocate for the responsible use of AI. By supporting and demanding ethical practices, users contribute to the development of a culture that prioritizes fairness and responsibility in AI.


As consumers become increasingly dependent on AI tools, the extent to which the manipulation of popular recommendation algorithms can be used by entities (companies, groups, and governments alike) to shape public opinion, steer consumers toward certain products and away from others, and spread misinformation should concern all of us and demands closer scrutiny.

Addressing these challenges requires a concerted effort from developers, regulators, and users to ensure that AI technologies are developed and deployed ethically, with a focus on minimizing unintended consequences and fostering positive societal impacts.
