In this Article:
What Organizations & Individuals Can Do to Ensure Responsible, Equitable Deployment of Artificial Intelligence
What Do Organizations Get Wrong When It Comes to Ethical AI?
What Is the Ethical Dilemma When It Comes to AI?
How Do You Foresee AI Technology Impacting People On An Individual Level?
What Can We Do To Ensure AI Is Ethical?
Any Parting Advice for Those Interested in Ethical AI?
Interested in Learning More?
What Organizations & Individuals Can Do to Ensure Responsible, Equitable Deployment of Artificial Intelligence
Artificial intelligence has the potential to revolutionize industries, improve lives, and drive unprecedented innovation. However, the seismic power of this technology also raises critical questions about its known and unknown consequences, particularly when it comes to ethics. According to a 2023 survey by Salesforce, 63% of consumers are concerned about bias in AI, while only 57% trust that companies will use AI responsibly. This underscores the importance of ethical AI in building consumer trust and ensuring the responsible deployment of the technology. To get to the bottom of these concerns, we sat down with author, entrepreneur, and AI public policy professional Stacey Ann Berry.
Stacey’s work explores the impact of technology on marginalized communities, human rights, and civil liberties. She holds a Master’s in Public Policy, Administration and Law and an Honors BA from York University, as well as a diploma in Court and Tribunal Administration from Seneca College. She is also the instructor of the new Udacity course, AI Governance, Policy, and the Public Good. In this article, we highlight some of Stacey’s insights into a number of pressing issues surrounding the rise of AI and its implications for society.
What Do Organizations Get Wrong When It Comes to Ethical AI?
Stacey: Organizations often overlook the critical aspect of inclusivity in the development and deployment of AI technologies. There is a sizable disconnect between the technologists who are creating and deploying AI and the individuals who are most impacted by these technologies, who are often from communities of color and other historically marginalized communities. The race to be first to market often takes precedence over considering the unintended consequences that AI might have on these groups.
For example, individuals from the Black community have been falsely accused of crimes after being misidentified by facial recognition software. An article from the Innocence Project cites a study by the National Institute of Standards and Technology, which found that the algorithms in facial recognition systems could not “distinguish between facial structures and darker skin tones” and are more likely to misidentify Black and Asian people. This example highlights the pressing need for a multidisciplinary approach that bridges the gap between technology developers and the communities they serve. Breaking down silos and fostering collaboration across different sectors is essential for preventing unintended harm.
Moreover, the lack of diverse representation at major tech conferences and other places where these important AI conversations are happening highlights the necessity for a more inclusive approach. The voices of those who are most likely to be harmed by AI technologies need to be amplified and included in these critical conversations. We need to be more intentional about the tables we’re sitting at, and invite those who are not there.
What Is the Ethical Dilemma When It Comes to AI?
Stacey: AI holds the potential for tremendous benefits, from improving cancer screenings to enhancing firefighter safety. However, the rush to deploy new technologies often overlooks the people who will be affected the most. We need to emphasize the importance of putting humanity before profits and ensuring that human rights are at the forefront of technological advancements.
One significant concern is what’s known as the ‘digital divide’, which refers to the gap between those who have access to modern information and communication technology and those who do not. As AI becomes more pervasive, this divide could widen, leaving marginalized communities further behind. Publicly available resources and equitable access to technology must be prioritized to prevent this.
During the global health pandemic, the digital divide became glaringly apparent as some students struggled with at-home learning due to a lack of laptops and high-speed internet. A street-by-street map created by Microsoft showcases the extent of these digital deserts. Addressing this issue requires two key principles: ensuring that individuals always have a choice in how they access publicly funded and essential services like education, and securing true, informed consent from consumers for the technologies used to access those services.
How Do You Foresee AI Technology Impacting People On An Individual Level?
Stacey: The future of work is one area where AI’s impact will be profoundly felt, and there is a need for policies that protect workers from the disruptions AI causes. The work people will have access to will require not only soft and technical skills but also training in various components of AI, such as machine learning, programming languages, data modeling, and data analytics. Those who lack these skills, or who have little access to technology or only limited broadband, could be excluded from emerging job opportunities and from meaningful participation in society.
While AI might free people from repetitive tasks, it is essential to consider the impact on those who find meaning in such roles. For instance, not everyone aspires to academic or high-level professional work; some people get true satisfaction from customer service roles in the retail and hospitality sectors. Policymakers and technology developers must recognize and respect the diverse ways people find fulfillment in their work.
What Can We Do To Ensure AI Is Ethical?
Stacey: On an individual level, I recommend engaging with non-partisan advocacy organizations and utilizing available resources. The Center for AI and Digital Policy provides legislative frameworks and updates on AI policy, allowing for public comment and participation. The Algorithmic Justice League is another excellent resource, advocating for independent audits of technology.
At the organizational level, companies must prioritize human rights and invest in impartial, routine audits of AI systems to build trust with their customers. This should include audits of datasets to identify and remove bias. Ensuring that AI technologies are developed and deployed responsibly involves more than speed and efficiency; it requires a commitment to ethical considerations and to the well-being of people from all socio-economic backgrounds and communities. Ethical principles such as fairness, accountability, privacy, inclusivity, and transparency can help prevent societal harm from AI deployment.
Any Parting Advice for Those Interested in Ethical AI?
Stacey: What’s great about AI is that those with a passion for equity and social justice don’t necessarily need technical training or a policy degree to make a difference or to contribute to the public narrative about technological advancement. If you want to stay equipped for the age of artificial intelligence we find ourselves in, taking upskilling courses on platforms like Udacity can help position you as a thought leader in the space. It can also open the door to high-demand and emerging career opportunities. We need more people with diverse perspectives, from various walks of life, contributing to the public discourse about advanced technologies, especially as it pertains to the impact of AI on human rights and the environment.
Ethical AI is not a field reserved solely for technologists and policymakers – it requires a diverse range of voices and perspectives. By exploring credible information, attending tech events, engaging in conversations about AI, advocating for inclusive AI practices, and committing to continuous learning, individuals and organizations can contribute to a more equitable and responsible future for AI.
Interested in Learning More?
For those interested in shaping the future of ethical and responsible AI, consider enrolling in Stacey Ann Berry’s course, AI Governance, Policy, and the Public Good. In this course, Stacey Ann dissects the complexity of AI governance, policymaking, trust, and participatory design, and examines how we can ensure that AI benefits society, respects human values, and promotes fairness. Equip yourself with the knowledge and skills to make a meaningful impact in the world of AI. Join the movement towards responsible and equitable AI today!