Exploring the Ethical Landscape of Artificial Intelligence
Highlighting key takeaways & related thoughts from AILA’s Future of AI Responsibility Symposium.
AILA’s recent symposium on the Future of AI Responsibility featured incredible keynote speakers, panels, and interactive sessions devoted to exploring the ethical landscape of artificial intelligence (AI) and its effects on communities, businesses, policy, and beyond.
As businesses across the globe increasingly tap into the power of AI, the implications of the technology are permeating our everyday lives and becoming more widely recognized. And with great power comes great responsibility.
Responsible artificial intelligence encompasses the frameworks and principles used to hold AI to certain standards and mitigate the harms it could cause. This means wielding AI tools with core tenets of transparency, fairness, security, and inclusivity in mind. But of course, this is easier said than done: countless factors go into designing AI models, and education and standards development have not kept pace with AI development.
At a high level, issues can arise anywhere across the AI production pipeline, from excluding already underrepresented groups in data collection, to the multitude of ensuing data privacy problems, to the silos and polarization that algorithmic personalization can perpetuate (see a longer list here). This breadth is what makes these issues so difficult to identify and control.
Some common examples of the harm these kinds of ethics gaps can foster include racial profiling in predictive policing algorithms, hiring tools that screen out minority candidates because they were trained on a company’s historically homogeneous workforce, and consequential decisions made on the basis of data from sources like social media or smartphones. The Boston-based project Street Bump illustrates the last case: the app was intended to improve neighborhood streets, but because smartphone ownership correlated with wealth, it collected more data, and thus drove more road repairs, in wealthier areas, failing to serve the neighborhoods that would have benefited most from these fixes.
With an overview of the subject under our belt, let’s now dive into some of my biggest takeaways from the AILA event, and some related thoughts they prompted on the current and future ethical landscape of artificial intelligence!
1. The majority of the work exists in thought, not tech: It’s easy to treat algorithms as an all-knowing technology and blame ‘the Algorithm’ for anything that goes wrong. Referring to it as its own individual actor detaches it from all the other factors that went into creating it, and pulls responsibility away from the people behind it. Dr. Lawrence Ampofo, director of growth at AI Responsibility Lab, spoke to this, saying that issues are often caused by people, not necessarily the tech itself.
As raised in one of my favorite articles, ‘Big Data Is People!’ by Rebecca Lemov, the way we discuss and interpret data has come to center on concepts like the three V’s, or on data as an ethereal catch-all label, obscuring the privacy implications and the fact that data describes, and is collected and manipulated by, individual people. We need to humanize the field of AI in our rhetoric about it in order to begin eliminating this de-responsibilisation.
Beyond general rhetoric, companies need to talk internally about the tenets of ethical AI and consider the steps they can take. Panelist Olivia Gambelin (Combatting Systematic Biases session) described how, as part of her AI ethics advisory work, her team first asks client teams for their definition of fairness. Often, individuals all have different views on what a fair model looks like, and left unaddressed, this can lead to significant miscommunication (and the resulting poor models!). These conversations help build a strong foundation for company cultures of responsible AI building and deployment.
We don’t have much power over the technology itself. But what we do control is how we work with the tech, how we identify and address problems within it, and how we talk about it. That starts with closing the computers, thinking critically, and having conversations. And that could make all the difference.
2. The pandemic is accelerating AI usage: A common sentiment throughout the event was that the growth of AI is undeniable, and the pandemic is only serving to accelerate it further.
Companies are broadly seeking to incorporate AI further into their operations: 57% of managers said their company was piloting or using it in 2020, and a McKinsey global survey reported that the pandemic has been a trigger for even more initiatives. In a KPMG survey, 88% of small and 80% of large business owners also reported that AI technology helped their company during the coronavirus outbreak, with use cases ranging from productivity monitoring to boosted service offerings to fraud detection.
3. Ethics as an asset: As companies increasingly adopt AI, panelists also predicted, and stressed the necessity of, treating ethics as its own asset to pair with the technology.
They speculated that this will grow in importance within companies over the next five years, reflected in the development of roles like chief data ethics officer and in growing data governance teams. The concept is already in use at Gambelin’s AI ethics advisory firm, which treats ethics as a service for AI development, providing a guiding hand for training employees, identifying risk areas, and shaping data strategies.
4. Mirroring and amplifying deep-seated inequalities: Many of the systemic bias issues pervasive in AI aren’t new to us; we’ve seen them throughout history, and they are a reminder of our need to intervene both on and offline.
Going back to the earlier example of policing, predictive AI has been used in the criminal justice field to inform people’s sentences and has been found to recommend disproportionately harsher sentences for people of color (particularly Black men) than for white individuals. This bias, however, reflects systemic racism that has been prevalent in our criminal justice systems for years, as minority communities have been disproportionately targeted and subjected to harsher sentences. The algorithm reflects and reinforces existing biases in society.
When algorithms take in information about the world and spit out inequitable outcomes, they are holding up a mirror to us, clearly highlighting that these issues exist and are deeply ingrained in facets of society. The problem compounds when trust is placed in these models and consequential decisions are made based on them, amplifying biases and further widening the gaps between communities.
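One simple way to look into that mirror quantitatively is an outcome audit: compare the rate of favorable model decisions across groups. The sketch below is purely illustrative; the data, group labels, and the 0.8 threshold (borrowed from the commonly cited “four-fifths rule” of thumb) are my own assumptions, not anything presented at the symposium.

```python
# Illustrative sketch: auditing model decisions for disparate impact.
# All data below is hypothetical.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of favorable-outcome rates between two groups.
    Values well below 1.0 suggest group_a is disadvantaged."""
    return favorable_rate(group_a) / favorable_rate(group_b)

# Hypothetical model decisions (1 = favorable) for two demographic groups.
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 favorable
group_b = [1, 1, 0, 1, 1, 1, 0, 1]   # 6/8 favorable

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.5 — well below the 0.8 "four-fifths" rule of thumb
```

A real audit would use the model’s actual decision log and statistically meaningful sample sizes, but even this toy check makes the mirror concrete: the disparity is visible directly in the outcomes, regardless of how the model works internally.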
5. Working forwards, and backwards: A common effort to reduce bias against protected groups and to counter the societal inequalities discussed in the last point is to examine the training data to ensure fairness, equity, and representation.
While this has immense value, a problem should be looked at both forwards and backwards. As several panelists discussed, models can develop proxies correlated with protected groups; these variables may be unidentifiable in the initial dataset but can be surfaced by looking at the outcomes, reverse engineering, and validating any significant correlations.
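Working backwards might look something like the following sketch: for each input feature, check how strongly it correlates with a protected attribute that was deliberately left out of training. The feature names, data, and 0.7 flagging threshold are all hypothetical, chosen only to illustrate the idea of proxy hunting.

```python
# Illustrative sketch: hunting for proxy variables by measuring how strongly
# each input feature correlates with a protected attribute (itself excluded
# from training). All names and data below are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical audit dataset: protected group membership plus model features.
protected = [1, 1, 0, 0, 1, 0, 1, 0]
features = {
    "zip_code_income": [20, 22, 60, 58, 25, 61, 21, 59],  # tracks the group
    "years_experience": [3, 7, 5, 2, 6, 4, 8, 1],         # roughly independent
}

for name, values in features.items():
    r = pearson(protected, values)
    flag = "possible proxy" if abs(r) > 0.7 else "ok"
    print(f"{name}: r={r:+.2f} ({flag})")
```

In practice a proxy can also be a nonlinear combination of features that no single correlation will catch, which is exactly why the panelists’ advice to examine outcomes and reverse engineer from them matters; this sketch is only the simplest first pass.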
With a clear continual growth in the usage of AI, it’s critical for us to be doing the work to make ethics and AI synonymous; we must continue to critique, learn, and iterate, in hopes of reaping all the beautiful potential AI can have for societal good (a topic to be explored in a future post!), rather than letting it further rip us apart.
To learn more about AILA: https://www.joinai.la/.