Ethics of Artificial Intelligence (AI): How to Establish a Human-Centric Approach
While Artificial Intelligence (AI) has existed for over 60 years, recent exponential developments have transformed this technology into a central part of daily life. From virtual assistants like Siri to self-driving cars, we can now teach machines to solve complex problems with the power of computer engineering and robust data sets. AI presents exciting opportunities for businesses to become more client-focused and data-driven, but this technology still relies on humans to make morally sound decisions. If organizations don’t learn to address AI’s ethical deficits, they may cause irreparable damage to people’s lives. In this article, we present the ongoing ethical challenges organizations face when using AI and offer practical tips to help businesses shape a better future.
1.) Respect User Data With Consent
AI is created through data. For this reason, we must discuss ethical data collection before diving into the development of moral AI systems. From swiping a metrocard on the way to work, to entering a camera-filled lobby, to liking posts on LinkedIn, large swaths of personal information are gathered and analyzed by organizations every day. When a person provides data online in the form of social media likes or HTTP cookies (hard data), this information can be used to make inferences about their age, gender, interests and political beliefs (soft data). While this information often improves a user’s online experience, it is frequently taken without the user’s clear, informed consent. Additionally, when data is not safeguarded, criminals can use it to spy on users, sell their personal information on gray or black markets and put them at risk of fraud and identity theft. Interventions such as the European Union’s General Data Protection Regulation (GDPR) have set helpful restrictions on the collection and storage of personal data. However, some organizations still skirt these rules by designing their web pages with dark patterns: user interfaces that intentionally confuse or coerce users into offering up their data (for example, rendering “accept all cookies” as a large, colorful button while burying the option to decline).
When designing user experiences, organizations need to think deeply about the long-term ramifications of their choices. Transparency and informed consent can be promoted by helping users clearly understand how their data will be used and by always documenting the origins of that data. Information collected from users must also be treated with the same respect we would give the person behind it. By handling user data with compassion and critical thought, we lay the foundation on which ethical AI can be built.
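To make “documenting the origins of data” concrete, one lightweight approach is to attach a structured provenance record to every data set before it enters an AI pipeline. The sketch below is a minimal, hypothetical example in Python; the field names and the `ConsentBasis` categories are our own assumptions for illustration, not a standard schema or any particular regulation’s terminology.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ConsentBasis(Enum):
    """How the data subject's consent was obtained (hypothetical categories)."""
    EXPLICIT_OPT_IN = "explicit opt-in"
    CONTRACT = "necessary for a contract"
    UNKNOWN = "unknown"  # a red flag: do not train on this data

@dataclass
class DataProvenance:
    """Provenance record attached to a data set before it reaches an AI pipeline."""
    source: str                 # where the data came from, e.g. "signup form v3"
    collected_on: date          # when it was collected
    consent_basis: ConsentBasis # the documented basis for holding it
    stated_purpose: str         # what users were told the data is for
    retention_days: int         # how long we may keep it

def usable_for_training(record: DataProvenance) -> bool:
    """Only allow data with a documented, affirmative consent basis."""
    return record.consent_basis == ConsentBasis.EXPLICIT_OPT_IN

record = DataProvenance(
    source="newsletter signup form",
    collected_on=date(2023, 1, 15),
    consent_basis=ConsentBasis.UNKNOWN,
    stated_purpose="sending the weekly newsletter",
    retention_days=365,
)
print(usable_for_training(record))  # False: provenance is unclear, so exclude it
```

The design choice here is simply that data with no documented consent is excluded by default; whatever schema an organization adopts, the point is that provenance becomes something a pipeline can check, not just a policy on paper.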
2.) Teach AI With Compassion
Artificial Intelligence is created when machines are shaped to “think” the way humans do. Simple AI responds with outcomes that have been pre-determined by an engineer. Through a subfield of AI called machine learning, however, a system follows a set of rules (known as an algorithm) to learn patterns from data and solve problems with high accuracy. Taken a step further, AI can layer these algorithms into an artificial neural network that grows more accurate through repetition, almost like a human brain. This is called deep learning.
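To make that distinction concrete, here is a minimal sketch in Python contrasting a pre-determined rule with a model that learns its own rule from examples. It uses scikit-learn’s LogisticRegression; the tiny spam-filter data set is invented purely for illustration.

```python
from sklearn.linear_model import LogisticRegression

# "Simple AI": the outcome is pre-determined by an engineer.
def rule_based_spam_filter(num_links: int) -> bool:
    return num_links > 5  # a hand-written rule, fixed until an engineer changes it

# Machine learning: the system derives its own rule from labeled examples.
X = [[0], [1], [2], [8], [9], [12]]     # feature: number of links in an email
y = [0, 0, 0, 1, 1, 1]                  # label: 0 = not spam, 1 = spam
model = LogisticRegression().fit(X, y)  # the "learning" step

print(rule_based_spam_filter(7))        # True, by fiat
print(model.predict([[7]]))             # a learned answer, shaped by the data it saw
```

The key ethical implication follows directly from the last line: the learned model’s behavior is whatever its training data taught it, which is why the quality and fairness of that data matter so much.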
Despite its similarities to the human brain, AI is still a machine without consciousness. It may be able to write an impressive, collegiate-level essay on the history of voicemail, but it doesn’t innately understand why that essay should be truthful or why a voicemail from someone’s grandmother holds intrinsic value. AI, therefore, must be taught with care and conscientiousness, much as a parent would teach a child. We must take time to help it understand complicated ideas and learn from its mistakes. We also need to be considerate about the type of information AI consumes. Data should be relevant, equitable and ethically sourced, not stolen the way some art-generating AI models use artists’ work without their knowledge.
Data sets should also be screened for implicit bias that could negatively influence an AI’s behavior. For example, an AI trained to assist with hiring might unintentionally create a barrier to diverse talent: if the historical hiring data it learns from reflects past discrimination, the model will reproduce it. Team members in all departments (but especially HR, Marketing, and Finance) should be aware of the consequences of bias in machine learning and know how to vet AI products for overlooked ethical flaws. For example, the Head of Marketing might learn to ask AI vendors for documentation on how they identified and mitigated bias in their product. Building organizational knowledge will also increase your team’s confidence around these issues. By teaching AI with compassion, we meet the machine’s need for a human touch and minimize potential harm.
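As a first-pass illustration of what “screening for bias” can mean in practice, the sketch below runs a simple selection-rate comparison on hypothetical hiring decisions in Python. The data, group names and 80% threshold (a common rule of thumb sometimes called the four-fifths rule) are used here for illustration only; real audits involve richer fairness metrics and human review.

```python
from collections import defaultdict

# Hypothetical model outputs: (applicant_group, advanced_to_interview)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Selection rate per group: the share of applicants the model advances.
totals, selected = defaultdict(int), defaultdict(int)
for group, advanced in decisions:
    totals[group] += 1
    selected[group] += advanced

rates = {g: selected[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Rule of thumb: flag any group whose rate is below 80% of the highest rate.
best = max(rates.values())
flagged = {g for g, r in rates.items() if r < 0.8 * best}
print(flagged)  # {'group_b'} -> investigate before deployment
```

A check like this doesn’t prove a model is fair, but it gives non-specialists, including the HR or Marketing leads mentioned above, a concrete question to put to vendors: what do your selection rates look like across groups, and how did you address the gaps?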
3.) Invest in Empathy
AI’s greatest weakness lies in the human element, which means human-centric skills will only become more necessary as we augment the workforce. Investing in skills like empathy and critical thinking through Learning & Development will help ensure data is chosen with care and analyzed correctly. Communication is another key skill: people without tech backgrounds need to comprehend the jargon, and tech workers need to understand, and consent to, what they are building.
Diversity is a critical human component of AI development. The tech and AI industries are notorious for a lack of diversity, which naturally leads to the development of discriminatory AI. A well-known example is facial recognition technology, which studies have shown to be markedly less accurate for some demographic groups. These demographic differentials disproportionately affect racial and gender minorities as well as the poor, posing serious risks to these groups in policing, education and employment. To promote diverse voices entering AI development spaces, two-tiered workforces that drive wage and opportunity inequality should be investigated. It also helps when harassment and discrimination response rates, as well as compensation levels, are transparently available to the public. Finally, DEI training must be a priority in all industries, with clear incentives to protect diverse workplaces. By investing in inclusivity, empathy and other integral human-centric skills, the voices necessary to the development of ethical AI will have a seat at the table.
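One practical habit follows directly from the facial recognition example: never judge a system that affects people by a single aggregate accuracy number; disaggregate the evaluation by group. A minimal sketch in Python, using invented labels and predictions purely for illustration:

```python
# Hypothetical evaluation records: (group, true_label, predicted_label)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]

# Aggregate accuracy hides the gap...
overall = sum(t == p for _, t, p in results) / len(results)
print(f"overall accuracy: {overall:.0%}")  # 75%

# ...so compute accuracy per group as well.
for group in sorted({g for g, _, _ in results}):
    subset = [(t, p) for g, t, p in results if g == group]
    acc = sum(t == p for t, p in subset) / len(subset)
    print(f"{group}: {acc:.0%}")  # group_a: 100%, group_b: 50%
```

A “75% accurate” headline figure can conceal a system that works perfectly for one group and fails half the time for another, which is exactly the kind of gap diverse teams are more likely to look for and catch.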
Conclusion
Ethics in AI is complex and nuanced, requiring deep consideration of the ethical missteps in today’s technology use. Learning & Development can grow the human skills required to meet these challenges, tending to empathy, inclusion, communication and critical thinking. As we prepare for future AI advancements, organizations must utilize their human workforce to reform current data collection practices. AI also requires the support and empathy of people to meet our ethical standards. By learning to become more human, we can become a compassionate guide and partner to AI. By upskilling and implementing ethical AI practices, businesses can mitigate risk and move in the right direction.
References:
https://hbr.org/2021/07/everyone-in-your-organization-needs-to-understand-ai-ethics
https://mindmatters.ai/2022/10/ai-art-is-not-ai-generated-art-it-is-engineer-generated-art/
https://houseofbeautifulbusiness.com/read/chatgpt-makes-us-human
https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
https://www.acm.org/code-of-ethics
https://ainowinstitute.org/AI_Now_2019_Report.pdf
https://www.moma.org/collection/terms/critical-design
https://help.madmimi.com/what-is-gdpr-and-how-does-it-affect-me/
https://oxylabs.io/blog/hard-data-vs-soft-data
https://www.kaspersky.com/resource-center/definitions/cookies
https://hbr.org/2022/03/ethics-and-ai-3-conversations-companies-need-to-be-having