“Will AI replace humans?” is the question that has been looming over our heads for a while, but will AI be as good as we think?
The growth that the field of AI has seen in the last few years is astronomical. From unlocking our phones with facial recognition to using algorithm-driven streaming apps, AI has permeated our everyday lives to such a degree that it’s becoming increasingly difficult to notice its presence. What’s more, many things that were once deemed impossible are now possible, thanks to AI, and the capabilities of AI continue to expand. For instance, Toby Walsh, a leading AI researcher, predicts that AI will reach human-like intelligence by 2062.
However, Walsh has also spoken out against unethical uses of AI and believes we will keep running into difficult situations as long as autonomous systems are designed without alignment to human values. AI can radically reshape society in the coming years, but at what cost? Organizations need to consider the broader impact of AI and avoid myopic visions of the future. What that actually entails can be difficult to pin down, which is why the ethical implementation of AI remains an ongoing discussion in many organizations.
Bias in AI and its impact on organizations
One might think that technology can be used to eliminate conflicts that arise from human bias. However, our biases slip into the development and use of technology. Over- or under-representation within data sets can produce unfair or erroneous results, causing both the intentional and unintentional perpetuation of inequality, bias, and discrimination.
What’s more, unlike traditional software, machine learning systems aren’t built from explicitly programmed rules; they infer statistical patterns from data, leaving room for ambiguity in both inputs and outputs. Teams developing AI must therefore contend with the fact that whatever bias is embedded in the training data inherently determines the quality of the system.
A notable example came to light in 2018, when Reuters reported that Amazon had tried to bring AI into its hiring process. The company wanted to mechanize the work of filtering through hundreds of resumes. But shockingly, the new hiring tool systematically penalized women. Amazon had trained the AI on resumes of its existing employees, and those resumes predominantly came from men. This caused the algorithm to prefer male candidates over female ones. Having found no way to make the system gender-neutral, Amazon decided to scrap the AI-based hiring tool.
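The dynamic behind the Amazon case can be sketched in miniature. The toy Python example below (entirely hypothetical data, not Amazon’s actual system) scores resumes by how often their words appeared in previously accepted versus rejected applications. Because the historical data is skewed, a word like “women’s” picks up a negative weight that has nothing to do with qualifications:

```python
from collections import Counter

# Hypothetical historical data: past hiring outcomes skewed toward one group.
hired = ["captain chess club", "captain football team", "executed project"]
rejected = ["women's chess club captain", "women's debate team lead"]

def word_weights(hired, rejected):
    """Score each word by how much more often it appears in hired resumes
    than in rejected ones -- a crude stand-in for a learned model."""
    pos = Counter(w for resume in hired for w in resume.split())
    neg = Counter(w for resume in rejected for w in resume.split())
    return {w: pos[w] - neg[w] for w in set(pos) | set(neg)}

def score(resume, weights):
    """Sum the learned word weights for a candidate's resume."""
    return sum(weights.get(w, 0) for w in resume.split())

weights = word_weights(hired, rejected)

# The word "women's" inherits a negative weight purely from the skew in
# the training data, so this resume scores lower than an otherwise
# identical one without that word.
print(score("women's chess club captain", weights))  # → -1
print(score("chess club captain", weights))          # → 1
```

The model never sees a gender label; the bias arrives entirely through proxy words in the skewed data, which is exactly why such problems are hard to patch after the fact.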
So, what can be done?
The creativity, empathy, and dexterity of human beings should be taken into account when developing human-like technology with AI, and the moral principles and values that govern humans should apply to machines as well. To that point, organizations that work with AI must be aware of the subconscious influences that can affect the morality of business operations. Most importantly, ethical AI should not be limited to what is permissible by law. An AI algorithm that manipulates people into undesirable behavior might be legal, but it isn’t ethical.
Ethical AI must have fundamental values built into its construction, such as individual rights, privacy, non-discrimination, and non-manipulation. Organizations that develop AI must find ways to avoid these ethical pitfalls and hold frank discussions about the problems that arise from collecting massive troves of data, particularly when that data is used to train machine learning models. Just like other risk management strategies, an operationalized approach will help an organization identify and reduce AI ethical risk. Biases in AI can be mitigated; after all, AI is only as good as the data it is fed.
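One concrete, operational mitigation is rebalancing the training data so that under-represented groups are not drowned out. The sketch below (a simplified illustration, not a complete fairness strategy) assigns each sample a weight inversely proportional to its group’s frequency, so every group contributes equally during training:

```python
from collections import Counter

def balancing_weights(groups):
    """Weight each sample inversely to its group's frequency, so that
    each group's total weight is equal regardless of representation."""
    counts = Counter(groups)
    total = len(groups)
    return [total / (len(counts) * counts[g]) for g in groups]

# Hypothetical training set with a heavy 90/10 group imbalance.
samples = ["A"] * 90 + ["B"] * 10
w = balancing_weights(samples)

# Each group now carries equal total weight: 90 * 0.556 == 10 * 5.0 == 50
print(sum(w[:90]), sum(w[90:]))
```

Reweighting alone won’t catch proxy features or labeling bias, which is why it should sit alongside data audits and ongoing fairness reviews rather than replace them.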
ManageEngine offers AI-enhanced IT management, spanning service management, IT security, and IT monitoring. ManageEngine’s AI model was built with business value and end-user experience in mind. These AI-based solutions provide well-defined, actionable insights while also automating routine tasks, reducing the margin for error, engaging with clients, and maximizing employee productivity.