There are many instances of AI going rogue with alarming results. Societies, governments, and businesses are therefore raising questions about the ethical challenges posed by AI, and rightly so. Placing blind trust in anything that makes life-changing decisions on behalf of humans is reckless. What we need is AI that works its magic while keeping its power to discriminate in check. It must assimilate socially and ethically acceptable nuances to fit unobtrusively into society. To achieve this, we need responsible AI.
The pitfalls of pure ‘machine thinking’ and the need for responsible AI
Responsible AI has varied definitions. They all aim to achieve the same outcome: transparent, fair, trustworthy, safe, interpretable, explainable, and accountable decisions from machines. The goal is to ensure machines mimic human intelligence. In this context it is worth heeding the words of cognitive psychologist and computer scientist Geoffrey Hinton, renowned for his pathbreaking work in training multi-layered neural networks, who once said, “I have always been convinced that the only way to get artificial intelligence to work is to do the computations in a way similar to the human brain.”
Hinton was right. Machine 'thinking' has several pitfalls, as real-life examples have proven. Some years ago, Amazon had to abandon its AI-based recruiting tool [1] because it was not rating candidates in a gender-neutral way: the system had learned a bias against women, which risked widening the company's workforce gender gap. In the UK, the Home Office had to abandon an AI-based algorithm [2] because it showed racial bias against visa applicants.
Microsoft hastily took down Tay [3], its AI-based chatbot, because Tay's NLP and machine learning core turned it into a toxic, sex-crazed neo-Nazi within hours of launch. Apple Card's credit algorithm drew regulatory scrutiny after it appeared to give women lower credit limits despite good credit ratings. At the source of these mishaps were poor data sets and, ultimately, opaque models that reinforced what they had learned from that data.
Without doubt, these are unacceptable consequences of using AI. But they force us to answer the question, "Do we use AI to override humanness or to inject humanness into technology?" As Amit Ray, who specializes in mindfulness meditation for corporate leadership and management, says, "As more and more AI is entering into the world, more and more emotional intelligence must enter into leadership." Some technology leaders are showing the way. Google turned down an AI-powered money-lending project. It has also blocked new AI features that analyze emotions, fearing cultural insensitivity. Microsoft has restricted software that mimics voices. And IBM has mothballed an advanced facial-recognition system. These organizations are not rejecting AI. Instead, they are treating their experience with AI as an opportunity to remediate it and make it serve humanity.
The inevitable emergence of governance and regulatory frameworks around AI
There are other efforts under way to improve the quality of AI outcomes. To "harness the transformative potential of AI by accelerating the adoption of trusted, transparent and inclusive AI systems globally", the World Economic Forum has launched the Global AI Action Alliance (GAIA) [4]. GAIA attempts to fill the gap left by the general lack of centralized government regulation. The European Union (EU) is addressing this gap independently, having proposed a comprehensive regulatory framework for AI [5]. In another AI-related development, the Chinese government recently laid down rules (English version here) [6] requiring tech companies to share their data with the government. The government plans to use a layer of AI on top of the data to arrive at decisions that will shape and impact public life. It will be interesting to see how governments work with private businesses to harness data and deploy AI.
Meanwhile, the WEF's GAIA will aid corporate boardrooms and organizational ethics committees by developing collectively agreed-upon "interoperable governance protocols for the development and use of AI technologies". This is an urgent requirement. While funding for AI technologies hit a peak of $20B in Q2 2021 [7], education and awareness of the challenges around AI have not kept pace. Today's AI systems are black boxes: there are no certifications, the systems are opaque, and there is little publicly verifiable governance. In the interest of public safety, AI-powered systems need to be regulated as tightly as military, pharmaceutical, vaccine, food, aircraft, and vehicle systems.
Underlying these challenges is the real problem: the knowledge and expertise around responsible AI and the requisite data governance are concentrated in a few technology organizations. But a growing number of organizations that want to leverage AI are waking up to the fact that they can avoid the pitfalls of AI adoption, without delaying adoption, by partnering with technology specialists.
Using a technology partner: catching the AI bus while ducking its pitfalls
Recently, Marlabs worked with a large US-based university to help predict admission and enrollment for its MBA program. The goal was to use data-driven intelligence to identify the right candidates who would also be most likely to enroll in the program. We started with 400+ attributes for each candidate, recognizing that attributes like "gender", "race/ethnicity", and "income" in the historical data used to train the algorithms could introduce bias. We analyzed the historical data and trained models on modified and enriched data to ensure representation and fair classification for all categories of applicants. The result was admission and enrollment of candidates that was balanced from a diversity perspective.
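As a rough illustration of that approach, the Python sketch below shows one way such a representation check and rebalancing step might look. The column names, toy data, and upsampling strategy are assumptions for illustration only; this is not the actual Marlabs pipeline.

```python
# Hypothetical sketch: check selection-rate parity across applicant groups,
# rebalance the training data, and re-audit the trained model's predictions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Toy historical admissions data (in practice: 400+ attributes per candidate).
df = pd.DataFrame({
    "gpa":      [3.9, 3.2, 3.8, 2.9, 3.7, 3.1, 3.6, 3.0],
    "work_yrs": [5,   2,   6,   1,   4,   3,   5,   2],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "admitted": [1,   0,   1,   0,   1,   0,   0,   0],
})

def selection_rates(frame, label="admitted"):
    """Share of positive outcomes per group (a demographic-parity style check)."""
    return frame.groupby("group")[label].mean()

print("Historical selection rates:\n", selection_rates(df))

# Rebalance: upsample the under-represented group so the model does not
# simply learn the historical skew.
majority = df[df.group == "A"]
minority = df[df.group == "B"]
minority_up = resample(minority, replace=True, n_samples=len(majority), random_state=0)
balanced = pd.concat([majority, minority_up])

# Train only on non-protected attributes; "group" is excluded from the features.
X, y = balanced[["gpa", "work_yrs"]], balanced["admitted"]
model = LogisticRegression().fit(X, y)

# Audit the trained model's predictions per group on the original data.
df["predicted"] = model.predict(df[["gpa", "work_yrs"]])
print("Predicted selection rates:\n", selection_rates(df, label="predicted"))
```

The key design choice is to audit outcomes per group both before and after training, rather than assuming that simply dropping the protected attributes from the feature set removes the bias.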
Marlabs also recently used ML-based matching of point-of-sale (POS) data against a dictionary of items for a top-10 market research company that gathers data every week from 1,500+ retailers. The engagement was driven by data quality issues that were producing incorrect intelligence and an unacceptable volume of errors in matching results. The Marlabs solution combined human intelligence with ML models to inject context and lexical interpretation. Marlabs delivered a 95% match rate and a massive reduction in errors.
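A minimal sketch of this kind of human-in-the-loop matching, assuming TF-IDF character n-grams and an illustrative confidence threshold (the actual Marlabs models and data are not shown here), might look like this:

```python
# Illustrative sketch: match raw retailer POS item descriptions to a canonical
# "dictionary of items", routing low-confidence matches to human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

dictionary_items = ["Coca-Cola 12oz Can", "Diet Coke 12oz Can", "Pepsi 2L Bottle"]
pos_rows = ["COKE 12 OZ CN", "DT COKE 12OZ", "PEPSI 2 LTR BTL"]

# Character n-grams are robust to the abbreviations typical of POS feeds.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
dict_matrix = vec.fit_transform(dictionary_items)

REVIEW_THRESHOLD = 0.5  # assumed cut-off below which a human reviews the match

for row in pos_rows:
    sims = cosine_similarity(vec.transform([row]), dict_matrix)[0]
    best = sims.argmax()
    status = "auto-matched" if sims[best] >= REVIEW_THRESHOLD else "needs review"
    print(f"{row!r} -> {dictionary_items[best]!r} ({sims[best]:.2f}, {status})")
```

The point of the threshold is exactly the blend described above: the model handles the bulk of unambiguous matches, while humans supply context for the cases the model is least sure about.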
Marlabs' deep-tech innovation team is currently exploring approaches that use AI to make AI responsible, leveraging the same ML algorithms to detect biases and other anomalies in data sets and data sources before the data is made available to production-ready, decision-making AI systems.
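A rough sketch of that idea is shown below: automated checks run over an incoming data set before it reaches a production decision-making model. The thresholds, column names, and synthetic data are assumptions for illustration, not a description of the Marlabs work.

```python
# Hypothetical pre-production data screen: anomaly detection plus a
# representation check on an incoming batch of records.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
incoming = pd.DataFrame({
    "amount": np.append(rng.normal(100, 15, 500), [950.0]),  # one obvious outlier
    "group":  ["A"] * 450 + ["B"] * 51,
})

# 1) Anomaly screen: flag records that look unlike the rest of the batch.
iso = IsolationForest(contamination=0.01, random_state=0)
incoming["anomaly"] = iso.fit_predict(incoming[["amount"]]) == -1
print("Anomalous rows:", int(incoming["anomaly"].sum()))

# 2) Representation screen: warn if any group falls below a minimum share.
MIN_SHARE = 0.2  # assumed policy threshold
shares = incoming["group"].value_counts(normalize=True)
skewed = shares[shares < MIN_SHARE]
if not skewed.empty:
    print("Representation warning for groups:", list(skewed.index))
```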
Organizations will continue to use increasing amounts of AI to improve outcomes. The technology will fight cyber security threats, run manufacturing plants, improve medical care, prevent industrial accidents, compose music, forecast wildfires, rescue victims of natural disasters, and help us understand the world and the universe better. It will open the doors to a safer, more comfortable, and enjoyable future. But AI is a double-edged sword. It can help unlock a better future if we polish it to be responsible. If we fail, it will erode our trust in the agencies that use AI thoughtlessly.
Sources:
1. Amazon scraps secret AI recruiting tool that showed bias against women | Reuters
2. Home Office to scrap 'racist algorithm' for UK visa applicants | Immigration and asylum | The Guardian
3. Microsoft's neo-Nazi sexbot was a great lesson for makers of AI assistants | MIT Technology Review
4. Global AI Action Alliance | World Economic Forum (weforum.org)
5. Regulatory framework on AI | Shaping Europe's digital future (europa.eu)
6. The Central Committee of the Communist Party of China and the State Council issued the "Implementation Outline for the Construction of a Government Ruled by Law (2021-2025)" – Xinhua English.news.cn (www-gov-cn.translate.goog)
7. AI In Numbers Q2'21: Funding Trends, Exits, And Corporate Activity – CB Insights Research