Recently there have been a number of cases where a machine learning model was trained and deployed, and only after deployment did users discover that it was making predictions that were biased against certain minority groups, or that simply defied common sense. Such predictions can cause obvious harm if left unaddressed, and the damage done by these mistakes can derail an otherwise successful AI implementation program.
If we hope to prevent these mistakes, we need to take fairness seriously from the outset. That begins at the planning and development stage, by identifying all stakeholders who could be affected by the technology and making sure they are involved in the process, from initial design through development all the way to quality assurance.
When it comes to data, we are all aware that privacy is of utmost importance. Yet many companies still lack data management policies and are not transparent about how they handle data. It is often difficult for users to understand a company's data policy, as it is buried in a long legal document that may not even reflect the reality of the company's infrastructure. We believe it is important to develop a simple, transparent data policy that helps customers make informed decisions about their data while increasing trust in the system.
Budgeting enough time for quality control and inspection is also often overlooked. Not only do we need automated regression tests and evaluation on separate data that the model never saw during development, we also need human inspection to probe edge cases. Being able to explain the decisions a model makes also helps to allay fears. The field of model explainability is still young, and in some cases we do not yet have adequate answers for how to explain a model's output, but certain visualization techniques and mathematical guarantees, coupled with extensive testing, can provide reassurance that the model is unlikely to make a catastrophic mistake. Coupling machine learning models with logic-based systems is another way to provide safeguards against unpredictable scenarios. Crucially, we need to budget enough time for these tests, which are too often skipped in a rush to deployment.
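To make the idea of coupling a learned model with a logic-based system concrete, here is a minimal sketch. All names and rules are hypothetical, and the "model" is a toy scoring function standing in for a real trained model: hard domain rules are checked first and override the model in scenarios where a wrong prediction would be clearly unacceptable.

```python
def model_predict(features):
    """Stand-in for a trained model: a toy linear score over two features."""
    score = 0.4 * features["income"] / 1000 + 0.6 * (1 - features["debt_ratio"])
    return "approve" if score > 0.5 else "reject"

# Each rule pairs a condition with a forced decision. These encode domain
# constraints that must hold regardless of what the model says.
SAFEGUARD_RULES = [
    (lambda f: f["age"] < 18, "reject"),          # applicants must be adults
    (lambda f: f["debt_ratio"] > 0.9, "reject"),  # extreme debt: hard stop
]

def safe_predict(features):
    """Apply the logic-based safeguards first; fall back to the model otherwise."""
    for condition, forced_decision in SAFEGUARD_RULES:
        if condition(features):
            return forced_decision
    return model_predict(features)
```

The safeguard layer is deliberately simple: because the rules are explicit, they can be audited, tested exhaustively, and explained to stakeholders, even when the model behind them cannot. For example, `safe_predict({"age": 16, "income": 80000, "debt_ratio": 0.2})` returns `"reject"` no matter how favourable the model's score is.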
At Recursive we believe that by following such simple principles we can ensure that AI developments will have a positive effect on society. A recent publication, "AI for social good", by leading AI researchers suggests that AI development following certain recommendations can be a critical enabling technology for achieving the SDGs.
In "AI for social good", the authors outline the key principles for ethical AI development:
- Expectations of what is possible with AI need to be well-grounded.
- There is value in simple solutions.
- Applications of AI need to be inclusive and accessible, and reviewed at every stage for ethics and human rights compliance.
- Goals and use cases should be clear and well-defined.
- Deep, long-term partnerships are required to solve large problems successfully.
- Planning needs to align incentives, and factor in the limitations of both communities.
- Establishing and maintaining trust is key to overcoming organisational barriers.
- Options for reducing the development cost of AI solutions should be explored.
- Improving data readiness is key.
- Data must be processed securely, with utmost respect for human rights and privacy.
We believe these guidelines form only a minimum set of requirements, and as we learn more we will surely add to this list. Broadening the diversity of AI developers and including the wider community in AI development is critical to this effort.
The deep, long-term partnerships required to solve large problems successfully can only be achieved if all parties involved build trust in each other.