According to Gartner, 85% of AI projects will deliver wrong outcomes due to bias in data, algorithms, or the teams responsible for managing them. In another study conducted by the University of Southern California, researchers found bias in up to 38.6% of ‘facts’ used by AI. What can be done to prevent algorithmic bias?
From a legal perspective, general counsel face potentially costly legal liability and significant reputational harm if their organizations adopt new AI-powered hiring technologies that turn out to be biased. (Consider an employment screening tool that doesn't account for accents, or facial recognition software that struggles with darker skin tones.)
In a recent Legal Dive article, Mosaic Data Science collaborated with Epstein Becker Green and EBG Advisors to discuss why general counsel should work with data scientists to ensure that the algorithms their organizations employ are compliant with anti-discrimination laws. Mosaic has supported EBG’s expansion of its national algorithmic bias auditing and risk management solutions. Likewise, Mosaic has leaned on EBG’s legal expertise to offer comprehensive Explainable AI (XAI) & algorithm bias auditing services.
In the article, Mike Shumpert, managing director at Mosaic Data Science, and Bradley Merrill Thompson, member of the firm at Epstein Becker Green, analyze the technical obstacles surrounding bias in algorithmic decision-making. Bias can enter a model in many ways: sensitive attributes can often be inferred from other, seemingly innocuous variables, and when bias is found, it often cannot simply be eradicated. Improving fairness in the algorithm's overall output frequently means reducing its overall accuracy.
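To make the proxy problem concrete, here is a minimal sketch (not Mosaic's or EBG's actual methodology) of one way a data scientist might test whether a protected attribute can be inferred from the other features a model sees; the dataset, column names, and scikit-learn model below are illustrative assumptions.

```python
# Minimal proxy-variable check: if the protected attribute can be predicted
# from the remaining features, dropping that column alone will not remove bias.
# "applicants.csv", "gender", and "hired" are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applicants.csv")
sensitive = df["gender"]                                         # protected attribute (binary here)
features = pd.get_dummies(df.drop(columns=["gender", "hired"]))  # what the model would actually see

# A cross-validated AUC well above 0.5 means the other features act as
# proxies for the protected attribute.
proxy_auc = cross_val_score(
    GradientBoostingClassifier(), features, sensitive, cv=5, scoring="roc_auc"
).mean()
print(f"Protected attribute predictable from other features: AUC = {proxy_auc:.2f}")
```

If such proxies exist, even a model trained without the sensitive column can reproduce the same disparities, which is why removing bias is rarely as simple as deleting a field.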
So, how do we fight bias? The article goes on to highlight the several areas in which data scientists and attorneys must collaborate to prevent such biases from creeping into an AI model. These include:
- Adhering to existing regulatory AI frameworks (e.g., FDA and HIPAA in healthcare, NIST guidance, etc.)
- Ensuring the data are high quality and representative of the population the model will serve.
- Implementing fairness constraints that help ensure the model does not discriminate against any group of people (a simple fairness check is sketched after this list).
- Addressing bias over time through periodic auditing.
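As a concrete illustration of what such a fairness check can look like, the sketch below computes per-group selection rates and applies the widely used "four-fifths" rule of thumb; the data, names, and 80% threshold are illustrative assumptions, not the specific constraints or audit procedure discussed in the article.

```python
# Minimal fairness-audit sketch: compare selection rates across protected
# groups and flag potential disparate impact via the four-fifths rule.
import pandas as pd

def selection_rates(decisions: pd.Series, group: pd.Series) -> pd.Series:
    """Share of positive decisions (e.g., hires) per protected group."""
    return decisions.groupby(group).mean()

def passes_four_fifths(decisions: pd.Series, group: pd.Series) -> bool:
    """True if the least-favored group's rate is at least 80% of the most-favored group's."""
    rates = selection_rates(decisions, group)
    return rates.min() / rates.max() >= 0.8

# Hypothetical model decisions (1 = selected) and each applicant's group
decisions = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
group = pd.Series(["A", "A", "A", "A", "B", "B", "B", "B"])

print(selection_rates(decisions, group))     # per-group selection rates
print(passes_four_fifths(decisions, group))  # False here: group B is under-selected
```

Run periodically on fresh production data, a check like this is one way to surface bias that drifts in over time; a real audit would combine multiple metrics with legal review.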
At Mosaic, we follow a rigorous algorithm bias auditing approach aligned with the AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST). By following this process, we help firms audit their algorithms for bias and enhance their explainability, ensuring that AI systems are not only accurate but also fair and transparent in their decision-making. Mosaic recommends Explainable AI services and MLOps for any production deployment. Read our XAI thought leadership here.