DEFINING AI BIAS

To start, the U.S. needs a common understanding of AI and the related problems it can generate, which is important in a space where meanings are often ambiguous and in some instances fragmented. The National Artificial Intelligence Initiative has defined trustworthy AI as appropriately reflecting “characteristics such as accuracy, explainability, interpretability, privacy, reliability, robustness, safety[,] . . . security or resilience to attacks,” all while “ensur[ing] that bias is mitigated.” In a more recent report, NIST defined bias as “an effect that deprives a statistical result of representativeness by systemically distorting it.” Added to these are general definitions adopted in the private sector, which equate bias mitigation with fairness models. A previous Brookings report approaches the definition from a more comparative lens, framing bias as “outcomes which are systemically less favorable to individuals within a particular group and where there is no relevant difference between groups that justifies such harms.” Further, the authors suggest that algorithmic biases in machine learning models can lead to decisions with a collective, disparate impact on certain groups of people, even absent the programmer’s intention to discriminate.
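
To make that comparative framing concrete, here is a minimal sketch, in Python with hypothetical hiring data, of one common way practitioners quantify “systemically less favorable” outcomes: comparing selection rates across groups via a disparate impact ratio. The data, group labels, and the 0.8 rule of thumb are illustrative assumptions, not part of the definitions quoted above.

```python
# Minimal sketch: quantifying "systemically less favorable" outcomes
# with a disparate impact ratio. All data below is hypothetical.

from collections import defaultdict

# Hypothetical decisions from an automated hiring tool: (group, selected).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Selection rate per group: fraction of positive outcomes.
totals, positives = defaultdict(int), defaultdict(int)
for group, selected in decisions:
    totals[group] += 1
    positives[group] += int(selected)

rates = {g: positives[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Disparate impact ratio: lowest group rate over highest group rate.
# A common (though contested) rule of thumb flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}" + (" (flagged)" if ratio < 0.8 else ""))
```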

At face value, the U.S. definitions tend to be broad and somewhat generalized when compared to those from the EU, which has positioned AI according to practical degrees of risk. Most notably, the EU Artificial Intelligence Act categorizes AI use into three tiers. Uses posing unacceptable risk would be prohibited (for example, the use of facial recognition for law enforcement), while high-risk systems would be authorized but subject to scrutiny before they can gain access to the EU market (for example, AI used for hiring and calculating credit scores). Meanwhile, limited- and minimal-risk AI, such as chatbots and AI used in inventory management, would be subject to light transparency obligations. Civil and human rights are factored into the definitions offered by the Organisation for Economic Co-operation and Development (OECD) and other international bodies. The OECD defines innovative and trustworthy AI as that which reflects respect for human rights and democratic values; standards for inclusive growth; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability. Compared to the U.S., international entities have taken a more proactive, and perhaps prescriptive, approach to defining bias to ensure some common consensus on the harms being addressed.
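
As a rough illustration of this tiering, and not an encoding of the Act’s actual legal text, the sketch below maps the example use cases mentioned above onto the three categories with a simple Python dictionary; the tier labels and lookup function are hypothetical.

```python
# Illustrative sketch of the EU AI Act's risk tiers as described above.
# Tier names, the example mapping, and the lookup function are hypothetical,
# not a representation of the Act's actual legal text.

RISK_TIERS = {
    "unacceptable": {"obligation": "prohibited",
                     "examples": ["facial recognition for law enforcement"]},
    "high": {"obligation": "authorized, subject to pre-market scrutiny",
             "examples": ["AI-driven hiring", "credit scoring"]},
    "limited_or_minimal": {"obligation": "light transparency obligations",
                           "examples": ["chatbots", "inventory management"]},
}

def obligations_for(use_case: str) -> str:
    """Return the obligation attached to a use case under this toy mapping."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unmapped: requires case-by-case assessment"

print(obligations_for("credit scoring"))
# -> high: authorized, subject to pre-market scrutiny
```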

While roundtable participants did not reach full consensus on a single definition of AI bias, they did offer perspectives on the outcomes that should be further investigated, especially those that collide with the public interest and equity. Generally, diversity and inclusion are treated as afterthoughts in AI development and deployment, raised only when systems go awry and then addressed with quick fixes that do not reach the breadth of the harms such technologies can cause. Roundtable experts also shared that most biases occur as a consequence of poor data quality, which will be discussed later in the blog. Experts further pointed to the lack of privacy in this technological age, which continues to leave marginalized groups more vulnerable to unmitigated data collection without their knowledge. In sum, roundtable participants found that AI biases reflect larger systemic issues of societal discrimination, poor data quality, and the lack of data privacy protections. They also noted that the lack of workforce diversity in the computer and data sciences hinders more inclusive approaches.
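
As one hypothetical illustration of the data quality point above, the sketch below (Python, with made-up numbers) audits whether groups in a training dataset appear in proportions comparable to a reference population, the kind of basic representativeness check that can surface the distortions NIST’s definition describes. The dataset, reference shares, and the five-point tolerance are assumptions for illustration only.

```python
# Minimal sketch: auditing a training set for representativeness.
# Reference shares and the tolerance threshold are illustrative assumptions.

from collections import Counter

# Hypothetical group labels attached to training examples.
training_groups = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

# Hypothetical reference population shares (e.g., from census data).
reference_shares = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

counts = Counter(training_groups)
n = sum(counts.values())

for group, ref in reference_shares.items():
    observed = counts.get(group, 0) / n
    gap = observed - ref
    flag = "UNDERREPRESENTED" if gap < -0.05 else ""  # 5-point tolerance, assumed
    print(f"{group}: observed {observed:.2f} vs reference {ref:.2f} {flag}")
```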

These factors shared during the roundtable underscore why the U.S. needs more focused guidance on how to attain inclusive, equitable, and fair AI. The Biden administration has already centered equity among federal initiatives, including AI. Executive Order 13985, Advancing Racial Equity and Support for Underserved Communities Through the Federal Government, directs the U.S. Department of Defense to advance equitable AI by “investing in agency-wide responsible AI development and investing in the development of a more diverse AI workforce, including through partnerships with Historically Black Colleges and Universities (HBCUs) and Minority Serving Institutions (MSIs).” The previous administration provided a running start on AI governance when it delved into discussions and strategies for how federal agencies could harness the transformative capabilities of AI. The Equal Employment Opportunity Commission (EEOC) started this process in its own work focused on mitigating disparities in AI-driven hiring tools for people with disabilities. Yet more needs to be done in the U.S. to affirmatively own the existence of online data biases and flesh out areas for change.
