Although the word “fairness” has a definition everyone can agree on, its concrete application is far harder to pin down.
Just as deciding what is or is not fair can be a real dilemma for people, it is also a challenge for artificial intelligence, and one that a new initiative at Michigan State University is seeking to ease.
Fairness classes for AI algorithms
With artificial intelligence systems increasingly present in day-to-day activities and services, the platforms that help decide who gets the right medical care, who qualifies for a bank loan, or who is hired for a job need to be demonstrably impartial.
With funding from Amazon and the National Science Foundation, Pang-Ning Tan, a researcher and professor in Michigan State University’s Department of Computer Science and Engineering, has spent the last year training artificial intelligence algorithms to help them discern between fair and unfair actions.
“We are trying to design AI systems that are not just for computer science, but also bring value and benefits to society. So I started thinking about what are the areas that are challenging for society right now,” the researcher said about the rationale behind his initiative.
The project underscores the need for research with a direct impact on the people who use these systems. Expanding on this point, Tan commented that “equity is a very big issue, especially as we become more reliant on AI for everyday needs, like medical care, but also things that seem mundane, like filtering spam or putting stories in your news section.”
Automated though they are, AI algorithms can inherit biases from the data used to train them, or absorb them directly from their creators. For example, according to a survey conducted by Tan’s research team, there are documented cases of AI systems that discriminate by race when allocating medical care and against women in job application screening.
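To make the idea concrete, the short sketch below shows one common way such a bias can be quantified: comparing approval rates across demographic groups, a check often called demographic parity. The data and function names here are hypothetical illustrations, not code from Tan’s project.

```python
# Minimal sketch of a demographic parity check: do two groups receive
# positive decisions at similar rates? Hypothetical data, not MSU code.
from collections import defaultdict

def approval_rates(decisions, groups):
    """Fraction of positive (1) decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions (1 = approve, 0 = deny) for two groups.
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.2 -> group A approved more often
```

A large gap does not by itself prove discrimination, but it is the kind of signal that flags a system for closer scrutiny.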
Reflecting on this reality, Abdol-Hossein Esfahanian, a member of Tan’s research team, commented that “algorithms are created by people and people usually have biases, so those biases are filtered… we want to have fairness everywhere, and we want to have a better understanding of how to evaluate it.”
Drawing on theories from the social sciences, Tan and his team are seeking to approximate as universal a notion of fairness as possible. To that end, the principles of fairness conveyed to the algorithm will not come from a single viewpoint, challenging it to weigh competing or even contradictory positions.
“We’re trying to make the AI justice-aware, and to do that, you have to tell it what’s fair. But how do you design a measure of fairness that is acceptable to everyone?” noted Tan, adding that “we are looking at how a decision affects not only individuals but also their communities and social circles.”
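One way to see why a single measure is so hard to agree on: different formal definitions of fairness can judge the same system differently. The sketch below uses hypothetical data and simplified metric definitions, not anything from Tan’s work, to score one set of predictions under two standard criteria, demographic parity and equal opportunity, and gets opposite verdicts.

```python
# Illustrative sketch: two fairness criteria disagreeing about the same
# predictions. Hypothetical data, not code from the MSU project.

def rate(values):
    return sum(values) / len(values)

# y_true = actually qualified, y_pred = model's decision (1 = approve).
group_a = {"y_true": [1, 1, 0, 0], "y_pred": [1, 1, 0, 0]}
group_b = {"y_true": [1, 0, 0, 0], "y_pred": [1, 0, 0, 0]}

# Demographic parity: compare overall approval rates between groups.
parity_gap = rate(group_a["y_pred"]) - rate(group_b["y_pred"])

# Equal opportunity: compare approval rates among the actually qualified.
def true_positive_rate(g):
    qualified = [p for t, p in zip(g["y_true"], g["y_pred"]) if t == 1]
    return rate(qualified)

opportunity_gap = true_positive_rate(group_a) - true_positive_rate(group_b)

print(f"demographic parity gap: {parity_gap:.2f}")     # 0.25 -> looks unfair
print(f"equal opportunity gap:  {opportunity_gap:.2f}") # 0.00 -> looks fair
```

By the parity criterion this classifier favors group A; by the opportunity criterion it treats both groups identically. Which verdict matters depends on the notion of fairness one chooses, which is exactly the choice Tan’s team is trying to make in a principled way.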
The work is ambitious and, despite the progress made, it is just getting started. “This is very ongoing research. There are a lot of problems and challenges – how do you define fairness, how can you help people trust these systems that we use every day,” Tan reflected, adding that “our job as researchers is to find solutions to these problems.”
The full report of this research can be accessed on the Michigan State University website.