How Social Workers Can Contribute to Fairness in AI
From Tony Hillen
Algorithms increasingly shape our environments in ways we barely notice, from recommendations about which book to buy to suggestions about whom to “friend” on social media. Often these algorithms are helpful, introducing us to ideas that align with our own. All of these subtle recommendations are built on information about not only our own past behaviors, but also the behaviors of people the algorithms judge to be similar to us.
What does it mean when algorithms make their way into social service systems? One possibility is that they help us make better decisions about who would most benefit from which interventions. On the other hand, because algorithms are derived from past datasets, they also embed human data that is often biased, and they can amplify the wrong signals in ways that generate harmful recommendations. Even more troubling, many social service systems are partnering with for-profit vendors who promise cost-saving precision recommendations but do not reveal how their algorithms work or ensure that social service agency stakeholders understand the risks of embedded bias.
In this presentation, Dr. Sage offers an overview of her work on algorithmic fairness in the use of machine learning with child welfare data. This work involves assessing which older youth in care receive which services, how those services affect outcomes, and whether algorithms can be used with these data to make service allocation fairer. The National Science Foundation and Amazon jointly funded this work to identify ways to improve fairness and mitigate bias in machine learning, and Dr. Sage is tackling the issue with a team of computer scientists and other researchers. She will discuss how interdisciplinary collaborations can improve research in this area, her team's process for developing an ethics and values statement to guide their work, and why it is important that social workers understand how algorithms work: so they know when to advocate for or against their use and can ensure that agency stakeholders make informed decisions about incorporating machine learning into this work.