Why AI needs social workers and “non-tech” folks.

Desmond U. Patton, PhD
4 min read · Mar 24, 2019

In light of recent discussions regarding the lack of diversity and inclusion at Stanford’s new Institute for Human-Centered Artificial Intelligence, I’ve started to think about my own inclusion and comfort in AI spaces. I’m a 37-year-old Black male tenured professor at the Columbia School of Social Work, and for the past six years I’ve worked with colleagues in computer science (CS) and data science to co-design contextually driven AI systems aimed at understanding the root causes of community-based violence and at working with community-based groups to augment gun violence prevention outreach efforts and services.

I was initially quite intimidated in this space, not really understanding the lingo: the purpose of a part-of-speech tagger or a binary classifier, or the difference between a neural net approach and semi-supervised or unsupervised analysis. But then two things happened that radically shifted how I engage with this work: 1) I doubled down on my social work training, understanding that my primary goals are to help people in need and address pressing social problems, to underscore the importance of human relationships, and to always behave in a trustworthy manner. These are all things I learned as an MSW student at the University of Michigan School of Social Work. 2) I have been extremely fortunate to work with computer science colleagues at Columbia who value my expertise as a social work scholar and take the time to explain any jargon or complex AI ideas that I may not initially understand.

There are specific things that my computer science colleagues do that I believe honor my contribution to AI and allow me to bring other social work scholars to the table.

  1. Privilege my domain expertise. My expertise as a qualitative violence researcher, and my lived experience as a Black man, are what brought the team together. My computer science colleagues see this as integral to the development of any AI system. In fact, decisions regarding the concepts that should be analyzed, the framing of those concepts, error analysis of outputs, and real-world implementation of the AI systems are all made jointly.
  2. Work directly with youth and community groups to co-design AI systems. Again, another thing I learned in social work school. My colleagues in CS and I became keenly aware early on of what we didn’t know, and we knew that in order to talk about community-based violence, or to even begin thinking about what AI could do in this space, we MUST have community support AND buy-in. We not only hire community members and create advisory teams, but privilege their suggestions, critiques, and ideas at every turn. In all honesty, we do struggle here. We don’t always get it right, but my CS colleagues understand that our work is meaningless without this approach.
  3. Seek to understand social work approaches and value them. The ways in which we create a training corpus and annotate those data are all based on things I learned in social work school: understanding a person (in our case, a social media user) within their community context while working to uncover any bias we have regarding that community.
  4. Embrace what they don’t know. I’ve watched my CS colleagues grow in their humility. Specifically, I’ve watched them wrestle with the importance of context in interpreting social media posts and rethink approaches when our AI systems make incorrect predictions that may do more harm than good, particularly for communities of color.
  5. Use their privilege to advocate for diverse voices, or in some instances give up their privilege so that I can have a firmer seat at the table. This means that sometimes it’s better for me to speak to a group or community, or that I need to be the one to explain the larger context or strategy we employ.
  6. Support my research lab’s initiatives to create pipeline programs for underrepresented groups to enter AI. This includes writing reference letters, providing funding or access to grant-writing support, and making themselves available to teach or mentor students.
  7. Share authorship (and leadership) and publish in each other’s disciplines. This means we publish highly technical papers in computer science proceedings, and anything theoretical, qualitative, or social work-based in my discipline.

This is by no means perfect or meant to be directive. We’re still learning from each other and growing as we make mistakes. Within these teams, I’ve been able to cultivate relationships in which I feel valued, not tokenized. I know that I’m making a unique contribution, and I feel my colleagues value that too.




Desmond U. Patton, PhD

Public Interest Technologist, Associate Professor at Columbia School of Social Work and Director of the SAFElab at Columbia University.