There’s essentially no good way to prepare for a golden age, but as we enter this shiny period of AI advancement, the tradeoffs between powerful models and potential biases remain in flux. AI has brought powerful tools to many corners of modern life: better clinical decision-making, autonomous vehicles, and assistive technologies.
For all of its good, natural language processing (NLP) research has also drawn attention for its less “tactful” moments. OpenAI’s NLP model GPT-3 created a stir for picking up human biases and encoding gender and racial biases in its predictions. For example, when prompted with text involving race, GPT-3’s output tended to be more negative for prompts invoking “Blackness” than for corresponding white- or Asian-sounding prompts.
Geeticka Chauhan, a PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and her collaborator, Zhijing Jin from the Max Planck Institute and ETH Zürich, observed this potential for NLP models to act as a toxic mirror of the Internet.
To address this, the team, in collaboration with Brian Tse (Oxford), Mrinmaya Sachan (ETH Zürich), and Rada Mihalcea (University of Michigan), created a framework researchers can use to comprehensively evaluate the social impact of an NLP project in its early stages. Their paper, “How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social Impact”, published in Findings of the Association for Computational Linguistics (ACL) 2021, examined downstream impact and comparative advantage across topics such as healthcare, education, combating online misinformation, and social equality. MIT News asked the researchers to explain their proposal and describe how the framework could help evaluate and set the future agenda for NLP.
What kinds of social impact does NLP have? And what is your “NLP for Social Good” proposal about?
Recent years have seen impressive developments in NLP, and its impact is everywhere: from Amazon Alexa to Google Home, from social media recommendations to Duolingo language learning, NLP now touches many aspects of our daily lives.
A rising question for NLP research is: what is the next step? As advances in NLP open up so many new opportunities for applications, our research tries to suggest how to steer NLP toward the right place: social good.
We envision a big picture in which NLP research helps social good in many different, sometimes surprising ways. For example, one recent positive endeavor uses NLP to automatically read medical papers on COVID-19 to facilitate medical research. Another great example is integrating NLP into fake news detection to fight the rampant misinformation online.
Our proposal to promote NLP for social good consists of three steps: 1) identify important areas where NLP should help, 2) evaluate multiple potential NLP research directions and their concrete contributions to those areas, and 3) support the research with the largest estimated social impact. We want to convey a strong message to the community: technical advancement can be closely tied to social needs, and many researchers in the NLP community aspire to help society in a proactive way.
For the public, how should we direct resources to encourage NLP research that tackles the most important social good applications, and what are some example directions to support?
It’s clear that many popular NLP research topics are driven by commercial applications, such as chatbots and automatic translation. However, it is our choice whether to invest in, say, a translation system for movie subtitles or a translation system specifically designed to help immigrants integrate into a foreign society faster and better.
With the same technology, it is entirely our choice which direction to advance. And in our study, we find a concerning problem: most existing research on applications is not tackling the most urgent and important areas. This becomes very clear when contrasting the hottest NLP research areas with the United Nations Sustainable Development Goals. For example, we find very few works on poverty, hunger, education, or climate change being published at top NLP conferences. There is a trend yet to be started: bringing more social needs onto the horizon of the NLP research community.
We call for more interaction between the public and NLP researchers. On the public side, we want to draw more attention to the impressive power of NLP: with big data, NLP can automate many functions involving conversation, written text, and speech. Moreover, we want the public to voice their needs. There are workshops that encourage this conversation, inviting members of the public to describe their needs to NLP researchers and to brainstorm together about which social applications are meaningful to pursue. For example, NLP has great potential to facilitate philanthropic work and government-led resource distribution in society. Example topics to pursue for social good include using NLP to predict poverty regions, build decision support systems for agriculture, analyze clinical notes, and design personalized educational materials.
What perspective and steps should researchers or labs take to align with social good when deciding on a new project?
This is a very good question. Most researchers value research directions that are sustainable, well funded, and welcomed by society and by the people affected by the technology. To push effectively toward this goal, we encourage researchers to follow our three-step action plan before starting a new research project.
Step 1: Start with a set of possible research problems that are feasible to implement (given limits on resources, strengths, interests, etc.), such as different ways to tailor automatic translation for news, tweets, movies, legal documents, or clinical reports.
For each possible research problem, estimate the social impact it would have if successfully developed. Note that this social impact should comprehensively aggregate all of its various impacts, positive and negative, and can be better understood by speaking with the communities potentially affected. If your research is very theoretical and its downstream impacts are not easily predictable, apply a high uncertainty factor to your estimate.
Step 2: Estimate your contribution to each research area in your set. In particular, consider your comparative advantage: the potential advancement you can bring per unit of resource (time, effort, money, etc.), compared with other researchers in the same area or with the other research areas you could choose from.
Step 3: Choose the project through which you can contribute the most potential social good. This will differ for each researcher, because resources, strengths, and interests vary; a toy sketch of the selection logic follows below.
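As a rough illustration only, and not something prescribed in the paper, the selection logic behind these three steps could be sketched in a few lines of Python. All project names, scores, and weights below are hypothetical placeholders:

```python
# Toy sketch of the three-step project selection, with made-up numbers.
# Each candidate maps to (estimated social impact, certainty of that estimate,
# your comparative advantage per unit of resource).
candidates = {
    "subtitle_translation":       (4.0, 0.9, 0.7),
    "translation_for_immigrants": (9.0, 0.6, 0.8),
    "clinical_note_analysis":     (8.0, 0.5, 0.4),
}

def expected_contribution(impact, certainty, comparative_advantage):
    # Step 1: discount estimated impact by its uncertainty.
    # Step 2: scale by your comparative advantage per unit of resource.
    return impact * certainty * comparative_advantage

# Step 3: pick the project with the largest expected contribution.
best = max(candidates, key=lambda name: expected_contribution(*candidates[name]))
print(best, expected_contribution(*candidates[best]))
```

In practice, these estimates would come from the qualitative process described above, such as speaking with affected communities and weighing positive against negative impacts, rather than from single numeric scores.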
To benefit the NLP research community, we created the NLP for Social Good (NLP4SG) Initiative (Twitter account: @nlp4sg), with an annual workshop, NLP for Positive Impact, and a curated list of NLP for Social Good-related research papers.