Artificial Intelligence engineers should enlist ideas and expertise from a broad range of social science disciplines, including those embracing qualitative methods, in order to reduce the potential harm of their creations and to better serve society as a whole, a pair of researchers has concluded in an analysis that appears in the journal Nature Machine Intelligence.
“There is mounting evidence that AI can exacerbate inequality, perpetuate discrimination, and inflict harm,” write Mona Sloane, a research fellow at New York University’s Institute for Public Knowledge, and Emanuel Moss, a doctoral candidate at the City University of New York. “To achieve socially just technology, we need to include the broadest possible notion of social science, one that includes disciplines that have developed methods for grappling with the vastness of social world and that helps us understand how and why AI harms emerge as part of a large, complex, and emergent techno-social system.”
The authors outline ways in which social science approaches, and their many qualitative methods, can broadly enhance the value of AI while helping to avoid documented pitfalls. Studies have shown that search engines may discriminate against women of color, while many analysts have raised questions about how self-driving cars will make socially acceptable decisions in crash situations (e.g., avoiding humans rather than fire hydrants).
Sloane, also an adjunct faculty member at NYU’s Tandon School of Engineering, and Moss acknowledge that AI engineers are currently seeking to instill “value-alignment”—the idea that machines should act in accordance with human values—in their creations, but add that “it is exceptionally difficult to define and encode something as fluid and contextual as ‘human values’ into a machine.”
To address this shortcoming, the authors offer a blueprint for inclusion of the social sciences in AI through a series of recommendations:
- Qualitative social research can help understand the categories through which we make sense of social life and which are being used in AI. “For example, technologists are not trained to understand how racial categories in machine learning are reproduced as a social construct that has real-life effects on the organization and stratification of society,” Sloane and Moss observe. “But these questions are discussed in depth in the social sciences, which can help create the socio-historical backdrop against which the…history of ascribing categories like ‘race’ can be made explicit.”
- A qualitative data-collection approach can establish protocols to help diminish bias. “Data always reflects the biases and interests of those doing the collecting,” the authors note. “Qualitative research is explicit about the data collection, whereas quantitative research practices in AI are not.”
- Qualitative research typically requires researchers to reflect on how their interventions affect the world in which they make their observations. “A quantitative approach does not require the researcher or AI designer to locate themselves in the social world,” they write. “Therefore, [it] does not require an assessment of who is included in vital AI design decisions, and who is not.”
“As we move onwards with weaving together social, cultural, and technological elements of our lives, we must integrate different types of knowledge into technology development,” Sloane and Moss conclude. “A more socially just and democratic future for AI in society cannot merely be calculated or designed; it must be lived in, narrated, and drawn from deep understandings about society.”