One year ago, Timnit Gebru, then an artificial intelligence researcher at Google, tweeted that she had been fired, igniting a debate about employees’ freedom to challenge the implications of their company’s technology. On Thursday she launched a new research organization to ask the questions about the appropriate use of artificial intelligence that, she says, Google and other big companies won’t.
“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name reflects her aim to be more inclusive than most AI labs, which are predominantly white, Western, and male, and to recruit people from underrepresented parts of the world.
Gebru was fired from Google following a disagreement with her supervisors over a research paper advocating caution with new text-processing technology that Google and other tech companies have aggressively adopted. Google maintains that she resigned and was not fired, though it later fired Margaret Mitchell, a researcher who co-led a team with Gebru exploring ethical AI. The company has since added restrictions on the topics its researchers are allowed to investigate. Google spokesperson Jason Freidenfelds declined to comment but pointed WIRED to a recent report on the company’s AI governance efforts, which stated that Google has released more than 500 papers on “responsible innovation” since 2018.
The backlash at Google highlighted the inherent conflicts that arise when tech companies fund or hire researchers to investigate the ramifications of technology they hope to profit from. Earlier this year, the organizers of a major conference on technology and society dropped Google as a sponsor. DAIR, according to Gebru, will be freer to question the potential drawbacks of AI because it will also be free of the academic politics and pressure to publish that can stifle university research.
DAIR will also demonstrate AI applications that are unlikely to be developed elsewhere, Gebru says, with the goal of inspiring others to take the technology in new directions. One such initiative is the creation of a public data set of aerial images of South Africa, built to analyze how apartheid’s legacy is still inscribed in land use. According to a preliminary analysis of the images, in a densely populated zone historically restricted to non-white people, where many poor people still live, most vacant land developed between 2011 and 2017 was converted into upscale residential districts.
Later this month, DAIR will make its formal debut in academic AI research with a paper on that project at NeurIPS, the world’s most prestigious AI conference. Raesetje Sefala, DAIR’s inaugural research fellow, is the lead author of the paper, which also includes contributions from outside researchers.
DAIR’s advisory board includes Safiya Noble, a UCLA researcher who studies how technology platforms impact society. Gebru’s project, she argues, is an example of the new, more inclusive institutions needed to make progress in recognizing and responding to technology’s consequences for society.
“Black women have been major contributors to helping us understand the harms of big tech and different kinds of technologies that are harmful to society, but we know the limits in corporate America and academia that Black women face,” says Noble. “Timnit recognized harms at Google and tried to intervene but was massively unsupported—at a company that desperately needs that kind of insight.”
Noble recently founded her own nonprofit, Equity Engine, to help Black women achieve their goals. Also on DAIR’s advisory board is Ciira wa Maina, a lecturer at Dedan Kimathi University of Technology in Nyeri, Kenya.
For now, DAIR is a project of the nonprofit Code for Science and Society, but Gebru says it will eventually become a stand-alone organization. The project has received more than $3 million in grants from the Ford, MacArthur, Rockefeller, and Open Society foundations, as well as the Kapor Center. Gebru aims to diversify DAIR’s funding in the future by taking on consulting work connected to the organization’s research.
DAIR is part of a growing body of work and organizations taking a broader, more critical view of AI technology. New nonprofits and university institutes, such as NYU’s AI Now Institute, the Algorithmic Justice League, and Data for Black Lives, have sprung up to examine and critique AI’s effects on the world. Some AI researchers now study the effects and appropriate use of algorithms, and scholars from other fields, including law and sociology, have also turned a critical eye on the technology.
This year, the White House Office of Science and Technology Policy hired two prominent academics who specialize in algorithmic fairness research and are developing a “bill of rights” to protect against AI harms. Last month, the Federal Trade Commission hired three AI Now staffers to serve as advisers on AI technology.
Despite these trends, Baobao Zhang, an assistant professor at Syracuse University, believes the general public in the United States still trusts tech companies to guide AI development.
Zhang recently polled AI researchers and the general public in the United States to find out whom they trusted to guide the technology’s development in the public interest. Members of the public placed the greatest trust in academic researchers and the US military. Tech companies as a group came in slightly behind international or nonprofit research institutions such as CERN, but ahead of the US government. Compared with the general public, AI experts place less trust in the US military and in some tech companies, such as Facebook and Amazon, but more trust in the UN and nongovernmental scientific organizations.
Source: WIRED