Artificial Intelligence

Artificial intelligence seeks to create “intelligent” machines that work and react more like humans. AI developments rely on deep learning, machine learning, and natural language processing, which help computers accomplish specific tasks by processing large amounts of training data so that a system can recognize patterns, use input data to drive predictions, and apply feedback data to improve accuracy over time.
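
The cycle described above (training on labeled examples, predicting from new input, and folding feedback back into the model) can be sketched in a few lines of code. The snippet below is only an illustrative sketch using the open-source scikit-learn library on invented data; it stands in for no particular system discussed in this report.

```python
# Illustrative sketch of the train / predict / feedback loop, using scikit-learn.
# The toy data and labels are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# 1. Training data: the system learns patterns from labeled examples.
X_train = rng.normal(size=(200, 3))                      # 200 examples, 3 features
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = SGDClassifier(random_state=0)
model.partial_fit(X_train, y_train, classes=[0, 1])

# 2. Input data: new observations drive predictions.
X_new = rng.normal(size=(5, 3))
print("predictions:", model.predict(X_new))

# 3. Feedback data: corrected labels update the model over time.
y_feedback = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
model.partial_fit(X_new, y_feedback)
```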

 

How It's Developing

One of the big stories highlighting AI’s development and reach was the AlphaGo program’s victory over 18-time Go world champion Lee Sedol in a five-game series in 2016. In an article in Nature, DeepMind researchers explained the development of AlphaGo's expertise through a combination of Monte-Carlo tree search (an algorithm for optimal decision making) and deep neural networks trained via supervised learning on human expert games and then improved through reinforcement learning by playing games against itself. [1] A year later, in 2017, AlphaGo had another successful outing, defeating a team of five Go champions and then demonstrating a collaborative match in which two teams, each composed of a human and an AlphaGo companion, played against each other – the researchers celebrated the collaborative approach as a future direction for AI, with humans working in step with artificial intelligence to elevate overall performance. [2]
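
For readers unfamiliar with the tree-search half of that combination, the sketch below implements a bare-bones Monte-Carlo tree search for a toy game of Nim (players alternately take 1 to 3 stones, and whoever takes the last stone wins). It walks through the algorithm's four standard phases (selection, expansion, simulation, and backpropagation), but it is only a teaching sketch; AlphaGo additionally guides its search with trained policy and value networks, which are omitted here.

```python
# Bare-bones Monte-Carlo tree search for Nim. Illustrative only; not AlphaGo's code.
import math
import random

class Node:
    """One state in the search tree for a toy game of Nim."""
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones      # stones left in the pile
        self.player = player      # player to move at this node: +1 or -1
        self.parent = parent
        self.move = move          # the move (1, 2, or 3 stones) that led here
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins, counted for the player who just moved

    def untried_moves(self):
        tried = {child.move for child in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

    def best_child(self, c=1.4):
        # Selection: the UCB1 rule balances exploitation and exploration.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    # Simulation: play random moves; whoever takes the last stone wins.
    while True:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return player
        player = -player

def mcts(stones, player, iterations=3000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while not node.untried_moves() and node.children:
            node = node.best_child()
        # 2. Expansion: add one untried child, if any remain.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, -node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: estimate the value of the new node by random play.
        winner = -node.player if node.stones == 0 else rollout(node.stones, node.player)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            if winner == -node.player:   # win for the player who moved into this node
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print("Suggested move with 10 stones left:", mcts(stones=10, player=+1))
```

Choosing the final move by visit count rather than raw win rate is a common MCTS convention, since visit counts tend to be the more stable statistic.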

If Google’s DeepMind AlphaGo represented the positive developments toward AI, Microsoft’s Tay artificial intelligence chatbot represented the challenges and limits of the technology. Microsoft’s research team launched the AI chatbot on Twitter, GroupMe, and Kik as a way to test and improve Microsoft's understanding of conversational language, including the nuances of teens’ online language. [3] The bot quickly began issuing offensive posts (disputing the existence of the Holocaust, referring to women and minorities with unpublishable words, and advocating genocide), partly in response to user commands for the bot to repeat users’ own statements and partly by learning bad behavior as it ingested content from its social media forums. [4] Microsoft apologized for the unintended offensive tweets and tried to explain some of what happened while recognizing the pilot as part of a process for moving forward with the technology. [5] The experience with Tay could actually limit artificial intelligence development, as some technology companies have become reluctant to set conversational artificial intelligence systems free to talk with the large numbers of people needed to train them. [6]

Technology companies are finding roles for artificial intelligence in moderating online content. Facebook’s artificially intelligent language processing engine, DeepText, applies deep learning to understand human language – the company initially pursued DeepText to power chatbots in Messenger, to filter spam and abusive comments out of users’ News Feeds, and to help understand the topic area and even the content of just about anything people post on the social network. [7] The system quickly surpassed humans in flagging offensive photos, quarantining obscene content before it ever reaches users. [8] As Facebook’s use of AI has advanced, it has begun to explore artificial intelligence’s use in flagging material on the video platform Facebook Live, which requires computer vision algorithms that are fast, can prioritize which policies apply, and can determine when content should be taken down. [9] Facebook also sees opportunities for artificial intelligence to teach itself to identify key phrases that were previously flagged for being used to bolster a known terrorist group, to identify users who create fake accounts in order to spread extremist or terrorist content, and to identify users associated with clusters of pages or groups that promote extremist content. [10] In 2017, Facebook announced plans to integrate AI into a program that allowed users to flag troubling image or status posts, helping to identify posts that suggest a user may be suicidal; Facebook partnered with organizations like the National Suicide Prevention Lifeline, the National Eating Disorder Association, and the Crisis Text Line so that when users’ posts are flagged, they can connect immediately with those organizations via Messenger. [11] Facebook’s suicide prevention program evolved to the point that the technology can proactively identify a post or Facebook Live broadcast "likely to include thoughts of suicide" and send those posts to Facebook's trained reviewers who, in turn, can contact first responders. [12]
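
For a concrete sense of the underlying technique, the sketch below shows the general pattern of supervised text classification used to flag abusive comments. It is a deliberately tiny scikit-learn illustration on invented comments, not Facebook's DeepText, which relies on deep learning and vastly larger labeled datasets.

```python
# Toy sketch of supervised comment flagging with scikit-learn.
# The training comments are invented for illustration.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

comments = [
    "you are a wonderful person", "great photo, thanks for sharing",
    "have a nice day everyone", "congratulations on the new job",
    "you are an idiot and should disappear", "nobody wants you here, get lost",
    "what a stupid worthless post", "shut up, you disgusting loser",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]   # 1 = abusive, 0 = acceptable

flagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
flagger.fit(comments, labels)

for new_comment in ["thanks, this is lovely", "you worthless idiot"]:
    verdict = "flag for review" if flagger.predict([new_comment])[0] else "allow"
    print(f"{new_comment!r} -> {verdict}")
```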

Facebook has continued to pursue AI as a tool to better understand content on the network, including its Automatic Alternative Text tool, which uses deep neural networks to identify particular objects in a photo and pick out particular characteristics of the people in the photo to create a caption that a text-to-speech engine can then read aloud for users with visual impairments – while the system doesn’t always describe images exactly correctly, it is an improvement and shows the growing potential for AI to recognize and describe photos and images. [13] Facebook is also exploring how artificial intelligence can process content and make suggestions based on that content. By integrating AI into its personal assistant technology M, Facebook can suggest users book an Uber or prompt them to send money to a friend based on whatever the user was talking about in Messenger. [14] And Facebook developers have also used deep learning and neural networks to train its Lumos system to recognize scenes, objects, animals, places, attractions, and clothing items in photos, allowing users to search for and retrieve photos even if they have not annotated them. [15]
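
A rough sense of how object recognition can be turned into descriptive text is sketched below with a pretrained image classifier from the open-source torchvision library (version 0.13 or later assumed). It is a generic illustration rather than Facebook's Automatic Alternative Text or Lumos systems, and the photo filename is hypothetical.

```python
# Sketch: classify objects in a photo and turn the top labels into alt text.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT          # pretrained ImageNet weights
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                  # matching preprocessing pipeline
labels = weights.meta["categories"]                # 1,000 ImageNet class names

image = Image.open("photo.jpg").convert("RGB")     # hypothetical input photo
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)

top = torch.topk(probs, k=3)
caption = "Image may contain: " + ", ".join(labels[i] for i in top.indices.tolist())
print(caption)   # the caption text could then be handed to a text-to-speech engine
```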

In addition to content moderation, artificial intelligence is increasingly being used for content generation. From short films (Sunspring, written by the AI program Benjamin), to podcasts (Sheldon County), to short stories (Shelley), artificial intelligence systems are being used to develop creative or artistic outputs. [16] AI is also increasingly used to evaluate artistic outputs, such as a system of neural networks developed by Disney and the University of Massachusetts Boston that evaluates short stories and predicts which will be most popular by analyzing individual sections of each story as well as a holistic view of the story's meaning. [17]

AI is also being used to develop informational content and news reporting. Systems like IBM’s Watson are providing real-time scores, assessments, and automated video captions for a range of sporting events and cultural activities. [18] Newspapers are turning to AI to produce news coverage, such as The Washington Post’s use of its Heliograf artificial intelligence program to cover every House, Senate, and gubernatorial race on election day, freeing up reporters to focus on high-profile contests. [19]

And artificial intelligence is also being integrated into education. The IBM Foundation and the American Federation of Teachers have collaborated to build Teacher Advisor, a program that uses artificial intelligence technology to answer questions from educators and help them build personalized lesson plans. [20]

These advances are all in addition to the ways that artificial intelligence research will transform higher education and research centers. IBM and MIT have signed a 10-year, $240 million partnership agreement that establishes the MIT-IBM Watson AI Lab where IBM researchers and MIT students and faculty will work side by side to conduct advanced AI research. [21]

As artificial intelligence makes its way into more and more sectors, the dominant concern remains the potential impact it will have on the workforce. A 2018 Gallup survey found that the American public widely embraces artificial intelligence in attitude and practice, with nearly five in six Americans already using some product or service featuring AI, but most Americans recognize the technology’s potential impact on future employment. [22] Those employment concerns are supported by ample research. A 2018 report from PwC predicts three waves of automation – a flood of algorithms, where machines handle data analysis and simple digital tasks; an augmentation inundation, when repeatable tasks and the exchange of information come to be done by humans and automated systems working together; and, finally, an autonomy tsunami, when machines and software make decisions and take physical actions with little or no human input – with experts noting that most developed countries are already well into the first stage. [23] Still other research places AI’s development and threat in a more nuanced context. An AI Index created by researchers at Stanford University and the Massachusetts Institute of Technology, a McKinsey Global Institute report, and a National Bureau of Economic Research article by economists from M.I.T. and the University of Chicago collectively suggest that AI can likely do less now than we think, but that it will eventually do more in more sectors than we expect, and that it will probably evolve faster than past technologies. [24] As important as the question of when might be the question of where – geographically and in which sectors. Several reports (a 2017 study from Northwestern University and MIT and a 2017 report from the Institute for Spatial Economic Analysis at the University of Redlands) indicate that AI might have its greatest effects on cities where more jobs involve routine clerical and service work, such as cashier and food service jobs, which are more susceptible to automation – while that could include larger cities like Las Vegas, Orlando, and Louisville, it could also include smaller cities with fewer than 100,000 people, where such jobs may be more concentrated. [25] As routine service and clerical jobs become susceptible to automation, other industries that rely on skills in statistics, mathematics, and software development will likely see growth or stability as they build and improve the systems that replace traditional manufacturing and service workers. [26]

A preview of AI’s potential impact on clerical work might be available in Google Duplex. At its 2018 I/O conference, Google debuted its Google Duplex AI system, which helps Google Assistant accomplish real-world tasks over the phone (booking an appointment, making reservations) – initially, the system only operates in “closed domains” (exchanges that are functional, with strict limits on what is going to be said) and will have “disclosure built in,” so that a verbal announcement is made to the person on the other end of the call. [27] Google has begun to explore options for Duplex’s use in call centers to improve call handling by routing common but simple queries to Duplex, leaving a limited number of human workers to field more advanced call issues. [28] In a similar vein, IBM Watson and Japanese insurance company Fukoku Mutual Life Insurance introduced an AI solution that can scan hospital records and other documents to determine insurance payouts, factoring in injuries, patient medical histories, and procedures administered – the system will replace 34 human insurance claim workers. [29]
Google’s Duplex is just one of several initiatives to make artificial intelligence systems that can communicate more like humans and accomplish more human tasks. IBM’s Project Debater seeks to interact and debate with people across 100 topics – the current scope of interactions is tightly constrained to a four-minute opening statement, followed by a rebuttal to the opponent’s argument, and then a statement summing up a viewpoint. [30] Amazon’s Alexa Prize competition challenges researchers to create a chatbot, built on Alexa, that can converse with a human for 20 minutes without faltering. [31]

Even if AI does not fully replace jobs, there is a clear desire to use AI to augment work. Google’s DeepMind has begun exploring avenues into healthcare with the creation of DeepMind Health, which will create apps to help medical professionals identify patients at risk of complications and organize and prioritize admitted patients – while neither of the initial products uses artificial intelligence, deep learning, or neural networks, the move signals DeepMind’s longer-term interest in deploying the technology in healthcare. [32]

Through all of these developments, governments will increasingly consider the technology’s potential effects on the economy and innovation. The U.S. government has accelerated its focus on artificial intelligence, hosting a White House summit on artificial intelligence that included representatives from 38 companies (including Amazon, Facebook, Google, and Intel) to discuss how the government can fund AI research and alter regulations to advance the technology, and announcing a Select Committee on Artificial Intelligence, made up of the leading AI researchers in government and charged with advising the White House on governmentwide AI research and development priorities and with establishing partnerships between government, the private sector, and independent researchers. [33] The Trump administration has pledged to release more government data that might help fuel AI research in the U.S., but what kind of data would be released and who would be eligible to receive the information remains unclear. [34]

 

Why It Matters

Artificial intelligence could become an invaluable tool for organizing and making accessible large collections of information. Google’s Life Tags project is a searchable archive of Life magazine photographs that used artificial intelligence to attach hundreds of tags to organize the archive. [35] Another Google project, Talk to Books, lets users type in a statement or a question and retrieves whole sentences in books related to what was typed, with results based not on keyword matching but on training AI to identify what a good response looks like. [36] The Allen Institute for Artificial Intelligence, a nonprofit created by Microsoft co-founder Paul Allen, unveiled Semantic Scholar, a search engine that uses machine learning, natural language processing, and computer vision to improve the way academics search the growing body of public research, making it easier to access research papers, target specific results, and surface images. [37]
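
Retrieval that matches meaning rather than keywords, as Talk to Books and Semantic Scholar aim to do, is commonly built on sentence embeddings. The sketch below uses the open-source sentence-transformers library and a small pretrained model as stand-ins; this is an assumption about tooling for illustration only, not a description of how Google or the Allen Institute built their products.

```python
# Minimal semantic-retrieval sketch: embed candidate sentences and a query,
# then rank candidates by cosine similarity instead of keyword overlap.
from sentence_transformers import SentenceTransformer, util

passages = [
    "The committee approved new funding for public libraries.",
    "Neural networks learn representations from large training datasets.",
    "The recipe calls for two cups of flour and a pinch of salt.",
]
query = "How do machine learning models learn from data?"

model = SentenceTransformer("all-MiniLM-L6-v2")
passage_embeddings = model.encode(passages, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, passage_embeddings)[0]
best = scores.argmax().item()
print("Best match:", passages[best])   # ranked by meaning, not shared keywords
```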

As AI becomes more adept at generating content, it could further complicate users’ navigation of a complex information environment. Artificial intelligence will be able to create 3D face models from a single 2D image; manipulate facial expressions on video in real time using a human “puppet”; change the light source and shadows in any picture; generate sound effects for silent video; and resurrect characters using old clips – and many of these effects have given rise to the “deep fakes” that manipulate video and other images. [38]
As with many other technologies, AI may become one more development that libraries help communities better understand. Facebook launched a campaign to educate people on the basics of artificial intelligence, focusing on the technology behind photo recognition, self-driving cars, and language translation. [39] In a similar way, the Urban Libraries Council (ULC) articulated a vision for libraries to serve communities by advancing algorithmic literacy while also ensuring an equitable and inclusive future by monitoring the storage, privacy, and application of data as AI technology becomes more ubiquitous.

If AI becomes a serious threat to jobs, libraries’ roles in workforce development may become even more important, but also more complicated. A compounded challenge may arise in which workforce development must encompass not only preparation for entry-level workers (entering a market that is increasingly limited and competitive), but also solutions for a new vacuum in middle management caused by the elimination of the once-plentiful entry-level positions from which workers traditionally advanced into middle management. [40] The new workforce development demands will likely require higher-order critical, creative, and innovative thinking as well as emotional engagement, placing a greater value on the quality of thinking, listening, relating, collaborating, and learning. [41]

AI’s dependence on data sets can reinforce certain human systems, including bias. [42] Many researchers and practitioners are exploring options to address sexism and racism in AI development by curating new data sets that balance gender and ethnicity and by more intentionally labeling and annotating data sets to show how the sets were collected. [43] To help change the way AI understands LGBT-related content, GLAAD announced a partnership with Alphabet’s Jigsaw division to train AI with positive LGBT-related content and to distinguish between phrases that are offensive to the LGBT community and those that are acceptable. [44] Coupled with efforts to change the scope and nature of the data that train AI systems are efforts to recruit women and other underrepresented groups into the field of artificial intelligence. [45]
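
One small, concrete step in that curation work is auditing how groups are represented in a training set and reweighting examples so under-represented groups are not drowned out. The pandas sketch below illustrates only that auditing step on invented data; the column names, values, and weighting scheme are hypothetical and not drawn from any project cited here.

```python
# Toy sketch: inspect group representation in a labeled training set and
# compute balancing weights. The data and column names are hypothetical.
import pandas as pd

train = pd.DataFrame({
    "gender": ["woman", "man", "man", "man", "woman", "man", "man", "nonbinary"],
    "label":  [1, 0, 1, 0, 1, 1, 0, 0],
})

# 1. Measure representation: an imbalanced set over-weights the majority group.
counts = train["gender"].value_counts()
print(counts / len(train))

# 2. Give each example a weight inversely proportional to its group's frequency,
#    so every group contributes equally during training.
weights = train["gender"].map(len(train) / (len(counts) * counts))
print(train.assign(sample_weight=weights))
```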

Issues of sexism, racism, and bias are just part of the larger ethical concerns around AI. In 2017, Google launched a DeepMind ethics group to oversee the responsible development of artificial intelligence by helping developers put ethics into practice and educating society about the potential impacts of AI. [46] A 2018 report, authored by two dozen researchers from Oxford, Cambridge, OpenAI, the Electronic Frontier Foundation, Endgame, and the Center for a New American Security, focused on the potential negative effects of AI, including malicious uses of the technology. [47] While computer science programs have long been required to give students an understanding of ethical issues related to computing in order to be accredited by ABET, a growing number of universities are launching new courses on the ethics of artificial intelligence, the ethical foundations of computer science, and other offerings that will help train the next generation of technologists and policymakers to consider the ramifications of innovations before those products are made available to the public. [48] As technologists become increasingly motivated to consider the ethical implications of their innovations, religion, philosophy, and the humanities could play an increasingly important role in the development of artificial intelligence. [49]

Many technology leaders are working to open up the artificial intelligence field and make it more collaborative. Organizations like OpenAI, established by tech leaders like Elon Musk, Peter Thiel, and Reid Hoffman, promote the goal of advancing digital intelligence in ways that benefit humanity, free from the demand to generate financial returns. [50] Facebook, Amazon, Microsoft, Google's DeepMind, and IBM are among the major partners in the Partnership on Artificial Intelligence to Benefit People and Society, which seeks to conduct open-source research and investigate globally important AI issues such as ethics and collaboration between humans and AI systems. [51] In 2016, Apple announced plans to allow its artificial intelligence teams to publish research papers, reversing an earlier strategy of keeping research in-house, in the hope that engaging with the larger community might allow its researchers to feed off wider advances in the field. [52]

Even as artificial intelligence research has sought to become more collaborative, it has also put a strain on traditional systems of research and knowledge production and sharing. Many universities in the United States and Europe are losing talented computer scientists and artificial intelligence experts, lured away from academia by private sector offers – the shift from academic settings to the private sector has implications for not only research production and dissemination, but also the teaching and training of future generations. [53] In the United States, some technology companies have shifted their artificial intelligence operations to be closer to the universities that produce leading researchers. Facebook opened new artificial intelligence labs in Seattle and Pittsburgh after hiring three AI and robotics professors from the University of Washington and Carnegie Mellon University – in addition to advancing Facebook’s research, the professors will be better positioned to recruit and train other AI experts from those universities’ programs. [54] Still other technology companies have developed research labs with specific commitments to academic institutions – Microsoft’s Research AI unit engaged in a formal partnership with MIT’s Center for Brains, Minds and Machines. [55]

Notes and Resources

[1] "Google’s AI Is Now Reigning Go Champion of the World," Daniel Oberhaus, Motherboard, March 12, 2016, available from https://motherboard.vice.com/en_us/article/3dak7w/googles-ai-is-now-reig... 

[2] "Google’s AlphaGo AI defeats team of five leading Go players," Darrell Etherington, TechCrunch, May 26, 2017, available from https://techcrunch.com/2017/05/26/googles-alphago-ai-defeats-team-of-fiv... 

“Microsoft made a chatbot that tweets like a teen,” Jacob Kastrenakes, The Verge, March 23, 2016, available from https://www.theverge.com/2016/3/23/11290200/tay-ai-chatbot-released-micr... 

[4] “Microsoft Created a Twitter Bot to Learn from Users. It Quickly Became a Racist Jerk,” Daniel Victor, The New York Times, March 24, 2016, available from https://www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitte... 

[5] “Microsoft shows what it learned from its Tay AI's racist tirade,” Jon Fingas, Engadget, March 25, 2016, available from https://www.engadget.com/2016/03/25/microsoft-explains-tay-ai-incident/ 

[6] “To Give A.I. the Gift of Gab, Silicon Valley Needs to Offend You,” Cade Metz and Keith Collins, The New York Times, February 21, 2018, available from https://www.nytimes.com/interactive/2018/02/21/technology/conversational... 

[7] "Facebook Is Teaching Its Computers to Understand Everything You Post," Will Oremus, Slate, June 1, 2016, available from http://www.slate.com/blogs/future_tense/2016/06/01/facebook_s_new_ai_eng... 

[8] "Facebook spares humans by fighting offensive photos with AI," Josh Constine, TechCrunch, May 31, 2016, available from https://techcrunch.com/2016/05/31/terminating-abuse/ 

[9] "Facebook developing artificial intelligence to flag offensive live videos." Kristina Cooke, Reuters, December 1, 2016, available from https://uk.reuters.com/article/us-facebook-ai-video-idUKKBN13Q52M 

[10] "Facebook Will Use Artificial Intelligence to Find Extremist Posts," Sheera Frenkel, The New York Times, June 15, 2017, available from https://www.nytimes.com/2017/06/15/technology/facebook-artificial-intell... 

[11] "Facebook leverages artificial intelligence for suicide prevention," Natt Garun, The Verge, March 1, 2017, available from https://www.theverge.com/2017/3/1/14779120/facebook-suicide-prevention-t... 

[12] "Facebook's suicide prevention AI can now do more to help people in trouble," Karissa Bell, Mashable, November 27, 2017, available from https://mashable.com/2017/11/27/facebook-ai-suicide-prevention/#4hI.WyNN... 

[13] "Facebook’s AI is now automatically writing photo captions," Cade Metz, Wired, April 5, 2016, available from https://www.wired.com/2016/04/facebook-using-ai-write-photo-captions-bli...

[14] "Facebook is using AI in private messages to suggest an Uber or remind you to pay a friend," Kurt Wagner, Recode, April 6, 2017, available from https://www.recode.net/2017/4/6/15203526/facebook-messenger-m-artificial... 

[15] "Facebook's AI image search can 'see' what's in photos," Billy Steele, Engadget, February 2, 2017, available from https://www.engadget.com/2017/02/02/facebook-ai-image-search/ 

[16] Please see any of the below as examples:

“Movie written by algorithm turns out to be hilarious and intense,” Annalee Newitz, ArsTechnica, June 9, 2016, available from https://arstechnica.com/gaming/2016/06/an-ai-wrote-this-movie-and-its-st... 

“What an ‘infinite’ AI-generated podcast can tell us about the future of entertainment,” James Vincent, The Verge, March 11, 2018, available from https://www.theverge.com/2018/3/11/17099578/ai-generated-podcast-procedu... 

“AI can write surprisingly scary and creative horror stories,” Swapna Krishna, Engadget, October 31, 2017, available from https://www.engadget.com/2017/10/31/shelley-ai-writes-horror-stories-on-... 

[17] “Disney Research taught AI how to judge short stories,” Rob Lefebvre, Engadget, August 21, 2017, available from https://www.engadget.com/2017/08/21/disney-research-taught-ai-to-judge-s... 

[18] Please see any of the below as examples:

“At This Year’s U.S. Open, IBM Wants To Give You All The Insta-Commentary You Need,” Steven Melendez, Fast Company, September 2, 2016, available from https://www.fastcompany.com/3063369/at-this-years-us-open-ibm-wants-to-g...

“Wimbledon to Use IBM’s Watson AI for Highlights, Analytics, Helping Fans,” Jeremy Kahn, Bloomberg, June 27, 2017, available from https://www.bloomberg.com/news/articles/2017-06-27/wimbledon-to-use-ibm-... 

“IBM is sending Watson to the Grammys,” Brian Mastroianni, Engadget, January 24, 2018, available from https://www.engadget.com/2018/01/24/ibm-watson-grammys/ 

[19] “Washington Post to Cover Every Major Race on Election Day With Help of Artificial Intelligence,” Lukas I. Alpert, The Wall Street Journal, October 19, 2016, available from https://www.wsj.com/articles/washington-post-to-cover-every-major-race-o... 

[20] “Next Target for IBM’s Watson? Third-Grade Math,” Elizabeth A. Harris, The New York Times, September 27, 2016, available from https://www.nytimes.com/2016/09/28/nyregion/ibm-watson-common-core.html 

and

“Artificially intelligent math for school educators,” A Fine, District Administration, October 27, 2017, available from http://districtadministration.com/artificially-intelligent-math-for-scho... 

[21] “IBM and MIT pen 10-year, $240M AI research partnership,” Ron Miller, TechCrunch, September 6, 2017, available from https://techcrunch.com/2017/09/06/ibm-and-mit-pen-10-year-240m-ai-resear... 

[22] “Most Americans See Artificial Intelligence as a Threat to Jobs (Just Not Theirs),” Niraj Chokshi, The New York Times, March 6, 2018, available from https://www.nytimes.com/2018/03/06/us/artificial-intelligence-jobs.html 

[23] “Automation is going to hit workers in three waves, and the first one is already here,” Erin Winick, MIT Technology Review, February 7, 2018, available from https://www.technologyreview.com/the-download/610211/automation-is-going... 

[24] “A.I. Will Transform the Economy. But How Much, and How Soon?,” Steve Lohr, The New York Times, November 30, 2017, available from https://www.nytimes.com/2017/11/30/technology/ai-will-transform-the-econ... 

[25] “Small cities face greater impact from automation,” Brian Wang, Next Big Future, October 24, 2017, available from https://www.nextbigfuture.com/2017/10/small-cities-face-greater-impact-f... 

and

“The Parts of America Most Susceptible to Automation,” Alana Semuels, The Atlantic, May 3, 2017, available from https://www.theatlantic.com/business/archive/2017/05/the-parts-of-americ... 

[26] “What Does Work Look Like in 2026? New Statistics Shine Light on Automation’s Impacts,” Erin Winick, MIT Technology Review, October 25, 2017, available from https://www.technologyreview.com/the-download/609218/what-does-work-look... 

[27] “Google’s AI sounds like a human on the phone — should we be worried?” James Vincent, The Verge, May 9, 2018, available from https://www.theverge.com/2018/5/9/17334658/google-ai-phone-call-assistan...  

and 

“Google now says controversial AI voice calling system will identify itself to humans,” Nick Statt, The Verge, May 10, 2018, available from https://www.theverge.com/2018/5/10/17342414/google-duplex-ai-assistant-v... 

[28] “Google's Duplex AI could soon be running call centers,” Chris Merriman, The Inquirer, July 6, 2018, available from https://www.theinquirer.net/inquirer/news/3035476/google-duplex-could-so... 

[29] “Japanese white-collar workers are already being replaced by artificial intelligence,” Dave Gershgorn, Quartz, January 2, 2017, available from https://qz.com/875491/japanese-white-collar-workers-are-already-being-re... 

[30] “IBM Unveils System That ‘Debates’ With Humans,” Cade Metz and Steve Lohr, The New York Times, June 18, 2018, available from https://www.nytimes.com/2018/06/18/technology/ibm-debater-artificial-int... 

[31] “Inside Amazon’s $3.5 million competition to make Alexa chat like a human,” James Vincent, The Verge, June 13, 2018, available from https://www.theverge.com/2018/6/13/17453994/amazon-alexa-prize-2018-comp...

[32] "Google AI group that's mastering Go is now taking on healthcare," Jacob Kastrenakes, Feruary 25, 2016, available from https://www.theverge.com/2016/2/25/11112366/deepmind-health-launches-med... 

[33] “Amazon, Google and Microsoft to attend White House AI summit,” Jon Fingas, Engadget, May 8, 2018, available from https://www.engadget.com/2018/05/08/white-house-ai-summit/ 

and 

“White House Announces Select Committee of Federal AI Experts,” Aaron Boyd, NextGov, May 10, 2018, available from https://www.nextgov.com/emerging-tech/2018/05/white-house-announces-sele... 

[34] “The White House promises to release government data to fuel the AI boom,” Will Knight, MIT Technology Review, June 5, 2018, available from https://www.technologyreview.com/s/611331/the-white-house-promises-to-re... 

[35] “Google used AI to sort millions of historical Life photos you can explore online,” James Vincent, The Verge, March 7, 2018, available from https://www.theverge.com/2018/3/7/17091392/google-ai-photo-tagging-life-...

[36] “Google AI experiment has you talking to books,” Mariella Moon, Engadget, April 14, 2018, available from https://www.engadget.com/2018/04/14/google-ai-experiment-talk-to-books/ 

[37] “Allen Institute for AI Eyes the Future of Scientific Search,” Cade Metz, Wired, November 11, 2016, available from https://www.wired.com/2016/11/allen-institute-ai-eyes-future-scientific-... 

[38] “Artificial intelligence is going to make it easier than ever to fake images and video,” James Vincent, The Verge, December 20, 2016, available from https://www.theverge.com/2016/12/20/14022958/ai-image-manipulation-creat... 

[39] “Facebook: Don't freak out about artificial intelligence,” Richard Nieva, CNET, December 1, 2016, available from https://www.cnet.com/news/facebook-artificial-intelligence-filter-bubble... 

[40] “AI will rob companies of the best training tool they have: grunt work,” Sarah Kessler, Quartz, May 11, 2017, available from https://qz.com/979812/how-ai-will-change-the-shape-of-organizations/ 

[41] “In the AI Age, “Being Smart” Will Mean Something Completely Different,” Ed Hess, Harvard Business Review, June 19, 2017, available from https://hbr.org/2017/06/in-the-ai-age-being-smart-will-mean-something-co... 

[42] “Artificial Intelligence’s White Guy Problem,” Kate Crawford, The New York Times, June 25, 2016, available from https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligenc... 

and

“AI facial analysis demonstrates both racial and gender bias,” Swapna Krishna, Engadget, February 12, 2018, available from https://www.engadget.com/2018/02/12/facial-analysis-ai-has-racial-gender... 

[43] “AI can be sexist and racist — it’s time to make it fair,” James Zou and Londa Schiebinger, Nature, July 18, 2018, available from https://www.nature.com/articles/d41586-018-05707-8 

[44] “Google’s parent company is using AI to make the internet safer for LGBT people,” Maria LaMagna, MarketWatch, March 14, 2018, available from https://www.marketwatch.com/story/how-artificial-intelligence-can-make-t... 

[45] “The Future of AI Depends on High-School Girls,” Lauren Smiley, The Atlantic, May 23, 2018, available from https://www.theatlantic.com/technology/archive/2018/05/ai-future-women/5... 

[46] “Google’s DeepMind Launches Ethics Group to Steer AI,” George Dvorsky, Gizmodo, October 4, 2017, available from https://gizmodo.com/google-s-deepmind-launches-ethics-group-to-steer-ai-... 

[47] “Why artificial intelligence researchers should be more paranoid,” Tom Simonite, Wired, February 20, 2018, available from https://www.wired.com/story/why-artificial-intelligence-researchers-shou... 

[48] “Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It,” Natasha Singer, The New York Times, February 12, 2018, available from https://www.nytimes.com/2018/02/12/business/computer-science-ethics-cour... 

[49] “Artificial intelligence doesn’t have to be evil. We just have to teach it to be good.” Ryan Holmes, Recode, November 30, 2017, available from https://www.recode.net/2017/11/30/16577816/artificial-intelligence-ai-hu... 

[50] “Elon Musk Snags Top Google Researcher for New AI Non-profit," Cade Metz, Wired, December 11, 2015, available from https://www.wired.com/2015/12/elon-musk-snags-top-google-researcher-for-... 

[51] "Facebook, Amazon, Google, IBM, Microsoft form new AI alliance," Lance Ulanoff, Mashable, September 9, 2016, available from https://mashable.com/2016/09/29/partnership-on-ai/#2WlFh7QQNqqx 

[52] “Apple to Start Publishing AI Research to Hasten Deep Learning,” Alex Webb, Bloomberg, December 6, 2016, available from https://www.bloomberg.com/news/articles/2016-12-06/apple-to-start-publis... 

[53] “'We can't compete': Why universities are losing their best AI scientists,” Ian Sample, The Guardian, November 1, 2017, available from https://www.theguardian.com/science/2017/nov/01/cant-compete-universitie... 

[54] “Facebook adds A.I. labs in Seattle and Pittsburgh, pressuring local universities,” Cade Metz, The New York Times, May 4, 2018, available from https://www.nytimes.com/2018/05/04/technology/facebook-artificial-intell... 

[55] “Microsoft creates an AI research lab to challenge Google and DeepMind,” Darrell Etherington, TechCrunch, July 12, 2017, available from https://techcrunch.com/2017/07/12/microsoft-creates-an-ai-research-lab-t...