Artificial Intelligence (AI) at Emory

What is this Guide For?

The Emory Libraries aim to support and align with the university as it develops a new cadre of AI scholars across a spectrum of disciplines, with a distinct focus on the ethical and moral implications of these technologies. This guide therefore has multiple objectives:

  • Identify resources (databases, e-books, etc.) for locating materials related to AI and machine learning, and their applications across disciplines and schools
  • Offer tips and guidance for researchers using these resources
  • Highlight the works of the AI.Humanity scholars as well as existing research here at Emory
  • Curate a list of current online blogs, reports, and portals for keeping abreast of machine learning in academia and industry
  • Highlight ethical and social justice themes in artificial intelligence
  • Provide resources for discussions of the utilization of AI in the teaching/learning space


From McKinsey's Executive Guide to AI:

Artificial Intelligence: AI is typically defined as the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, and problem solving. Examples of technologies that enable AI to solve complex problems include robotics, computer vision, language, virtual agents and machine learning.

Machine Learning: Most recent advances in AI have been achieved by applying machine learning to very large datasets. Machine-learning algorithms detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by receiving explicit programming instruction. The algorithms also adapt in response to new data and experiences to improve over time. Machine learning provides both predictions and prescriptions.
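The core idea in the definition above — learning a rule from data rather than from explicit programming — can be shown in a few lines of code. This is a minimal, illustrative sketch (not taken from the McKinsey guide): instead of hard-coding the rule y = 2x + 1, a tiny gradient-descent loop recovers it from example (x, y) pairs.

```python
# Examples generated by a "hidden" rule the program is never told: y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # model parameters, starting with no knowledge of the rule
lr = 0.01         # learning rate: how far each correction moves the parameters

for _ in range(2000):       # repeatedly adjust w and b to shrink prediction error
    for x, y in data:
        pred = w * x + b    # the model's current guess
        err = pred - y      # how wrong the guess was
        w -= lr * err * x   # nudge the weight against the error
        b -= lr * err       # nudge the bias against the error

print(round(w, 2), round(b, 2))  # the learned parameters approach the hidden 2 and 1
```

No line of this program states the rule; the parameters converge toward it purely by processing examples, which is the distinction the definition draws between machine learning and explicit programming.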

Letter from AI Index Co-Authors

AI has moved into its era of deployment; throughout 2022 and the beginning of 2023, new large-scale AI models have been released every month. These models, such as ChatGPT, Stable Diffusion, Whisper, and DALL-E 2, are capable of an increasingly broad range of tasks, from text manipulation and analysis, to image generation, to unprecedentedly good speech recognition. These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new. However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment.
Although 2022 was the first year in a decade where private AI investment decreased, AI is still a topic of great interest to policymakers, industry leaders, researchers, and the public. Policymakers are talking about AI more than ever before. Industry leaders that have integrated AI into their businesses are seeing tangible cost and revenue benefits. The number of AI publications and collaborations continues to increase. And the public is forming sharper opinions about AI and which elements they like or dislike.
AI will continue to improve and, as such, become a greater part of all our lives. Given the increased presence of this technology and its potential for massive disruption, we should all begin thinking more critically about how exactly we want AI to be developed and deployed. We should also ask questions about who is deploying it—as our analysis shows, AI is increasingly defined by the actions of a small set of private sector actors, rather than a broader range of societal actors. This year’s AI Index paints a picture of where we are so far with AI, in order to highlight what might await us in the future.

Jack Clark and Ray Perrault (Artificial Intelligence Index Report 2023, Stanford University Human-Centered Artificial Intelligence)