The world is on the cusp of a transformation that may be more impactful than the agricultural revolution, the industrial revolution, and the Internet revolution combined.
Rapid developments in artificial intelligence mean that we now have the capacity to know more than we understand. ChatGPT, the large language model publicly released in November 2022, has alarmed as much as it has excited.
Using a familiar chat box interface, people can now obtain AI-drafted responses to natural language queries on all kinds of topics. Its potential utility and application seem limitless. The tool draws on a vast corpus of training data, including digitized books and much of the text available on the Internet through September 2021. Traditional understandings of authorship, knowledge, and learning are quickly being upended as a result.
The implications of AI technologies for higher education are vast. The emergence of ChatGPT and similar AI tools has caused critics to question the value of the traditional, in-person degree programs that higher education has to offer. Everything is open for reexamination, from what it means to plagiarize, to what it means to learn, to how we know what is true.
Fundamentally, people are questioning what the future role will be for educated humans in a world where AI’s powers seem more immense than our own.
In my own line of work, as dean of Gonzaga University School of Law, I have heard people ask: “Will people even need lawyers in the years ahead? Why not just ask ChatGPT to provide legal advice?” The ethical, existential, and legal underpinnings of questions like those cannot be overlooked.
Our work in higher education and at law schools to cultivate thinkers and leaders is more important than ever. AI and related societal developments—such as the rise of cancel culture, disinformation campaigns, and hostile attacks on those with whom we disagree—require that we conceptualize what we do on a different plane. Fostering critical thinking, which presupposes being able to figure out what is true, is where higher education adds value.
Machines may know more than we do, in a literal sense, but only we as humans have the capacity to apply that knowledge in ways that are helpful, novel, and further the public good. The moral dimension of this work cannot be ignored. We must draw on all disciplines and perspectives, including ethical and spiritual guideposts, as we navigate a future where people, not processors, continue to determine what is true, right, and just.
AI should always work for us.
At Gonzaga, we emphasize development of the whole person, nurturing students’ intellectual curiosity while also fostering their spiritual and moral growth. This holistic approach to education will be particularly beneficial in the face of ChatGPT’s growing influence.
Artificial intelligence systems are designed to perform specific tasks efficiently and accurately, but they lack the creativity, empathy, emotional intelligence, and ethical reasoning that are essential to a well-functioning society.
That is where we in higher education come in. In particular, Jesuit higher education’s emphasis on critical thinking, ethics, and social justice will help students develop the skills and knowledge necessary to work alongside ChatGPT in responsible and principled ways.
As Henry Kissinger and his co-authors recently put it in a Wall Street Journal opinion piece, “Fundamentally, our educational and professional systems must preserve a vision of humans as moral, psychological, and strategic creatures uniquely capable of rendering holistic judgments.”
As AI becomes increasingly prevalent, elected officials, lawyers, and legal scholars will need to develop new frameworks for regulating and overseeing these technologies, as well as ensuring that access to them is equitably distributed.
Law schools can prepare future lawyers to play a crucial role in this process by providing students with the knowledge and skills necessary to navigate the legal landscape surrounding AI.
In addition, we in legal education can train students to approach these issues from a multidisciplinary perspective, incorporating insights from fields like computer science, philosophy, history, and sociology. As AI systems continue to reshape a growing number of industries, lawyers must be equipped to ensure that these systems do not exacerbate existing inequalities or create new ones.
HAL 9000, the fictional AI antagonist of Stanley Kubrick’s 1968 classic, “2001: A Space Odyssey,” at one point tells his increasingly suspicious human interlocutor, “I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission. And I want to help you.”
The dystopian world envisioned in Kubrick’s movie has not yet arrived, nor will it, provided that higher education as an industry continues to have societal support and financial resources to be a beacon of human ingenuity.
We must maintain a world where humans alone make impactful decisions; humans alone determine what is normal performance; and humans alone express feelings like enthusiasm, confidence, and the desire to help.
Higher education rooted in the development of the whole person remains the best preparation for navigating a world that is ever more volatile, uncertain, complex, and ambiguous. We should embrace the new avenues of discovery that AI technologies like ChatGPT enable, while simultaneously doubling down on our belief that higher education holds the greatest hope for bringing out the best in everyone in service to humankind.