From the community | Stanford Law must train their students for the age of AI
A survey we recently conducted at Juris Education, a leading law school admissions counseling firm, revealed that almost 40% of aspiring lawyers believe artificial intelligence (AI) tools can help improve their mental health during the law school application process and when they enter the legal profession. The survey polled 248 pre-law students across the United States.
Research shows that even routine, non-substantive tasks can drain psychological resources and contribute significantly to burnout among attorneys. When used responsibly and in accordance with firm policies and professional regulations, AI can support mental well-being by lightening the cognitive load of routine tasks. According to Stanford Law, firms, courts and clinics are already using AI tools to:
- Conduct or assist with legal research
- Prepare drafts of clauses, contracts, deposition questions, witness statements and other documents
- Serve as legal assistants for lawyers
However, even with this swift change, we must pause and ask whether current students feel prepared for such a rapid transition. Are law schools proactively updating their curricula to ensure future lawyers can indeed experience AI’s benefits for their mental well-being and workload balance as attorneys? Stanford, as a pioneer in both legal education and AI development, ought to take the lead in improving quality of life for its law students during and after school.
Some law schools, Stanford included, are embracing the intersection of AI and law head on. The Legal Innovation through Frontier Technology Lab (LIFTLAB), for example, is partnering with law firms and tech companies to create and test AI tools for legal services. The Deborah L. Rhode Center on the Legal Profession examines how AI is reshaping legal practice, professional responsibility and access to justice. Stanford Law is also cognizant of the ethical challenges that come with using AI in the legal field: through specialized courses and practicums, the center teaches students to critically engage with topics like the use of AI in administrative and constitutional law, the regulation of emerging technologies and the ethics of generative AI.
Other law schools should use this as a blueprint for the holistic integration of AI into their curricula, giving students an opportunity to experiment with the technology early on while also examining its pitfalls and limitations.
The Juris Education survey also revealed that 46% of respondents feel “frequently” overwhelmed, specifically by the law school application cycle. As a result, nearly one-third of those students considered therapy “very often.” Some turned to AI tools for “therapy”: although two in five aspiring lawyers weren’t comfortable sharing the specifics of their mental health struggles with AI, 13% did so anyway.
The stress around law school begins even before applications are submitted. Many law schools’ hazy policies around the use of AI in applications create further confusion and uncertainty for students. Law schools can change this by integrating AI ethically into the admissions process.
For example, the University of Michigan Law School and the University of Miami School of Law have introduced AI-based essay questions that require applicants to use tools like ChatGPT. These essays are designed to gauge AI literacy and ethical reasoning, a signal that, rather than cracking down on AI, schools may begin formally integrating it into their admissions process.
With such integrations, law schools can model the real world of law for students — where AI is an omnipresent reality — and alleviate some of students’ stress around law school applications.
Admissions policies are just one piece of the puzzle. Students’ mental health struggles persist once they enter law school, highlighting the importance of comprehensive support programs on campus. Law schools should strengthen mental health resources for their students, ensuring that therapy is accessible and any related stigma is addressed. At the same time, law schools and counseling centers can pair up to offer guided, AI‑assisted services for low-risk tasks, always accompanied by human oversight. For example, Butler University and the University of Houston have partnered with a 24/7 AI tool called Wayhaven that offers personalized wellness coaching and has been shown to decrease depression and anxiety.
Law schools bear a key responsibility for creating playbooks around the cautious use of AI in the legal field. Institutions like Stanford Law can create a guilt-free policy around the use of AI but limit it to certain tasks or sections of courses in each year of law school, both to avoid over-reliance and “hallucinated content” and to encourage creative and critical thinking.
Students at Stanford Law, too, should play an active role in this transition. They should not hesitate to ask for clearer policies around the use of AI, to engage in healthy debates with clubs and the administration on the fair use of AI and to use their voice to demand mental health resources that go beyond what AI tools can deliver.
The goal of our survey at Juris Education wasn’t to alarm law schools and students. Rather, it presents a timely chance to prepare the next generation of lawyers for a world where AI is unavoidable. Institutions like Stanford Law have the power to shape guidelines at law schools across the country and must use that influence to empower students to leverage AI as a complement to their skills, judgment and mental health in the real world.
Arush Chandna is the co-founder of Juris Education, an ed-tech startup in the legal education space.
The post From the community | Stanford Law must train their students for the age of AI appeared first on The Stanford Daily.