Introduction
In 1929, Felix Frankfurter, then a Harvard law professor and later a Supreme Court Justice, observed that “research requires the poetic quality of the imagination that sees significance and relation where others are indifferent or find unrelatedness” and emphasized that the effective researcher must know “what questions to put and what directions to give to inquiry.”[1] He was discussing a very different legal research environment than the one present-day law students encounter, but his words still carry weight. Frankfurter’s legal research environment was populated with print materials. Today’s legal research environment is populated with data and algorithmically driven search platforms. Though complex technologies now play a significant role in how we find, interact with, and use legal information, it remains important for legal researchers to approach these systems with Frankfurter’s words in mind. Above all, researchers must approach search technologies with “the poetic quality of the imagination,” knowing “what questions to put and what directions to give to inquiry,” in order to effectively, efficiently, and ethically perform legal research.[2]
The majority of today’s law students grew up with technology, inside and outside of the classroom.[3] Though we call them digital natives, this does not mean students know how to effectively, efficiently, and ethically use technology, especially search technologies. As Keefe noted, in 2005, “[t]he Internet has made it so easy to find information that students often do not know how to search for it.”[4] Legal research databases, driven by opaque algorithms and designed to visually mimic Google and other general search systems, often create a false sense of security in the novice legal researcher.[5] Students who developed research skills in a digital environment dominated by Google often find it difficult to transfer these research skills to the legal research environment and do not critically assess information and their research processes. Search technologies—both general and law-specific—are essential tools for law students and, therefore, we must ensure students not only learn fundamental research skills but also become critical and adaptable technology users.
This Article examines the ways advanced search technologies impact how law students approach legal research and argues that skills faculty, including law librarians, are well situated to teach law students how to use search technologies appropriately. Continually evolving legal research technologies, including generative AI; the upcoming NextGen Bar Exam’s focus on legal research; and continuing debates on the role of skills faculty and the law library in legal academia all make this research and discussion vitally important. Part I examines the ways search technologies have transformed general and legal research, especially over the past two decades. Part II examines the role of skills faculty in legal research instruction and provides practical guidance on ways to help students learn to critically and effectively use research technologies.
I. Legal Research: A Practical and Creative Skill
Legal research is a foundational skill for all lawyers.[6] It is both a practical and creative skill that draws on Justice Frankfurter’s “poetic quality of the imagination” and employs different techniques and tools than those used in the types of research many law students may have encountered in prior education and work experiences. Prominently, legal research requires researchers to find and make sense of information through analysis and analogizing,[7] be comfortable with uncertainty,[8] and recognize that they may need to use multiple tools and techniques to find information for their issue.[9] It is also a skill that has been transformed by technology. Law students must learn how to perform proper legal research, critically use existing technologies, and develop skills that are adaptable to the quickly changing technology environment.[10] To best understand how to teach these skills, discussed in Part II, it is first necessary to understand how technology transforms our relationship with information. Part I focuses on ways general search technologies, like Google, and legal research technologies, like Lexis and Westlaw, have altered how we find, access, and use information, as well as how evolving transformative technologies, specifically generative AI, may impact legal research in the near future.
A. Technology Impacts on General Research
Technology has transformed our relationship with information. It has opened new ways of accessing, engaging with, and using information. There are numerous benefits to the current technology-driven research environment, but it has altered the research process in ways that necessitate proper instruction for students to become skilled and flexible legal researchers. Specifically, “intellectual technologies”—including technologies used to classify, access, and use information—assist and enhance human mental abilities, but lull the user into assuming the technologies provide the best, most trustworthy answers to queries.[11] For example, scholars have analyzed how search technologies, like Google, change the research process from being a process of knowledge construction to one of passively finding answers needed to complete an assignment.[12]
Google and similar search engines have transformed research processes—shifting users from being active to passive participants. They make the cluttered chaos of the internet seem manageable.[13] Google’s search algorithms and ranking system are designed, according to Google, to “sort through hundreds of billions of webpages and other content in [Google’s] Search index to present the most relevant, useful results in a fraction of a second.”[14] Search algorithms weigh various factors to identify what the system deems relevant to a query and then display the results in an organized list.[15] Google’s search system is not static, though. The information the system deems relevant changes; a search for “felony murder” completed today may produce different results tomorrow, or even an hour from now. This dynamism underscores the opacity of Google and similar search systems: we do not know what sources were added or removed, nor how, why, or when the ranking algorithms changed.
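Though the actual ranking process is opaque, the basic weigh-and-rank idea can be sketched in a few lines. The toy example below scores a handful of fabricated pages by a weighted sum of hypothetical signals and sorts them into a results list; the pages, signals, and weights are all invented for illustration, and real search engines rely on a far larger, undisclosed, and continually shifting set of signals.

```python
# Toy ranked retrieval: score each page by a weighted sum of a few signals,
# then list results highest-score first. All pages, signals, and weights are
# hypothetical; production ranking systems use many undisclosed signals that
# change over time.
pages = [
    {"title": "Felony Murder - Overview", "term_matches": 9, "inbound_links": 120, "freshness": 0.2},
    {"title": "State v. Example (2021)",  "term_matches": 6, "inbound_links": 15,  "freshness": 0.9},
    {"title": "Criminal Law Blog Post",   "term_matches": 4, "inbound_links": 60,  "freshness": 0.7},
]

weights = {"term_matches": 1.0, "inbound_links": 0.05, "freshness": 2.0}  # invented weights

def score(page: dict) -> float:
    """Weighted sum of the page's ranking signals."""
    return sum(weights[signal] * page[signal] for signal in weights)

# Display the "results list": highest score first.
for rank, page in enumerate(sorted(pages, key=score, reverse=True), start=1):
    print(rank, page["title"], round(score(page), 2))
```

Changing a single weight, or adding a new signal, reorders the list; that is the invisible shift a researcher experiences when yesterday's search returns a different set of "most relevant" results today.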
The ease of use and visually organized design of Google and similar search systems engender trust in the system, trust in the results, and trust that the system is ranking the most relevant results highest. Yet users cannot see what information the systems have access to or how they rank and organize that information. Users place trust in search systems, rather than their own reading and analysis, which impacts how they find, engage with, and use information.[16] Empirical studies of search habits suggest many Google users do not look beyond the first few pages of results and are favorably biased towards high-ranking results, even when those results are not relevant to their specific research.[17]
Aesthetically pleasing interfaces, like Google’s, are also perceived as easier to use, more intuitive, and trustworthy.[18] They present an aura of neutrality, but the algorithms underlying search databases are influenced by subjective human choices. These systems carry the assumptions and biases of the humans who create them, the choices made in developing them, the data that feeds them, and the ways they are used and evolve.[19] The visual design of search systems decouples the technology from the human-created elements.[20] We only see the input (what we enter in the search box) and the output (the results list), but how the input becomes the output is not revealed. Our inquiries go into a “black box” and we quickly receive answers.[21] But, as Nevelow Mart observes, this is not a purely “technological interaction.” Rather, because “every algorithm and database interface is a completely human construct, and every search is a completely human construct,” she argues, “the researcher must view the search process as a human interaction, moderated by technology.”[22] The user must also be aware that the interaction between user and system is not a strong relational interaction between two equally situated parties.[23] The results of a single search query are influenced by numerous nonconcurrent queries from other users. Despite the appearance of a simple two-actor exchange, the system and the individual are not engaged in a duologue.
Google and other search systems provide proxies that help users select sources from what is usually a significant number of results. Proxies, as used in this Article, refer to design features—like Google’s featured snippets[24]—that act as surrogates for reading and critical analysis.[25] They create efficiency in the systems but often substitute for richer inquiry on the part of the user. Proxies take the place of individual information analysis and influence user choices, often with users blind to the impact ranking and visual proxies have on the sources they select and trust. Studies show that users do not see their selection of less relevant or irrelevant high-ranking choices as irrational, though they should.[26] Users “confuse production of ideas with their distribution.”[27]
Technology not only changes how users find and access information; it also changes the way users read and engage with information. Research becomes a passive process, absent metacognition, in which the researcher finds sources but does not construct knowledge through active engagement, such as deep reading.[28] Ready access to connected devices—such as smartphones, tablets, and laptops—makes it so easy to find answers from search systems that users forget to ask reflective questions of themselves. They do not draw connections between their existing knowledge and the new information they are acquiring.[29] Users, therefore, become passive participants in the research process. They do not establish strong informational relationships with content.
Search technologies and technology-enhanced information often force users to multitask rather than focus on deep reading. Online information sources shift our attention more rapidly than print sources.[30] Some technology enhancements, like hyperlinked text, impact how users read and comprehend information. The authors of a 2022 study found that the composition of technology-enhanced text “encourages a reading strategy whereby readers prioritise visually salient information, in order to read through Webpages quickly.”[31] They found users judged textual and informational importance based on visual signals, such as the different colors of hyperlinked text, rather than syntax and context.[32]
Hyperlinks also fragment the text, incentivizing readers to skim and to anchor their attention to visual enhancements. This is not always bad, since hyperlinks can denote significant or useful information, but it becomes detrimental when text is so heavily hyperlinked that it cognitively overwhelms the reader, or when the linked material is not relevant to the user’s specific research needs. Hyperlinked text also acts as a signal of importance that readers may trust over their own judgment.[34] Readers may associate hyperlinked text with citations in academic articles and view the text as having more value, even if the hyperlink leads to an unreliable site, a dead link, or a site selling something.[35] Further, hyperlinks disrupt active reading. They encourage users to switch tasks by clicking on them and navigating to new information sources before they finish reading and comprehending the original source. Task switching creates “‘switch costs,’ . . . the time cost (and sometimes, loss of accuracy) that happens when we shift focus from one task to another.”[36]
B. Technology Impacts on Legal Research
Legal research has also been transformed by technology. Legal-specific research technologies, like Lexis and Westlaw, have incorporated technology changes like those discussed above, such as hyperlinked text to help researchers access additional documents on the platform. But legal research technologies—like legal research itself—differ in ways that make novice legal researchers, accustomed to general search technologies, ill prepared to use law-specific databases. Legal research technologies, now so integral to the practice of law,[37] create barriers to engagement by promoting efficiency at the expense of deep analysis. This section first demonstrates how searching on legal platforms differs from searching on general search engines and then examines: (1) how legal research technologies may lead to superficial analysis; (2) the concealed, black box nature of search technologies; and (3) the keyword and information limitations of the systems.
1. Felony Murder Search on Westlaw and Lexis
Before examining how legal research technologies impact user engagement with information, it is helpful to demonstrate how searching on legal research platforms and general search systems, like Google, varies. To demonstrate the different ways Lexis, Westlaw, and Google perform searches and present information, we will use a basic search for “felony murder.” This basic demonstration shows how students accustomed to researching on Google and using Google-like searches are ill equipped to effectively use legal databases without first receiving proper legal research instruction. A Google search for “felony murder” generates over forty million results and includes definitional information at the top. Performing a Google-like search from Westlaw’s main page search box for “felony murder”[38] generates 10,000+ cases, 10,000+ secondary sources, over 8,000 statutes and court rules, and an unmanageable amount of other content.[39] Unlike the Google search, which algorithmically assumed a person searching for “felony murder” would find definitional information most relevant, Westlaw does not include this type of basic information at the top; rather, the top result is a 1982 case from the Supreme Judicial Court of Massachusetts, Commonwealth v. Matchett.[40] A novice researcher accustomed to Google but unfamiliar with the different manner in which Westlaw’s systems search and present information may assume Commonwealth v. Matchett is a significant felony murder case. A novice legal researcher with limited knowledge of the legal system—like many first-year law students—is at a greater disadvantage and may think this case is the most relevant for the issue they are researching, even if their issue occurred in a different jurisdiction.
Performing a Google-like search on Lexis’ main page search bar for “felony murder” also generates 10,000+ cases, 10,000+ secondary sources, 10,000+ statutes and legislation, and an unmanageable amount of other content.[41] Unlike Westlaw, Lexis provides an “Answers” box at the top of the results page that lists cases, statutes, and secondary sources that give definitional information. The “Answers” box may be helpful to some researchers, but it can cause confusion for a novice legal researcher who may be unable to discern whether the “Answers” box actually answers their specific legal question. A novice researcher searching both Westlaw and Lexis may also be confused because the top case listed on Lexis is not Commonwealth v. Matchett; rather, the first case is Commonwealth v. Brown, a 2017 case from the Supreme Judicial Court of Massachusetts.[42] Determining relevancy on both Lexis and Westlaw requires deep engagement with the information. A single search and its top-ranked results will rarely resolve a specific legal issue, yet, as the next subsection examines, students accustomed to general search systems often approach Lexis and Westlaw with a top-results-equal-best-results mentality.
2. Superficial Analysis
Legal research is not a rote task. It requires engagement with materials and creative thinking to search for, identify, and apply information to the researcher’s legal issue. Users accustomed to general search systems may be prone to superficial analysis. They may not proceed to the important steps of reading, analyzing, and analogizing information. Simply looking at the top results of a single search, such as the “felony murder” search described above, is often not going to give the researcher suitable information for their issue. Though the systems provide ways for users to filter results by such things as jurisdiction and date, students accustomed to general search systems often do not filter and refine search results. They accept the top results as the most relevant sources for their issue or assume there is no relevant information.
Technology was first used to automate routine, predictable, and repetitive tasks; it is now shifting to automating knowledge-based work, like legal research.[43] Legal search technologies continue to develop increasingly advanced tools to automate legal work, but automating knowledge work invites superficial analysis in which students rotely search for, find, and save materials to (maybe) review later.[44] They may skim materials and use proxies, like hyperlinks and other editorial enhancements, to give them a cursory overview, but they do not actively read, critically engage with content, or analyze and apply information to their legal issue. Reading, deeply engaging with materials, and analyzing information within the context of the researcher’s legal issue are crucial parts of information seeking.[45] Researchers may miss important information by focusing only on the editorial enhancements and efficiencies built into these systems, like hyperlinks, headnotes, and key numbers. Users’ attention is diverted from reading materials and forming their own cognitive understanding within the context of their issue. For example, students may not read the text of a case and instead depend on editorial enhancements to give them a quick answer, thereby “interposing another human being’s subjective judgment between researcher and text”[46]—or, more likely today, interposing automated, algorithmic judgments “between researcher and text.”[47]
Editorial enhancements and efficiencies also trap students in what Delgado and Stefancic refer to as “perseveration.”[48] They persist down futile research paths despite not finding beneficial information and do not consider using different research techniques or tools. For example, students may fixate on one or two search terms and not use their research to expand their search vocabularies, or they may use only one database for all their research. They may also anchor their attention to terms and concepts highlighted in headnotes or hyperlinked text and neglect to read the actual case opinion, where they could identify more keywords and gain a richer understanding of the legal concepts.[49] Search system proxies give the impression that they—and not the actual text of a source—are the best or only place to look for answers, to the detriment of effective research.
3. Concealed Research Process
The black box nature of legal search systems conceals the research process and encourages less creative searching and engagement with information. We only see the input—the search terms we enter in the search box—and the output—the results. We do not see how the input becomes the output. Our inquiries go into the black box, and we quickly get results that we interpret as relevant answers to our inquiry. We attribute authority to these systems and treat the results as testimony.[50] Within the general search system context, this can be detrimental. Within the legal research system context, this can be destructive.
The underlying organization of legal databases is influenced by organizational structures that were developed to organize and categorize print materials.[51] For example, Westlaw’s Key Number System was created over one hundred years ago to organize case opinions by legal issues and topics. It is a system of classification that indexes cases into almost 400 topics. Within each of the topics are numerous subtopics, classified by key numbers. In total, there are over 100,000 individual key numbers.[52] Cases may be categorized under multiple key numbers.[53]
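For readers unfamiliar with this kind of taxonomy, the sketch below models a tiny, entirely hypothetical slice of a topic-and-key-number style hierarchy: topics contain numbered subtopics, and a single case can be indexed under several key numbers across topics. The topic names, key numbers, and case are invented and do not reproduce West’s actual classification.

```python
# Hypothetical slice of a topic/key-number style hierarchy (not West's real taxonomy).
classification = {
    "Homicide": {
        "k540": "Felony murder in general",
        "k542": "Underlying felony requirement",
    },
    "Criminal Law": {
        "k29": "Intent and malice",
    },
}

# One invented case indexed under multiple key numbers across two topics.
case_index = {
    "Commonwealth v. Example": ["Homicide k540", "Homicide k542", "Criminal Law k29"],
}

def cases_under(key_number: str) -> list[str]:
    """Return every case classified under the given key number."""
    return [case for case, keys in case_index.items() if key_number in keys]

print(cases_under("Homicide k540"))  # ['Commonwealth v. Example']
```

Even this miniature version shows the editorial judgment involved: someone decided which topics exist, where each subtopic sits, and which key numbers a given case deserves.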
As the American legal system grew, more litigation produced more case law, necessitating a way to standardize and organize court reports. West’s print-based reporting system had a profound impact on how cases were accessed.[54] West’s topic system created an efficient means of searching for and evaluating case law, but it is predicated on the false assumption that a rational organizational structure can incorporate all past, current, and future legal issues. Organizational structures are necessary to find, access, and make sense of information, but law is more complex than even 100,000 key numbers can contain. Pushing legal information into standardized structures may limit the creative research and analysis that legal reasoning requires, especially for the novice researcher. This is particularly detrimental in the context of legal research technologies, where the organizational structures are hidden by opaque search mechanisms. These systems have the power to include, exclude, classify, and rank information, but their lack of transparency means users do not know what information is included or excluded or how the systems are classifying and ranking information.[55]
4. Limitations of Keyword Searches
Keyword searches make information on databases findable, but there are limitations to the utility of keyword searches, especially in the legal context.[56] For example, in 1989, Delgado and Stefancic recognized limitations of computer-aided searching, stating “computerized research can ‘freeze’ the law by limiting the search to cases containing particular words or expressions” and inhibit thoughtful browsing.[57] Almost twenty years later, in 2007, they observed that “[c]omputerized legal research is a godsend for lawyers who know exactly what they are looking for.”[58] But the reality of the legal research process is that the researcher rarely knows exactly what they are searching for. Legal research, more often, requires the researcher to identify what they are looking for as they move through the research process, aided by thoughtful browsing and analysis. They may, therefore, need to adapt their research process to incorporate new keywords, search techniques and tools, and knowledge.
Word-based keyword searches make it possible to quickly search databases and receive numerous results, but they are contingent on the probability that the researcher and the court, legislature, scholar, or other writer or editor use the same words and phrases and that the programmers have identified those words and phrases as relevant. Researchers who perform searches for only one or two words or phrases, do not use advanced search techniques, and use only one database severely limit their knowledge acquisition and development. Users may feel vindicated in their limited searches due to the numerous results they receive. But numerous results, like the 10,000+ cases on Westlaw and Lexis from the “felony murder” search described above, do not mean the search was good or useful; an unmanageably large set of results more often reflects a poor search.
Legal research databases require researchers to filter and refine results and continuously expand search vocabularies. Databases provide tools to refine and filter results and to help researchers expand their research. For example, Lexis’ Research Map provides a visual research trail to help a researcher see the research they have already done on Lexis and build from it using tools that compare search results and find similar documents.[59] But if researchers do not know how to use these tools effectively—or do not know that they exist—they may not benefit from them; worse, using them may create cognitive overload and thereby limit their comprehension and processing of information.
Word-based searches also remove language from important context, which impacts user engagement and can create false confidence in search results. For example, homonyms present challenges to word-based search systems since the same word may have very different meanings dependent on context.[60] A simple example is the word “arm,” which may refer to an appendage on the human body, a weapon, or the action of supplying weapons. The word “aggravating” may refer to the act of annoying a person, or it could refer to aggravating factors in criminal sentencing. Understanding these keyword limitations is necessary to effectively use search systems.
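The toy search below makes the homonym problem concrete: a bare keyword match for “arm” retrieves every sense of the word because the match operates on characters rather than meaning. The example sentences are fabricated for illustration.

```python
# A keyword match cannot distinguish the senses of "arm"; it matches characters,
# not meaning. The sentences are invented for illustration.
documents = [
    "The defendant suffered a broken arm during the altercation.",
    "The statute prohibits carrying a concealed arm without a permit.",
    "The government moved to arm the militia.",
]

keyword = "arm"
hits = [doc for doc in documents if keyword in doc.lower().split()]

for doc in hits:
    print(doc)
# All three sentences match, even though "arm" is an appendage in the first,
# a weapon in the second, and the act of supplying weapons in the third; the
# researcher still has to read for context to know which results matter.
```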
5. Information Limitations
Effective legal research requires access to legal information, but researchers often encounter barriers to accessing quality information.[61] Critically acknowledging the information limitations of legal research databases and the financial motivations influencing information access helps researchers see the benefits of using multiple systems and sources to perform legal research. Researchers often assume databases have all the information they need, but they may be wrong and, further, may not be able to easily identify what information is unavailable or when changes have been made.
A number of authors have tried to identify informational limitations within legal databases by examining what cases are not included in the databases.[62] For example, McAlister wrote about the limited access Westlaw and Lexis provide to a significant number of federal appellate court decisions.[63] McAlister described how users assumed more decisions, including unpublished decisions, were accessible through commercial databases than were actually available.[64] These types of informational limitations are not easily detectable. They are obscured by technology.
Search systems “by their nature . . . are [] tools of ignoring, as much as of showing” and are increasingly designed to promote the profit objectives of companies while ignoring the information needs of users.[65] Prominently, Lamdan found that the parent companies of Lexis (RELX) and Westlaw (Thomson Reuters) are concentrating on developing and selling legal data analytics capabilities rather than focusing on providing quality content.[66] As these companies become more akin to data brokers than information providers, the priority may shift to populating databases with lower quality content that provides data points for analysis and the creation of data to sell. When selling data takes priority over providing high quality, well-vetted, and precedential content, the legal researcher suffers.[67]
Additionally, digital information is not permanent. Information can be moved, altered, and removed with a keystroke. In the digital age, content creation and alteration are continuous processes, not discrete events like the printing of a book.[68] Opaque systems can conceal revisions or deletions, and the trust users place in systems, especially the major legal databases like Lexis and Westlaw, makes it less likely that they will question results, even when they no longer receive the same results despite using the same search terms and filters. Link rot—the problem of hyperlinks that no longer work—has already been identified as an information access issue in the legal profession, but the impact of indiscernible alterations to legal databases has not been heavily researched.[69] Users approach legal databases with assumptions about the quality and quantity of information available, but these assumptions may be misguided since the technologies driving these systems often obscure information access limitations.
C. An Uncertain Future—Generative AI
As this Article goes to publication, the legal profession is facing another shift caused by transformative technology. Like Google and similar search engines described above, generative AI systems will transform how users find, interact with, and use information. The ways in which this technology will transform legal work, especially legal research, are still uncertain, but it is undeniable that changes are coming, and likely quickly.[70] Some emerging uses for generative AI in the legal profession include integration into existing systems, such as Lexis+ AI and Casetext’s CoCounsel, and the development of internal systems by individual law firms using firm data.[71]
Briefly, generative AI refers to artificial intelligence that can generate text, images, audio, and other types of media.[72] It is not a new technology, but recent developments, such as advances in large language models (LLMs) and the transformer, the deep learning architecture that makes modern LLMs possible, have significantly expanded the capabilities of generative AI.[73] Transformers can read massive amounts of data, identify patterns and relationships between words and phrases, and predict what words and phrases may come next.[74] Prominently, LLM-based chatbots, like ChatGPT and Bard, can quickly produce human-like dialogic responses to queries, known as prompts.[75] These chatbots produce responses almost instantaneously, drawing from massive amounts of data.
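The following is a toy illustration of that next-word-prediction idea, not of how production systems are built: a simple bigram model “trained” on a three-sentence corpus predicts the most likely next word, the same basic task that transformers perform at vastly greater scale with billions of learned parameters and far richer context.

```python
# Toy next-word prediction: count which word follows which in a tiny corpus,
# then generate text by repeatedly choosing the most likely next word. Real
# LLMs learn these patterns from massive datasets rather than simple counts.
from collections import Counter, defaultdict

corpus = (
    "felony murder is a legal doctrine "
    "felony murder requires an underlying felony "
    "the felony murder rule varies by jurisdiction"
).split()

# Record the observed "patterns and relationships" between adjacent words.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<end>"

# Generate a short continuation, one predicted word at a time.
word, generated = "felony", ["felony"]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # likely "felony murder is a legal doctrine"
```

The point of the miniature version is that the system outputs whatever continuation its training data makes most probable; it has no concept of whether that continuation is legally accurate.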
Although it is uncertain exactly how generative AI systems will transform legal research, we can anticipate some of the possible detriments and benefits of incorporating these types of technologies into legal research based on the discussion of other research technologies above. Specifically, the technologies may exacerbate issues, discussed above, such as superficial analysis, the concealed research process, and information limitations. But these technologies may also mitigate other issues, such as keyword searching limitations. This section briefly discusses these issues, but the speed with which generative AI is developing and being integrated into existing technologies makes a thorough analysis currently impractical.[76]
Generative AI technologies automate knowledge in a way that may further push researchers toward superficial analysis. As stated earlier, legal research is a process of both knowledge acquisition and knowledge development. Researchers may replace their own thoughtful construction of knowledge with the outputs of generative AI systems. Prominently, the false sense of communicative interaction between user and system, created by the human-like “dialogues” chatbots like ChatGPT and Bard produce, engenders trust in their responses.[77] Generative AI systems exude certainty, causing users to trust the decision-making of the chatbot over their own thoughtful analysis.[78]
Trust in these systems, as they currently exist, is potentially highly detrimental. This was demonstrated in June 2023, when two New York lawyers were sanctioned for citing to cases they found through ChatGPT in a legal brief.[79] The cases either did not exist or were not relevant to the issue in the lawyers’ case.[80] In response to the sanction, the lawyers’ firm, Levidow, Levidow & Oberman, issued a statement saying: “[W]e made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”[81] In this instance, the lawyers trusted the technology to provide both relevant and real sources and did not thoughtfully or carefully assess and analyze the information.
Generative AI systems use LLMs that are immense and mysterious and that conceal the research process. Since LLMs are trained, not programmed, users—and even the systems’ creators—often do not know why the systems function as they do or why they respond in a specific manner.[82] Further, the number of parameters—values that control the decisions of LLMs—is immense. For example, the number of parameters for OpenAI’s GPT-4 has been estimated to be around 100 trillion.[83] As these systems are commodified, financial considerations will likely create more incentives to conceal the processes by which decisions are made so that companies can maintain a competitive advantage. For example, MIT Technology Review, reporting on GPT-4 being “even bigger and better” than ChatGPT 3.5, stated: “Yet how much bigger and why it’s better, OpenAI won’t say. GPT-4 is the most secretive release the company has ever put out, marking its full transition from nonprofit research lab to for-profit tech firm.”[84] To effectively and ethically perform legal research, users should be able to understand how the information they receive is compiled and the processes that inform the responses generative AI systems create.
Similar to the issues discussed above, generative AI obscures information access limitations. A basic information limitation is that users do not know the types, quantity, or quality of information that feeds the systems. Another limitation, specific to ChatGPT, is the knowledge cutoff date. GPT-3.5, the model behind the free version of ChatGPT, has a knowledge cutoff date of January 2022, meaning the system does not use data available after January 2022 to generate responses.[85] In contrast, ChatGPT’s paid Plus and Enterprise plans use Microsoft’s Bing search platform to search the internet for current information to generate responses.[86] Bard also searches the internet for current information, using the Google search engine.[87]
The most prominent information limitation of generative AI systems is the sheer number of unknowns. Users do not know precisely how the systems work, what data the systems use, when that data changes, or what data the systems collect from users; these and myriad other unknowns make using these systems for legal research potentially detrimental and possibly ethically unsound due to confidentiality concerns.[88] Prominently, attorneys may risk waiving attorney-client privilege by using publicly available generative AI systems, like ChatGPT and Bard, since the information provided by users may be seen by individuals outside the attorney-client relationship. OpenAI explicitly states that they “review conversations to improve [their] systems and to ensure the content complies with [their] policies and safety requirements” and advises users not to “share any sensitive information in [] conversations” since prompts cannot be deleted.[89]
The ways in which generative AI will transform legal research and the legal profession are uncertain, but changes are coming—and quickly.[90] For example, some courts have already added requirements that lawyers attest that no parts of filings with the court were created by generative AI.[91] Students need to learn how to use technology as it exists today and develop flexible research skills that make it easier for them to adapt to technologies as they may exist in the future. The role of skills faculty in guiding students in using research technologies and navigating their uncertainties is explored in the next Part.
II. The Role of Skills Faculty in Legal Research Technology Instruction
Part I described the impact of general search technologies and legal research technologies on the research process and examined how these technologies transform how we find, access, and interact with information. Part II first considers why skills faculty are best suited to teach law students how to critically and effectively use these search technologies. Part II then describes ways skills faculty can help students: (1) move beyond superficial analysis; (2) understand the black box, concealed nature of search technologies; (3) overcome the limitations of keyword searching and information limitations of search systems; and (4) develop skills that are flexible and adaptable to technology changes.
A. Skills Faculty Role in Teaching Critical and Effective Use of Research Technologies
1. Skills Faculty Teaching Legal Research
Legal writing instructors and law librarians are best positioned to teach law students proper use of legal research technologies.[92] Teaching legal research involves different pedagogical approaches than the traditional law school pedagogy. Legal research is like training for a marathon. You can read books, watch videos, and attend lectures all about how best to train for and run a marathon, but you cannot effectively run a marathon without actually running—a lot. Similarly, law students can read about legal research, watch videos, and attentively listen to lectures, but in order to become effective, efficient, ethical researchers, they must actually do legal research. Legal research is a skill that can be taught but, more importantly, it is a skill that must be practiced consistently. Consistent, guided practice is essential for improving research skills and developing creative research competencies. Novice researchers benefit from learning in a well-structured practice environment that skills faculty, like law librarians, are adept at creating.[93]
Teaching a skill, like legal research, requires different pedagogical approaches than doctrinal law classes. Doctrinal professors must have robust knowledge of the areas of law they teach, but they do not always receive training in how to teach. The Socratic method may be well suited to doctrinal instruction, but skills instruction requires different approaches that give students opportunities to practice skills.[94] Legal writing instructors and law librarians often receive training, through graduate programs, fellowships, and conferences, that is focused on creating interactive learning environments that provide students opportunities to practice skills and receive regular feedback.[95]
2. Impediments to Legal Research Instruction
Before examining ways skills faculty can effectively teach good legal research skills and use of research technologies, it is important to briefly recognize some of the impediments instructors face. In general, law students do not receive robust legal research instruction, and legal research is not consistently emphasized as a critical skill in the law school curriculum.[96] Curricular diminishment of the importance of legal research instruction feeds student perceptions that legal research is merely a rote task of simply locating sources and not a process of knowledge development.[97] Students, who already enter law school overconfident in their research abilities, discount the importance of learning legal research and often do not devote the appropriate time and focus to their legal research courses.[98] Trivializing legal research instruction may contribute to “non-learning”—“the potential outcome of any experience that is deemed unimportant, not given consideration, or rejected because the learning is trivial”—which leads to students who are not prepared to perform legal research in law school or in practice.[99] This is particularly detrimental in the technology-driven research environment that continues to evolve and change how we find, access, and interact with information.
B. Helping Students Overcome Superficial Analysis
As discussed above, legal research is not a rote task of simply finding sources. Skills faculty can create opportunities for students to recognize the ways research technologies often push the user toward superficial analysis. They can instruct students on ways to use technologies that support deep engagement with information and incorporate guided practice opportunities into classes. Interactive learning environments in which instructors guide students through research problems help students see legal research as a complex process rather than a rote task of finding materials, and they encourage students to slow down and think deeply about their research process and the information they find.
Guided practice also provides opportunities for instructors to highlight potential pitfalls as they arise. Skills faculty understand many of the pitfalls students encounter when using research technologies and can help students avoid technology-produced traps. For example, technology enhancements, such as hyperlinks and headnotes, often hinder deep reading and knowledge development.[100] Skills faculty can help students identify places where they are getting distracted. Simply telling students to finish reading a source before clicking on any hyperlinks focuses their attention on reading; if students find themselves unconsciously opening hyperlinks, the exercise may help them become more conscious of the technology habits that distract them from deep reading and analysis.[101]
Another interactive exercise that skills faculty can use to help students break bad technology habits is a live assignment in which the instructor watches as the student works through a research problem. Completing the assignment while the instructor watches, either in person or over Zoom, helps the instructor identify and correct bad habits. For example, Schlinck used this type of exercise and identified a common issue among her students that she referred to as the “food blog scroll”—the habit of scrolling about halfway down a webpage as soon as it opens, which can lead users to miss important information.[102] She was able to redirect students during the live assignment and suggest they begin reading at the top of the page.[103]
Skills faculty can also develop exercises that compel students to slow down. A simple example is having students enter sources in a research log as they proceed through a research problem. A research log is a list of sources found and a summary of the findings.[104] A basic research log includes the date the source was accessed, a citation to the source, where the source was located (including a hyperlink if possible), the date or currentness of the source, additional sources and information identified, and the findings or value of the source. The act of entering information in a log makes students pause and contemplate the usefulness of a source to their legal issue and whether they have effectively read and analyzed the source. It may break students out of the “perseveration” trap described above, in which students persist down futile research paths, by helping them see their research trail and assess whether it is leading to relevant information or whether they need to reassess and redirect. Research logs also help students organize research and develop more complex search vocabularies.
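One possible way to structure such a log is sketched below. The field names mirror the elements listed above and are merely illustrative; the sample entry is hypothetical, and students could keep the same columns in a spreadsheet or a Word table just as easily.

```python
# A minimal research-log structure with illustrative field names; the same
# columns work equally well in a spreadsheet or a Word table.
import csv
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class LogEntry:
    date_accessed: str                              # when the source was consulted
    citation: str                                   # citation to the source
    location: str                                   # database or URL where it was found
    currentness: str                                # date of the source / how current it is
    leads: List[str] = field(default_factory=list)  # additional sources or keywords identified
    findings: str = ""                              # value of the source for the legal issue

# A hypothetical entry tied to the "felony murder" example discussed above.
entries = [
    LogEntry(
        date_accessed="2023-11-01",
        citation="Commonwealth v. Brown (Mass. 2017)",
        location="Lexis (top result of the basic 'felony murder' search)",
        currentness="Decided 2017; citator check still needed",
        leads=["common law felony murder", "underlying felony"],
        findings="Need to confirm relevance to my jurisdiction before relying on it.",
    ),
]

# Export the log to CSV so it can be sorted, reviewed, and shared.
with open("research_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(entries[0]).keys()))
    writer.writeheader()
    for entry in entries:
        row = asdict(entry)
        row["leads"] = "; ".join(row["leads"])  # flatten the list for the CSV cell
        writer.writerow(row)
```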
C. Helping Students Understand the Black Box of Research Technologies
Legal research databases and general search technologies conceal the research process. Users do not see how the input becomes the output. They trust that the systems are providing relevant information despite the inability to see what information the systems have access to and how the results are generated and ranked. The aura of neutrality and aesthetically pleasing designs engender trust that can lull researchers—especially novice researchers—into believing the systems are giving them the best information. Skills faculty need to help students develop a basic understanding of the organizational structures that inform the technological schemas and instill a healthy amount of skepticism so students approach these systems critically.
A complete understanding of how legal and general search systems work is impossible and unnecessary, but skills faculty can give students a basic understanding of the underlying organizational structures, particularly the print-based structures that inform the technological schemas. This knowledge reveals a bit of the process hidden inside black box search technologies. Skills faculty, especially law librarians, understand print-based organizational structures and can show students how organizational systems, like West’s reporting system, created long before computers were invented, still influence information organization on databases.[105] An easy example is showing students hard copies of case reporters to help them understand how to use star pagination on Westlaw and Lexis.[106] For the Maryland case Unger v. State, for example, students would be shown how to access the case on Lexis and Westlaw, then how to find the case in the print volumes of the Maryland Reports (427 Md. 383 (2012)) and the Atlantic Reporter (48 A.3d 242 (2012)). Showing print reporters provides visual context for page numbers that is lost when print materials are put into digital format.
General and legal research technologies are essential to effective legal research, but students must learn to be critical users. They cannot trust the system over their own thoughtful reading and analysis. Skills faculty must provide a strong grounding in legal research skills while also teaching students to be skeptical users of technology. Skeptical analysis of the organizational systems underlying legal information systems promotes innovation in legal thought, which helps users find relevant information for their legal issue.[107] Skills faculty are attuned to the past, current, and future developments of research systems. They can demonstrate how the search process is not simply a neutral technological process; rather, it is a human interaction moderated by technology.[108] For example, skills faculty should demonstrate finding a variety of sources using different platforms such as Lexis, Westlaw, Google, and other search systems.
It is especially important to demonstrate the differences between Lexis and Westlaw. There is often an assumption that Lexis and Westlaw include all the same sources and organize information in the same way. Users should be aware of differences between the systems and understand that searching each system may require unique approaches. For example, showing students the different results from the basic “felony murder” searches discussed above demonstrates that searching on Lexis and Westlaw will often require different search techniques. Another example is demonstrating the differences between searching the secondary source American Law Reports (ALRs) on Lexis and Westlaw. Westlaw provides a digest and an index, with extensive cross references, to help users search ALRs, whereas Lexis does not; it only provides an advanced search feature. Therefore, users will need to use different search techniques to find relevant ALRs on Lexis and Westlaw.
Students should be encouraged and given opportunities to use both Lexis and Westlaw. For example, when completing in-class group exercises, at least one student in the group should use Lexis and at least one student should use Westlaw. Students see firsthand the different approaches they may need to take in researching on each database. They also encounter these differences in a structured classroom environment where they can ask questions and the instructor can provide guidance, reassuring them that what they are encountering is normal and that they are not doing something wrong.
D. Helping Students Overcome the Limitations of Search Systems
1. Limitations of Keyword Search
Keyword searches make information on databases findable, but students accustomed to searching on Google, or similar search engines, may encounter difficulties when using the same search techniques on legal databases. Google and similar search engines primarily rely on natural language processing (NLP) to try to create order out of a vast, continuously growing information environment.[109] Natural language searches are different from advanced, structured search queries built with terms and connectors, operators, and/or punctuation (Boolean logic).[110] Natural language searching is generally quicker and seems easier to use than Boolean search techniques, but it has limitations.
NLP has difficulty processing language ambiguities, recognizing context, and distinguishing language intricacies, such as synonyms and homonyms—all important aspects of researching a legal issue. Employing advanced, structured queries built with terms and connectors (Boolean logic) produces more targeted searches in legal search systems, like Westlaw and Lexis, as the sketch below illustrates.[111] Advanced searching and filtering are necessary to overcome the limitations of keyword searching and to identify information relevant to the researcher’s legal issue, not just the information the system deems relevant. Using advanced searches may also encourage researchers to look at more results and, therefore, find more relevant information. For example, Narayanan and De Cremer cite studies “show[ing] that users who type more complex search strings with advanced Boolean variables tend to also look at more pieces of content further down their list of results” on search engines, like Google.[112]
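The toy comparison below shows why structured queries narrow results: a loose any-word match (roughly how a forgiving natural language search behaves) retrieves every fabricated snippet mentioning “felony” or “murder,” while a phrase-plus-AND match retrieves only the snippet actually about the doctrine. Westlaw’s and Lexis’ real connectors (phrases, proximity connectors such as /p and /s, root expanders, and the like) are richer than this sketch, and their exact syntax should be confirmed in each platform’s documentation.

```python
# Fabricated snippets and two search strategies: a loose any-word match versus
# a Boolean-style phrase-plus-AND match. The point is the narrowing effect,
# not the syntax of any particular platform.
snippets = {
    "Doc A": "The felony murder rule imposes liability for a death during a felony.",
    "Doc B": "The defendant was charged with murder; the felony charge was dropped.",
    "Doc C": "Arson is a felony in most jurisdictions.",
}

def any_word(text: str, words: list[str]) -> bool:
    """Loose match: any search word appears anywhere in the text."""
    lowered = text.lower()
    return any(w.lower() in lowered for w in words)

def phrase_and(text: str, phrase: str, required: str) -> bool:
    """Boolean-style match: exact phrase AND another required term."""
    lowered = text.lower()
    return phrase.lower() in lowered and required.lower() in lowered

print("Loose search for felony OR murder:")
print([name for name, text in snippets.items() if any_word(text, ["felony", "murder"])])
# All three documents match, including Doc C, which never discusses murder.

print('Structured search: "felony murder" AND liability:')
print([name for name, text in snippets.items() if phrase_and(text, "felony murder", "liability")])
# Only Doc A matches, the document actually about the doctrine.
```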
Skills faculty can counter the impacts of the “Googlization” of information by showing students how searching on legal research systems is different from searching on Google.[113] Prominently, students must become comfortable with using advanced search techniques, such as Boolean logic, and filters. Simply giving students a video to watch or a few exercises is not enough. To progress from novice to proficient, students need to see the utility of advanced searches, and they need consistent guidance and practice. Incorporating advanced searches into most demonstrations and exercises gives students that practice and shows them how these techniques make their research more efficient and effective.
In addition to using advanced search techniques, students must expand their search vocabularies to effectively use search technologies and find relevant information. Law school teaches students the legal jargon and terms of art that form the basic communicative structures of the law. First-year doctrinal courses provide the foundations on which students build their legal vocabularies and start forming structural legal categorizations. Unfortunately, indoctrinating students into the jargon and traditional organizational structures of the law often narrows the students’ search language and scope of inquiry. Skills faculty can counter these limiting effects by giving students tools to develop robust search vocabularies. One means of developing search vocabularies is to include a section in the research log, discussed above, for students to list new keywords identified in a source. Additionally, students should be encouraged to use a thesaurus to broaden their vocabularies and find alternative ways of constructing searches. Specifically, skills faculty can show students Westlaw’s thesaurus feature in advanced searches.[114] Instructors can also provide opportunities to brainstorm search terms during class by asking students to suggest terms that the instructor writes on a whiteboard or by creating a word cloud through an interactive generator like Poll Everywhere.[115] This is an easy way to help students see the variety of search terms that may be used during their research process.
2. Limitations of Information
Effective legal research requires access to information and often requires use of multiple search systems to find relevant information for a legal issue. Novice legal researchers often assume legal search systems, like Lexis and Westlaw, contain all the cases, statutes, regulations, and secondary sources they will need and that the two systems contain the same information.[116] Although both systems provide access to an extensive amount of legal information and there is significant overlap in the sources available on each, they do not contain all sources nor do they contain exactly the same information. Skills faculty should incorporate demonstrations and exercises that emphasize the differences.
In addition to teaching students the differences between Lexis and Westlaw, it is important to inform students about the cost of using Lexis and Westlaw and show students how to find legal information using free resources. This helps students see how the commodification of information often leads to a reduction in information access and provides them with additional resources for finding relevant information. An easy practice is to incorporate questions on assignments and exercises that require students to use free resources. Another exercise can ask students to find the same source using both a free resource and a subscription database and to compare and contrast the utility of each. For example, an exercise may ask students to try to find the same case opinion on various platforms including Google Scholar, the Caselaw Access Project, Justia, FindLaw, Lexis, and Westlaw. Students can then reflect on the ease of finding the opinion on each platform and examine the different editorial enhancements and their utility.
E. Helping Students Develop Flexible Skills
Technology changes quickly. Skills faculty need to keep up to date on potential changes and incorporate these into legal research instruction, but more importantly students need to develop flexible skills that will help them adapt to future developments. Within the law school community, skills faculty are often the most attuned to legal technology developments. They anticipate changes and can quickly adapt teaching methods and content coverage. They can also use technology changes as learning opportunities. As Kuhlthau recognized in 1991: “The education of users of information systems is becoming more important with each technological advance. Merely devising better means of orienting people to sources and technology, however, does not adequately address the issue of uncertainty and anxiety in the [internet search process].”[117] Students need to find comfort in the constantly changing technology-driven search environment. Shielding them from developments, such as generative AI, is a disservice. Skills faculty can provide strong guided instruction on proper use of evolving technologies and help students become both effective and critical users of these technologies. They can provide classroom environments that allow students to experiment and develop curiosity, which will help students approach future developments with critical curiosity rather than reckless abandon or fear.
One example of incorporating a recent technological advancement into legal research instruction is using prompt creation for generative AI systems, like ChatGPT and Bard, to help students expand search vocabularies and contextualize searches, mitigating the limitations of keyword searches described above.[118] As discussed above, legal research databases promote speed over creative browsing and thoughtful analysis. Generative AI chatbots, like ChatGPT and Bard, work most effectively when prompts are well constructed and concrete. Kubiki recommends users follow the five P’s in prompt creation: prime (provide context); persona (define personality, time, expertise, background); prompt (clearly ask for specific facts and/or a series of actions); product (tell the chatbot what you want); and polish (evaluate the response and refine the prompt or ask follow-up questions to elaborate).[119] Creating prompts can help users contextualize search terms in a way word-based searches do not. Constructing prompts may also help users expand their research vocabularies and think about their issues at a deeper level, since prompts generally involve writing full-sentence inquiries rather than individual words or phrases disengaged from context.
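The snippet below assembles one hypothetical prompt around the five P’s. The facts, jurisdiction, and wording are invented for illustration, and any sources a chatbot returns in response must be verified on a legal research platform before use.

```python
# A hypothetical prompt organized around the five P's described above. The
# structure is the point; the research scenario is invented, and any cited
# sources a chatbot generates must be independently verified.
five_ps = {
    "prime":   "I am researching the felony murder doctrine for a first-year legal writing memo.",
    "persona": "Act as an experienced criminal law research librarian.",
    "prompt":  "Explain the elements of felony murder and identify alternative search terms "
               "and related doctrines I should research.",
    "product": "Give me a short outline plus a list of 8-10 keywords and phrases I can use "
               "in Boolean searches on Westlaw or Lexis.",
    # "Polish" happens after the first response: evaluate it, then refine, e.g.:
    "polish":  "Follow-up: limit the discussion to Massachusetts law after 2017.",
}

full_prompt = "\n".join(five_ps[p] for p in ("prime", "persona", "prompt", "product"))
print(full_prompt)  # paste into a chatbot, then use the "polish" step to refine
```

The “polish” entry is deliberately kept separate: it represents the follow-up refinement that happens only after the researcher has critically evaluated the first response.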
Conclusion
Legal research is a fundamental skill all law students must learn. It is a practical and creative skill that requires Justice Frankfurter’s “poetic quality of the imagination.”[120] It is also a skill transformed by technology. Advanced search technologies have changed how we find, access, and interact with information. Although today’s law students grew up using search technologies in and out of the classroom, they often do not possess the critical research skills necessary to perform legal research effectively, efficiently, and ethically. Skills faculty are well situated to provide structured practice environments for students to practice using research technologies. They understand the benefits and limitations of research technologies and can help students move beyond superficial analysis; understand the concealed, black box nature of search technologies; and recognize the limitations of the systems. As the need for effective and adaptable research skills in the legal profession expands, it becomes all the more essential that skills faculty develop robust legal research instructional programs to ensure students heed Justice Frankfurter’s words and approach research with “the poetic quality of the imagination” in order to know “what questions to put and what directions to give to inquiry.”[121]
Felix Frankfurter, The Conditions for, and the Aims and Methods of, Legal Research, 15 Iowa L. Rev. 129, 134 (1930) (collected materials from the 1929 meeting of the Association of American Law Schools).
Id.
Lauren M. Singer & Patricia A. Alexander, Reading on Paper and Digitally: What the Past Decades of Empirical Research Reveal, 87 Rev. Educ. Res. 1007, 1008 (2017) (stating “97% of students by 2009 had access to a computer in their classroom”); see also Olivia R. Smith Schlinck, OK, Zoomer: Teaching Legal Research to Gen Z, 115 Law Libr. J. 269 (2023).
Thomas Keefe, Teaching Legal Research From the Inside Out, 97 Law Libr. J. 117, 122 (2005) (emphasis in original).
See, e.g., Susan Nevelow Mart, Adam Litzler & David Gunderman, Hunting and Gathering on the Legal Information Savannah, 114 Law Libr. J. 5 (2022).
See, e.g., American Bar Association Section on Legal Education & Admissions to the Bar, Legal Education and Professional Development: An Educational Continuum 7, 157–63 (1992) [hereinafter MacCrate Report]; Final Report of the Testing Task Force, Nat’l Conference of Bar Examiners 21 (Apr. 2021), https://nextgenbarexam.ncbex.org/wp-content/uploads/TTF-Final-Report-April-2021.pdf [https://perma.cc/DWM3-BAS2]; https://nextgenbarexam.ncbex.org/about/implementation-timeline/ (last visited Nov. 28, 2023) (identifying legal research as a foundational skill to be assessed on the NextGen Bar Exam).
See Carol C. Kuhlthau, Inside the Search Process: Information Seeking from the User’s Perspective, 42 J. Am. Soc’y Info. Sci. 361 (1991).
Richard Delgado & Jean Stefancic, Why Do We Tell the Same Stories? Law Reform, Critical Librarianship, and the Triple Helix Dilemma, 42 Stan. L. Rev. 207, 208 n.3, 216 (1989) (“Casebooks convey implicit normative messages by the way in which their authors arrange the cases, comments, and notes.”). See also Sherri Lee Keene & Susan A. McMahon, The Contextual Case Method: Moving Beyond Opinions to Spark Students’ Legal Imaginations, 108 Va. L. Rev. Online 72 (2022).
Frankfurter, supra note 1, at 134; see also Christina L. Kunz, Deborah Schmedemann, Ann L. Bateson, Matthew P. Downs & Susan L. Catterall, The Process of Legal Research xxvi (6th ed. 2004); MacCrate Report, supra note 6.
See, e.g., Singer & Alexander, supra note 3; Kristin E. Murray, Take Note: Teaching Law Students to be Responsible Stewards of Technology, 70 Cath. U.L. Rev. 201 (2021); Schlinck, supra note 3.
Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains 44 (2d ed. 2020).
See, e.g., Corey Seemiller & Meghan Grace, Generation Z Goes to College 174 (2016) (“Research has become less about the process of knowledge acquisition and more about quickly finding the answer needed for an assignment.”).
See, e.g., Geoffrey C. Bowker & Susan Leigh Star, Sorting Things Out: Classification and Its Consequences 7 (Revised ed. 2000).
How Results are Automatically Generated, Google Search, https://www.google.com/search/howsearchworks/how-search-works/ranking-results/ [https://perma.cc/J89W-YSBB] (last visited Nov. 28, 2023).
Id.
See, e.g., Matthew Reidsma, Masked by Trust: Bias in Library Discovery (2019).
Devesh Narayanan & David De Cremer, “Google Told Me So!” On the Bent Testimony of Search Engine Algorithms, 35 Phil. & Techn. 1, 5 (2022) (citing Bernard J. Jansen & Amanda Spink, How Are We Searching the World Wide Web? A Comparison of Nine Search Engine Transaction Logs, 42 Info. Processing & Mgmt. 248 (2006); Bing Pan, Helene Hembrooke, Thorsten Joachims, Lori Lorigo, Geri Gay & Laura Granka, In Google We Trust: Users’ Decisions on Rank, Position, and Relevance, 12 J. Computer-Mediated Commc’n 801 (2007)).
See, e.g., Narayanan & De Cremer, supra note 17, at 8; Stefano Triberti, Alice Chirico, Gemma La Rocca & Giuseppe Riva, Developing Emotional Design: Emotions as Cognitive Processes and Their Role in the Design of Interactive Technologies, 8 Frontiers in Psych. 1773 (2017); Don A. Norman, Emotional Design: Why We Love (or Hate) Everyday Things (2005); Reidsma, supra note 16.
See, e.g., Susan Nevelow Mart, Algorithm as a Human Artifact: Implications for Legal [Re]search, 109 Law Libr. J. 387, 390 (2017); Siva Vaidhyanathan, The Googlization of Everything (And Why We Should Worry) 62 (2011).
Niels Kerssens, When Search Engines Stopped Being Human: Menu Interfaces and the Rise of the Ideological Nature of Algorithmic Search, 1 Internet Hists. 219 (2017) (finding in abstract that “a transformation from human interfaces to software interfaces in online search helped encourage and normalise algorithmic ideology at the expense of a more humanistic ideology of search connected to library traditions”).
See generally Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (2015); Nicholas Mignanelli, Critical Legal Research: Who Needs It?, 112 Law Libr. J. 327, 337 (2020); Mart, Litzler & Gunderman, supra note 5.
Mart, supra note 19, at 398.
See generally Farzana Rashid & Eduardo Blanco, Characterizing Interactions and Relationships Between People, 2018 Proceedings of Conf. on Empirical Methods in Natural Language Processing 4395.
Featured snippets are displayed at the top of the results list when Google’s “systems determine this format will help people more easily discover what they’re seeking.” How Google’s Featured Snippets Work, Google Search Help, https://support.google.com/websearch/answer/9351707?p=featured_snippets&hl=en&visit_id=637866678631256137-1537539129&rd=1 [https://perma.cc/9XW6-VM4V] (last visited Dec. 5, 2023).
See Reidsma, supra note 16, at 110–12.
Narayanan & De Cremer, supra note 17, at 5 (citing Antti Oulasvirta, Janne P. Hukkinen & Barry Schwartz, When More Is Less: The Paradox of Choice in Search Engine Use, Procs. of the 32nd Int’l ACM SIGIR Conf. on Res. & Dev. in Info. Retrieval 516 (2009); Robert Epstein & Ronald E. Robertson, The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections, 112 Procs. of the Nat’l Acad. of Scis. E4512 (2015)).
Frankfurter, supra note 1, at 135.
See, e.g., Carr, supra note 11, at 141.
Janna Anderson & Lee Rainie, Pew Research Ctr., Concerns About the Future of People’s Well-Being, in The Future of Well-Being in a Tech Saturated World (Apr. 17, 2018), https://www.pewresearch.org/internet/2018/04/17/concerns-about-the-future-of-peoples-well-being/ [https://perma.cc/SE44-J4Q4] (quoting Meg Mott of Marlboro College). See generally, Constructivism: Theory, Perspectives, and Practice (Catherine Twomey Fosnot ed., 2d ed. 2005).
Carr, supra note 11, at 140; Patricia M. Greenfield, Technology and Informal Education: What Is Taught, What Is Learned, 323 Science 69 (Jan. 2, 2009).
Lewis T. Jayes, Gemma Fitzsimmons, Mark J. Weal, Johanna K. Kaakinen & Denis Drieghe, The Impact of Hyperlinks, Skim Reading and Perceived Importance When Reading on the Web, PLOS One (Feb. 9, 2022), https://doi.org/10.1371/journal.pone.0263669 [https://perma.cc/C9W3-2U2F]; see also Carr, supra note 11, at 90–93.
Jayes, Fitzsimmons, Weal, Kaakinen & Drieghe, supra note 31.
Carr, supra note 11, at 90–93.
Jayes, Fitzsimmons, Weal, Kaakinen & Drieghe, supra note 31 (“Given sentences with more links were rated as more important, and previous research has shown that skim reading leads to increased focus on links, it seems readers use links as signals through the text to anchor attention, leading to increased comprehension of those sentences.”) (citation omitted).
Carr, supra note 11, at 153.
Michelle D. Miller, Memory Requires Attention, in Remembering and Forgetting in the Age of Technology: Teaching, Learning, and the Science of Memory in a Wired World 148 (2022) (“[D]emanding cognitive tasks like acquiring new learning do require focused attention.”).
The American Bar Association (ABA) recognizes that technological proficiency is a significant aspect of legal practice. Prominently, in 2012, the ABA revised the comments to Model Rule 1.1 of the Model Rules of Professional Conduct to state:
To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.
Model Rules of Pro. Conduct r. 1.1 cmt. 8 (Am. Bar Ass’n 2020) (emphasis added).
The majority of states have adopted the revised Model Rule 1.1. Tech Competence, LawSites, https://www.lawnext.com/tech-competence [https://perma.cc/GKG9-CFBL] (last visited Nov. 28, 2023); Murray, supra note 10.
Quotation marks were not used in the search. Quotation marks are only used here to demarcate the words entered in the search bar.
The search was performed for all content and all states and all federal courts on January 27, 2023.
Commonwealth v. Matchett, 436 N.E.2d 400 (Mass. 1982).
The search was performed for all content and all states and all federal courts on January 27, 2023.
Commonwealth v. Brown, 81 N.E.3d 1173 (Mass. 2017).
Pasquale, supra note 21, at 8 (“[A]uthority is increasingly expressed algorithmically. Decisions that used to be based on human reflection are now made automatically.”) (citation omitted). See also Jamie J. Baker, 2018: A Legal Research Odyssey: Artificial Intelligence as Disruptor, 110 Law Libr. J. 5, 20 (2018); Sarah A. Sutherland, Legal Data and Information in Practice: How Data and the Law Interact (2022).
See, e.g., Alyson M. Drake, Building on CREAC: Reimagining the Research Log as a Tool for Legal Analysis, 52 U. Memphis L. Rev. 57, 59 (2022).
Kuhlthau, supra note 7, at 362.
Delgado & Stefancic, supra note 8, at 222.
Id.
Richard Delgado & Jean Stefancic, Why Do We Ask the Same Questions? The Triple Helix Dilemma Revisited, 99 Law Libr. J. 307, 311 (2007).
Anecdotally, as a new legal research instructor, I was surprised by how often students in research meetings, when asked whether they had read the case (or other source), said no. They assumed the editorial enhancements added by Westlaw and Lexis would give them all the information they needed.
Narayanan & De Cremer, supra note 17, at 5.
See, e.g., Lee F. Peoples, The Death of the Digest and the Pitfalls of Electronic Research: What is the Modern Legal Researcher to Do?, 97 Law Libr. J. 661 (2005).
Joshua M. Silverstein, Using the West Key Number System as a Data Collection and Coding Device for Empirical Legal Scholarship: Demonstrating the Method Via a Study of Contract Interpretation, 34 J.L. & Com. 203, 217 (2016) (citation omitted).
Id. at 218 (citation omitted).
See, e.g., Robert C. Berring, Legal Research and Legal Concepts: Where Form Molds Substance, 75 Calif. L. Rev. 15, 21 (1987).
See, e.g., Pasquale, supra note 21.
Daniel Martin Katz, Dirk Hartung, Lauritz Gerlach, Abhik Jana & Michael J. Bommarito II, Natural Language Processing in the Legal Domain, arXiv: 2302.12039 (Feb. 23, 2023), https://arxiv.org/abs/2302.12039 [https://perma.cc/P9UL-Q9ZE].
Delgado & Stefancic, supra note 8, at 221.
Delgado & Stefancic, supra note 48, at 310 (emphasis added).
Using the Research Map, LexisNexis Support Home, https://lexisnexis.custhelp.com/app/answers/answer_view/a_id/1084019/~/using-the-research-map [https://perma.cc/PJ7C-V78B] (last visited Nov. 28, 2023).
Homographs (words that are spelled the same but differ in meaning and, often, in pronunciation) present the same challenge.
Ian Gallacher, Forty-Two: The Hitchhiker’s Guide to Teaching Legal Research to the Google Generation, 39 Akron L. Rev. 151, 189 (2006).
See, e.g., Christina L. Boyd, Pauline T. Kim & Margo Schlanger, Mapping the Iceberg: The Impact of Data Sources on the Study of District Courts, 17 J. Empirical Legal Stud. 466 (2020); David A. Hoffman, Alan J. Izenman & Jeffrey R. Lidicker, Docketology, District Courts, and Doctrine, 85 Wash. U.L. Rev. 681 (2007); Margo Schlanger & Denise Lieberman, Using Court Records for Research, Teaching, and Policymaking: The Civil Rights Litigation Clearinghouse, 75 UMKC L. Rev. 153, 158 (2006).
Merritt E. McAlister, Missing Decisions, 169 U. Pa. L. Rev. 1101, 1105 (2021) (“The courts have relied increasingly on so-called ‘unpublished decisions’—decisions not designated for inclusion in the West Federal Reporter—[and] academics and practitioners alike have long assumed that unpublished decisions were widely available on free court websites and in commercial databases.”).
Id. at 1105.
Reidsma, supra note 16, at 27.
Sarah Lamdan, Data Cartels: The Companies That Control and Monopolize Our Information 86 (2023).
Id. at 86–87.
Carr, supra note 11, at 107.
See, e.g., Jonathan Zittrain, The Internet Is Rotting, The Atlantic (June 30, 2021), https://www.theatlantic.com/technology/archive/2021/06/the-internet-is-a-collective-hallucination/619320/ [https://perma.cc/N7VG-QPGH]; Jonathan Zittrain, Kendra Albert & Lawrence Lessig, Perma: Scoping and Addressing the Problem of Link and Reference Rot in Legal Citation, 127 Harv. L. Rev. F. 176 (2014).
Due to the speed with which generative AI is developing and being adopted in the legal field, this section may be out of date by the time the article is published. It is meant to provide a basic overview of generative AI and the ways it may impact legal research; the author recognizes that it may not reflect the current state of generative AI at the time of publication. See, e.g., Wolters Kluwer & Above the Law, Generative AI in the Law: Where Could This All Be Headed? (2023), https://470182.fs1.hubspotusercontent-na1.net/hubfs/470182/WK AI Report 7.3.23.pdf [https://perma.cc/L875-GYNY].
Press Release, LexisNexis Announces Launch of Lexis+ AI Commercial Preview, Most Comprehensive Global Legal Generative AI Platform, LexisNexis (May 4, 2023), https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-announces-launch-of-lexis-ai-commercial-preview-most-comprehensive-global-legal-generative-ai-platform [https://perma.cc/64CP-WBSR]; Lyle Moran, How In-House Lawyers can Use AI-Powered CoCounsel, Legal Dive (July 19, 2023), https://www.legaldive.com/news/casetext-cocounsel-ai-legal-assistant-openai-generative-ai-thomson-reuters-purchase/688411/ [https://perma.cc/8JKM-NVBN]; Isha Marathe, 6 Law Firms that Have Launched Internal Generative AI-Powered Chatbots, Law.com (Sept. 8, 2023, 1:59 PM), https://www.law.com/legaltechnews/2023/09/08/6-law-firms-that-have-launched-internal-generative-ai-powered-chatbots/#:~:text=Legal technology companies are not,and assist their attorneys internally [https://perma.cc/U2JK-2TTL].
For more information on generative AI, see Nick Routley, What Is Generative AI? An AI Explains, World Econ. Forum (Feb. 6, 2023), https://www.weforum.org/agenda/2023/02/generative-ai-explain-algorithms-work/ [https://perma.cc/M78L-5CNA]; Owen Hughes, Generative AI Defined: How it Works, Benefits and Dangers, TechRepublic (Aug. 7, 2023), https://www.techrepublic.com/article/what-is-generative-ai/ [https://perma.cc/9A8R-GGLR].
See generally Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser & Illia Polosukhin, Attention Is All You Need (Thirty-First Conf. on Neural Info. Processing Sys., 2017), https://arxiv.org/pdf/1706.03762.pdf [https://perma.cc/E8X7-4WNY]; George Lawton, What Is Generative AI? Everything You Need to Know, TechTarget, https://www.techtarget.com/searchenterpriseai/definition/generative-AI [https://perma.cc/B4GQ-Q9B8] (last visited Dec. 5, 2023).
Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser & Polosukhin, supra note 73; Lawton, supra note 73.
ChatGPT was developed by OpenAI. Bard was developed by Google. Introducing ChatGPT, OpenAI, https://openai.com/blog/chatgpt [https://perma.cc/4UEZ-Z69V] (last visited Dec. 5, 2023); Bard FAQ, Bard, https://bard.google.com/faq [https://perma.cc/NW4W-X2AJ] (last visited Dec. 5, 2023).
For up-to-date information and analysis of impacts of AI on the legal profession, specifically legal research, see AI Law Librarians, https://www.ailawlibrarians.com/ [https://perma.cc/MV2G-3LRH] (last visited Dec. 5, 2023).
I purposefully place quotation marks around the word “dialogue” here to emphasize that these systems are not human and, therefore, these are not true dialogic interactions between humans. Dialogue, Merriam-Webster Dictionary Online, https://www.merriam-webster.com/dictionary/dialogue [https://perma.cc/QW47-48WW] (last visited Dec. 5, 2023) (“a conversation between two or more persons”).
Lance Eliot, Latest Prompt Engineering Technique Aims to Get Certainty and Uncertainty of Generative AI Directly on the Table and Out in the Open, Forbes (Aug. 18, 2023), https://www.forbes.com/sites/lanceeliot/2023/08/18/latest-prompt-engineering-technique-aims-to-get-certainty-and-uncertainty-of-generative-ai-directly-on-the-table-and-out-in-the-open/?sh=151194ff4cc0 [https://perma.cc/K3SM-68UN]; see, e.g., Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters (June 26, 2023), https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/ [https://perma.cc/KNL5-AY32].
Benjamin Weiser, ChatGPT Lawyers Are Ordered to Consider Seeking Forgiveness, N.Y. Times (June 22, 2023), https://www.nytimes.com/2023/06/22/nyregion/lawyers-chatgpt-schwartz-loduca.html [https://perma.cc/BJ2E-WWNG].
Id.
Id.
Editorial, ChatGPT Is a Black Box: How AI Research Can Break It Open, Nature (July 25, 2023), https://www.nature.com/articles/d41586-023-02366-2 [https://perma.cc/FAS7-VNH3].
OpenAI does not make this information available. Eric Griffith, GPT-4 vs. ChatGPT-3.5: What’s the Difference?, PC Mag (Mar. 16, 2023), https://www.pcmag.com/news/the-new-chatgpt-what-you-get-with-gpt-4-vs-gpt-35 [https://perma.cc/65LZ-2T9S].
Will Douglas Heaven, GPT-4 Is Bigger and Better than ChatGPT—But OpenAI Won’t Say Why, MIT Tech. Rev. (Mar. 14, 2023), https://www.technologyreview.com/2023/03/14/1069823/gpt-4-is-bigger-and-better-chatgpt-openai/ [https://perma.cc/K8K2-24MN].
Lance Whitney, ChatGPT Is No Longer as Clueless About Recent Events, ZDNet (Nov. 7, 2023), https://www.zdnet.com/article/chatgpt-is-no-longer-as-clueless-about-recent-events/ [https://perma.cc/VWD3-W2XK].
Antonio Pequeño IV, Major ChatGPT Update: AI Program No Longer Restricted to Sept. 2021 Knowledge Cutoff After Internet Browser Revamp, Forbes (Sept. 27, 2023), https://www.forbes.com/sites/antoniopequenoiv/2023/09/27/major-chatgpt-update-ai-program-no-longer-restricted-to-sept-2021-knowledge-cutoff-after-internet-browser-revamp/?sh=60ab78b6e01b [https://perma.cc/7Z39-URQB].
Rafly Pratama, ChatGPT vs. Bard: Which One Is Better, MS Power User (July 10, 2023), https://mspoweruser.com/chatgpt-vs-bard/#:~:text=Google’s Bard has been praised,from any point in time [https://perma.cc/DG9M-AGVE].
See, e.g., Model Rules of Pro. Conduct r. 1.6 (Am. Bar Ass’n, 2020); Andrew Perlman, The Implications of ChatGPT for Legal Services and Society, The Practice: Harv. L. Sch. (2023), https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/ [https://perma.cc/A4A9-V3BX].
What Is ChatGPT?, OpenAI, https://help.openai.com/en/articles/6783457-what-is-chatgpt [https://perma.cc/76KH-NHV4] (last visited Dec. 5, 2023).
See, e.g., Wolters Kluwer & Above the Law, Generative AI in the Law: Where Could This All Be Headed? (2023), https://470182.fs1.hubspotusercontent-na1.net/hubfs/470182/WK AI Report 7.3.23.pdf [https://perma.cc/P2DR-9HWE].
Devin Coldewey, No ChatGPT in My Court: Judge Orders All AI-Generated Content Must Be Declared and Checked, TechCrunch (May 30, 2023), https://techcrunch.com/2023/05/30/no-chatgpt-in-my-court-judge-orders-all-ai-generated-content-must-be-declared-and-checked/#:~:text=All attorneys appearing before the,was checked for accuracy%2C using [https://perma.cc/PMR9-C6YF].
See, e.g., Gallacher, supra note 61.
“Librarians are information professionals, taught to understand and interrogate the resources at a legal researcher’s disposal, and they might be, as Berring and Van Heuvel assert, the ‘most knowledgeable, experienced, and capable researchers at any law school or law firm. . . .’ And there is no question that in sharing that knowledge with law students, they can provide a unique and valuable perspective on legal research.”
Id. at 173 (quoting Robert C. Berring & Kathleen Vanden Heuvel, Legal Research: Should Students Learn It or Wing It?, 81 Law Libr. J. 431, 447 (1989)); see also Dustin Johnston-Green, Rebecca A. Mattson & Dajiang Nie, Pedagogy, in Introduction to Law Librarianship 189 (2021).
See, e.g., Jessamyn Neuhaus, Geeky Pedagogy 31–42, 74–79 (2019); Joshua R. Eyler, How Humans Learn 51–60, 82–111 (2018).
There are numerous debates concerning the efficacy of the Socratic method in legal education. Though interesting and important, these are outside the scope of this article. See, e.g., Jamie R. Abrams, Reframing the Socratic Method, 64 J. Legal Educ. 562 (2015).
See, e.g., Murray Fellows, U. Md. Francis King Carey Sch. of Law, https://www.law.umaryland.edu/faculty---research/murray-fellows/ [https://perma.cc/FVZ2-MY8N] (last visited Dec. 5, 2023); Teaching the Teachers Conference, https://elibrary.law.psu.edu/tttconference/ [https://perma.cc/SEZ4-6FP2] (last visited Dec. 5, 2023).
See generally Caroline L. Osborne, The State of Legal Research Education: A Survey of First-Year Legal Research Programs, or “Why Johnny and Jane Cannot Research”, 108 Law Libr. J. 403 (2016). Models of legal research instruction vary depending on individual law schools’ curriculum structures. They range from stand-alone credit courses taught by law librarians, to research instruction incorporated into legal writing or other skills courses, to single, non-credit workshops or short research sessions led by law librarians. Some models do not involve law librarians in any legal research instruction. The ALWD/LWI Legal Writing Report of the Institutional Survey, 2019–2020, reported only 17% of respondents had introduction to legal research courses taught independently of other courses, with most of these courses taught by faculty or staff “whose primary responsibilities are as a librarian.” Only 4% of respondents required an advanced legal research course to satisfy graduation requirements. Ass’n of Legal Writing Directors & Legal Writing Inst., ALWD/LWI Legal Writing Survey, 2020–2021, Report of the Individual Survey (2022), https://www.lwionline.org/sites/default/files/2020-2021-ALWD-and-LWI-Individual-Survey-report-FINAL.pdf [https://perma.cc/364T-SBGH] (survey results were impacted by the COVID-19 pandemic and many schools shifting to online education); Ass’n of Legal Writing Directors & Legal Writing Inst., ALWD/LWI Legal Writing Survey, 2019–2020, Report of the Institutional Survey (2021), https://www.lwionline.org/sites/default/files/ALWD LWI 2019-20 Institutional Survey Report FINAL Nov 23 2020.pdf [https://perma.cc/3PP9-EE7A] (survey results were impacted by the COVID-19 pandemic and many schools shifting to online education); Johnston-Green, Mattson & Nie, supra note 92.
Drake, supra note 44, at 60.
Osborne, supra note 96, at 415–16; Yasmin Sokkar Harker, “Information Is Cheap, but Meaning Is Expensive”: Building Analytical Skill Into Legal Research Instruction, 105 Law Libr. J. 79, 85 (2013).
Annie Downey, Critical Information Literacy 50 (2016).
See Cindy Guyer, Using an Infographic to Encourage Deep Reading, RIPS L. Librarian Blog (Aug. 29, 2022), https://ripslawlibrarian.wordpress.com/2022/08/29/using-an-infographic-to-encourage-deep-reading/ [https://perma.cc/84FU-EV75].
See, e.g., Jayes, Fitzsimmons, Weal, Kaakinen & Drieghe, supra note 31.
Olivia Smith Schlinck, The “Food Blog” Scroll and Its Impact on Online Legal Research, RIPS L. Librarian Blog (Nov. 16, 2022), https://ripslawlibrarian.wordpress.com/2022/11/16/the-food-blog-scroll-and-its-impact-on-online-legal-research/ [https://perma.cc/A4P6-4ZFZ].
Id.
Caroline L. Osborne, The Legal Research Plan and the Research Log: An Examination of the Role of the Research Plan and Research Log in the Research Process, 35 Legal Ref. Servs. Q. 179, 193 (2016).
See generally Berring, supra note 54, at 26.
See generally How to Use Star Pagination, LexisNexis: Support, https://lexisnexis.custhelp.com/app/answers/answer_view/a_id/1094056/~/how-to-use-star-pagination [https://perma.cc/HA87-Z854] (last visited Dec. 5, 2023).
Delgado & Stefancic, supra note 8, at 224.
See generally, Yasmin Sokkar Harker, Critical Legal Information Literacy: Legal Information as a Social Construct, in Information Literacy and Social Justice: Radical Professional Praxis 205 (Lua Gregory & Shana Higgins eds., 2013).
Natural Language Processing, Google Research, https://research.google/research-areas/natural-language-processing/#:~:text=Natural Language Processing (NLP) research,%2C ads%2C translate and more [https://perma.cc/DU54-FJA7] (last visited Dec. 5, 2023). Natural language processing (NLP) is “concerned with giving computers the ability to understand text and spoken word in much the same way human beings can.” What is Natural Language Processing (NLP), IBM Cloud Learning, https://www.ibm.com/cloud/learn/natural-language-processing [https://perma.cc/2QG7-GCUF] (last visited Dec. 5, 2023). Textual NLP consists of various levels: Lexical Analysis; Syntactic Analysis; Semantic Analysis; Discourse Analysis; and Pragmatic Analysis. M.K. Anjali & P. Babu Anto, Ambiguities in Natural Language Processing, 2 Int’l J. Innovative Rsch. in Comput. & Commc’n Eng’g 392, 392 (2014).
M. Sara Lowe, Bronwen K. Maxson, Sean M. Stone, Willie Miller, Eric Snajdr & Kathleen Hanna, The Boolean Is Dead, Long Live the Boolean! Natural Language Versus Boolean Searching in Introductory Undergraduate Instruction, 79 Coll. & Rsch. Librs. (2018), https://doi.org/10.5860/crl.79.4.517 [https://perma.cc/D7EA-X2TL].
Id.
Narayanan & De Cremer, supra note 17, at 5 (citing B.J. Jansen, A. Spink & T. Saracevic, Real Life, Real Users, and Real Needs: A Study and Analysis of User Queries on the Web, 36 Info. Processing & Mgmt. 207 (2000)).
Vaidhyanathan, supra note 19, at 60–64; Annalee Hickman Moser, Go Ahead and Google. Then Do a Subject-Based Search, 46 Student Law. 8–9 (2018).
Use the Thesaurus, Thomson Reuters, https://www.thomsonreuters.com/en-us/help/westlaw-edge/searching/thesaurus.html [https://perma.cc/NW6J-39HW] (last visited Dec. 5, 2023).
Poll Everywhere, https://www.polleverywhere.com/ [https://perma.cc/682L-RT8H] (last visited Dec. 5, 2023).
See McAlister, supra note 63, at 1105.
Kuhlthau, supra note 7, at 370.
Jennifer Chapman, The Importance of Words (or What’s a Pat?), RIPS L. Librarian Blog (Sept. 13, 2023), https://ripslawlibrarian.wordpress.com/2023/09/13/the-importance-of-words-or-whats-a-pat/ [https://perma.cc/3J7S-PEAX].
Josh Kubicki, The Power of Generative AI, LinkedIn, https://www.linkedin.com/posts/jpkubicki_genai-prompting-quick-guide-activity-7062039737934516224-gkM-/?originalSubdomain=my [https://perma.cc/G8B4-FWWW] (last visited Dec. 5, 2023).
Frankfurter, supra note 1, at 134.
Id.