CRAAP test
The CRAAP test is a method for evaluating the objective reliability of information sources across academic disciplines. CRAAP is an acronym for Currency, Relevance, Authority, Accuracy, and Purpose.[1] Because of the vast number of sources available online, it can be difficult to tell whether a given source is trustworthy enough to use for research. The CRAAP test aims to make it easier for educators and students to determine whether their sources can be trusted. By applying the test while evaluating sources, a researcher can reduce the likelihood of using unreliable information. The test, developed by Sarah Blakeslee[2] and her team of librarians at California State University, Chico (CSU Chico),[1] is used mainly by higher-education librarians at universities. It is one of various approaches to source criticism.
C: Currency
The first step toward knowing whether a source is reliable is to check its currency.[1] Currency means that the information is the most recent available. Students and educators may ask when the information was posted or published, whether it has since been revised or updated, and whether the research assignment can draw on multiple sources across different platforms. The topic also matters: some topics need current news, media, or the latest research findings, while others can be covered adequately by older sources. These questions help pinpoint recent trends in the information and reflect how quickly research changes as technology expands. If the source is a website, the links used to access it must be working.[1]
R: Relevance
When looking at sources, the relevance of the information affects how well-rounded a research endeavor will be. One question to ask in this category is how the topic relates to the information given in a source. Researchers should also consider the source's intended audience. Given the vast number of topics and the increasing access to information, assessing relevance helps readers find what they are actually looking for. There is also a check on whether the material is at an appropriate level of comprehension; it must be neither too elementary nor too advanced for the students' or educators' needs. Because of the variety of sources available, educators should keep an open mind about which sources to use, and they should decide whether they are comfortable enough with a source to cite it.[1]
A: Authority
Beyond currency and relevance, the authority of a source must also be considered. Students and educators should identify the author, publisher, or sponsor before trusting the information. The author's credentials and affiliations matter because they help readers judge whether the author is qualified to write on the topic. There should also be contact information for the publisher or author. Establishing the authority of a source assures students and educators that the information can be used and trusted, creating a relationship of trust between the reader and the author of the work.[1]
A: Accuracy
Closely tied to trustworthiness, the accuracy of a source's contents must be traceable to its origin. The information presented must be supported by evidence, which can include findings, observations, or field notes. The work should be peer-reviewed or refereed, and it should be verifiable against other sources or common knowledge. The language used should be unbiased and free of emotional appeals, since the source is being used for fact retrieval. The content should also be free of spelling, grammatical, and typographical errors.[1]
P: Purpose
The purpose of a source helps readers know whether the information is right for their research. A source's purpose may be to inform, teach, sell, entertain, support research, or serve the author's own interests; the author's intentions should be clear. Readers should also consider whether the information is fact, opinion, or propaganda, and whether it reflects political, personal, religious, or ideological bias.[1] Knowing the purpose of the information makes searching for sources much easier.
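The five criteria above amount to a simple yes/no checklist. As a purely illustrative sketch (not drawn from the CRAAP literature; the question wording, function names, and scoring here are invented for this example), the checklist could be represented in code as a set of questions with a tally of how many criteria a source satisfies:

```python
# Hypothetical sketch of the CRAAP checklist as a scoring rubric.
# The questions paraphrase the criteria described above; the idea of
# summing booleans into a score is this example's own simplification.

CRAAP_QUESTIONS = {
    "Currency": "Is the information recent or updated, and are links working?",
    "Relevance": "Does the information fit the topic and the intended audience?",
    "Authority": "Is the author, publisher, or sponsor identifiable and qualified?",
    "Accuracy": "Is the content supported by evidence and free of errors?",
    "Purpose": "Is the intent clear and free of bias or propaganda?",
}

def craap_score(answers):
    """Count how many of the five criteria a source satisfies.

    `answers` maps a criterion name to True (satisfied) or False.
    Missing criteria count as unsatisfied.
    """
    return sum(1 for criterion in CRAAP_QUESTIONS if answers.get(criterion, False))

# Example: a source that fails only the Authority check.
example = {"Currency": True, "Relevance": True, "Authority": False,
           "Accuracy": True, "Purpose": True}
print(craap_score(example))  # prints 4
```

In practice the test is applied qualitatively rather than as a numeric score; this sketch only shows how the criteria act as independent checks on a single source.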
History
The test was devised by Sarah Blakeslee[3][2] in the spring of 2004, when she was creating a workshop for first-year instructors. Blakeslee was frustrated that she could not remember the criteria for evaluating different sources; after much thought, she came up with the acronym, wanting to give students an easier way to determine which sources are credible.[2] One test that preceded the CRAAP test is the SAILS test (Standardized Assessment of Information Literacy Skills), created in 2002 by a group of librarians at Kent State University to assess students' information literacy skills. The SAILS test focuses more on scores as a quantitative measure of how well students evaluate their sources.[4] While the SAILS test is more specific in its terms of evaluation, it shares the same objectives as the CRAAP test.
Website evaluation
At least one university has used the CRAAP test to teach students how to evaluate online content. In a 2017 article, Cara Berg, a reference librarian and co-coordinator of user education at William Paterson University, emphasizes website evaluation as a tool for active research.[5] At Berg's university, for example, library instruction is given to roughly 300 classes across different subjects, each requiring students to look up sources for some type of research. Website evaluation using the CRAAP test was incorporated into the university's first-year seminar to help students hone their research skills.[5]
Challenges in the classroom
When the CRAAP test was first implemented at William Paterson University, there were some practical challenges. The website-evaluation workshop felt rushed, and in most cases librarians could not cover all the material in one class session. Because the website-evaluation portion was rushed for lack of time, student performance on an assessment of website evaluation was poor. To address these problems, the librarians developed a "flipped" method: students watched a video covering two of the three workshop sections on their own time, so that in-class instruction could be devoted entirely to website evaluation for the full class period. Student performance on assessments of their knowledge of CRAAP for website evaluation improved after this change.[5]
Pedagogical uses
The CRAAP test is generally used in library instruction as part of a first-year seminar. At William Paterson University, students were required to take this class as part of the graduation requirement.[5] Beyond English courses, many other courses, such as science and engineering classes, have used the method as well; the test is applied in the same way as for website evaluation and is used across all courses. Universities that use the CRAAP test include Central Michigan University,[6] Benedictine University,[7] and the Community College of Baltimore County,[8] among many others. Other schools use the test to help students do well on assignments in subjects that require research papers.
Alternatives and criticisms
In 2004, Marc Meola's paper "Chucking the Checklist" critiqued the checklist approach to evaluating information,[9] and librarians and educators have explored alternative approaches.
Mike Caulfield, who has criticized some uses of the CRAAP test in information literacy,[10] has emphasized an alternative approach using step-by-step heuristics that can be summarized by the acronym SIFT: "Stop; Investigate the source; Find better coverage; Trace claims, quotes, and media to the original context".[11][12]
In a December 2019 article, Jennifer A. Fielding raised the issue that the CRAAP method's focus is on a "deep-dive" into the website being evaluated, but noted that "in recent years the dissemination of mis- and disinformation online has become increasingly sophisticated and prolific, so restricting analysis to a single website's content without understanding how the site relates to a wider scope now has the potential to facilitate the acceptance of misinformation as fact."[13] Fielding contrasted use of the CRAAP method, a "vertical reading" of a single website, with "lateral reading", a fact-checking method to find and compare multiple sources of information on the same topic or event.[13]
In a 2020 working paper, Sam Wineburg, Joel Breakstone, Nadav Ziv and Mark Smith found that using the CRAAP method for information literacy education makes "students susceptible to misinformation". According to these authors the method needs thorough adaptation in order to help students detect fake news and biased or satirical sources in the digital age.[14]
References
- Korber, Irene. "LibGuides: Literature Reviews: Evaluating Info". libguides.csuchico.edu. Archived from the original on 2018-05-08. Retrieved 2018-05-21.
- Blakeslee, Sarah (2004). "The CRAAP Test". LOEX Quarterly. 31 (3). Archived from the original on 2018-06-12. Retrieved 2018-05-28.
- "Library Staff Directory | Meriam Library". library.csuchico.edu. Archived from the original on 2018-09-05. Retrieved 2018-05-27.
- "Project SAILS: Standardized Assessment of Information Literacy Skills". Project SAILS. May 29, 2018. Archived from the original on May 28, 2018. Retrieved June 3, 2018.
- Berg, Cara (March–April 2017). "Teaching Website Evaluation The CRAAP Test and the Evolution of an Approach". Internet@schools. 24 (2): 8–11. Archived from the original on 10 August 2019. Retrieved 25 July 2019.
- Renirie, Rebecca. "Research Guides: Website Research: CRAAP Test". libguides.cmich.edu. Archived from the original on 2017-12-27. Retrieved 2018-06-12.
- Hopkins, Joan. "Research Guides: Evaluating Sources: The CRAAP Test". researchguides.ben.edu. Archived from the original on 2018-06-12. Retrieved 2018-06-12.
- Casey, Sharon. "Research Guides: Evaluate It! : C.R.A.A.P. Criteria". libraryguides.ccbcmd.edu. Archived from the original on 2018-06-12. Retrieved 2018-06-12.
- Lenker, Mark (October 2017). "Developmentalism: Learning as the Basis for Evaluating Information". portal: Libraries and the Academy. 17 (4): 721–737. doi:10.1353/pla.2017.0043. S2CID 148728541. Archived from the original on 2019-01-01. Retrieved 2019-12-19.
- Caulfield, Mike (September 14, 2018). "A Short History of CRAAP". hapgood.us. Archived from the original on 2019-04-01. Retrieved 2019-06-14.
- Fister, Barbara; MacMillan, Margy (May 31, 2019). "Mike Caulfield: Truth Is in the Network: Smart Talk Interview, no. 31". projectinfolit.org. Project Information Literacy. Archived from the original on 2019-08-06. Retrieved 2019-06-14.
- See also: Stellino, Molly (December 12, 2018). "Shortcut roundup: quick guides to become media literate". newscollab.org. News Co/Lab at the Walter Cronkite School of Journalism and Mass Communication, Arizona State University. Archived from the original on 2019-04-06. Retrieved 2019-06-19. Stellino lists Caulfield's four moves (an earlier version of SIFT) alongside other acronyms and heuristics and then summarizes the common factors that she sees in all of them.
- Fielding, Jennifer A. (December 2019). "Rethinking CRAAP: Getting students thinking like fact-checkers in evaluating web sources". C&RL News. 80 (11): 620–622. doi:10.5860/crln.80.11.620. S2CID 214267304. Archived from the original on 2019-12-31. Retrieved 2019-12-31.
- Wineburg, Sam; Breakstone, Joel; Ziv, Nadav; Smith, Mark (2020). "Educating for Misunderstanding: How Approaches to Teaching Digital Literacy Make Students Susceptible to Scammers, Rogues, Bad Actors, and Hate Mongers". Stanford History Working Group Working Paper. Working Paper A-21322. Archived from the original on 2021-08-11. Retrieved 2021-08-11.