Assessment of Information Literacy: Lessons from the Higher Education Assessment Movement

Lois M. Pausch, Geology Librarian and Associate Professor of Library Administration
University of Illinois at Urbana-Champaign

Mary Pagliero Popp, Electronic Services Librarian
Indiana University Bloomington

ABSTRACT

Assessment in institutions of higher education is being driven by demands for accountability from legislators, trustees, and accrediting agencies. These assessment efforts are now expanding to library instruction programs. The library literature, however, reveals few rigorous efforts to evaluate the teaching of information literacy concepts and skills. Objective methods are being developed in many teaching disciplines, resulting in a body of research and descriptions of effective evaluation methods. Instruction librarians need to investigate these to determine which of them might be adopted/adapted for use in libraries. This paper reviews higher education assessment methods; identifies useful theories and practices; describes assessment programs in academic libraries; and makes recommendations for changes in library education and for future research.

Introduction

Assessment is a hot topic in higher education today, resulting in a plethora of books and journal articles. There are as many definitions of assessment as there are authors, but essentially it is the process of determining how well an educational system functions and improving parts of it as necessary.1 Most often the term refers to "outcomes assessment," which tries to answer three questions: "(1) What should students learn?; (2) How well are they learning it?; and (3) How does the institution know?"2 National accrediting agencies now specify that plans for assessment, including those related to student learning outcomes, be part of reaccreditation self-studies. Only recently, however, have libraries come to have a place in these assessment programs. For example, one agency, the Middle States Association of Colleges and Schools, mandates that the ability to retrieve and use information be assessed.3 The need for information literacy, the ability to retrieve, manage, and evaluate information, has come to the forefront as technology has made information a commodity in the modern world.

Assessment in Higher Education

Accountability, outcomes measurement, and assessment are the subjects of much discussion in higher education. Recently, a new book by Trudy Banta et al., Assessment in Practice, discussed nine "Principles of Good Practice for Assessing Student Learning," developed by the American Association for Higher Education (AAHE), and proposed a tenth. The original nine principles are:

  1. The assessment of student learning begins with educational values.
  2. Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time.
  3. Assessment works best when the programs it seeks to improve have clear, explicitly stated purposes.
  4. Assessment requires attention to outcomes but also and equally to the experiences that lead to those outcomes.
  5. Assessment works best when it is ongoing, not episodic.
  6. Assessment fosters wider improvement when representatives from across the educational community are involved.
  7. Assessment makes a difference when it begins with issues of use and illuminates questions that people really care about.
  8. Assessment is most likely to lead to improvement when it is part of a larger set of conditions that promote change.
  9. Through assessment educators meet responsibilities to students.

The tenth principle proposed by Banta is:

Assessment is most effective when undertaken in an environment that is receptive, supportive, and enabling.4

Overview of Assessment Methods

Assessment is multidimensional, dealing as it does with cognitive, affective, and skills learning. It may be formative or summative, qualitative or quantitative, or take some other form. Formative assessment deals with programs as they are functioning and tries to foster ongoing improvement for the provider and receiver of the instruction. Summative assessment deals with the aftereffects, usually by testing or some other method employed after the instruction is complete. For higher education, quantitative results--test scores, number of graduates, etc.--initially dominated discussions of assessment. Increasingly, qualitative forms of assessment are taking center stage. Qualitative methods focus on the opinions of learners and on examples or descriptions of what they have learned.5 They test for understanding rather than memorization, "deep" learning rather than "surface" learning. Using qualitative methods, assessment can be "developmental," judging where students are in their understanding, or "ecological," testing students' abilities to apply knowledge in "authentic situations." Assessing in ways that foster "deep" learning is important because research shows students learn what they expect will be assessed.6 

Prus and Johnson list and describe advantages and disadvantages of a variety of assessment options, including:

  • tests--standardized, locally developed, and oral;
  • performance appraisals, simulations, and other competency-based measures;
  • self-reports or third-party reports--surveys, exit interviews, reports from employers, etc.;
  • behavioral observations;
  • portfolios of student work--these can include papers, assignments, etc.;
  • classroom assessments;

and others.7

Other methods noted in the literature include:

  • focus groups8 in which learners are asked about learning, attitudes, methods of teaching used, etc.;
  • satisfaction surveys;9
  • learning logs10 or research diaries;
  • self-assessment11 in which learners do a task and judge how well they feel they performed;
  • capstone courses in the major12 where students synthesize and apply what they know; and
  • case studies.13 

Many of these are also discussed as ways to provide the information that is being requested by university administrators, legislators (in the cases of state-supported institutions), boards of trustees, and other governing bodies to justify the expenditure of funds for higher education.

Assessment Methods and Libraries

What is the impetus for the use of assessment methods in libraries? (For the rest of this paper, the term instruction librarian will be used to denote library instruction, bibliographic instruction, and information literacy librarians, and library instruction will be used for bibliographic instruction and information literacy.) For instruction librarians, the answer is to find out whether what is taught is useful and carries over from the initial session to at least the end of the student's studies. Mechanisms of assessment must be implemented that demonstrate that the requisite cognitive concepts and skills are being learned. If instruction librarians want to continue instructional programs, those programs must be shown to be necessary to other learning experiences; only then can they garner the support of library administrators, campus administrators, faculty, and students.

In today's universities and colleges, library instruction encompasses a variety of methods, especially the traditional single-class instruction session, credit courses or parts of credit courses, workbooks, and handouts. Other vehicles for teaching are being explored, such as using the World Wide Web to present information and tutorials, particularly because academic libraries are increasingly involved in distance education. Assessment can be applied to all of these instructional programs.

In a traditional session, the instructor monitors learning and provides students with an ongoing measure of their progress. Classroom assessment techniques may be chosen by the instructor, who uses them at appropriate points in the class and analyzes the results. These assessment methods are anonymous and non-threatening, and many can be adapted for the single-class instruction session without using too much valuable teaching time.14 Examples include the "Minute Paper," where students are asked to answer one or both of the following questions: "What was the most important thing you learned during this class?" and "What important question remains unanswered?";15 the "One Sentence Summary" of what was learned;16 and the "Muddiest Point" assessment, where students are asked to write down the least clear or most confusing point covered in the class.17
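
Where such feedback is collected electronically--for example, through a World Wide Web form rather than on index cards--the responses can be tallied automatically. The short Python sketch below is purely illustrative and is not part of the techniques as Angelo and Cross describe them; the file name and the one-response-per-line format are assumptions made for the example.

    # Illustrative sketch: tally "Minute Paper" or "Muddiest Point" responses
    # collected one per line in a text file (file name and format assumed).
    from collections import Counter

    def tally_responses(path):
        """Count identical responses so recurring points stand out."""
        counts = Counter()
        with open(path, encoding="utf-8") as f:
            for line in f:
                response = line.strip().lower()
                if response:
                    counts[response] += 1
        return counts.most_common()

    if __name__ == "__main__":
        for response, count in tally_responses("muddiest_points.txt"):
            print(f"{count:3d}  {response}")

Even a crude tally like this shows the instructor, before the next session, which points most need revisiting.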

For credit courses and multiple session instruction, a number of techniques may be utilized. These include standard classroom testing,18 classroom assessment techniques, satisfaction surveys, portfolios,19 and assignments. Another method is to encourage or require students to develop concept maps of their learning. This unique method "involves identifying concepts or ideas pertaining to a subject, and then describing the relationships that exist between these ideas in the form of a drawing or sketch."20 The concept map has been shown to become more complex as students learn more and is a reliable method for testing knowledge.21 In addition, this method provides the student with a self-assessment tool that can become an inducement to learning.
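
For readers who prefer a concrete model, a concept map can be thought of as a labeled graph: concepts are the nodes, and each proposition a student draws is a labeled link between two of them. The minimal Python sketch below uses invented library-research concepts; it is an illustration of the idea, not a tool from the studies cited.

    # A concept map as a labeled graph: each key is a pair of concepts,
    # each value the relationship the student drew between them.
    # The concepts and links are invented for illustration.
    concept_map = {
        ("keyword", "index"): "is searched in",
        ("index", "journal article"): "points to",
        ("journal article", "bibliography"): "contains",
    }

    def complexity(cmap):
        """Crude score: distinct concepts plus the links between them.
        Tracking this over a semester mirrors the finding that maps grow
        more complex as students learn more."""
        concepts = {c for pair in cmap for c in pair}
        return len(concepts) + len(cmap)

    print(complexity(concept_map))  # 4 concepts + 3 links = 7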

Increasingly, instructional efforts are focusing on non-classroom activities, such as computer-based learning (especially in the case of distance education), World Wide Web tutorials, videos, non-credit workshops, and handouts. Assessment of learning from these activities should be undertaken but, for distance learning, must take into account the special requirements of the medium and the difficulties inherent in the lack of face-to-face instruction.22

Assessment of library instruction programs and activities must begin with a statement of mission and purpose, "precise statements of what students are to learn, written in terms of student achievements," and "detailed information about performance standards, evaluation procedures, evaluation results, and use of results."23

Reality

All of the foregoing covers assessment in the ideal. Review of the recent literature on assessment of library instruction reveals few changes in the formal evaluation methodologies employed by librarians. In fact, evaluation of any kind is more likely to be informal in nature, as Bober, Poulin, and Vileno note in their study of the library literature from 1980 to 1993.24 Where formal evaluation is being carried out, little full-program assessment is being done. Most studies report evaluation of a particular part of a program, such as content, methodology used, impact on student attitudes, or the effectiveness of the program itself in terms of cost and required resources.25 These studies also reveal that no control groups are used.

A formal evaluation study, funded by an ALA grant, has been carried out at Cornell on a discrete group of students in one discipline, agricultural economics. This program covered information literacy instruction offered throughout four years of the students' education. The program was aimed at increasing information literacy for the students; information literacy was defined as the ability to retrieve and manage information. Librarians worked closely with faculty to design the program as part of a cohesive curriculum presented in a sequence of classes required of freshmen, sophomores, juniors, and seniors so that teaching and learning were incremental; that is, each class built on the content of the previous class. Assessment was done employing two surveys: one to graduates of the library's instruction program to determine which skills were retained by the students and which proved to be useful in their careers after graduation; and the other to employers of Cornell graduates to identify which information skills and knowledge were desirable in an entry-level employee.26

In recent years, the emphasis of many programs has changed from tools to concepts, but few programs have adopted methods aimed at assessing whether students gained the cognitive skills for analysis, synthesis, and evaluation of information that form the basis for much of the assessment of higher education. One of the most often used evaluation tools, generally carried out immediately post-session, is the study of changes in student attitudes. In most cases, this technique does not measure student learning.27 The Cornell study cited earlier does not evaluate learning outcomes except in terms of computer skills learned and later employed on the job.

Although many higher education institutions are engaged in assessing their programs, courses, curricula, etc., library instruction programs have generally not been included in such evaluations. An exception is Alverno College, a pioneer in the assessment of student learning. The program assessment process, which provides "meaningful feedback to the faculty about patterns of student performance on a range of curriculum outcomes," has not been applied to the library. Alverno librarians, however, are very involved in integrating the Information Literacy program into the curriculum, where Alverno uses a process known as "Student Assessment as Learning." The process, integral to the students' learning, includes observation and judgment of student performance on the basis of explicit criteria, with feedback to the student. Students self-assess their performance (e.g., developing and using a search strategy) based on specific criteria that address both the library's instruction program goals and the eight abilities integrated into the curriculum (communication, aesthetic responsiveness, analysis, global perspectives, effective citizenship, problem solving, valuing in decision-making, and social interaction). Librarians then provide written and verbal feedback on the students' performance.28

In some cases, such as Wartburg College in Iowa,29 Indiana University Bloomington,30 and Pierce College in Washington (as part of the Washington Statewide Initiative on Information Competencies),31 the good news is that academic librarians have established goals and objectives against which to measure the effectiveness of library programs, either as a result of their universities' assessment plans for an accreditation visit or, as is the case in Washington, as part of a statewide initiative.

The Indiana and Wartburg documents also indicate that, although instruction librarians may lay the groundwork with foundation courses and lectures or workshops during the students' early years on campus, librarians should begin to work with other teaching faculty in assessment activities aimed at providing instruction in information literacy within the department's or school's curriculum. This could include helping to develop a set of learning goals and objectives that teaching faculty themselves can use in providing instruction and evaluating their students' abilities in finding and using information, fostering a team-teaching partnership, or making a library lecture or workshop a regular part of a discipline-related course. This will require librarians to give up the idea that only they can teach basic library information skills and to provide help and encouragement to teaching faculty as they incorporate the teaching of information literacy in their classes.

As Pacey notes, user education librarians should make "every effort to re-integrate information skills with the curriculum, with other, complementary, skills, and with the learning environment in the broadest sense, embracing the commonality and collegiality of the whole institution, always building on good practice that has been achieved."32 Pacey goes on to recommend seven ways this might be done. Librarians should:

  1. keep a positive attitude toward other faculty who are willing to promote library skills themselves or in close partnership with librarians;
  2. make informal (and very subtle) efforts to foster, update, and develop library skills in other teaching faculty;
  3. incorporate library skills in in-house and other professional development programs offered to the faculty;
  4. continue to provide subject-oriented information skills training (especially true for subject librarians);
  5. engage in more or less continuous programs of educating users through the promotion of services, especially new services (based on the commercial idea of selling your products);
  6. recognize that students often learn from one another without the help of a professional; and
  7. devote time and effort to the development of a possible common core curriculum unit devoted to various skills--information skills, study skills, etc.33

Pacey notes that the old definition of user education is dead, but the short-term and life-long skills to be taught are more important than ever.34

The Future

In planning for the future of assessment, both institutional and library, the role of the student receives a good deal of attention. Using focus groups, student advisory groups,35 and the like to propose additions or changes to courses or curricula is yet another way to assess programs of study. Students should also be asked for ideas about means of assessment that will actively involve them in the process.

Library educators will have to take a long, hard look at their instructional programs and offer classes in pedagogical theory that will be of use to librarians preparing for the library of the future. If the job of the librarian is changing in response to changes in technology, so, too, should library education change. Newly graduated librarians know a great deal about computers, technology, and databases, but many know nothing about teaching patrons to use those same tools. Without proper preparation, instruction librarians find the continuation or implementation of instructional programs difficult and, often, the work is done badly or not at all. Librarians already active in library instruction need continuing education in pedagogy and assessment techniques.

Much more research on the outcomes of library instruction programs needs to be done. Longitudinal studies, like the one at Cornell, need to be undertaken, where possible. Other studies need to be done to see if one-shot classes have any carryover and, if so, whether students really do benefit. Some methods of assessment, e.g., satisfaction surveys, bibliography reviews,36 use of portfolios, etc., need to be tested to see how valid they are. What is shown in the literature, for the most part, is user satisfaction with the one-shot session, when it is possible that the patrons do not know enough to be dissatisfied.
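
One concrete shape such a carryover study might take is a paired comparison of quiz scores collected before a one-shot session and again some weeks afterward. The Python sketch below uses invented scores and computes the mean gain, a standardized effect size, and a paired t statistic; as the literature review above makes clear, a real study would also need a control group.

    # Sketch of a paired pre/post comparison for a one-shot session.
    # Scores are invented; a real study also needs a control group.
    from math import sqrt
    from statistics import mean, stdev

    pre  = [4, 5, 3, 6, 5, 4, 5, 3]   # quiz scores before the session
    post = [6, 6, 4, 7, 6, 5, 7, 4]   # same students, some weeks later

    diffs = [b - a for a, b in zip(pre, post)]
    d_z = mean(diffs) / stdev(diffs)                     # standardized gain
    t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t statistic
    print(f"mean gain = {mean(diffs):.2f}, d_z = {d_z:.2f}, t = {t:.2f}")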

There needs to be greater cooperation between librarians and other teaching faculty in the provision of instruction in information use that goes beyond library skills alone. There should also be some way for school librarians and university/college librarians to prove or disprove the value of library instruction programs at the high school level. This would fulfill one of the goals noted earlier in this paper: that of ongoing education and assessment.

Conclusion

Changes in accreditation policies and in external requests for accountability provide an opportunity for academic librarians to participate in the assessment of institutional effectiveness. Much has been done in research and practice in higher education evaluation on which academic libraries can base assessment programs. To help in the process, more publications about the assessment efforts of all kinds of libraries are needed, as is a profession-wide consensus on outcome measures. Some academic libraries have already begun--now is the time for the rest of us to join in the effort.

NOTES

  1. Alexander W. Astin, Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education, (New York: American Council on Education, Macmillan Publishing Co., 1991), 2.
  2. Wanda E. Gill, "Conversations about Accreditation: Middle States Association of Colleges and Schools: Focusing on Outcomes Assessment in the Accreditation Process: A Presentation for the 1993 American Association for Higher Education Double Feature Conference on Assessment and Continuous Quality Improvement, Saturday, June 12, 1993, 8:15 - 9:15 am,"(Washington, DC: ERIC Document Reproduction Service, 1993), ED 358 792, 2-3.
  3. Susan Griswold Blandy, "The Librarian's Role in Academic Assessment and Accreditation: A Case Study," in Assessment and Accountability in Reference Work, Susan Griswold Blandy, Lynn M. Martin, Mary L. Strife, editors, The Reference Librarian, no. 38, (New York: Haworth Press, 1992), 72.
  4. Trudy W. Banta, et al., Assessment in Practice: Putting Principles to Work on College Campuses, (San Francisco: Jossey-Bass, 1996), 2, 62.
  5. Paul Hager and Jim Butler, "Two Models of Educational Assessment," Assessment & Evaluation in Higher Education, 21 (December 1996), 367-377; Mary L. Mittler and Trudy H. Bers, "Qualitative Assessment: An Institutional Reality Check," New Directions for Community Colleges, no. 88 (Winter 1994), 61-62; John Biggs, "Assessing Learning Quality: Reconciling Institutional, Staff, and Educational Demands," Assessment & Evaluation in Higher Education, 21 (March 1996), 5-15.
  6. Biggs, "Assessing Learning Quality," 8.
  7. Joseph Prus and Reid Johnson, "A Critical Review of Student Assessment Options," New Directions for Community Colleges, no. 88 (Winter 1994), 69-83.
  8. For more information about focus group use and the Nominal Group Technique in assessment, see Trudy H. Bers, "The Popularity and Problems of Focus Group Research," College and University, 44 (Spring 1989), 260-268; Mary Chapple and Roger Murphy, "The Nominal Group Technique: Extending the Evaluation of Students' Teaching and Learning Experiences," Assessment & Evaluation in Higher Education, 21, no. 2 (1996), 147-158; Kathy Kramer Franklin and W. Hal Knight, "Using Focus Groups to Explore Student Opinion, Paper Presented at the Annual Meeting of the Mid-South Educational Research Association Conference, Biloxi, MS, November 1995," (Washington, DC: ERIC Document Reproduction Service, 1995), ED 388 200; Anne Hendershott and Sheila Wright, "Student Focus Groups and Curricular Review," Teaching Sociology, 21 (April 1993), 154-159; Linda Costigan Lederman, "Assessing Educational Effectiveness: The Focus Group Interview as a Technique for Data Collection," Communication Education, 39 (April 1990), 117-127.
  9. Janet G. Donald and D. Brian Denison, "Evaluating Undergraduate Education: The Use of Broad Indicators," Assessment & Evaluation in Higher Education, 21, no. 1 (1996), 23-39.
  10. Susan Sellerman Obler, Julie Start, and Linda Umbdenstock, "Classroom Assessment," in Making a Difference: Outcomes of a Decade of Assessment in Higher Education, Trudy Banta, editor, (San Francisco: Jossey-Bass, 1993), 216.
  11. Carl J. Waluconis, "Student Self-Evaluation," in Making a Difference: Outcomes of a Decade of Assessment in Higher Education, Trudy Banta, editor, (San Francisco: Jossey-Bass, 1993), 244, 250, 252-255; Georgine Loacker and Marcia Mentkowski, "Creating a Culture Where Assessment Improves Learning," in Making a Difference: Outcomes of a Decade of Assessment in Higher Education, Trudy Banta, editor, (San Francisco: Jossey-Bass, 1993), 19-20.
  12. Jeffrey A. Seybert, "Community College Strategies: Assessing Student Learning," Assessment Update, 6 (July-August 1994), 8-9.
  13. Ellen G. Hawkes and Patricia Pisaneschi, "Academic Excellence for Adults: Improving the Teaching/Learning Process Through Outcomes Assessment: Paper Presented at the National University Conference on Lifelong Learning: Meeting the Higher Education Needs of Adult Learners, San Diego, Feb. 14, 1992," (Washington, DC: ERIC Document Reproduction Service, 1992), ED 345 098.
  14. Thomas A. Angelo and K. Patricia Cross, Classroom Assessment Techniques: A Handbook for College Teachers, 2nd ed., (San Francisco: Jossey-Bass, 1993), 3-6. For more information about the effectiveness of classroom assessment, see Michelle L. Kalina and Anita Catlin, "The Effects of the Cross-Angelo Model of Classroom Assessment on Student Outcomes: A Study," Assessment Update 6 (May-June 1994), 5, 8.
  15. Angelo and Cross, Classroom Assessment Techniques, 148.
  16. Ibid., 183.
  17. Ibid., 154.
  18. For more information about planning good tests of learning, see Grant Wiggins, "Creating Tests Worth Taking," Educational Leadership, 49 (May 1992), 26-33. For examples of testing in library instruction, see: F. W. Lancaster, If You Want to Evaluate Your Library, 2nd ed., (Champaign: University of Illinois Graduate School of Library and Information Science, 1993), 240-246; Donald Barclay, "Evaluating Library Instruction: Doing the Best You Can With What You Have," RQ, 33 (Winter 1993), 195-202.
  19. For example, Lendley C. Black, "Portfolio Assessment," in Making a Difference: Outcomes of a Decade of Assessment in Higher Education, Trudy Banta, editor, (San Francisco: Jossey-Bass, 1993), 139-150; Margaret E. Gredler, "Implications of Portfolio Assessment for Program Evaluation," Studies in Educational Evaluation, 21, no. 4 (1995), 431-437; Janet E. Boyle, "Portfolios: Purposes and Possibilities," Assessment Update, 6 (Sept.-Oct. 1994), 10-11; Patrick L. Courts and Kathleen McInerny, Assessment in Higher Education: Politics, Pedagogy, and Portfolios, (Westport, CT: Praeger, 1993), chapters 3 and 4. An excellent review of research on portfolios appears in: Cindy S. Gillespie, et al., "Portfolio Assessment: Some Questions, Some Answers, Some Recommendations," Journal of Adolescent & Adult Literacy, 39 (March 1996), 480-491.
  20. Christine S. Sherratt and Martin L. Schlabach, "The Applications of Concept Mapping in Reference and Information Services," RQ, 30 (Fall 1990), 60; Carla J. Schick, "The Use of Concept Mapping to Evaluate the Effects of Both Bibliographic Instruction and Discipline-Based Art Education Instruction on the Library Skills of Elementary Art Teachers," (M.S. Thesis, Kent State University, 1991), ERIC Document: ED 352 981.
  21. Kimberly M. Markham, Joel J. Mintzes, and M. Gail Jones, "The Concept Map as a Research and Evaluation Tool: Further Evidence of Validity," Journal of Research in Science Teaching, 31 (January 1994), 91-101.
  22. Sources for ideas about assessing online courses and distance education include: Dennis R. Ridley, et al., "Assessment Plan for CNU Online," (Newport News, VA: Christopher Newport University, 1995), ERIC Document: ED 392 829; Robert V. Price and Judi Repman, "Instructional Design for College-Level Courses Using Interactive Television," Journal of Educational Technology Systems, 23 (1994/95), 251-263; Sandra Wills and Carmel McNaught, "Evaluation of Computer-Based Learning in Higher Education," Journal of Computing in Higher Education, 7 (Spring 1996), 106-128; Thomas C. Reeves and Richard M. Lent, "Levels of Evaluation for Computer-Based Instruction," in Instructional Software: Principles and Perspectives for Design and Use, Decker F. Walker and Robert D. Hess, editors, (Belmont, CA: Wadsworth Publishing Co., 1984), 188-203; Nancy Nelson Knupfer and Barbara I. Clark, "Hypermedia as a Separate Medium: Challenges for Designers and Evaluators," in Proceedings of Selected Research and Development Presentations at the 1996 National Convention of the Association for Educational Communications and Technology, 18th, Indianapolis, 1996, (Washington, DC: ERIC Document Reproduction Service, 1996), ED 397 805. For World Wide Web assessment ideas, look at: William J. Gibbs and He Ping Cheng, "Formative Evaluation and World Wide Web Hypermedia," in Eyes on the Future: Converging Images, Ideas, and Instruction, Selected Readings from the Annual Conference of the International Visual Literacy Association, 27th, Chicago, IL, October 18-22, 1995, (Washington, DC: ERIC Document Reproduction Service, 1995), ED 391 506; Mark Stover and Steven D. Zink, "World Wide Web Home Page Design: Patterns and Anomalies of Higher Education Home Pages," RSR, 24 (Fall 1996), 7-20. For non-credit courses, see Craig A. Clagett and Daniel D. McConochie, "Accountability in Continuing Education: Measuring Noncredit Student Outcomes," AIR (Association for Institutional Research) Professional File, no. 42 (Fall 1991), ERIC Document: ED 347 383, 4.
  23. Ruth Green, "Quality Standards for Academic Program Evaluation Plans," Assessment Update, 5 (Nov.-Dec. 1993), 4-5.
  24. Christopher Bober, Sonia Poulin, and Luigina Vileno, "Evaluating Library Instruction in Academic Libraries: A Critical Review of the Literature, 1980-1993," Reference Librarian, no. 51/52 (1995), 53-71.
  25. Ibid., 57.
  26. Mary Ochs, et al., "Assessing the Value of an Information Literacy Program," (Ithaca, NY: Cornell University Albert R. Mann Library, 1991), ERIC Document: ED 340 385.
  27. Bober, Poulin, and Vileno, "Evaluating Library Instruction in Academic Libraries," 58.
  28. Amy W. Parenteau, electronic mail message to authors, Feb. 13, 1997.
  29. Engelbrecht Library, Wartburg College, "Library Assessment Plan, March 1995," (Waverly, IA: Wartburg College, 1995).
  30. Assessment Planning Committee, "An Assessment Plan for Information Literacy, May 1, 1996 (Final)," (Bloomington, IN: Indiana University Bloomington Libraries, 1996).
  31. Debra Gilchrist, "To enABLE Information Competency: The Abilities Model in Library Instruction," in Programs That Work, ed. by Linda Shirato, 19-33. (Ann Arbor, MI: Published for Learning Resources and Technologies, Eastern Michigan University by Pierian Press, 1997). Presentation at the 24th Annual National LOEX Library Instruction Conference, Programs That Work, May 1996.
  32. Philip Pacey, "Teaching User Education, Learning Information Skills; or, Towards the Self-Explanatory Library," The New Review of Academic Librarianship, 1 (1995), 100.
  33. Ibid., 100-102.
  34. Ibid., 102.
  35. Douglas Lee Hull, "Involving Students as Active Participants in Assessment," Assessment Update, 5 (Sept.-Oct. 1993), 1.
  36. For examples, see Bonnie Gratch, "Toward a Methodology for Evaluating Research Paper Bibliographies," Research Strategies, 3 (Fall 1985), 170-177; Virginia E. Young and Linda G. Ackerson, "Evaluation of Student Research Paper Bibliographies: Refining Evaluation Criteria," Research Strategies, 13 (Spring 1995), 80-93.