Theory and Frameworks
W.K. Kellogg Foundation. (2017). The evaluation handbook (updated with a racial equity lens).
Hood, S., Hopson, R. K., & Frierson, H. T. (Eds.). (2015). Continuing the journey to reposition culture and cultural context in evaluation theory and practice. Charlotte, NC: Information Age Publishing.
Hood, S., Hopson, R., & Frierson, H. (Eds.). (2005). The role of culture and cultural context. Greenwich, CT: Information Age Publishing.
Hood, S., Hopson, R., & Kirkhart, K. (2015). Culturally responsive evaluation: Theory, practice, and future implications. In K. Newcomer, H. Hatry, & J. Wholey (Eds.), Handbook of practical program evaluation (4th ed., pp. 281–317).
Frierson, H. T., Hood, S., Hughes, G. B., & Thomas, V. G. (2010). A guide to conducting culturally responsive evaluations. In J. Frechtling (Ed.), The 2010 user-friendly handbook for project evaluation (pp. 75–96). Arlington, VA: National Science Foundation.
Bowman, N. R., Dodge Francis, C., & Tyndall, M. (2015). Culturally responsive indigenous evaluation: A practical approach for evaluating indigenous projects in tribal reservation contexts. In S. Hood, R. Hopson, & H. Frierson (Eds.), Continuing the journey to reposition culture and cultural context in evaluation theory and practice. Charlotte, NC: Information Age Publishing.
LaFrance, J., & Nichols, R. (2010). Reframing evaluation: Defining an indigenous evaluation framework. The Canadian Journal of Program Evaluation, 23(2), 13-31.
Mariella, P., Brown, E., Carter, M., & Verri, V. (2009). Tribally-driven participatory research: State of the practice and potential strategies for the future. Journal of Health Disparities Research and Practice, 3(2), 41–58.
Waapalaneexkweew (Bowman, N., Mohican/Lunaape) (2018). Looking backward but moving forward: Honoring the sacred and asserting the sovereign in Indigenous evaluation. American Journal of Evaluation, 39(4), 543–568.
Waapalaneexkweew (Bowman, N., Mohican/Lunaape), & Dodge‐Francis, C. (2018). Culturally responsive indigenous evaluation and tribal governments: Understanding the relationship. In F. Cram, K. A. Tibbetts, & J. LaFrance (Eds.), Indigenous Evaluation. New Directions for Evaluation, 159, 17–31.
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.
Dozois, E., Langlois, M., & Blanchet-Cohen, N. (2010). DE 201: A practitioner’s guide to developmental evaluation. Victoria, BC: International Institute for Child Rights and Development and The J.W. McConnell Family Foundation.
Gamble, J. A. A. (2008). A developmental evaluation primer. The J.W. McConnell Family Foundation.
Evaluation Planning and Design
Evaluation Questions Checklist from Western Michigan University. This checklist contrasts what evaluators should and should not do when developing evaluation questions.
Evaluation Design Checklist from The Evaluation Center at Western Michigan University. This checklist helps evaluators think through focusing the evaluation; collecting, organizing, analyzing, and reporting information; and administering the evaluation.
Types of Evaluation from the CDC. Includes a table that compares evaluation types, when to use each type, what each type shows, and why each type is useful.
Frechtling, J. (Ed.). (2010). The 2010 user-friendly handbook for project evaluation. Arlington, VA: National Science Foundation.
Logic Model resources from the University of Wisconsin Extension. Includes: examples, templates, bibliography, popular resources, and an online course.
Logic Model Development Guide from the W.K. Kellogg Foundation. You can download the full PDF guide from the W.K. Kellogg Foundation site.
Developing a Logic Model or Theory of Change from the Community Tool Box. Includes: what is a logic model, when can a logic model be used, how do you create a logic model, what makes a logic model effective, and what are the benefits and limitations of logic modeling?
Onwuegbuzie, A. J., Dickinson, W. B., Leech, N. L., & Zoran, A. G. (2009). A qualitative framework for collecting and analyzing data in focus group research. International Journal of Qualitative Methods, 8(3), 1-21.
Grudens-Schuck, N., Allen, B. L., & Larson, K. (2004). Methodology brief: Focus group fundamentals. Extension Community and Economic Development Publications, 12.
Simon, J. S. (1999). The Wilder nonprofit field guide to conducting successful focus groups. Amherst H. Wilder Foundation.
Thayer-Hart, N. (Ed.). (2013). Facilitator tool kit: A guide for helping groups get results. Office of Quality Improvement, University of Wisconsin-Madison.
Classroom Assessment Techniques (CATs) are part of the Field-tested Learning Assessment Guide (FLAG). For information on interviews and focus groups, select "Interviews and Focus Groups" from the menu on the left side of the screen, then scroll to the bottom of that page for more information about the technique, including tools.
ERIC/AE Staff (1997). Designing structured interviews for educational research. Practical Assessment, Research & Evaluation, 5(12).
The Student Assessment of Their Learning Gains website is a free course-evaluation tool that allows college-level instructors to gather learning-focused feedback from students.
Frary, R. B. (1996). Hints for designing effective questionnaires. Practical Assessment, Research & Evaluation, 5(3).
Survey Research from the Colorado State University Writing Studio provides information on types of surveys, designing surveys, and other survey information.
Retrospective Pre-test Method
MacDonald, G., Wingate, L., & Lee, M. (2015, December 9). The retrospective pretest method for evaluating training [Webinar]. An EvaluATE webinar.
Hill, L. G., & Betz, D. L. (2005). Revisiting the retrospective pretest. American Journal of Evaluation, 26(4), 501-517.
Pratt, C. C., McGuigan, W. M., & Katzev, A. R. (2000). Measuring program outcomes: Using retrospective pretest methodology. American Journal of Evaluation, 21(3), 341–349.
Rhodes, T. (2009). Assessing outcomes and improving achievement: Tips and tools for using rubrics. Washington, DC: Association of American Colleges and Universities.
Note: this site provides learning outcome rubrics at no cost.
Lunsford, E., & Melear, C. T. (2004). Using scoring rubrics to evaluate inquiry. Journal of College Science Teaching, 34(1), 34–38.
Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Needham Heights, MA: Allyn & Bacon.
Note: Chapter 6, "Using Rubrics to Provide Feedback to Students," covers rubrics.
Corbin, J., & Strauss, A. (2008). Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed.). Thousand Oaks, CA: SAGE Publications.
Creswell, J. W. (2008). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: SAGE Publications.
Glesne, C. (2011). Becoming qualitative researchers: An introduction (4th ed.). Boston, MA: Pearson/Allyn & Bacon.
Saldaña, J. (2009). The coding manual for qualitative researchers. Los Angeles: SAGE Publications.
Lamont, M., & White, P. (Eds.). (2005). Proceedings from the Workshop on Interdisciplinary Standards for Systematic Qualitative Research. Arlington, VA: National Science Foundation.
Peshkin, A. (2000). The nature of interpretation in qualitative research. Educational Researcher, 29(9), 5-9.
The American Evaluation Association (AEA) provides a listing of qualitative software options. The listing includes information about each product, its developer, a site link, platform, price, and scope.
Sharpe, E., Raby, R., & Raddon, M. (n.d.). Sleuthing the layered text [online tutorial]. Retrieved from MERLOT.
Boone, H. N., Jr., & Boone, D. A. (2012). Analyzing Likert data. Journal of Extension, 50(2).
McGraw-Hill. (2001). Statistical primer for psychology students [online tutorial].
Naugatuck Valley Community College provides Institutional Research Definitions and Explanations. On this site you will find common terms, common statistical tests, and further reading.
Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: SAGE Publications.
Note: the original edition is no longer available, but an updated edition (2018) is.
Frechtling, J., & Sharp, L. (Eds.). (1997). User-friendly handbook for mixed-methods evaluations. Arlington, VA: National Science Foundation.
Gary Miron, a professor of evaluation, measurement, and research at Western Michigan University, has created an Evaluation Report Checklist (2004). It includes: title page; executive summary; table of contents; introduction and background; methodology; results chapters; summary, conclusions, and recommendations; and references and appendices.
Stephanie Evergreen has created an Evaluation Report Layout Checklist (2013). It includes best practices for report type, arrangement, graphics, and color.
Robertson, K. N., & Wingate, L. A. (2017). Checklist for program evaluation report content. Kalamazoo, MI: EvaluATE, The Evaluation Center, Western Michigan University.
Evergreen, S. (2017). Presenting data effectively: Communicating your findings for maximum impact (2nd ed.). Los Angeles, CA: SAGE Publications.
Stephanie Evergreen (from Evergreen Data: International Reporting & Data Visualization) is a leading expert in data visualization. Check out her blog for data visualization tips, tricks, and tools.
Her Qualitative Chart Chooser 3.0 is particularly helpful for writing reports.