Evaluating Differently

For many of us working in informal science learning, we're just coming out of our busiest season of the year. Projects wrapped up. A new administration took office. We navigated a holiday season unlike any other. And for most of us, several federal and private grant proposals were due.

Among other things, 2020 brought a heightened awareness of the disparities in our communities, which has pushed many of us to think more carefully about the ways we do our work. For my colleagues and me here at ILI, that included considering how we might evaluate differently, whether that meant finding innovative ways to evaluate the influence and reach of programs or reconsidering how, and whose, voices we represent in our work.

For me, this past year has been a deeply reflective process of understanding how I represent the voices of others and how I come to know what I know. A significant portion of my work focuses on expanding how and whom we reach in informal science learning. Frequently, we have used surveys and other traditional evaluation methods to measure dimensions such as reach, attitudes, interest, and knowledge gain. These tools are grounded in methodologies that have been tested over a long period of time, and they can offer us an opportunity to learn about our existing visitors. However, they can also leave a gap in truly knowing our communities. The longer I am a part of the field, the more I realize that the methods we rely on so heavily were most often developed within the same silos as the cultural constructs that have disenfranchised so many. It leads me, and many in our field, to wonder: Whose voices are represented in our tools? How do our ways of knowing develop? And which ways of knowing are not represented in our work?

For instance, when we talk about capital, which definition of capital are we discussing? Over the last four years, we have worked on an NSF-funded project studying changes in the science capital of young adults participating in a park-based informal science learning experience. What we began to recognize is that the ways we described the science-related activities participants could take part in were not necessarily meaningful to the communities we surveyed. This is not to say that these individuals and communities are not engaging in free-choice science pursuits; it is simply that their other ways of knowing are not recognized in our programs. While we may value activities such as visiting science centers and libraries, some communities may value these less than fishing or farming, activities that are just as much about science. These other ways of knowing and engaging may not fall under middle-class, Western ways of engaging, but it is essential that we recognize them in our work.

What do we do, then, when our tools do not represent and meet the needs of our communities? In many cases, we are grappling with these very ideas. In seminars hosted by the Visitor Studies Association (referenced below) and papers shared on InformalScience.org, several leaders in our field are helping us think through these very questions. In one VSA webinar last spring, Dr. Cecilia Garibay led us in a discussion on advocating for and supporting the work of diversity, equity, access, and inclusion. The conversation was lively, including a fair bit on how and in what ways diverse voices should be included in evaluation work. It goes without saying that without diverse voices at the table, both in the design process and in the evaluation itself, our studies are unlikely to be reflective of the people they represent. What I found missing from the conversation, though, was a reflection on our reliance on tools that may or may not serve their function. The tools we use to evaluate may not be well suited for the projects we develop.

I have reflected more deeply on how I can improve on this in my own work. In what ways can I learn from the people I am working with, and from those already doing good work in the field? In the case of the example above, we tested the science capital tool through a multi-month process combining focus groups and counter-storytelling. It was through these qualitative stories that we recognized significant omissions in the response options of our primary evaluation tool. While this may seem like an easy fix, how often do we stop to re-validate our tools for our communities? How often do we consider which methods and tools would be culturally appropriate for our communities? How often do we step outside our boxes and invite our communities to collaborate in the evaluation design process? And are we asking ourselves how our communities would define impact or success?

What’s next in understanding and evaluating other ways of knowing? We’re still unsure. But what we do know is that ILI, our networks, and the community around us are thinking critically about this topic. We’d love to hear more about your successes, needs, and challenges in doing this work as we learn together. For more information, please see the resources below. To discuss, please contact the author.

Dr. Monae Verbeke

Director of Evaluation

Institute for Learning Innovation



Past Webinars and Papers:


Effective and Equitable Community Engagement: Collaborating with Integrity and Reciprocity

Shifting Paradigms: Embracing Multiple Worldviews in Science Centers

Attending to Diversity, Equity, Access and Inclusion in the time of Coronavirus



Fu, A. C., Kannan, A., Shavelson, R. J., Peterson, L., & Kurpius, A. (2016). Room for rigor: Designs and methods in informal science education evaluation. Visitor Studies, 19(1), 12–38.


Peterman, K., Verbeke, M., & Nielsen, K. (2020). Looking back to think ahead: Reflections on science festival evaluation and research. Visitor Studies, 23(2), 205–217.


Solórzano, D. G., & Yosso, T. J. (2002). Critical race methodology: Counter-storytelling as an analytical framework for education research. Qualitative Inquiry, 8(1), 23–44.


Future Webinars:


The State of DEAI in Museums and What it Means for Visitor Studies
Wednesday, February 10, 2021, 4:00–5:00 pm Eastern



Posted Jan 29, 2021