Across philanthropy, many funders are turning to systems thinking as they navigate a period of profound change and uncertainty. More foundations are explicitly naming systems change as part of their work, widening the aperture on their strategies, and acknowledging that the issues they’re tackling are deeply interconnected, shaped by shifting policy environments, strained institutions, and forces beyond any single organization’s control.
What’s far less clear is how to support learning and decision-making in this context. The approaches many of us were trained in—linear logic models, predefined indicators, proof-oriented methods—often fall short or even mislead when outcomes are emergent, indirect, and long-term. People know this, and yet it’s not always obvious what to do instead.
There’s a real need for approaches that help funders and their grantees understand how change is unfolding, identify meaningful signals of progress, and make sense of complex and often ambiguous conditions as they decide how to move forward.
This is the space the Systems Change Community of Practice (CoP) was created to fill. Engage R+D launched the first cohort in 2023 as a place for foundation learning and evaluation leaders to work through these questions together. Foundations, including the James Irvine Foundation, support the CoP’s infrastructure while also participating as learning partners alongside peers from other institutions. Cohorts run 16–18 months and provide time for peer learning, shared experimentation, and honest conversations about the tensions leaders are navigating. As one early participant put it, the goal was simply “to create a space in philanthropy where foundation evaluation staff can come together to learn from one another.”
The CoP’s origins trace back to 2019, when interviews with foundation learning and evaluation leaders confirmed there were few intimate spaces for peers to talk practically about their work. A virtual pilot in 2020 shaped the first formal cohort in 2023, which focused on strengthening practice within individual foundations. By the end, members were asking how the community could contribute more visibly to the field—a question that directly shaped the current cohort’s focus on shared resources and field learning.
Building on that shift, the current cohort has focused more intentionally on generating knowledge that can be shared more broadly. That’s where the idea of a field learning agenda comes in. Rather than each foundation developing its own list of questions, the CoP has been drawing on perspectives from across the field to understand what people are wrestling with and what knowledge the field needs most right now. This has included ongoing conversations within the cohort as well as a facilitated, community-building conversation at GEO’s 2025 Learning Conference.
The learning agenda we’ve developed reflects six broad areas of inquiry:
- clarifying what “counts” as systems change,
- tracking change meaningfully,
- making learning actionable,
- practicing equity and shared accountability,
- adapting for what’s next, and
- communicating about systems change.
Together, they describe the questions people are genuinely trying to answer, rather than abstract concepts.
The three of us shared and tested the draft agenda during our session at the November 2025 American Evaluation Association (AEA) conference. As a warm-up, we used a quick TRIZ exercise to surface the patterns that tend to get in the way of good systems-change learning.
People named things most of us have experienced at some point—working in silos, sticking too rigidly to early plans, relying on metrics that don’t fit the work, or making things more complex than they need to be.
The conversation was honest and practical, reinforcing how much people want approaches that feel adaptive and genuinely useful, especially in environments where time, capacity, and certainty are limited.
The conversation also reinforced a few themes. First, people really are hungry for concrete examples—stories, case materials, and tools that make systems work feel less abstract. Second, different audiences need different kinds of support.
- Boards need grounding in how systems work differs from programmatic work and how best to support it.
- Program teams want tools that help them understand progress and make decisions.
- Learning and evaluation leaders need frameworks, sensemaking practices, and examples of how others are approaching similar challenges.
We also heard something important about cross-pollination. Before its recent dissolution, the U.S. Agency for International Development (USAID) had been doing thoughtful work on many of these same questions—work that participants noted shouldn’t be lost. The call for better connection and knowledge-sharing across efforts showed up repeatedly, underscoring the value of field-level learning infrastructure at a time when many institutions are under strain.
As part of this broader effort, our team recently compiled the Top 25 Systems Change Resources and shared them at AEA. It’s a starting point—a way to make useful materials easier to find, and a signal of the kinds of knowledge we hope to share more regularly. This resource, along with the field learning agenda, is available on Engage R+D’s website as part of an evolving effort to build shared knowledge for systems change measurement and learning.
In the coming year, the CoP will continue refining the learning agenda and developing additional resources responsive to what practitioners say would be most helpful. If you’d like to follow along—or if you’re a funder interested in joining a future CoP cohort—you can sign up to stay connected.
This work is still emerging, and we’re learning as we go. But there’s something grounding about doing it together. The questions people are raising—about accountability, adaptation, and learning when systems themselves are unstable—are shared across institutions. Naming them collectively and working through them across roles and foundations is a step toward a more capable and connected field.