Addressing Challenges in Evaluating Advocacy Initiatives: Two at a Time

It is widely acknowledged among practitioners of the craft that evaluating policy advocacy initiatives is challenging. From non-linearity, uncertainty, and emergence to attribution and the temporality of change, these challenges make evaluations as complex as the policy imperatives being evaluated. On a portfolio evaluation of advocacy initiatives, we identified these as: ‘How can non-linear changes be captured? How can changes in the ecosystem be accounted for? How can we incorporate an organisation’s history, capacity, and strategic thinking into the evaluation? How do we know that the investment is making progress towards envisaged outcomes?’ All of these are hard-to-address questions with no easy answers, and they therefore necessitate methodological adaptation and experimentation.

On a policy advocacy portfolio evaluation, we have started working on understanding two of these questions. First, “how can non-linearity be captured?” and second, “what are some of the measures to determine progress towards envisaged outcomes, i.e. effectiveness?” In the following sections, we detail our thinking on addressing these two questions.

Capturing non-linearity

While a Theory of Change (ToC) is a useful tool, it is linear, which leads to blind spots. As Alford (2017) writes,

while the theory of change approach has done much to advance thinking beyond linear, reductionist and rigid approaches to planning and management … theory of change diagrams lag behind these aspirations and end up encouraging linear, reductionist thinking

That is, there is a disconnect between the way we think about change and the way we visualise it in ToC diagrams.

We have been deliberating on how a systems approach can be used to incorporate non-linearity, complexity, learning, and adaptation. A systems approach pushes beyond the immediate problem to discover patterns, find leverage points in the system, and learn and adapt as the system changes. It may help in answering questions such as:

  • How does the environment within which the program resides operate as a dynamic, complex system?
  • How do strategies engage with the system to leverage impact? What is the one lever that, if pushed, can create a domino effect?
  • How can we test hypotheses to aid learning and adaptation?

To explore and understand these dynamics, a system map or causal loop diagram is often used. Loops represent the dynamics at play and act as drivers of change (virtuous loops), regressors (vicious loops), or maintainers of the status quo (stagnating loops) (Omidyar Group). Over the next few weeks, we will be developing rudimentary system maps taking C3 as the starting case. The maps will visualise factors (individuals, laws, policies, organisations, norms, beliefs) that shape how the system works, along with their upstream and downstream causes. These will then be overlaid on the ToC to better understand how the envisaged strategies engage with the system.
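As a thought experiment, the sketch below shows how such a map could be coded as a signed directed graph so that feedback loops can be detected and classified automatically. The node names and link signs are illustrative placeholders, not drawn from the actual C3 map; the virtuous/vicious/stagnating labels map onto the standard reinforcing/balancing distinction from causal loop analysis.

    import networkx as nx

    # Illustrative causal links for an advocacy system map (placeholders,
    # not the actual C3 map). Sign +1: "more of A leads to more of B";
    # sign -1: "more of A leads to less of B".
    links = [
        ("media coverage", "public awareness", +1),
        ("public awareness", "policymaker attention", +1),
        ("policymaker attention", "media coverage", +1),
        ("policymaker attention", "policy reform", +1),
        ("policy reform", "problem severity", -1),
        ("problem severity", "media coverage", +1),
    ]

    g = nx.DiGraph()
    for cause, effect, sign in links:
        g.add_edge(cause, effect, sign=sign)

    # A loop whose link signs multiply to +1 is reinforcing (virtuous or
    # vicious, depending on what it amplifies); a product of -1 marks a
    # balancing loop that holds the status quo (stagnating).
    for cycle in nx.simple_cycles(g):
        product = 1
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            product *= g[a][b]["sign"]
        kind = "reinforcing" if product > 0 else "balancing"
        print(" -> ".join(cycle), f"[{kind}]")

Even a toy version like this makes the point of overlaying loops on a ToC: strategies that push on a node inside a reinforcing loop can compound, while those inside a balancing loop will meet the system's resistance.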

Measures of progress

As we worked with the grantees to co-create the ToC, two cross-cutting strategies emerged: building salience through media engagement and enhancing the uptake of research/evidence, especially among decision-makers/policymakers. We are presently developing a better understanding of these constructs and their measurement challenges.

Media salience

When we connect with the external world, it is often through a ‘second-hand reality’ created by journalists and media houses. The media picks up issues/topics that it considers ‘newsworthy’ (object salience) and also shapes our understanding, perceptions, and perspectives on those topics (attribute salience) (McCombs 2002). While object salience is ‘external’, attribute salience is associated with personal relevance (Weaver 1982).

Measures of media salience need to capture both object and attribute salience. The former includes measures of attention and prominence (Kiousis 2004). Attention is determined by the number of stories dedicated to the topic in the media, whereas prominence refers to structural and presentational elements of the story – placement, size, pictures, pull quotes, and other aesthetic devices. Attribute salience is captured through valence and is particularly difficult to measure, given that it focuses on whether people are thinking about the issue and how they feel about it. The guide prepared by ORS for the Casey Foundation is a great starting point for thinking about valence. It suggests that valence can be gauged by coding the number of stories that take a positive or negative tone towards the object of the story.
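To make these three measures concrete, here is a minimal sketch of how coded story data might be summarised. The Story fields, the 500-word size threshold, and the equal weighting of the prominence cues are our own illustrative assumptions, not prescriptions from Kiousis (2004) or the ORS guide.

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Story:
        # Hypothetical coding sheet for one media story; field names
        # are illustrative assumptions, not a published codebook.
        on_topic: bool      # does the story cover the issue at all?
        front_page: bool    # prominence: placement
        has_photo: bool     # prominence: visual elements
        word_count: int     # prominence: size
        tone: int           # valence: +1 positive, 0 neutral, -1 negative

    def salience_summary(stories):
        covered = [s for s in stories if s.on_topic]
        if not covered:
            return {"attention": 0, "prominence": 0.0, "valence": 0.0}
        return {
            # attention: simple count of on-topic stories
            "attention": len(covered),
            # prominence: average share of (equally weighted) cues present
            "prominence": mean(
                (s.front_page + s.has_photo + (s.word_count > 500)) / 3
                for s in covered
            ),
            # valence: average coded tone across on-topic stories
            "valence": mean(s.tone for s in covered),
        }

In practice the coding itself (deciding tone, deciding what counts as on-topic) is the hard, labour-intensive part; the summary arithmetic above is the easy bit.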

Uptake of research and evidence

Ascertaining the direct effects of research and evidence is virtually impossible. Scholars, in fact, suggest that policymaking is not evidence-based but evidence-informed (Cairney 2017; Mayne et al. 2018). There is no clear definition of evidence, nor agreed ways of distinguishing high-quality from low-quality evidence. Decision-makers cannot process all the information available to them, and therefore use cognitive and organisational shortcuts to process just enough evidence to take decisions, in an environment over which they have little control. Given this, to better understand the nuances of evidence uptake, grantees must understand how policymakers access, engage with, and use evidence.

Makkar et al. (2016) developed SAGE (Staff Assessment of enGagement with Evidence), a measure that combines interviews with document analysis to evaluate how policymakers engage with research. The tool takes into account:

  • Research engagement options – How are policymakers searching for evidence? How are they evaluating it for relevance and quality? Are they also commissioning new research? Are they interacting with experts?
  • Use of research – Is the use of research/evidence instrumental (to take decisions), conceptual (to shape thinking about the issue and its solutions), tactical (to drive home a point or make a case for an idea), or imposed (to satisfy specific mandates)?
  • Barriers to research use – What internal, external, and organisational factors impede access and use?

The tool is used to garner information on the entire spectrum of research/evidence that the policymaker engages with. Following this, it can be ascertained whether the evidence/research generated by the grantee organisation contributed to decision-making and/or policy formulation. A hypothetical sketch of how such coding might be tallied follows below.
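The sketch tallies interview and document excerpts against the four use categories. The category labels come from Makkar et al. (2016), but the data layout, identifiers, and the tally itself are our own illustration, not part of the published tool.

    from collections import Counter

    # The four use categories are from Makkar et al. (2016); everything
    # else here (identifiers, data layout) is an illustrative assumption.
    USE_CATEGORIES = ("instrumental", "conceptual", "tactical", "imposed")

    def tally_research_use(coded_excerpts):
        """coded_excerpts: (document_id, use_category) pairs coded from
        interview transcripts and document analysis."""
        counts = Counter(cat for _, cat in coded_excerpts
                         if cat in USE_CATEGORIES)
        return {cat: counts[cat] for cat in USE_CATEGORIES}

    excerpts = [
        ("policy-brief-01", "instrumental"),
        ("policy-brief-01", "conceptual"),
        ("research-memo-07", "tactical"),
    ]
    print(tally_research_use(excerpts))
    # {'instrumental': 1, 'conceptual': 1, 'tactical': 1, 'imposed': 0}

Comparing such tallies across the full spectrum of evidence a policymaker uses, versus the subset produced by the grantee, is one way to approach the contribution question.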

Next steps

Building on this thinking, we intend to chisel out probable measures for both issues. Through rounds of discussion and iteration, we hope to arrive at the measures most appropriate to the evaluation context. Ultimately, however, experimentation with actual measurement will tell us whether or not we have moved towards addressing the challenges – two this time.