By Clara Rodríguez Ribas, PhD Candidate at Universitat Pompeu Fabra
The Special Debate held during APPAM’s 2019 International Conference offered an opportunity to discuss whether empirical evidence can reliably guide public policy in a rapidly changing world, or whether other reasoned criteria for policy choice, such as constituent views or civic values, should carry more weight.
To guide the discussion, the moderator, Irma Perez-Johnson of the American Institutes for Research, presented the case of how policies are responding to new technologies that are driving increased automation and mechanization of work and expanding demand for middle-skilled workers across a wide range of industries. Policy makers are pondering what changes should be made to education to better respond to these shifts, which employers are also demanding.
Brooks Bowden, Assistant Professor of Methods and Policy at the University of Pennsylvania, argued that while tapping available evidence should remain an important part of the public policy process, in many instances other sources of reasoning and information need to be used. She gave the example of a digital learning policy implemented in North Carolina in 2015 without broad research backing. The policy sought to address the changes called for by the evolving job market through, among other elements, the phase-out of textbooks in favor of digital devices, the improvement of connectivity throughout the state, and the provision of teacher professional development. Bowden argued that the need for this ground-breaking policy, despite the lack of evidence behind it, was justified by what the state saw as a human capital crisis.
Stephen Bell, Head of Evaluation and Research at Westat, argued that there is a sufficiently broad range of methodologies and data collection strategies for researchers to provide timely input into policy-making processes. In response to Bowden’s presentation, he questioned where the knowledge of what would work for digital learning was drawn from – was it a belief? From his viewpoint, it would have been better to have some evidence on the impact that digital learning can have; otherwise, there is a high risk of a poor investment that fails to deliver results. A sounder alternative, he suggested, would have been to bring school districts and leadership on board with the thinking process, designing innovative policies that respond to rapid changes in the environment. For example, a randomized controlled trial could have been conducted, for which many schools might have signed up given their high interest in adjusting their education model to the changing environment.
Bowden countered that the decision was logical and responded to a major demand from students and business owners – without waiting for evidence or taking the time to build the buy-in Bell described. Building that buy-in would have required a change in values and other time-intensive investments, leaving no way to respond to constituents’ demands in time. Piloting, she acknowledged, is a useful tool when no evidence is available, but labeling a program a pilot works against building trust and a network of support. The quicker alternative of rolling out the policy was the best approach in the North Carolina case.
Bell counterargued that various methods allow quick-turnaround testing of programs as they are being implemented, even while a program is being gradually scaled up. He underscored that enough tools are available to allow a more orderly building of knowledge and greater confidence in policy design, in education as well as in other sectors.
The debate continued with a Q&A session. A speaker from the floor argued that both presenters had overlooked the role of parents in the process. While businesses are guiding demand, there should also be spaces where different learning modalities and needs can be discussed, to inform decisions on the values and priorities that education policies should reflect.
Another intervention pointed to the need to better define how much should be invested in evaluating a public policy in proportion to the total investment made in the policy itself: how much is too much, or too little? The speaker suggested that more should be done to draw on existing longitudinal surveys and studies to collect evidence on the outcomes and impacts of policies, and to leverage this as evidence to inform future policies. Stephen Bell agreed, adding that ultimately evidence is always needed, even if it is not “gold standard” evidence. “Sometimes quick and dirty evidence should be enough,” he said. Bowden agreed, adding, “As researchers, part of our role is deciding how much to spend on program evaluation, when to work faster, when to work longer - it's different to the role that policy makers play, but important”.