Scrutinising select committees
This is the first of three blogposts about scrutiny’s impact and how scrutiny might be improved. It anticipates our publication, at the end of the summer, of a revised methodology for local areas to use both to evaluate their existing scrutiny arrangements and to review and design new ones, including joint systems and systems covering combined authorities. These practical tools will be based, in part, on the kinds of research to which this post refers.
Good scrutiny must be shown to have value, and to make a real difference to people’s lives. But given that scrutineers are by definition not decision-makers, tracking the impact of their work can be difficult, and this is a challenge that has frequently left us stumped.
We are in good company. Almost since the establishment of the system of Parliamentary departmental select committees in 1979, researchers have struggled to develop measures for their impact, too. While the context is different, the issue is the same – evaluating the actual impact of select committees is extremely tricky.
Probably the “gold standard” of these attempts came a few years ago with the publication by the Constitution Unit of “Selective Influence”, a detailed investigation of the policy impact of select committees, written by Meg Russell and Meghan Benton. In addition to the traditional measure that asks “how many recommendations were accepted”, Meg and Meghan’s research identified six “qualitative” measures for influence:
- Influencing policy debate;
- Spotlighting issues and altering policy priorities;
- Brokering in policy disputes;
- Providing expert evidence;
- Holding Government and outside bodies accountable;
- Generating fear (anticipated reactions): the way Government adjusts its approach in anticipation of what the committee might do. Russell and Benton felt that this could be the most important facet of select committees’ influence.
I suspect that all of these measures will look extremely familiar to scrutiny practitioners in local government (with a few tweaks in the wording). There are some similarities in the weaknesses they found in the way that select committees operate, too. Short-termism, a lack of preparation and poor questioning, the lack of a research base, the quality of Government evidence and of the Government response, the quality of report and recommendation drafting, and poor follow-through are all cited. All have their analogues in the way that local government scrutiny works, in its variable way, across the country.
This work has been built on by similar research undertaken by the Institute for Government. “Select committees under scrutiny”, by Dr Hannah White, benefits from the transformation in visibility and impact that select committees experienced during the 2010-15 Parliament. As a result, it was able to focus more on the role of chairs, which is a critical one. Before 2010, committee chairs were by and large selected by party whips, who used the positions as a form of patronage and as a consolation prize for those not elevated to Cabinet (although speaking for myself, I would far rather be a select committee chair than a Secretary of State). Now, they are elected by secret ballot.
The change in chairing arrangements (along with the rest of the Wright reforms) means that Hannah’s research is as much about innovation as anything else; innovation that has cemented the impact and influence that select committees have traditionally held. Hannah suggests six qualitative sources for influence, which you could say lead to the actual areas of influence that Meg and Meghan identify in their research. Those six are:
- Status (which on its own is not enough);
- Formal powers (although there are of course risks attached to deploying these);
- Relationships (in particular, the relationship between the committee and the relevant Secretary of State);
- Expertise (necessary both to hold to account and to produce authoritative reports);
- Respect (because committees need to be held in respect to have influence);
- Communications (using coverage as a means to secure impact rather than an end in itself).
So much for sources and measures of influence. Taken together, select committees (and scrutiny committees!) which want to improve how they work wouldn’t go far wrong by looking at these in detail and identifying the areas where improvement might be necessary.
Our new evaluation framework for scrutiny in local areas will draw on these sources and measures of influence as a first step to understand where and how improvements might be brought about.
The next step is to establish – on the basis of this research – some practical lessons for those seeking to hold decision-makers to account. Helpfully, Hannah’s research does so for select committees. In the next blogpost in this series I’ll reflect on those lessons from the point of view of local government; in the post after that, I’ll look in more depth at the other factors which our new evaluation framework will take into account.