26/04/12 · Digital Government, Research

Why we need another composite index (on public e-Services)

The debate on composite indicators, or synthetic indices, in the e-government field has been ongoing since the publication of the first EU-level benchmarking exercises back in 2002. Many analysts and researchers consider composite indicators to be “black boxes” (see for example this paper by Frank Bannister, 2007). We put in still-intelligible indicators, and what comes out is a mysterious number and, inevitably, a mysterious rank. The feeling is that it’s a weird combination of voodoo (or overly complicated math), subjectivity, weak frameworks, and unbelievable results (can you really believe that Italy has put 100% of its public services online at the highest possible level of interactivity?).


A three-day seminar at the JRC-IPSC of the European Commission opened my mind. There I found a motivated and highly skilled team coordinated by Andrea Saltelli, which, by the way, was responsible for drafting the OECD-EC Handbook on Constructing Composite Indicators.

While it was clear to me that things like data quality, framework reliability and transparency – when it comes to showing how the results have been computed – are always crucial, I learned that the quality and robustness of composite indicators can and must be checked, and that more advanced and reliable techniques can be applied. I suspect that if we applied tools such as sensitivity analysis or uncertainty analysis to the existing “black box” indicators, we would get an idea of how much ranks can vary and, therefore, of how weak the resulting policy indications can be.
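To make the idea concrete, here is a minimal sketch of an uncertainty analysis on a composite indicator’s ranking. The data are entirely hypothetical (five regions, three normalized indicators); the point is only to show how randomizing the weighting scheme, one of the main modelling assumptions, reveals how stable or fragile each region’s rank really is.

```python
import random
import statistics

random.seed(42)

# Hypothetical normalized scores for 5 regions on 3 indicators (0 = worst, 1 = best).
scores = [
    [0.9, 0.4, 0.7],
    [0.6, 0.8, 0.5],
    [0.3, 0.9, 0.6],
    [0.8, 0.2, 0.9],
    [0.5, 0.5, 0.5],
]
n_regions, n_indicators = len(scores), len(scores[0])
n_runs = 1000
ranks = [[] for _ in range(n_regions)]  # ranks[r] collects region r's rank in each run

for _ in range(n_runs):
    # Random weights summing to 1 (uniform over the simplex via exponential draws).
    draws = [random.expovariate(1.0) for _ in range(n_indicators)]
    total = sum(draws)
    w = [d / total for d in draws]
    # Weighted-sum composite score for each region under this weighting scheme.
    composite = [sum(wi * si for wi, si in zip(w, row)) for row in scores]
    # Rank 1 = highest composite score.
    order = sorted(range(n_regions), key=lambda r: -composite[r])
    for rank, region in enumerate(order, start=1):
        ranks[region].append(rank)

# A wide rank range means the region's position is an artifact of the weighting choice.
for r in range(n_regions):
    print(f"region {r}: median rank {statistics.median(ranks[r])}, "
          f"range [{min(ranks[r])}, {max(ranks[r])}]")
```

A full sensitivity analysis in the Saltelli tradition would go further and apportion the output variance to each uncertain input (weights, normalization, imputation), but even this crude Monte Carlo exercise is enough to expose how fragile a published league table can be.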

I’ve been working for quite some time on a composite indicator on eServices (eGovernment, eEducation, eTransportation, to be extended to eHealth and Smart Cities) for the TAIPS research project funded by the European Investment Bank, together with my friends and colleagues Marco Biagetti, Davide Arduini and Professor Antonello Zanfei. I presented some preliminary results at the 1st EIBURS-TAIPS Conference at Urbino University (here you can find all the papers and slides from the conference), in front of a group of innovation policy gurus including Paul David, Ian Miles, Edward Steinmueller and Keith Smith.
Here is the abstract and my slides.

Abstract The study aims at providing evidence on regional differences in the diffusion of ICT in the public sector in Italy, with a focus on different types of public e-services (eGovernment, eEducation and Intelligent Transport Systems). Data are obtained by merging four different surveys carried out by Between Co. (2010-11) and Istat – Italy’s National Bureau of Statistics (2009). We pursue a three-fold objective. First, we attempt to overcome the prevailing tendency to consider the various domains of public e-service provision as separate from one another. In other words, measuring the progress of digital government requires a holistic view to capture the wide spectrum of public e-services in different domains (e.g. local and national administrative procedures, transportation, education, etc.) and the different aspects of service provision (not just e-readiness or web interactivity, but also multi-channel availability and take-up). Second, we tackle a major drawback of existing statistics and benchmarking studies of public e-services, which are largely based on counting the services provided online, by including more sophisticated indicators on both the quality of the services offered and back-office changes. Third, we develop a sound, open and transparent methodology for constructing a public eServices composite indicator based on the OECD/EC-JRC Handbook. This methodology, which incorporates expert opinion into a Data Envelopment Analysis, will allow us to combine data on different e-service categories and on different aspects of their development, and will enable us to define a ranking of Italian regions in terms of ICT adoption and public e-service development.
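For readers unfamiliar with the DEA approach mentioned in the abstract, here is a minimal sketch of the underlying “benefit of the doubt” (BoD) idea, on made-up data (four regions, three e-service dimensions; none of these numbers come from the study). Each region is scored under the weights most favorable to it, subject to the constraint that no region can exceed a score of 1 under those same weights. In the unrestricted case this linear program has a simple closed-form solution: all weight goes to the indicator where the region is relatively strongest.

```python
# Benefit-of-the-doubt (BoD) scoring. The underlying linear program is:
#   max  sum_i w_i * y[o][i]
#   s.t. sum_i w_i * y[j][i] <= 1  for every region j,  w_i >= 0
# With no further weight restrictions, the optimum puts all weight on the
# indicator where region o performs best relative to the column maximum,
# so the score reduces to a simple ratio.

# Hypothetical indicator values for 4 regions on 3 e-service dimensions.
y = {
    "A": [0.9, 0.4, 0.7],
    "B": [0.6, 0.8, 0.5],
    "C": [0.3, 0.9, 0.6],
    "D": [0.8, 0.2, 0.9],
}
n = len(next(iter(y.values())))
col_max = [max(row[i] for row in y.values()) for i in range(n)]

def bod_score(region):
    # Best relative performance across indicators = unrestricted BoD score.
    return max(y[region][i] / col_max[i] for i in range(n))

for region in y:
    print(f"{region}: {bod_score(region):.3f}")
```

The “expert opinion” ingredient described in the abstract would enter as additional restrictions on the weights (e.g. bounds on each indicator’s share of the total score), at which point the closed form no longer applies and the program must be solved with a general LP solver such as `scipy.optimize.linprog`. The attraction of BoD weighting is precisely that it answers the black-box objection: each region is evaluated under the weights that flatter it most, so a low score cannot be blamed on an unfavourable weighting choice.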