GODI launch and public dialogue phase

Dear all,

We are pleased to launch the fourth edition of the Global Open Data Index (GODI) today. In the run-up to launch we have been following your discussions in this forum. We are thrilled to see your engagement with our methodology, and consider it very important for advancing open data measurement. Your input should not go unnoticed.

To give these discussions the right venue, we invite you to a public dialogue phase on the results. This phase will be open for the next 30 days. We will respond to your feedback and moderate the discussions to make sure we include stakeholders from different data categories in different countries.

We are starting this dialogue phase because we want to be as transparent as possible, to enable the open data community and government officials to discuss the results, and to incorporate your feedback where it is justified. In this way we hope to develop a fairer and more actionable assessment.

All the best
Danny


@yurukov asked some important questions, which we hope the public dialogue phase will be able to answer: Review process - discussion with contributors? - #6 by dannylammerhirt

I have raised these issues a number of times, and reading through the threads I see that others have had these concerns as well. I don’t understand why we are discussing them only now that the index is officially launched.

From the very beginning it seemed strange that only one submission was possible. This has led to some quite poor-quality entries, at least for Bulgaria. I was assured that if we entered explanations, corrections, or any critique in the forum in a specific way, they would be considered in the review process.

That is what I did: I detailed all the necessary data, with links and explanations where I saw issues. None of that was considered; I only received one inquiry, regarding national budgets. Now I see the results: Bulgaria got 0% in a lot of the categories. Some examples: in 2015 Bulgaria got 90% on water quality, 55% on government spending, and 60% on national maps. In 2016 the score on these and others is 0%.

I have contributed to every edition of this index, and I have a quite clear overview of how the criteria have changed over the years. The entries this time were indeed the most detailed, and I applaud that. However, I don’t see how water quality transparency could fall from 90% to 0% in one year, when data quality for this and other datasets has actually improved greatly with our government’s new CKAN portal.

Some have suggested that this is due to discoverability: if not everyone can find and understand a dataset easily, then it is not open enough. This, however, raises the question of language and domain knowledge. France is in 3rd place, but I can’t understand much from their data portal and documentation. I tried finding the datasets on my own, without using the score page, but couldn’t find most of them. Does that imply bad discoverability?

The reason I am raising these issues is that this index has been a valuable tool, so far at least. I vouched for its quality because I saw how it reflects actual data quality, efforts to improve open data, and the relative measure of success. It has been a vital instrument in the efforts that I and others in Bulgaria invest in pushing public officials to introduce legislation, restructure the administration, and introduce more transparency. The recognition of the steps the administration has taken has led to even more improvements. One example is that all new government software systems must be open source and expose APIs for open data. The government portal recently reached 2,000 datasets, many of which are updated every week.

This index has helped in the political aspect of pushing for these changes. It is important that the index improve in quality and expand in scope. The new questionnaire was indeed a positive step. I also acknowledge that any country’s position in the index is relative to the improvements all others made in the reviewed period, as well as to the stricter criteria of each new year. What I dispute is the data collection and review process, which obviously missed a lot despite the fair warnings. Fixing this now, when the index is officially published, does nothing for the political impact I have valued so much in GODI so far.

Hi Boyan,

Thank you for your comments! Here are my answers as the project manager:

  1. The index scores are not yet set in stone. We learned from past GODIs and decided to publish this edition with a public dialogue phase, adjusting the scoring where needed. Last year, when we tried to get government comments on submissions, we received very few; this year we hope to get more comments, not only from governments but from data users too. We hope that by running the dialogue as we launch, we can create a product that helps get useful data published.
    So basically, the only change in this process is that we moved the dialogue phase from before the launch to the launch itself, making GODI more dynamic.

  2. We also decided to be stricter with our data definitions, because we are trying to promote the publication of useful data that the community wants to see. This is why you will sometimes see 0%: a single key characteristic is missing, which makes the data unusable. We understand that this can feel like “all or nothing”, but we hope it gives data publishers the feedback they need about which data should be open; a simplified sketch of this rule follows after this list. (See this nice example thread here: Public spending in Norway described as "0 % open"!?!?)

  3. I did instruct @dannylammerhirt to check the datasets you talked about and put them into the consultation again, so let’s see. I don’t think the review was bad, but I do agree that language plays a big role here, and we will need to think about how to make this better.

  4. Lastly, this dialogue phase is an experiment, so let’s see whether it works better or worse at making GODI a better tool for the community. Can you join us in the experiment and help us test whether it works? :slight_smile:
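To make the “all or nothing” point in (2) concrete, here is a simplified sketch of how such a scoring rule behaves. The characteristic names and weights below are invented for illustration only; this is not our actual assessment pipeline:

```python
# Illustrative sketch of an "all or nothing" scoring rule.
# The characteristic names and weights are hypothetical, not GODI's real criteria.

# Hypothetical weights per characteristic, summing to 100.
WEIGHTS = {
    "openly_licensed": 30,
    "machine_readable": 25,
    "downloadable": 25,
    "up_to_date": 20,
}

# Characteristics treated as essential: if any one is missing,
# the data counts as unusable and the dataset scores 0%.
KEY_CHARACTERISTICS = {"openly_licensed", "machine_readable"}


def score(dataset: dict) -> int:
    """Return a 0-100 score; 0 if any key characteristic is missing."""
    if not all(dataset.get(key) for key in KEY_CHARACTERISTICS):
        return 0
    return sum(weight for name, weight in WEIGHTS.items() if dataset.get(name))


# A dataset that is machine readable and downloadable but not openly
# licensed still scores 0% under this rule:
print(score({"machine_readable": True, "downloadable": True}))  # -> 0
print(score({"openly_licensed": True, "machine_readable": True,
             "downloadable": True, "up_to_date": True}))        # -> 100
```

Under this kind of rule a score does not slide gradually toward 0%; it drops there as soon as an essential characteristic is judged absent, which is how a score can fall from 90% to 0% between editions even when other aspects of the data improved.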

I think there is much hype about some governments’ positions in the index, as if they were set in stone. I worry this creates a communication risk: a government may appear to be lying if its position gets updated.

The only way I think that could be avoided is by launching without numbers, but that wouldn’t really be a launch, right?

So yes, let’s see how the experiment goes. I’m not thrilled right now.

Some feedback from Belgium here. I think it all boils down to submitting data and the (communication about, or expectation of) corrections.

Correction and discussion

If an entry (or part thereof) was not accepted the first time for one reason or another (maybe the information provided wasn’t clear, or there was a mistake in scoring the different criteria), and this was challenged in this forum, it seems to take a rather long time to

  • either correct the score in the index, based upon the additional information received,
  • or discuss it on the forum and explain what the final decision will be.

Over the last months, some entries were rejected and scores were lowered because (as mentioned in a blog post or comment) false positives were filtered out. Fair enough: some of the information provided to GODI may not have been clear, mistakes on both ends of the screen do happen, etc.

But because every “place” wants to improve its score (which only demonstrates that people value the index; otherwise they would not put so much effort into it), they are eager to see some positive changes as well …

Resource constraints

Please do note that there are also resource constraints at the government level (at least in Belgium, but probably in other countries as well).

GODI is not the only open data benchmark - there are also surveys and benchmarks from the OECD, the EU Data Landscape, … - and often the public servants filling out and following up on these rather different surveys are the same people who are promoting open data and working on open data related projects.

So the smoother the discussion goes, the easier it is for us to explain why a specific dataset is not considered open enough and what can be done about it.

And yes, this means that next year we’ll probably have to put more effort into “getting it right the first time” and checking whether the datasets really are easy to find (and understand) when submitting entries.

But it would also mean faster corrections and decisions just before and just after (promoting) the release of GODI. I do realize this is mostly an effort on a voluntary basis, but right now it is rather difficult to explain to others what the benefit of GODI is while having to wait weeks for a simple correction or a final decision on a score.

To summarize

On the plus side: it is already a good thing that this can be discussed openly, and that people from all over the world put effort into it.

But maybe next year some extra OKFN resources should be brought to the table for the follow-up: faster corrections and motivation of final decisions.


I hear you, and I was afraid of it as well. Hence my tweet about the fact that these are initial results.
We as civil society need to remind our governments that this is not a final ranking or scoring, and I was happy to hear that most of them (including people like @gonzaloiglesias) were aware of it and happy to engage. So let’s see how the experiment ends :slight_smile:

Hi @barthanssens - thank you for the feedback; it is helpful, and we take it into consideration each year. We are aware that there are other benchmarks and try to sync with them (you are welcome to join those calls as well!).

We understand people’s frustration when we work slowly, because we also want to work fast. We are, however, a small team ourselves, and we try to make this phase the best that it can be. Since this is the first time we are running this phase, we want to test our hypothesis about engagement and change it in the next iterations to work better, so we definitely take your input into account. I am sure @dannylammerhirt is documenting this already and will have more comments about it :slight_smile:

Hi @barthanssens,

I think your proposal is very understandable. We need to be able to capture progress that has been made; otherwise we risk being of little relevance to governments. We do see this as a challenge and would love to engage more government officials in the discussions around our data categories and how we assess them. More clarity and communication from our side is needed from the get-go, so that the community and government officials can engage with us even before submission.

You have made great suggestions, @barthanssens, in this thread as well. We encourage others to follow your example and share their experiences with one another (especially around some tricky questions, such as whether or not we should assess election results at polling station level only).