Hi,
To build on top of Rufus' answer, I will mention two factors -
- The index is a crowdsourced effort. As part of it, we have learned that it is used as a learning tool. For some people, the index is their first encounter with the open data definition and the concept of openness. We still see errors in the machine-readable and license answers. While I know we need to improve the definitions we are using, we also need to take this into consideration.
- Looking again at the crowdsourced answers, as Rufus said, some of the questions will be hard to assess - how do we know that a dataset is anonymised properly? This can only be answered by privacy experts. How can we know that it has complete metadata context?
- First and foremost, I see the index as an advocacy tool. I think that we can add questions on subjects we want to advocate for, but we should also take into consideration that the global index is a global benchmark, and it is already biased toward developed countries (and it will stay like this for a while - developed countries started open data years before the global south). We try to make it as valid and reliable as we can within our limitations (crowdsourced data is known to be unreliable), but this is not academic research. It is a tool, and in order to use it wisely we need to see what we need to add to it so it will serve our network. I think some of the questions here are good for that, and some are missing the point. Taking machine readability as an example again - some government officials still struggle with the concept of machine readable. Taking points off because the format is XLS and not CSV can cause frustration with the work they are doing and actually harm the process.
To conclude, we should always revise our methodology, and some of the suggestions here are good, but we need to take into consideration the purpose and nature of the tool.
Best,
Mor