Entry for election results / United Kingdom


#1

A key feature of this indicator is that data is available at polling station level.

In the UK, the Representation of the People Act 1983 requires ballots from a polling station to be mixed with those from at least one other polling station before any counting occurs. This is to protect the identity of voters in the event that a candidate receives a low number of votes.

I think, in the interests of transparency, it is worth having this information as a note on the relevant data page. Otherwise a reader, who could be from any part of the world, will only get a partial view of the situation. They can then make their own judgement as to whether they regard this as a positive or negative state of affairs.


What are GODI's key datasets (and how we define them)?
#2

Actually the same happens in Belgium: counting stations are required to mix (paper) ballots from polling stations (which in turn have multiple ballot boxes) before counting may start, so there are only results available on the counting station and municipality level.

When voting is done electronically (depends on the municipality), votes are counted at municipality level, again to protect the voters.
(and Belgium is one of the few countries still requiring voters to actually show up)

It would be interesting to have an overview of the different voting systems in the countries participating in the GODI, especially places that make a distinction between polling and counting stations.


Election openness for Norway
#3

Thank you that is very useful to know.

Nick


#4

The previous discussion on this is here: Election data criteria

I infer that OKI’s point of view is: ‘the optimum balance between openness and privacy is to report vote counts down to polling station level’. Although there is no research in this area to back this up, the results of the survey suggest that about a third of countries do publish to this level, so there is some justification.

My main problem with the methodology is why OKI decided that 0% is a representative score, when there is still plenty of value in the data that is published in this area. Countries that have published no result data at all are getting the same score as the UK.

OKI knocks off a percentage point for not being up to date or for not providing bulk download, yet other comparatively minor issues, like the level of detail, knock off the full score. If there is no nuance in the scoring then the final rankings don’t mean much.
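The difference between the two approaches can be sketched in a few lines. Note that the criteria names and weights below are invented for illustration only; they are assumptions, not GODI's actual methodology or weighting:

```python
# Hypothetical illustration of all-or-nothing vs. graduated scoring.
# Criteria and weights are invented for this sketch, NOT GODI's real method.
CRITERIA = {
    "openly_licensed": 30,
    "machine_readable": 20,
    "bulk_download": 15,
    "up_to_date": 15,
    "polling_station_detail": 20,
}

def all_or_nothing(met: dict) -> int:
    """One failed 'key' criterion zeroes the whole score."""
    if not met.get("polling_station_detail", False):
        return 0
    return sum(w for c, w in CRITERIA.items() if met.get(c, False))

def graduated(met: dict) -> int:
    """Each criterion contributes its weight independently."""
    return sum(w for c, w in CRITERIA.items() if met.get(c, False))

# A UK-like country: every criterion met except polling-station detail.
uk_like = {c: True for c in CRITERIA}
uk_like["polling_station_detail"] = False

print(all_or_nothing(uk_like))  # 0  - same as publishing nothing
print(graduated(uk_like))       # 80 - reflects the value of what IS published
```

Under all-or-nothing scoring, the UK-like country and a country publishing nothing both score 0; graduated scoring separates them, which is the incentive argument made in this thread.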


#5

Dear @dread, thank you very much for your feedback. As part of the public dialogue, we will follow up on your input and will get back to you in the coming days.
All the best,
Oscar


Entry for Election Results / Norway
#6

Hi all,

I wrote a response in another topic, where I’d like to discuss our dataset definitions more generally. If you want, please feel free to join the discussion there and leave your remarks on election data in our list of key datasets.

I will also talk with the National Democratic Institute to discuss our results.


#7

@dannylammerhirt With regard to your discussions with NDI, can I just point out a reason that individual polling station data may not be a good idea for a country to publish: cases of low turnout at a single polling station.

For example, the England and Wales Police and Crime Commissioner elections see very low turnouts: ~15% in Wales, and extremely low in particular polling stations (here’s a story about one station that saw zero votes cast).

If there were to be one voter at a polling station, or a small number of voters who vote for the same candidate, the secrecy of the ballot would be seriously undermined, and for that reason publication at polling station level would be inadvisable.

Some Irish islands with very small populations have their own polling stations, some with electorates of 50 people or fewer. Here is an example of what happened in one island polling station in a recent referendum: https://twitter.com/westernpeople/status/602060652610486273 10 Against, 2 For. If you knew the people there, it might be entirely possible to identify who cast those votes.


#8

Hi @BobHarper1,

Thanks for this pointer. This seems to be a realistic concern especially in areas with small numbers of voters assigned to a polling station. I will discuss this with the NDI and will report back what the conclusion is.

Danny


#9

This is a generic comment that I also have made in the UK water quality thread:

An approach where one or two missing parameters lead to a zero mark across the board is not going to motivate anyone to improve if the amount of work needed looks overwhelming. On the other hand, addressing one or two weak criteria and seeing an incremental improvement in the main indicator can seem a lot more achievable.

More generally, looking at the process from another perspective, it can be seen as a negative approach because we are not given anything back which says ‘if you do x, your mark should improve’. We can only infer what this might be, which is easy to misunderstand.

Making a positive statement of what needs to be done gives a clearer audit trail: the data provider being ‘audited’ would have something concrete to review and either confirm or deny as the correct state of affairs. It also gives them a clear improvement path if they want to follow it. In an ideal world, a data producer needs a link to a page they can show to their decision makers and say, ‘this is how we stand at the moment, these are the gaps, do you want to do something about them?’


#10

Another example of election data being reused