Approval process? Waiting on review of SF Parcels


Quick question: can someone review the submission and either approve it or, if there are problems, let me know so they can be fixed?

I have a submission that doesn’t seem to have been reviewed, and I can’t tell whether it’s been reviewed and rejected or just not touched.

In the fall, we got a tweet that SF OpenData didn’t have Parcels. However, we did, and they’ve been up longer than I’ve been with the Open Data Program. Also, Parcels was definitely lit up green after the initial Open Data Day/CodeAcross event. Somehow it reverted to unavailable. That may be a bug or a change, but I can’t figure out the audit trail there.

Anyway, we tweeted back the link. I was hoping someone would resubmit the record with the data, but that didn’t happen, so in February I decided to submit it myself. The status on the data has remained “Unavailable” since February, and I’m trying to figure out who could/should review the entry. If it has already been reviewed and rejected, I can’t see the status of my submission. Maybe I’m missing it, but I think I’ve looked pretty hard for it.

Anyway, I would love to get this rectified, as I’ve now seen these metrics show up in people’s reports, and they’ve been wrong since at least the fall.


Thanks for flagging and we’ll get this sorted quickly one way or the other.

One question: would you be up for becoming a “reviewer” for the US Census, i.e. having the power to review and approve/reject these submissions? As your enquiry demonstrates, we need some assistance here, and we think you’d be a good (sensible, trustworthy, knowledgeable) person to help out on this.

This would also mean you could get this specific item sorted - yourself!

If so, we just need to add your email (the one you log in with on the census site — probably your gmail) to the reviewer list.


@jasonlally any thoughts? We can move ahead on getting that specific item sorted but it would be great to have your assistance as a reviewer.


@rufuspollock Thanks for following this up!

I’ve got mixed feelings about being a reviewer. I don’t mean to be difficult or overcomplicate things, but I’d prefer to have someone else check my work. I’ve asked whether anyone in our local Brigade would be up to the task, and they’ll ping you here if someone is.

While I can assure you I personally wouldn’t abuse the role, on principle I think it’s good to have lightweight governance and accountability structures to maintain the integrity of the data and the credibility of the SF open data program specifically, and of open data programs broadly.

Here are my broader concerns, which can maybe be addressed with lightweight solutions.

Concern: I can’t figure out who else on the platform is a reviewer. This leads to confusion over roles and responsibilities. I’ve ended up here on this forum, which is great, but it’d also be great if there were simple ways to “nudge” a reviewer.

Possible Solution: Reviewers on the platform are featured with simple profiles with affiliation (maybe by City) to create a sense of community and accountability with and among reviewers.

Concern: I don’t see a clear set of expectations or guidelines for reviewers (maybe that exists once you become one). For example, is there a policy on reviewing your own work?

Possible Solution: Elevate how to become a reviewer and what the roles, responsibilities, and expectations are. Right now this information is far down on the about page, but maybe it deserves something more “front and center.” Maybe each City page could have a call to action (“Become this City’s Census Librarian” or “Become a Peer Reviewer”).

Concern: Related to the above, I think there should be a peer-review requirement. This is just a simple mechanism to maintain data integrity. I’m not so worried about nefarious activity, as there seem to be audit trails in place that can control for that; however, there is the possibility that someone makes a simple mistake or oversight in approving their own submission. I’d hate for a mistake to be interpreted as ill intent, potentially undermining the credibility of the program, if that makes sense.

Possible Solution: Implement simple technical controls that don’t allow you to approve your own work. In the absence of technical controls, or alongside them, introduce a policy in the expectations mentioned above. It could be that this is already in place; apologies if I’m rehashing existing features.

All that being said, I’m happy to be a reviewer generally, just not of my own submissions, and I’d even ask for more clarity on whether it’s appropriate at all for me to be a reviewer, given my role as the program manager for an open data program.

None of the above is meant to be negative. This is a wonderful effort and very useful. In fact, the reason I outline all this is that I’d like to see it sustained and nurtured so it becomes an enduring, maturing resource that can help continue to raise the bar on open data programs.

Thank you for helping on this, and for all the work you’re doing to advance open data internationally. I know this isn’t all you do, but I do appreciate it from afar :smile: