Be Sure to Vote (and Consider Me)

Last Day to Vote! Thursday, October 13 is the last day you can cast your vote for the ONA Board. Check your inbox for a reminder email sent Wednesday at 11 a.m. ET titled "Deadline is tomorrow -- vote for the 2010 ONA Board of Directors." It has your personal ballot login details. Then go vote!

I'm up for election, and my opponents are awesome.

The race is to fill seats on the board of the Online News Association, an important organization at the intersection of journalism and technology.

And while the slate of candidates is crazy strong, I still think I'd be a great addition to ONA's board. So here are some John Keefe Bullet Points to help ONA members consider a vote for me:

  • Planner and doer
  • Super collaborative
  • Public media news director
  • Data-news MacGyver
  • Journohacker
  • Share what I learn so others might, too

I'm especially committed to that last point, working hard to explain our data-news projects in ways other journalists can replicate and build upon. For evidence, just look anywhere else on this blog.

It's journalism in the public interest, infused with the spirit of open-source coding. This isn't unique to me; there are several fantastic teams committed to this. But it is exactly the commitment to propelling journalists, in open and accessible ways, that I would bring to the ONA board.

My full bio and ONA vision statement -- and those of my slatemates -- are right here.

Voting begins Friday, September 23.

Making the NYC Evacuation Map

A couple of years ago, I had our WNYC engineers use a plotter to print out this huge evacuation map PDF. Seemed like a good thing for the disaster-planning file. Just in case.

Then, back in June of this year, I was browsing the NYC DataMine (like you do), and realized New York City had posted a shapefile for the colored zones on that map.

UPDATE (Feb. 11, 2012): NYC has nicely revamped the DataMine since the summer Irene struck -- even mapping geographic files like this right in the browser. But it's actually trickier to find the shapefiles now. Here's the hurricane zones dataset. Click "About" and scroll down to "Attachments" for the .zip file containing the shapefiles. Or just use this shortcut.

I knew I could use the shapefile to make a zoomable Google map -- which would be a heckuvalot easier to use than the PDF. So I imported the shapefile into a Google fusion table. (It's super easy to do; check out this step-by-step guide.) Next, I added that table as a layer in a Google Map and tacked on an address finder I'd developed for WNYC's census maps.

Then I tucked the code away on my computer. Just in case.

Fast-forward to Thursday morning, as Irene approached. On the subway in to work, I polished the map and added a color key. It was up on our site by midmorning, long before the Mayor ordered an evacuation of Zone A.

When the order was announced, I used another fusion table to add evacuation center locations, updating that list with info from New York City's Chief Digital Officer Rachel Sterne. (The dots are gone now, since the sites are closed.)

I'm not at liberty to reveal traffic numbers, but the site where we host our maps received, um, a lot more views than it usually does. By orders of magnitude. Huge props to the digital team for keeping the servers alive.

Tracking a Hurricane

As Hurricane Irene was approaching Puerto Rico, I noticed that the National Hurricane Center posts mapping layers for each element in their storm-track forecast maps.

Since their zoomable maps aren't embeddable, I made one that is. Feel free to use it:

Right now, I'm manually updating the map with new layers as they are issued, every three hours. I'm pretty close to having a script ready to handle that for me, based on information in the storm's RSS feed.

In the process of building this map, I learned how to use "listeners" to dictate the order the layers are rendered. For anyone trying to work that out, here's the code for how I did it.
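The chaining idea is simple enough to sketch on its own. This is a minimal illustration, not the post's actual code: each layer is added only after the previous one signals it has loaded. In the browser this would use google.maps.event.addListenerOnce; here the event wiring and the "add to map" step are passed in as functions (and the 'tilesloaded' event name is just illustrative), so the ordering logic stands by itself:

```javascript
// Sketch: add each map layer only after the previous one reports it has
// loaded, which pins down the rendering order. listenOnce and addToMap
// stand in for the browser-only Google Maps calls.
function addLayersInOrder(layers, listenOnce, addToMap) {
  function addNext(i) {
    if (i >= layers.length) return;
    // When layer i reports it's done, move on to layer i + 1.
    listenOnce(layers[i], 'tilesloaded', function () {
      addNext(i + 1);
    });
    addToMap(layers[i]); // attaching the layer kicks off its load
  }
  addNext(0);
}
```

The key point is that the listener is registered before the layer is attached, so the "load finished" signal can't slip by unnoticed.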

Mapping Campaign Contributions on the Fly

Our new Empire Blog reporter, Colby Hamilton, dropped by my desk the other day wondering whether we could map contributions to presumptive NYC mayoral candidates by zip code.

He was going to post about it after lunch. I said I'd be ready with a map.

Thanks to Fusion Tables and a little Ruby magic, I had one ready when his story was done shortly after lunch, and we updated it into the evening as the candidates' filings were made available by the NYC Campaign Finance Board.

How I Did It

For anyone looking to do something similar, here's what I did:

-- I downloaded each candidate's donations as an Excel spreadsheet from the homepage of the Campaign Finance Board.

-- I uploaded the spreadsheet to Google Fusion Tables (if the Excel file is more than 1 MB, you have to save it as comma-separated values, or .CSV, before uploading).

-- I used Fusion Tables' fantastic aggregation function -- View -> Aggregate -- to sum the contributions by zip code. Then I exported that file as a .CSV, which gave me a file for each candidate with the columns ZIP, SUM and COUNT -- SUM being the total donations and COUNT the number of donations for the zip code.

-- I re-uploaded that aggregation export back to Fusion Tables. (If anyone knows how to save an aggregation in FT without exporting it and uploading it again, I'm all ears.)

-- Now that I had the contributions by zip code, I needed the zip code shapes to go with them. The US Census has zip code shapefiles by state FIPS code, and for the entire United States. (Quick note: Census zip code data and US Postal Service zip codes aren't exactly the same, though we felt comfortable using the Census version for this project.)

-- I uploaded the New York state zip code shapefile to Fusion Tables, too, using Shpescape. (If you're working with New York State, you can save some work and just use mine.)

-- I opened the ZIP-SUM-COUNT file in Fusion Tables and merged it with the zip code shapefile, linking them with the ZIP field in the first file and ZCTA5CE10 in the second file.

-- Using Visualize -> Map, I could see all of the relevant zip codes for that candidate. By using the Configure Styles link, and then tinkering with Fill Color -> Buckets settings, I shaded the map according to total donations.
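If you ever need to reproduce the aggregation step outside Fusion Tables, it's a few lines of code. Here's a sketch in JavaScript -- the output field names mirror the exported ZIP/SUM/COUNT columns, and the input row shape is an assumption for illustration:

```javascript
// Sketch of the View -> Aggregate step: collapse individual donations
// into one row per zip code, with SUM (total dollars) and COUNT
// (number of donations). Input rows are assumed to look like
// { zip: '10025', amount: 100 }.
function aggregateByZip(donations) {
  var byZip = {};
  donations.forEach(function (d) {
    var row = byZip[d.zip] || (byZip[d.zip] = { ZIP: d.zip, SUM: 0, COUNT: 0 });
    row.SUM += d.amount;
    row.COUNT += 1;
  });
  return Object.keys(byZip).map(function (zip) { return byZip[zip]; });
}
```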

This map is ready to be embedded!

The Trouble with Tables

An admission, though: I didn't use the Fusion Tables embeddable map for this story. I did share the FT map with Colby, which let us know we had a couple of good stories. FT is great and fast for that. It also works in production with smaller data sets.

But the long time it takes Fusion Tables to populate a map with large data sets can make for a frustrating user experience. That's compounded by the fact there's no way (yet) to "listen" for a sign that the layer has fully loaded, which would let me display a "Please wait ..." sign until it did.

So in this case, I built my own KML, or Keyhole Markup Language, file (5 of them, actually; one for each candidate). I then compressed those files into much smaller KMZ files, which are just "zipped" KML files, so they load quickly. I used those files as layers with Google Maps' KmlLayer() constructor, and added a "listener" to find out when the layer is fully loaded, displaying an alert to the user until it is.
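The loading-notice pattern looks roughly like this. This is a sketch, not my production code: in the browser, addLayer would wrap new google.maps.KmlLayer(...) and listenOnce would wrap google.maps.event.addListenerOnce; both are passed in here as functions, and the 'status_changed' event name (which is what I understand KmlLayer fires) should be treated as an assumption:

```javascript
// Sketch of the loading notice: show a "Please wait ..." message, attach
// the KMZ layer, and hide the message once the layer signals it has
// loaded. addLayer and listenOnce stand in for the browser-only
// Google Maps calls.
function showKmlWithNotice(addLayer, listenOnce, notice) {
  notice.show(); // e.g. reveal a "Please wait ..." div
  var layer = addLayer();
  listenOnce(layer, 'status_changed', function () {
    notice.hide(); // layer finished loading; clear the message
  });
  return layer;
}
```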

More to Come

As for how I built the KML file, I'm going to share that in another post once I clean up the Ruby code I used to automate the process. (If your project can't wait for that post, drop me a note and I'll try to help.)

But the basics are these:

1. The layout of a KML file, and the format for using different styles to color different shapes, is pretty straightforward and nicely documented. In my code, I changed the style name for a given shape based on the value of the "SUM" variable.

2. The hardest part of writing a KML file is defining each shape in the proper format. But the merged file I made linking the ZIP-SUM-COUNT data and the shapefile actually has that information! The "geometry" column of that table is straight KML! (Thank you, Shpescape.) Export that merged file as a .CSV, and you've got all of the building blocks for your map.
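Those two basics fit together like this. My real version is the Ruby code mentioned above; this is just a JavaScript sketch of the idea, and the bucket thresholds and style names are made up for illustration:

```javascript
// Sketch of the KML-building idea: pick a style name based on SUM, then
// wrap each zip's ready-made KML geometry -- the "geometry" column from
// the merged Fusion Tables export -- in a Placemark. Thresholds and
// style names here are invented for illustration.
function styleForSum(sum) {
  if (sum >= 10000) return '#bucket-high';
  if (sum >= 1000) return '#bucket-mid';
  return '#bucket-low';
}

function placemarkFor(row) {
  // row.geometry is raw KML, e.g. '<Polygon>...</Polygon>'
  return '<Placemark>' +
    '<name>' + row.ZIP + '</name>' +
    '<styleUrl>' + styleForSum(row.SUM) + '</styleUrl>' +
    row.geometry +
    '</Placemark>';
}
```

String the placemarks together inside a KML Document element (with matching Style definitions for each bucket name) and you have a file ready to zip into a KMZ.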

If you have ideas, improvements or questions about this post, please don't hesitate to drop a note in the comments or drop me a note via email.

Water Begone

Thousands of people live in the Hudson River.

That's what you'd think, at least, by looking at U.S. Census tract maps for New York City, because census tracts extend to the state line.

But a population map drawn like this isn't attractive, and it isn't accurate, either. It suggests inhabited areas at the coast are far larger than they actually are.

So what's a journo-mapper to do?

Fortunately, the Census Bureau also publishes shapefiles of all of the water in the U.S. So one solution is to tell your trusty computer to subtract the water areas from the tracts -- and the difference will be the parts on land.

Doing this turns out to be far easier than I expected. (Thanks to Michael Corey and Nathan Woodrow, who responded to my help tweet.) Here's what I did:

1. Opened my census tract shapefile with QGIS (a free, open-source mapping program I'm getting to know).

2. Found the water shapefile for Manhattan (New York County) and opened it as a new layer in QGIS.

3. From the QGIS menu, selected Vector -> Geoprocessing Tools -> Symmetrical Difference and followed the prompts, choosing the tract shapefile as the "Input vector layer" and the water shapefile as the "Difference layer."

4. Compressed the resulting shapefile set into one .zip file and uploaded it to Google Fusion Tables using Shpescape. Once in Fusion Tables, I can play with it, view the map and merge it with population data.


A few extra notes and tips for those trying this at home:

- I've found water shapefiles only for individual counties, which makes doing an entire state a small pain. For New York City, which is five counties, I loaded the five water shapefiles into QGIS, made sure they were all visible, and used Layer -> Save Selection as Vector File... to save them all as one shapefile. I then used the resulting shapefile in the Symmetrical Difference process.

- Be sure the water map represents the same census year as the tract map (and, of course, your data). Very likely you'll be using shapefiles for the previous decennial census. For our New Littles map we had 2009 data, so we used the appropriate tract and water shapes -- which were from the 2000 census.

- I get an error about missing coordinate information when I do step 3, but it hasn't caused me any problems I know of. Also, on my Mac version of QGIS, the Symmetrical Difference window and the file-saving dialog box conflict -- but I just moved them to opposite sides of the screen.

- Census tracts are made up of census "blocks," which are smaller and generally DO follow coastlines. So if you're mapping blocks, you can eliminate the watery ones by excluding any block with an "area land" of zero.

- The difference trick doesn't change the metadata associated with each tract, which generally is a good thing.
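The census-block tip above amounts to a one-line filter. Here's a sketch in JavaScript; ALAND10 is the land-area field name in the 2010 TIGER block files, but older vintages name it slightly differently, so check your shapefile's attributes:

```javascript
// Sketch: all-water census blocks have a land area of zero, so drop any
// block whose land-area field (ALAND10 in 2010 TIGER files) is zero.
function landBlocks(blocks) {
  return blocks.filter(function (b) { return b.ALAND10 > 0; });
}
```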

If you have any questions or suggestions, don't hesitate to post them in a comment below!

Screaming for a Map: The New "Littles"

When I saw the NYC ancestry data, I immediately thought, "That screams MAP!"

Brian Lehrer Show producer Jody Avirgan had been working on a great project looking for the new "Littles" in New York City -- neighborhoods where people of a certain ancestry or ethnicity live. He had a spreadsheet; I wanted to visualize it.

The result may be my favorite map project so far:

Mostly, I built on what I'd learned making WNYC's Census Maps, adding a few things:

• An on-map drop-down menu (here's the CSS code for that).

• Code that selects different data from a single Google Fusion Table.

• Panning and zooming to the neighborhood I want to highlight.

• A better "Share or Embed" pop-up box using jquery.alerts.js.

I also tried to clean up and refactor my original code to make it easier to read (and reuse).

You can see that code on GitHub. I tried to document it clearly, but post a note below if you have any questions or would like clarification.

UPDATE: In making this map, I used a new (to me) trick to remove the water areas from census tract shapes on the coastline. Here's how I did it, if you're interested.

911 vs Google Maps

Two weeks ago today, I called 911. It was an unsettling experience.

Walking by Inwood Hill Park in northern Manhattan, my wife spotted plumes of smoke rising through the trees. There was a fire in the woods, and it was growing.

My call to 911 started at 3:14 p.m. and lasted 3 minutes, according to my iPhone's log. Astonishingly, the operator spent almost all of that time -- probably 2.5 minutes -- trying to find my location on her computer.

Later, using the same information, I did it in 16 seconds. That's the time it takes to type "" into a browser and then "seaman avenue and 214th street nyc."

911 = 150 seconds.

Google Maps = 16 seconds.

Now, this is not a journalistic exploration of why it took so long for the operator to locate me. It is merely my experience. But it's startling enough that I think it is worth a careful recounting. It seems New York City doesn't release 911 calls as a matter of course, though I hope to get mine for a precise transcript. But the night of the call, I did my best to write down what happened:

• When the operator first answered, I said there's a fire in the woods "in Inwood Hill Park at Seaman Avenue and 214th Street." The park is big, and the fire was across a baseball field in the woods, but it was visible from where I was standing, and two entrances are nearby. So where I was standing seemed a good location to report.

• The operator asked me if I meant East 214th Street, and I said no, West 214th Street. (For what it's worth, Seaman Avenue doesn't cross East 214th Street.)

• The operator said she couldn't pull up that intersection, eventually asking me if she had spelled Seaman Avenue correctly: S-e-a-m-a-n. Yes, I said, that's right.

• She said to me again that it "wasn't coming up" but kept trying.

• I suggested another cross street, Isham Street, and she said, "In the Bronx?" No, I said surprised, Manhattan.

• That fixed it ... she was able to find my location.

• She then asked me to hold while she connected me to another operator. After several rings, she verbally conveyed my information to the second operator, mistakenly saying "the Bronx" -- which I corrected as she caught herself, "Manhattan!"

Three minutes.

A fire engine arrived a short time later and quickly got the fire under control.

Here are several related searches on Google Maps, all of which return results in less than a second:

"inwood hill park nyc" returns a pin on the west side of the park -- which isn't where the fire was. A fire truck going there would have been misdirected. But it's clearly in Manhattan. And the resulting map would have been a good starting place to work with me to pinpoint my location: "OK, I see Seaman Avenue running along the park ... were exactly from there?"

"214th street and seaman avenue nyc" returns a pin exactly where I was standing. No question about Manhattan or the Bronx.

"seaman avenue and east 214th street nyc" does the same thing, correcting to West 214th Street.

"seaman avenue bronx" returns a pin at 207th Street and Seaman Avenue, correctly in Manhattan, near the entrance to the park -- and, in this case, in sight of the smoke.

WNYC has done some good reporting on 911, but we never had such a concrete example of address confusion. I wonder if other people are having the same problem.

A Customized Viewer for DocumentCloud

This post is for newsrooms using DocumentCloud, the fantastic document viewer developed by journalist-programmers at ProPublica and The New York Times.

Want a custom viewer for your site's documents? You can have ours.

I built it so that once set up, this viewer will automagically fill in the title, source and "back-to-article" link based on information already associated with the document -- so one file serves all of your documents.

Here's how.

One-time Setup

You can make this work with a little knowledge of HTML and access to a web server. You'll need to host a single HTML page, called dc.html, and a tiny JavaScript file, called jquery.url.min.js.

1. Download the HTML code for dc.html by right-clicking on this link (or view it here).

2. Use any text editor to edit the path to your logo image on line 101. (A logo that's 60 pixels high works well).

3. On line 101, change "" to your site's home page.

4. Upload the file dc.html to a web server.

To extract the document info from the URL, the page uses a little JavaScript program called jquery.url.min.js, which you can read about here and download here. Once you do:

5. Upload jquery.url.min.js to your web server (the page assumes it's in the js/ subdirectory).

6. If you need to change the location of jquery.url.min.js, edit the path on line 38 of the HTML code and re-upload.

Using the Viewer

To use the viewer, simply construct a link to it that combines dc.html's location and the ID of the document you want it to load. For example, the base URL for WNYC's version of dc.html is here:

And the document I want to display is here:

I combine them into a new link by taking the base URL, adding "?doc=" and then adding the document ID -- which, here, is 11275-bill-a11354 (omitting the .html). Like this:


Pages and Annotations

For extra trickiness, you can jump to specific page numbers and annotations by adding references to them into your link. Here you need to append "#document/p" and the page number. So for page 2, you'd use:

And for the annotation on page 3, it would be:

(You get the annotation number -- and the whole phrase after the #, actually -- by clicking on the little "link" icon next to the annotation's title.)
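The whole link recipe boils down to a little string assembly. Here's a sketch -- the base URL in the example is a placeholder, standing in for wherever you host dc.html:

```javascript
// Sketch of the link recipe: base viewer URL + "?doc=" + document ID,
// plus an optional "#document/p..." fragment for a specific page or
// annotation.
function viewerLink(baseUrl, docId, fragment) {
  var url = baseUrl + '?doc=' + docId;
  if (fragment) url += fragment; // e.g. '#document/p2'
  return url;
}
```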

That's it. 

Credits and Disclaimers

The base design is built on code the Chicago Tribune News Apps Team wrote, which I modified with help from the DocumentCloud folks to dynamically pull in the title, source and related-story information from the document's metadata.

Note that the version of dc.html at contains extra tracking code specific to our servers. The version here does not. It's the one you should download.

And I don't warrant in any way that this is perfect code, so please use at your own risk.

If you modify it -- especially if you improve on what's here -- please let me know and I'll share the updates here and on GitHub.