johnkeefe.net, by John Keefe

New online class: Mapmaking for journalists
July 7, 2022

The video above is the trailer for my new online class: Mapmaking for Journalists.

It starts Monday, July 11, 2022, and runs for four weeks. It's all online, and you can go at your own pace. I'll be in the class forums and will even host a couple of (optional) live chats.

Along the way, you'll learn how to make maps using Datawrapper, Mapbox, and Mapshaper.

No prior coding experience is needed, and you get to keep all of the code I share with you.

It's run by the Knight Center for Journalism in the Americas at the University of Texas at Austin. The cost is $95, and you can register at journalismcourses.org.

Hope you'll join me!


Sharing NYC Police Precinct Data
February 7, 2022

Note: This post was originally published April 29, 2011, and updated in June 2020. In February 2022, I updated it again using 2020 Census data. 

Anyone doing population analysis by NYC police precinct might find this post helpful, especially if you're interested in race and/or ethnicity analysis by precinct.

Back in 2011, I wanted to compare the racial and ethnic breakdown of low-level marijuana arrests — reported by police precinct — with that of the general population. The population data, of course, is available from the US Census, but it's not provided by police precinct, and precincts don't follow major census boundaries like census tracts. Instead, they generally follow streets and shorelines. Fortunately, census blocks (which in New York are often just city blocks) also follow streets and shorelines.

So I used US Census block maps and precinct maps from the city to figure out which blocks are in which precincts. Since population data is available at the block level, that data can then be aggregated into precincts.
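Here's a minimal sketch of that block-to-precinct step in Python with geopandas. It's not the exact workflow behind these files, and the file paths and column names are placeholders; each block is represented by a point inside it, so a block that merely touches a precinct boundary isn't counted twice.

import geopandas as gpd

# Placeholder file names; any block shapefile with a population column
# and a precinct boundary file will do.
blocks = gpd.read_file("nyc_census_blocks.shp")
precincts = gpd.read_file("nyc_police_precincts.shp").to_crs(blocks.crs)

# Represent each block by a point guaranteed to fall inside it.
block_points = blocks.copy()
block_points["geometry"] = blocks.representative_point()

# Tag each block with the precinct it sits in, then sum populations.
joined = gpd.sjoin(block_points, precincts[["precinct", "geometry"]],
                   predicate="within")
precinct_pop = joined.groupby("precinct")["total_pop"].sum()
print(precinct_pop.head())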

In this, the third version of this post, I've updated the counts now that the 2020 population data is available.

The 2020 data

• nyc_precinct_2020pop.csv is the 2020 Census population, race, and ethnicity (Hispanic/non-Hispanic) data by NYPD police precinct. The column headers from the US Census are a little cryptic, but you can translate them using the P1 table metadata file and the P2 table metadata file.

• nyc_block_precinct_2020pop.csv matches every populated block in NYC, identified by its ID (called "GEOID20"), to the police precinct it sits within, and carries each block's race/ethnicity information. Use the same metadata tables to translate the column headers. Also be sure to read the caveats below.

• nyc_precincts.geojson depicts the geographic boundaries of the NYPD precincts I used for the files above, as they existed in February 2022. As of this post, the information on the NYC Open Data portal indicates it was last updated on Nov 24, 2021.
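If you just want quick percentages by precinct from the first file, a few lines of pandas will do it. The precinct column name and the P2 codes below (P2_001N for total population, P2_002N for Hispanic or Latino) are my reading of the Census metadata, so double-check them against the metadata files above.

import pandas as pd

df = pd.read_csv("nyc_precinct_2020pop.csv")
df["pct_hispanic"] = 100 * df["P2_002N"] / df["P2_001N"]
print(df[["precinct", "pct_hispanic"]]
      .sort_values("pct_hispanic", ascending=False)
      .head())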

Caveats for the 2020 data

The biggest caveat is that the US Census has introduced data fuzziness, or "noise," to make it difficult to identify individuals based on census data. This fuzziness is more pronounced at smaller geographies — the smallest being census blocks, which I've used for these calculations. Hansi Lo Wang did a great primer on these data protections for NPR, and the US Census Bureau has put out a lot of material on how it uses "differential privacy."

Covid cases, animated
January 15, 2022

I've been both awed and terrified by the transmissibility of Omicron and the speed at which it's spread. As the case curves hockey-sticked upward and the maps all turned red, I thought it'd be interesting to visualize the spread of this new twist on the coronavirus.

So the other day, while doing some work mapping Covid-19 data by US counties, I realized it wouldn't take much to generate a map for each day of the pandemic ... and make those maps into a movie.

It almost seems like cheating to use a work project as one of my "Make Every Week" projects, but I'm lucky to have a job where creative tinkering is celebrated. When I shared a tinker-made movie of six months of case data with colleagues Kaeti Hinck and Sean O'Key, they thought it would make a good data feature for CNN.

While truly a horrible topic — nearly every county is now reporting more than 100 cases per 100,000 people — the process of turning that case data into a movie was a worthy project. I learned a lot, and I did it almost entirely from the command line (the text-only interface that is my Mac's "Terminal" program).
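Stitching a one-map-per-day image sequence into a movie is a single call to ffmpeg, one common command-line tool for the job (not necessarily the exact tool used here). A sketch, assuming the daily maps are saved as zero-padded PNGs:

import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "12",          # twelve days of maps per second of video
    "-i", "maps/day_%04d.png",   # the zero-padded image sequence
    "-c:v", "libx264",           # H.264, playable nearly everywhere
    "-pix_fmt", "yuv420p",       # required by QuickTime and many players
    "covid_cases.mp4",
], check=True)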

Make Every Week returns
January 2, 2022

The last two years were rough. And as 2021 ended, and a new coronavirus surge began, the outlook wasn't exactly sunny.

In an effort to stay centered and battle the blues, I've turned again to my soothing practice: making things.

Back in 2015, I tried to make something every week for a year. I only averaged something every 1.7 weeks, but it was still successful fun.

So I'm doing it again for 2022.

Might be a gadget, might be a toy, might be a map, might be bread. I'll try to learn something new every time, and will share each thing here.

But make. Every week.

A 3D-printed flexi-dog

To kick things off I literally dusted off my 3D printer, which I had set aside when we got a pandemic puppy, and tried to remember how to use it.

Seemed appropriate to print a dog, so I found this Flexi Dog on Thingiverse. I downloaded the shape's .stl file, ran it through PrusaSlicer to turn the object into printable slices ...

... and then used OctoPrint to actually send the dog to the printer.

It didn't work right away; the triangle at the end of the tail — in the foreground of the next picture — kept coming off the base plate during printing, leading to tangled messes. In the end, I warmed the plate an extra 5° Celsius, and that made it stick.

Several false starts and two hours of continuous printing later, I had my first "make" of 2022.

Taking time to build a triangle-grid clock
February 15, 2021

I like what's possible with triangles.

Playing with rectangular blinky grids is super fun, and I've made a weather monitor and a pulse-oximeter with those.

But there's something additionally awesome about the pattern possibilities with triangle pixels.

So when I saw a Hackaday post about building a clock display with LED triangles, I was hooked.

The short story is that I made it! It now lights up my living room with dazzling animations and a funky time display.

The longer story involves perseverance made possible by my coronavirus lockdown.

Is it Monday? My Pi has the answer
February 7, 2021

Keeping track of the days has been harder lately, it seems.

So I was excited to see a nifty blog post by Dave Gershgorn, where he described how he built a slick dashboard by attaching a screen to a Raspberry Pi computer. In fact, the Pi actually attaches to the back of the screen, out of sight.

I happen to be the kind of nerd who has a couple of Raspberry Pis around (in my case, some older Pi 3 model B's), so I ordered the recommended screen and followed Dave's great directions along with this ETA Prime video. If you're similarly inspired, just follow those guides.

If you're new to setting up a Pi, you might not realize that it doesn't come with an operating system. You need to install one on a micro SD card, and slide it into the Pi. I like to download the latest, recommended system from the Pi site, unzip it, and use the balena Etcher to flash the SD card.

One of the build steps that was unclear from the video was exactly how to attach the power lines to the Pi. For my Pi, the pins were these three:

Another tricky step was folding the ribbon cable so it fit nicely. Here's how I did it:

Then it was just a treat to see the tiny Pi desktop appear before my eyes:

I launched the Terminal application with the little cursor icon in the upper left corner, and in order to run the installation commands I increased the Terminal text size using Ctrl-Shift-+.

Once I got everything running, I installed MagicMirror, added a monthly calendar module, and played with the configuration settings to suit my needs. (I also toyed with the Javascript and the CSS because I couldn't help myself, but you certainly don't have to.)

Works like a charm.

Drawing arcs of circles
January 24, 2021

Maybe you've seen them: Rainbows of circles representing members of the U.S. Senate, House of Representatives, or all of Congress.

I wanted to make such a visualization to show the number of members of Congress who've tested positive for coronavirus and the positive tests among each party.

Despite the serious nature of the topic, it was a fun puzzle to solve.

The steps I took were:

  1. Figure out how many circles fit in each ring
  2. Calculate the positions for every circle
  3. Sort the positions to suit my needs
  4. Marry the positions to data

It turns out that once I established a) the number of circles in each ring and b) the size of those circles, I could figure out the rest with code.
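Here's a minimal sketch of those steps in Python, with made-up ring counts and spacing (the real project's numbers and code differ). It computes a position for every circle along concentric semicircular rings, then sorts by angle so the data can be married to positions from left to right:

import math

ring_counts = [12, 16, 20, 24, 28]  # circles per ring, inner to outer (made up)
inner_radius = 50                   # radius of the innermost ring
ring_gap = 16                       # spacing between rings

positions = []
for ring, count in enumerate(ring_counts):
    radius = inner_radius + ring * ring_gap
    for i in range(count):
        # Sweep each ring from 180 degrees (left) to 0 degrees (right).
        theta = math.pi * (1 - i / (count - 1))
        positions.append({"x": radius * math.cos(theta),
                          "y": radius * math.sin(theta),
                          "theta": theta})

# Sorting by angle groups the circles into a left-to-right fan, so the
# first N positions can take one party's members, and so on.
positions.sort(key=lambda p: -p["theta"])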

You can play with the final results, or take a look at the code yourself. But here's the explanation:

Printing a pumpkin
January 3, 2021

There's something exciting about holding an object you previously only imagined — whether it's a freshly baked loaf, a tomato off a garden vine, or a printed plastic pumpkin.

I've had that feeling a lot lately, with a pandemic purchase of a 3D printer.

Rolling an object in your fingers that was previously just a digital file on the internet is ridiculously fun. It's even more rewarding if the thing conjured was something you — or your kid — dreamed up.

That's what happened with this 3D pumpkin. My daughter drew it late one night for an animation class assignment using the program she was learning, Cinema 4D.

And then we made it real.

Modeling the 2020 vote with Observable
June 24, 2020

I've been interested in how voter turnout might affect the 2020 US election and I've wanted to play with Observable notebooks.

So I blended the two projects, and you can play with my live Observable notebook that does those calculations.

The result is an admittedly super-simplistic model of how things might turn out. But you can increase the percentage of Republican and Democratic voters nationwide and see what happens!

Notably, even if Democrats were able to boost turnout more than Republicans — say 107% vs 106% — Trump still wins.

As written, it doesn't consider nuances such as regional differences in voting turnouts, swing voters, or faithless electors. (It does, however, account for the unique ways Maine and Nebraska divide their electoral votes.) But I learned a lot in the process ... and there's more to come.
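The notebook itself is written in Observable's JavaScript, but the core of the model fits in a few lines. Here's a toy re-expression in Python; the one state record below uses Pennsylvania's widely reported 2016 totals for illustration, and everything else is a placeholder.

# Scale each party's 2016 vote by a turnout factor, then award each
# state's electoral votes winner-take-all. (The real notebook also
# handles Maine's and Nebraska's district-level splits.)
def project(states, dem_turnout=1.07, rep_turnout=1.06):
    electoral = {"dem": 0, "rep": 0}
    for state in states:
        dem = state["dem_2016"] * dem_turnout
        rep = state["rep_2016"] * rep_turnout
        winner = "dem" if dem > rep else "rep"
        electoral[winner] += state["electoral_votes"]
    return electoral

states = [
    {"name": "PA", "dem_2016": 2926441, "rep_2016": 2970733,
     "electoral_votes": 20},
    # ... one entry per state
]
print(project(states))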

All my calculations are visible in the Observable notebook itself, and the initial data prep is documented in a Github repository. For good measure, I put all the raw data in my Datasette library.

Minneapolis race and ethnicity data by neighborhood, served with Datasette
June 20, 2020

Minneapolis police report stops and other incidents by neighborhood, so I decided to calculate the racial makeup of those neighborhoods to make some comparisons — along the lines of what I've already done for New York, Chicago, and Washington, DC.

This time, though, I'm using Datasette.

I'd seen its creator, Simon Willison, tweet about Datasette, and with some extra time on my hands I took a look. It's so impressive!

With Datasette, one can publish data online easily, efficiently (even free!), and in a way that allows others to explore the data themselves using SQL and to feed data visualizations and apps. At scale.

How is this not in every newsroom?

(Simon, by the way, has offered to help any newsroom interested in using Datasette — an offer I hope to take him up on someday.)

Minneapolis neighborhoods

Once again, I've married US Census blocks with other municipal zones, this time the official neighborhood map of Minneapolis.

That data is now online, served up with Datasette.

And with some nifty SQL queries, bookmarked as simple links, I can list the race and ethnic makeup of every neighborhood by raw number.

Or by percentage.
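To give the flavor of those queries, here's roughly what a percentage version looks like when run against a hypothetical local copy with Python's sqlite3; the table and column names are assumptions:

import sqlite3

con = sqlite3.connect("minneapolis_blocks.db")
sql = """
SELECT neighborhood,
       SUM(total_pop) AS population,
       ROUND(100.0 * SUM(black_pop) / SUM(total_pop), 1) AS pct_black
FROM blocks
GROUP BY neighborhood
ORDER BY pct_black DESC;
"""
for row in con.execute(sql):
    print(row)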

Race and ethnicity data by Washington DC police zones
June 8, 2020

If you've got arrest or incident data from the Metropolitan Police in Washington DC, and that data is broken out by police district or public service area, you may want to compare it with the racial and ethnic makeup of the people living in those zones.

If so, this post is for you.

The US Census doesn't break out populations by police districts. But in DC and other large cities, census blocks serve as atomic units that usually do fall within police precinct boundaries. So by knowing which blocks are within which districts, you can calculate the populations. Unfortunately, block-level data is only available from the decennial count, so the latest data is from 2010.

This is my third spin at such data — I've also done New York City and Chicago.

Chicago race and ethnicity data by police district
June 7, 2020

If you're trying to match Chicago police district data with the racial and ethnic makeup of those police districts, this post is for you.

The boundaries for police districts and precincts don't usually line up nicely with US census boundaries like census tracts or block groups. That makes it tough to compare incident and arrest data reported by precinct with the population of those precincts. 

But in bigger cities, census blocks are small enough to serve as atomic units that usually do fall within police precinct boundaries. So by knowing which blocks are within which districts, you can calculate the populations. Block-level data is only available from the decennial census count, so the latest data is from 2010. But it should still serve as a good measure — and a reason to fill out your 2020 census form online!

After doing these calculations for New York City, I put together Chicago's by request!

Lockdown loaves
April 19, 2020

It's become a coronavirus cliché, but for this week's #MakeEveryWeek I made sourdough bread. 

The twist: I made one loaf in the oven and one in a slow cooker.

It all started with sourdough starter, specifically this guide from Quartz colleague Tim McDonnell. This was a great project for my teens, incorporating chemistry, biology, and excellent smells.

Next was this incredibly fun and detailed sourdough recipe from Kitchn, which makes two loaves and relies on two oven-safe pots. Alas, our family has but one.

We do have a slow cooker, though. Could I make one of the loaves in that? The answer is yes!

Building a pulse oximeter
April 11, 2020

At-home pulse oximeters, those fingertip devices doctors use to measure the oxygen saturation in your blood, have been selling out everywhere thanks to the Covid-19 pandemic.

But as my Quartz colleague Amrita Khalid points out in this great article, most people don't need 'em. If your oxygen level is worryingly low, you'll know — you don't need a machine to tell you. Folks with some existing conditions, however, can use a pulse oximeter to help a remote doctor monitor their vitals or to adjust supplementary oxygen devices.

When Khalid mentioned she was working the story, it reminded me of the DIY "pulse ox" sensor Sparkfun sells. It, like other pulse oximeters, shines light into the skin and makes measurements based on how that light is absorbed. I've built heartbeat-driven projects before and had been exploring new ways to monitor pulse rates. So I got one.

Sparkfun warns in red letters that "this device is not intended to diagnose or treat any conditions," and I offer the same caution if you're tempted to build one. The process wasn't hard at all. I got it running quickly ... and then added an LED display for fun and flourish.

Here's how I made it, and the code, too.

Work-from-home "on air" light
April 2, 2020

I'm incredibly lucky to be both healthy and able to work from home during this coronavirus crisis. That means I spend large chunks of my day on video calls.

As a courtesy to my family, all of whom are also working and schooling from home, I've tried to warn them when they risk being broadcast to my colleagues. 

Now I have a fun "on air" light to help! And I've put the code online so you can make one, too.

DIY aquarium lights
March 23, 2020

Buy a new aquarium, and you often get hood lights that are ... meh. They're good enough, but not great.

There are plenty of high-quality replacement lights out there, but none of them had the nice, low profile of the plastic covers that came with this tank. So I decided to spruce up the existing illumination with some DIY lights — and even make them programmable with an Arduino.

That was more than a year ago. Now in coronavirus isolation, I finally made it happen.

Here's how.

Amazon Aurora MySQL + Python
March 20, 2020

Ok, so this isn't the sexiest topic, but if you're completely stuck the way I was several times today, maybe you'll be happy you found this post.

Today I needed to spin up a database that I want available to students at the Newmark Graduate School of Journalism and also to colleagues at Quartz. I also want to connect to the database from my home and from the school using Python.

Since we use Amazon's web services, and I wanted to show the students SQL, I decided to give the AWS Aurora system a whirl — specifically the MySQL-compatible version.
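The payoff is that Aurora's MySQL-compatible engine speaks the ordinary MySQL protocol, so once the cluster is running, the Python side is short. A minimal sketch with PyMySQL; the endpoint, user, and database names are placeholders, not my actual setup:

import pymysql

conn = pymysql.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="student",
    password="not-a-real-password",  # better: read this from an env variable
    database="classdb",
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())
finally:
    conn.close()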

As with many things AWS, it was a bit of a slog to get set up ... and I've decided to jot it all down while it's fresh so I can remember how the heck I did it (and show my students).

After a few tries, here's how I finally got set up:

Machine learning in my pajamas
November 21, 2019

Tonight I gave a presentation at Newsgeist about how I did machine learning in my pajamas — in my pajamas.

I promised the gathered crowd I'd post how they, too, can make their own bike-detector, so here goes:

  1. Follow the instructions here.
  2. When you get to the part about picking a notebook, use this one: notebooks/ee-searching-videos-with-fastai.ipynb

Then follow the steps to work through the code! Have fun!


AI classes for journalists
August 6, 2019

(Promo video for the Knight Center course.)

If you're a journalist, you've probably done a story or two about AI. But did you know you can use machine learning, too?

I'll show you! 

While the classes below have passed, the videos and accompanying code for the Knight Center course are now available free online.

Work at your own pace and enjoy. It could help with your next investigation, and the experience will help you report about machine learning, too.


Past classes:


September 13, 2019 • 11 am •  InterContinental New Orleans • Treme / 2nd Floor 

Hands-on Introduction: Machine Learning for Journalists at ONA

If you're going to ONA, get a practical, hands-on introduction to using machine learning to help pore through documents, images, and data records. This 90-minute training session by members of the Quartz AI Studio will give you the chance to use third-party tools and learn how to make custom machine-learning models. We'll walk you through pre-written code you can take home to your newsroom.


October 26 & 27, 2019 • Newmark Graduate School of Journalism • New York City

Weekend Bootcamp: Practical Machine Learning for Journalists

This will be a small-group, guided bootcamp where we'll spend the weekend working through practical machine-learning solutions for journalists. You'll learn to recognize cases when machine learning might help solve such reporting problems, to use existing and custom-made tools to tackle real-world issues, and to identify and avoid bias and error in your work. Students will get personalized instruction and hands-on experience for using these methods on any beat.


November 18 to December 15 • Knight Center for Journalism in the Americas • Online • $95

4-Week Online Course: Hands-on Machine Learning Solutions for Journalists

In this online video course, you will first learn how to use some off-the-shelf systems to get fast answers to basic questions: What’s in all of these images? What are these documents about? Then we’ll move to building custom machine learning models to help with a particular project, such as sorting documents into particular piles. Our work will be done with pre-written code, so you always start with a working base. You’ll then learn more by modifying it.

Updated 21 April 2020

Detecting feature importance in fast.ai neural networks
June 24, 2019

I'm working on a new neural network that tries to predict an outcome – true or false – based on 65 different variables in a table.

The tabular model I made with fast.ai is somewhat accurate at making those predictions (it's a small data set of just 5,000 rows). But even more interesting to me is determining which of the 65 features matter most.

I knew calculating this "feature importance" was possible with random forests, but could I do it with neural nets?

It turns out I can. The trick is, essentially, to try the model without each feature. The degree to which the model gets worse with that feature missing indicates its importance – or lack of importance.
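Here's a framework-agnostic sketch of one common variant of that test, which scrambles a column rather than fully removing it. It assumes a fitted model with a .predict() method and a pandas validation set, and it is not the Gist mentioned below.

import numpy as np

def feature_importance(model, X_valid, y_valid):
    baseline = (model.predict(X_valid) == y_valid).mean()
    scores = {}
    for col in X_valid.columns:
        scrambled = X_valid.copy()
        scrambled[col] = np.random.permutation(scrambled[col].values)
        accuracy = (model.predict(scrambled) == y_valid).mean()
        scores[col] = baseline - accuracy  # bigger drop = more important
    return sorted(scores.items(), key=lambda kv: -kv[1])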

This blog post describes how to run this test, and this adaptation worked perfectly in my fast.ai notebook. Here's the code in a Gist:

Unfortunately, because my project uses internal Quartz analytics, I can't share the data or the charts I'm playing with. But with the code above, I can now "see into" the neural network and get cool insights about what's going on.


Converting videos to images for machine learning
June 15, 2019

This week I kept to my summer-of-training plan; however, the model-building I did was for a Quartz project we're not ready to share. But! I learned something super useful in the process: how to quickly turn videos into many still images.

For our latest project, I'm training a model to identify specific objects available to me – much like how I trained a model to identify items in the office.

The fastest way to get lots of images of an object is to take a video of it. And a quick way to turn that video into images — called an "image sequence" — is ffmpeg. It seems to convert from many formats, like .mp4, .mov, and .avi, to lots of different image formats, such as .jpg and .png.

There's plenty more detail in the ffmpeg docs, but here's what I did that worked so quickly on my Mac:

brew install ffmpeg

I use Homebrew to put things on my Mac, so this went pretty quickly. I had to update my Xcode command line tools, but Homebrew is super helpful and told me exactly what I needed to do.

Next, I did this from the Terminal:

ffmpeg -i IMG_1019.MOV -r 15 coolname%04d.jpg

Here's what's going on:

  • -i means the next thing is the input file
  • IMG_1019.MOV is the movie I Airdropped from my phone to my laptop
  • -r is the flag for the sample rate.
  • 15 is the rate. I wanted every other frame, so 15 frames every second. 1 would be one frame every second; 0.25 would be one every fourth second.
  • coolname is just a prefix I picked for each image
  • %04d means each frame gets a zero-padded sequence number, starting with 0001 and going to 9999, so my image files are named coolname0001.jpg, coolname0002.jpg, coolname0003.jpg, etc.
  • .jpg is the image format I want. If I put .png I got PNGs instead.

In mere moments I had dozens of JPG files I could use for training. And that's pretty great.

Artisanal AI: Detecting objects in our office
June 9, 2019

Off-the-shelf services like Google Vision are trained to identify objects in general, like car, vehicle, and road in the image below.

But many of the journalism projects we're encountering in the Quartz AI Studio benefit from custom-built models that identify very specific items. I recently heard Meredith Broussard call this kind of work "artisanal AI," which cracked me up and also fits nicely.

So as an experiment, and as part of my summer training program, I trained an artisanal model to distinguish between three objects from the Quartz offices, pictured at the top of this page: a Bevi water dispenser, a coffee urn, and a Quartz Creative arcade game (don't you wish you had one of those?!)

I also made a little website where my colleagues and I can test the model. You can, too — though you'll have to come visit to get the best experience!

The results

The model is 100% accurate at identifying the images I fed it — which probably is not all that surprising. It's based on an existing model called resnet34, which was trained on the ImageNet data set to distinguish between thousands of things. Using a technique called transfer learning, I taught that base model to use all of its existing power to distinguish between just three objects.
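With the 2019-era fastai v1 API, that transfer-learning step is only a few lines. A sketch, assuming photos sorted into one folder per class; this is not the project's exact code.

from fastai.vision import *

# Folders like office_images/bevi, office_images/coffee_urn, and
# office_images/arcade, with a random 20% held out for validation.
data = ImageDataBunch.from_folder("office_images", valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)  # fine-tune the ImageNet-trained base on 3 classes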

Making music with my arms
June 6, 2019

The brilliant Imogen Heap performed in New York a few weeks ago, and I got to experience live how she crafts sounds with her arms and hands.

It was a great night of beautiful music and technology, both.

One mystery I couldn't solve from the audience was how her computer detected the position of her arms. Unlike in her early videos, I didn't see something akin to a Kinect on stage.

Now I think maybe I know.

That's because this week I took a workshop from Hannah Davis on using the ml5.js coding library, which touts itself as "friendly machine learning for the web," letting me use machine learning models in a browser. The class was part of the art+tech Eyeo Festival in Minneapolis.

One of the models Davis demonstrated was PoseNet (also here), which estimates the position of various body parts — elbows, wrists, knees, etc — in an image or video. I'd never seen PoseNet work before, let alone in JavaScript and in a browser.


Inspired by Heap, I set out to quickly code a music controller based on my arm movements, as seen by PoseNet through my laptop camera.

Try it yourself

It's pretty rough, but you can try it here. Just let the site use your camera, toggle the sound on, and try controlling the pitch by moving your right hand up and down in the camera frame!

I put it on Glitch, which means you can remix it. Or take a peek at the code on Github.

There are lots more ml5.js examples you can try. Just put the index.html, script.js, and models (if there's such a folder) someplace on the web where the files can be hosted. Or put them on your local machine and run a simple "localhost" server.
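For that last step, Python's built-in server is a handy option; from the folder holding index.html, run this in the Terminal and visit http://localhost:8000:

python3 -m http.server 8000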

My summer training program
June 6, 2019

This summer is all about training. Yes, I'm trying to run regularly, but I'm actually talking about training machine-learning algorithms.

I've been trying to learn machine learning for about three years — only to feel hopelessly overwhelmed. It was as though someone said, "With a chicken, a cow, and a field of wheat, you can make a lovely soufflé!"  

I took online classes, read books, and tried to modify sample code. But unless I devoted myself to the computer version of animal husbandry, it seemed, I was stuck.

Then someone at work mentioned fast.ai. It's a machine-learning library for Python that got me to the eggs-milk-flour stage, and provided some great starter recipes. Thanks to free guides and videos, I was soon baking algorithms that actually worked.

Now I want to get good, and experiment with different flavors and styles.

So this summer, I'm setting out to train and use new machine learning models, at least one each week. I'll try several techniques, use different kinds of data, and solve a variety of problems. It's a little like my Make Every Week project, providing constraints to inspire and motivate me.

I'll share what I learn, both here and at qz.ai, where the Quartz AI Studio is helping journalists use machine learning and where I get to practice it at work.

In the fall I'll be teaching a few workshops and classes that will incorporate, I hope, some of the things I've learned this summer. If you'd like to hear about those once they're announced, drop your email address into the signup box on this page and I'll keep you posted.

Time to train!

You can't make your NY Times subscription online-only online
March 21, 2019

Our family believes in paying for good journalism, so we have a few subscriptions – including the New York Times.

When we signed up, we got online access along with physical papers delivered on the weekend. But we almost never read the paper version anymore, and thought it a waste. So today I went online to change my subscription to all-digital.

But you can't.

You must actually call the New York Times and speak to someone. I had to call two phone numbers, get past two robots, and speak to two people. Altogether, it took me 15 minutes. Not forever, but the user experience was a C-minus at best.

Here's what I did:

Beginning as a practice: The movie
December 24, 2018

Well, "the 5 minute video" at least.

Here is an Ignite talk at Newsgeist 2018 this past autumn in which I make the case for beginning repeatedly and intentionally, and even making it a practice.

By the way, the AI Studio, which I ask the audience to keep secret, has since been announced.

A bot now updates my Slack status
November 10, 2018

One of my closest collaborators is a teammate far away — I'm in New York and Emily Withrow is in Chicago.

We stay connected chatting on Slack. But recently Emily asked if I could regularly update my Slack status to indicate what I was doing at the moment, like coding, meeting, eating. It's the kind of thing colleagues in New York know just by glancing toward my desk.

Changing my Slack status isn't hard; remembering to do it is. So I built a bot to change it for me.
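A bot like this boils down to a single Slack Web API call, users.profile.set, made with a user token that has the users.profile:write scope. A minimal sketch of that call (the token and status values are placeholders, and the scheduling logic is left out):

import json
import urllib.request

def set_slack_status(token, text, emoji):
    body = json.dumps({"profile": {"status_text": text,
                                   "status_emoji": emoji}}).encode()
    request = urllib.request.Request(
        "https://slack.com/api/users.profile.set",
        data=body,
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json; charset=utf-8"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

set_slack_status("xoxp-your-token-here", "coding", ":computer:")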

I Bought Civil Tokens Today. I think.
September 19, 2018

I'm pretty sure I purchased Civil tokens today — literally buying into an experiment to put journalism on the blockchain.

After the sale, there were no tokens in my wallet and no indication my purchase was "on its way." Just a blank screen.

Unsettling, but I'm not actually worried.

New Kid on the Blockchain
September 14, 2018

UPDATED at 7:45 pm ET on 9/17/2018 with new information. See the end of the post for details.

It's my time to go crypto.

I've followed blockchain technology, principles, and trends for years without getting involved, but now I have a couple of reasons to get real: A new blockchain-based journalism project is about to launch, and my employer, Quartz, just launched a new cryptocurrency newsletter.

It also seemed perfect for my practice of beginning new things repeatedly.

The inspiration

Earlier this year, friends Manoush Zomorodi and Jen Poyant left their public radio jobs to join a new journalism … thing … called Civil. I had heard snippets about Civil, and started listening to Manoush's and Jen's podcast, ZigZag, part of which attempts to explain it.

After weeks of being pretty confused, I think I get it. Here's my attempt: Civil is a system designed to foster and reward quality journalism in a decentralized way, in contrast to platforms like Facebook and Google upon which so much journalism rests today.

The system’s backbone is the blockchain-based Civil token, abbreviated CVL. Holders of tokens can start news organizations in the system, challenge the membership of other news organizations in the system and/or cast votes when such challenges arise.

I have no idea if it will work. But I’m interested, and I’d rather participate than watch from the sidelines. So I’m willing to give it a whirl and okay with losing a little money in the process.

To participate, I just needed to buy some CVL ... though it turns out there's no just about it. But that's okay, too.

Beginning as a Practice
September 12, 2018

[I recently presented this post as a 5-minute Ignite talk.]

On a morning flight some years back, the pilot's cheerful voice came over the speakers.

"I'm glad you're flying with us. This is the first time I've flown a Boeing 747,” the captain said with a pause. “Today."

We all laughed, of course. Who’d want to be on a pilot’s maiden flight?!

Not us. We want experts. Society counts on them. Companies pay them better. Spectators watch them play. Vacationers rely on their forecasts. We attend educational institutions and work long hours to become them — the qualified, the trusted, the best.

Nobody likes being a beginner.

Except that I do.
