johnkeefe.net: posts by John Keefe

Taking time to build a triangle-grid clock (February 15, 2021)

I like what's possible with triangles.

Playing with rectangular blinky grids is super fun, and I've made a weather monitor and a pulse-oximeter with those.

But there's something additionally awesome about the pattern possibilities with triangle pixels.

So when I saw a Hackaday post about building a clock display with LED triangles, I was hooked.

The short story is that I made it! It now lights up my living room with dazzling animations and a funky time display.

The longer story involves perseverance made possible by my coronavirus lockdown.

Is it Monday? My Pi has the answer (February 7, 2021)

Keeping track of the days has been harder lately, it seems.

So I was excited to see a nifty blog post by Dave Gershgorn, where he described how he built a slick dashboard by attaching a screen to a Raspberry Pi computer. In fact, the Pi actually attaches to the back of the screen, out of sight.

I happen to be the kind of nerd who has a couple of Raspberry Pis around (in my case, some older Pi 3 model B's), so I ordered the recommended screen and followed Dave's great directions along with this ETA Prime video. If you're similarly inspired, just follow those guides.

If you're new to setting up a Pi, you might not realize that it doesn't come with an operating system. You need to install one on a micro SD card, and slide it into the Pi. I like to download the latest, recommended system from the Pi site, unzip it, and use the balena Etcher to flash the SD card.

One of the build steps that was unclear from the video was exactly how to attach the power lines to the Pi. For my Pi, the pins were these three:

Another tricky step was folding the ribbon cable so it fit nicely. Here's how I did it:

Then it was just a treat to see the tiny Pi desktop appear before my eyes:

I launched the Terminal application with the little cursor icon in the upper left corner, and in order to run the installation commands I increased the Terminal text size using Ctrl-Shift-+.

Once I got everything running, I installed MagicMirror, added a monthly calendar module, and played with the configuration settings to suit my needs. (I also toyed with the Javascript and the CSS because I couldn't help myself, but you certainly don't have to.)

Works like a charm.

Drawing arcs of circles (January 24, 2021)

Maybe you've seen them: Rainbows of circles representing members of the U.S. Senate, House of Representatives, or all of Congress.

I wanted to make such a visualization to show the number of members of Congress who've tested positive for coronavirus, and how those positive tests break down by party.

Setting aside the serious nature of the topic, it was a fun puzzle to solve.

The steps I took were:

  1. Figure out how many circles fit in each ring
  2. Calculate the positions for every circle
  3. Sort the positions to suit my needs
  4. Marry the positions to data

It turns out that once I established a) the number of circles in each ring and b) the size of those circles, I could figure out the rest with code.
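For example, here is a rough Python sketch of that geometry. It's only an illustration (the real project lives in an Observable notebook, which is JavaScript), and the ring counts, radii, and spacing below are made up:

  import math

  def hemicycle_positions(ring_counts, inner_radius=3.0, ring_gap=1.0):
      # Place circles along concentric half-rings, innermost ring first.
      # ring_counts is the number of circles in each ring, e.g. [18, 21, 24].
      positions = []
      for ring, count in enumerate(ring_counts):
          r = inner_radius + ring * ring_gap
          for i in range(count):
              # sweep each ring from 180 degrees (left) to 0 degrees (right)
              theta = math.pi * (1 - i / (count - 1)) if count > 1 else math.pi / 2
              positions.append({"ring": ring, "angle": theta,
                                "x": r * math.cos(theta), "y": r * math.sin(theta)})
      # sort by angle so seats can be filled left-to-right across all rings
      return sorted(positions, key=lambda p: -p["angle"])

  seats = hemicycle_positions([18, 21, 24, 27, 30])

Sorting by angle handles step 3; step 4 is then just zipping those positions with the member data.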

You can play with the final results, or take a look at the code yourself. But here's the explanation:

Printing a pumpkin (January 3, 2021)

There's something exciting about holding an object you previously only imagined — whether it's a freshly baked loaf, a tomato off a garden vine, or a printed plastic pumpkin.

I've had that feeling a lot lately, with a pandemic purchase of a 3D printer.

Rolling an object in your fingers that was previously just a digital file on the internet is ridiculously fun. It's even more rewarding if the thing conjured was something you — or your kid — dreamed up.

That's what happened with this 3D pumpkin. My daughter drew it late one night for an animation class assignment using the program she was learning, Cinema 4D.

And then we made it real.

Modeling the 2020 vote with Observable (June 24, 2020)

I've been interested in how voter turnout might affect the 2020 US election and I've wanted to play with Observable notebooks.

So I blended the two projects, and you can play with my live Observable notebook that does those calculations.

The result is an admittedly super-simplistic model of how things might turn out. But you can increase the percentage of Republican and Democratic voters nationwide and see what happens!

Notably, even if Democrats were able to boost turnout more than Republicans — say 107% vs 106% — Trump still wins.

As written, it doesn't consider nuances such as regional differences in voter turnout, swing voters, or faithless electors. (It does, however, account for the unique ways Maine and Nebraska divide their electoral votes.) But I learned a lot in the process ... and there's more to come.
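The notebook itself is written in JavaScript on Observable, but the core arithmetic looks roughly like this Python sketch. The handful of state figures here are only illustrations, and, unlike the real notebook, this sketch skips the Maine and Nebraska splits:

  # state: (2016 Democratic votes, 2016 Republican votes, electoral votes)
  STATE_DATA = {
      "WI": (1_382_536, 1_405_284, 10),
      "MI": (2_268_839, 2_279_543, 16),
      "PA": (2_926_441, 2_970_733, 20),
      # ... and so on for the other states and DC
  }

  def electoral_totals(dem_turnout=1.0, rep_turnout=1.0):
      # Scale each party's past vote by a turnout multiplier and award each
      # state's electors winner-take-all (the real notebook also handles the
      # Maine and Nebraska district splits; this sketch does not).
      dem_ev = rep_ev = 0
      for dem_votes, rep_votes, ev in STATE_DATA.values():
          if dem_votes * dem_turnout > rep_votes * rep_turnout:
              dem_ev += ev
          else:
              rep_ev += ev
      return {"dem": dem_ev, "rep": rep_ev}

  print(electoral_totals(dem_turnout=1.07, rep_turnout=1.06))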

All my calculations are visible in the Observable notebook itself, and the initial data prep is documented in a Github repository. For good measure, I put all the raw data in my Datasette library.

Minneapolis race and ethnicity data by neighborhood, served with Datasette (June 20, 2020)

Minneapolis police report stops and other incidents by neighborhood, so I decided to calculate the racial makeup of those neighborhoods to make some comparisons — along the lines of what I've already done for New York, Chicago, and Washington, DC.

This time, though, I'm using Datasette.

I've seen creator Simon Willison tweet about Datasette, and with some extra time on my hands I took a look. It's so impressive!

With Datasette, you can publish data online easily and efficiently (even for free), in a way that lets others explore the data themselves using SQL and feed it into data visualizations and apps. At scale.

How is this not in every newsroom?

(Simon, by the way, has offered to help any newsroom interested in using Datasette — an offer I hope to take him up on someday.)

Minneapolis neighborhoods

Once again, I've married US Census blocks with other municipal zones, this time the official neighborhood map of Minneapolis.

That data is now online, served up with Datasette.

And with some nifty SQL queries, bookmarked as simple links, I can list the race and ethnic makeup of every neighborhood by raw number.

Or by percentage.
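Datasette also returns every query as JSON, so the same numbers can feed a script or a chart. Here's a hedged Python sketch of hitting that JSON API; the URL, table name, and column names are placeholders rather than the ones in my published data:

  import json
  import urllib.parse
  import urllib.request

  # Placeholder instance and schema; swap in the real Datasette URL and columns.
  BASE_URL = "https://example-datasette-host/minneapolis.json"
  SQL = """
      select neighborhood,
             sum(total_population) as total,
             sum(black_population) as black,
             sum(white_population) as white
      from census_blocks
      group by neighborhood
      order by total desc
  """

  url = BASE_URL + "?" + urllib.parse.urlencode({"sql": SQL})
  with urllib.request.urlopen(url) as response:
      result = json.load(response)

  for row in result["rows"]:
      print(row)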

Race and ethnicity data by Washington DC police zones (June 8, 2020)

If you've got arrest or incident data from the Metropolitan Police in Washington DC, and that data is broken out by police district or public service area, you may want to compare it with the racial and ethnic makeup of the people living in those zones.

If so, this post is for you.

The US Census doesn't break out populations by police districts. But in DC and other large cities, census blocks serve as atomic units that usually do fall within police precinct boundaries. So by knowing which blocks are within which districts, you can calculate the populations. Unfortunately, block-level data is only available from the decennial count, so the latest data is from 2010.

This is my third spin at such data — I've also done New York City and Chicago.

Chicago race and ethnicity data by police district (June 7, 2020)

If you're trying to match Chicago police district data with the racial and ethnic makeup of those police districts, this post is for you.

The boundaries for police districts and precincts don't usually line up nicely with US census boundaries like census tracts or block groups. That makes it tough to compare incident and arrest data reported by precinct with the population of those precincts. 

But in bigger cities, census blocks are small enough to serve as atomic units that usually do fall within police precinct boundaries. So by knowing which blocks are within which districts, you can calculate the populations. Block-level data is only available from the decennial census count, so the latest data is from 2010. But it still should serve as a good measure — and a reason to fill out your 2020 census form online!

After doing these calculations for New York City, I put together Chicago's by request!

Sharing NYC Police Precinct Data (June 5, 2020)

Note: This post was originally published April 29, 2011. I've updated it completely with fresh info. Also just did the same type of calculation for Chicago.

Anyone doing population analysis by NYC police precinct might find this post helpful. 

Back in 2011, I wanted to compare the racial and ethnic breakdown of low-level marijuana arrests — reported by police precinct — with that of the general population. The population data, of course, is available from the US Census, but police precincts don't follow any nice, relatively large census boundary like a census tract. Instead, they generally follow streets and shorelines. Fortunately, census blocks (which, in New York, are often just city blocks) also follow streets. But there are almost 40,000 census blocks in the city.

So I used precinct maps from the city and US Census block maps to figure out which blocks are in which precincts. With that, the population data is just math.
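As an illustration of the idea (not the PostGIS workflow mentioned in the credits below), that block-to-precinct assignment can be sketched in a few lines of Python with geopandas; the file names and the precinct column name are placeholders:

  import geopandas as gpd

  # Placeholder file names; any 2010 census block shapefile plus the precinct
  # shapefile from the city's open data portal will do.
  blocks = gpd.read_file("nyc_census_blocks_2010.shp")
  precincts = gpd.read_file("NYC_Police_Precincts_2020.shp").to_crs(blocks.crs)

  # Use each block's representative point so every block lands in exactly one
  # precinct, sidestepping slivers along shared boundaries.
  points = blocks.copy()
  points["geometry"] = points.geometry.representative_point()

  # Older geopandas versions call the "predicate" argument "op".
  joined = gpd.sjoin(points, precincts[["precinct", "geometry"]],
                     how="left", predicate="within")
  block_to_precinct = joined[["geoid10", "precinct"]]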

The original stories, and the Google Fusion Tables where the data lived, are all gone to digital internet history. But I've recreated them here, and also updated the calculations — some precinct boundaries changed slightly, and those on Staten Island changed significantly with the addition of a fourth precinct on the island in 2013.

So here are the updated tables. The population data is from the 2010 census, the precincts are as they exist as I write this in June 2020.

Have at it.

• 2010pop_2020precincts.csv is the 2010 population breakdown within each precinct as they are drawn in June 2020. The column headings are cryptic, but follow the codes starting on this page, which is from this rather large Census Bureau PDF.

• precinct_block_key_2020.csv is the Rosetta Stone for this project. It has two columns: each block's identifier, which the census calls "geoid10," and the precinct in which that block sits. Note that some blocks aren't in any precinct, usually because they're actually in the water. 

• nyc_2010censusblocks_2020policeprecincts.csv contains base-level 2010 Census data for each block, married to the precinct for that block. For descriptions of the population columns, follow the codes starting on this page or see pages 6-21 in the Census Bureau PDF. (A quick aggregation sketch follows this list.)

• NYC_Police_Precincts_2020.zip is the official police precinct map shapefile, downloaded from the city's open data portal.
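And here's the aggregation sketch promised above: rolling the block-level file up to precinct totals with pandas. The population column names (and the "precinct" column) are stand-ins for the census codes, so substitute the ones you actually need:

  import pandas as pd

  blocks = pd.read_csv("nyc_2010censusblocks_2020policeprecincts.csv",
                       dtype={"geoid10": str})

  # "P003002" and "P003003" are stand-ins for whichever census code columns
  # you care about; look them up in the code list referenced above.
  precinct_totals = (blocks
                     .groupby("precinct")[["P003002", "P003003"]]
                     .sum()
                     .reset_index())

  precinct_totals.to_csv("precinct_totals.csv", index=False)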

Caveats

I did my best to be accurate in computing the intersection of blocks and precincts, even generating precinct maps and inspecting them visually. But errors may exist. You can check my math in the Jupyter notebooks I used.

Census blocks generally fall nicely within precinct outlines, but they don't always. In particular, three blocks significantly straddle two precincts. If you're doing very precise analysis, you'll want to account for them:

• Block 360470071002003: An area near the north end of the Gowanus Canal in Brooklyn. About half is in Precinct 76 and half in Precinct 78. Total people: 51.

• Block 360050096002000: Mainly industrial. Half in Precinct 76, half in Precinct 78. Total people: 5.

• Block 360610265003001: This block consists of five similar-sized apartment buildings near the George Washington Bridge. The northern buildings are in the 34th Precinct, with part of one building in the 33rd. I put the entire block, and the 687 people living there, in the 34th Precinct. Looks like roughly an 80/20 split.

Credits

I originally did this work while at WNYC, using PostgreSQL/PostGIS and QGIS. I was helped by the generosity and insights of Jeff Larson, Al Shaw, and Jonathan Soma.

If you find this information useful, drop me a note or post a comment below. I'd love to know about it.

Lockdown loaves (April 19, 2020)

It's become a coronavirus cliché, but for this week's #MakeEveryWeek I made sourdough bread. 

The twist: I made one loaf in the oven and one in a slow cooker.

It all started with sourdough starter, specifically this guide from Quartz colleague Tim McDonnell. This was a great project for my teens, incorporating chemistry, biology, and excellent smells.

Next was this incredibly fun and detailed sourdough recipe from Kitchn, which makes two loaves and relies on two oven-safe pots. Alas, our family has but one.

We do have a slow cooker, though. Could I make one of the loaves in that? The answer is yes!

Building a pulse oximeter (April 11, 2020)

At-home pulse oximeters, those fingertip devices doctors use to measure the oxygen saturation in your blood, have been selling out everywhere thanks to the Covid-19 pandemic.

But as my Quartz colleague Amrita Khalid points out in this great article, most people don't need 'em. If your oxygen level is worryingly low, you'll know — you don't need a machine to tell you. Folks with some existing conditions, however, can use a pulse oximeter to help a remote doctor monitor their vitals or to adjust supplementary oxygen devices.

When Khalid mentioned she was working on the story, it reminded me of the DIY "pulse ox" sensor Sparkfun sells. It, like other pulse oximeters, shines light into the skin and makes measurements based on how that light is absorbed. I've built heartbeat-driven projects before and had been exploring new ways to monitor pulse rates. So I got one.

Sparkfun warns in red letters that "this device is not intended to diagnose or treat any conditions," and I offer the same caution if you're tempted to build one. The process wasn't hard at all. I got it running quickly ... and then added an LED display for fun and flourish.

Here's how I made it, and the code, too.

Work-from-home "on air" light (April 2, 2020)

I'm incredibly lucky to be both healthy and able to work from home during this coronavirus crisis. That means I spend large chunks of my day on video calls.

As a courtesy to my family, all of whom are also working and schooling from home, I've tried to warn them when they risk being broadcast to my colleagues. 

Now I have a fun "on air" light to help! And I've put the code online so you can make one, too.

DIY aquarium lights (March 23, 2020)

Buy a new aquarium, and you often get hood lights that are ... meh. They're good enough, but not great.

There are plenty of high-quality replacement lights out there, but none of them had the nice, low profile of the plastic covers that came with this tank. So I decided to spruce up the existing illumination with some DIY lights — and even make them programmable with an Arduino.

That was more than a year ago. Now in coronavirus isolation, I finally made it happen.

Here's how.

Amazon Aurora MySQL + Python (March 20, 2020)

Ok, so this isn't the sexiest topic, but if you're completely stuck the way I was several times today, maybe you'll be happy you found this post.

Today I needed to spin up a database I want available to students at the Newmark Graduate School of Journalism and also to colleagues at Quartz. I also want to connect to the database from my home and from the school using Python.

Since we use Amazon's web services, and I wanted to show the students SQL, I decided to give the AWS Aurora system a whirl — specifically the MySQL-compatible version.

As with many things AWS, it was a bit of a slog to get set up ... and I've decided to jot it all down while it's fresh so I can remember how the heck I did it (and show my students).

After a few tries, here's how I finally got set up:
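The AWS console steps themselves aren't sketched below, but once a cluster exists and its security group allows your IP on port 3306, connecting from Python with the pymysql package looks roughly like this (the host, credentials, and database name are placeholders):

  import pymysql  # pip install pymysql

  connection = pymysql.connect(
      host="my-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com",  # placeholder endpoint
      user="admin",
      password="REPLACE_ME",
      database="studentdb",
      port=3306,
  )

  try:
      with connection.cursor() as cursor:
          cursor.execute("SELECT VERSION()")
          print(cursor.fetchone())
  finally:
      connection.close()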

Machine learning in my pajamas (November 21, 2019)

Tonight I gave a presentation at Newsgeist about how I did machine learning in my pajamas — in my pajamas.

I promised the gathered crowd I'd post how they, too, can make their own bike-detector, so here goes:

  1. Follow the instructions here.
  2. When you get to the part about picking a notebook, use this one: notebooks/ee-searching-videos-with-fastai.ipynb

Then follow the steps to work through the code! Have fun!


AI classes for journalists (August 6, 2019)

(Promo video for the Knight Center course.)

If you're a journalist, you've probably done a story or two about AI. But did you know you can use machine learning, too?

I'll show you! 

While the classes below have passed, the videos and accompanying code for the Knight Center course are now available free online.

Work at your own pace and enjoy. It could help with your next investigation, and the experience will help you report about machine learning, too.


Past classes:


September 13, 2019 • 11 am •  InterContinental New Orleans • Treme / 2nd Floor 

Hands-on Introduction: Machine Learning for Journalists at ONA

If you're going to ONA, get a practical, hands-on introduction to using machine learning to help pore through documents, images, and data records. This 90-minute training session by members of the Quartz AI Studio will give you the chance to use third-party tools and learn how to make custom machine-learning models. We'll walk you through pre-written code you can take home to your newsroom.


October 26 & 27, 2019 • Newmark Graduate School of Journalism • New York City

Weekend Bootcamp: Practical Machine Learning for Journalists

This will be a small-group, guided bootcamp where we'll spend the weekend working through practical machine-learning solutions for journalists. You'll learn to recognize cases when machine learning might help solve such reporting problems, to use existing and custom-made tools to tackle real-world issues, and to identify and avoid bias and error in your work. Students will get personalized instruction and hands-on experience for using these methods on any beat.


November 18 to December 15 • Knight Center for Journalism in the Americas • Online • $95

4-Week Online Course: Hands-on Machine Learning Solutions for Journalists

In this online video course, you will first learn how to use some off-the-shelf systems to get fast answers to basic questions: What’s in all of these images? What are these documents about? Then we’ll move to building custom machine learning models to help with a particular project, such as sorting documents into particular piles. Our work will be done with pre-written code, so you always start with a working base. You’ll then learn more by modifying it.

Updated 21 April 2020

Detecting feature importance in fast.ai neural networks (June 24, 2019)

I'm working on a new neural network that tries to predict an outcome – true or false – based on 65 different variables in a table.

The tabular model I made with fast.ai is somewhat accurate at making those predictions (it's a small data set of just 5,000 rows). But to me even more interesting is determining which of the 65 features matter most. 

I knew calculating this "feature importance" was possible with random forests, but could I do it with neural nets?

It turns out I can. The trick is, essentially, to try the model without each feature. The degree to which the model gets worse with that feature missing indicates its importance – or lack of importance.
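One cheap way to run that trial is to shuffle a single column instead of fully retraining without it. Here's a generic Python sketch of the idea; the model.predict call and the pandas DataFrame layout are assumptions, not fast.ai's API, and the actual code is in the Gist mentioned below:

  import numpy as np

  def permutation_importance(model, X, y, score_fn):
      # Baseline score, then re-score with each column shuffled in turn.
      # The bigger the drop, the more the model relied on that feature.
      baseline = score_fn(y, model.predict(X))
      drops = {}
      for col in X.columns:
          shuffled = X.copy()
          shuffled[col] = np.random.permutation(shuffled[col].values)
          drops[col] = baseline - score_fn(y, model.predict(shuffled))
      return sorted(drops.items(), key=lambda item: item[1], reverse=True)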

This blog post describes how to run this test, and this adaptation worked perfectly in my fast.ai notebook. Here's the code in a Gist:

Unfortunately, because my project uses internal Quartz analytics, I can't share the data or the charts I'm playing with. But with the code above, I can now "see into" the neural network and get cool insights about what's going on.


Converting videos to images for machine learning (June 15, 2019)

This week I kept to my summer of training plan; the model-building I did, however, was for a Quartz project we're not ready to share. But! I learned something super useful in the process: how to quickly turn videos into many still images.

For our latest project, I'm training a model to identify specific objects available to me – much like how I trained a model to identify items in the office.

The fastest way to get lots of images of an object is to take a video of it. And a quick way to turn that video into images – called an "image sequence" – is ffmpeg. It can convert many video formats, like .mp4, .mov, and .avi, into lots of different image formats, such as .jpg and .png.

There's plenty more detail in the ffmpeg docs, but here's what I did that worked so quickly on my Mac:

brew install ffmpeg

I use Homebrew to put things on my Mac, so this went pretty quickly. I had to update my Xcode command line tools, but Homebrew is super helpful and told me exactly what I needed to do.

Next, I did this from the Terminal:

ffmpeg -i IMG_1019.MOV -r 15 coolname%04d.jpg

Here's what's going on:

  • -i means the next thing is the input file
  • IMG_1019.MOV is the movie I Airdropped from my phone to my laptop
  • -r is the flag for the sample rate.
  • 15 is the rate. I wanted every other image, so 15 frames every second. 1 would be every second; 0.25 every 4th second.
  • coolname is just a prefix I picked for each image
  • %04d means each frame gets a zero-padded sequence number, starting with 0001 and going to 9999 – so my image files are named coolname0001.jpg, coolname0002.jpg, coolname0003.jpg, etc.
  • .jpg is the image format I want. If I put .png I got PNGs instead.

In mere moments I had dozens of JPG files I could use for training. And that's pretty great.

Artisanal AI: Detecting objects in our office (June 9, 2019)

Off-the-shelf services like Google Vision are trained to identify objects in general, like car, vehicle, and road in the image below.

But many of the journalism projects we're encountering in the Quartz AI Studio benefit from custom-built models that identify very specific items. I recently heard Meredith Broussard call this kind of work "artisanal AI," which cracked me up and also fits nicely.

So as an experiment, and as part of my summer training program, I trained an artisanal model to distinguish between the three objects at the top of this page from the Quartz offices: a Bevi water dispenser, a coffee urn, and a Quartz Creative arcade game (don't you wish you had one of those?!)

I also made a little website where my colleagues and I can test the model. You can, too — though you'll have to come visit to get the best experience!

The results

The model is 100% accurate at identifying the images I fed it — which probably is not all that surprising. It's based on an existing model called resnet34, which was trained on the ImageNet data set to distinguish between thousands of things. Using a technique called transfer learning, I taught that base model to use all of its existing power to distinguish between just three objects.
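The training itself used fast.ai, but the same transfer-learning move can be sketched in plain PyTorch/torchvision. This is an illustration, not the original fast.ai code, and it assumes a train/ folder with one subfolder of images per object:

  import torch
  import torch.nn as nn
  from torchvision import datasets, models, transforms

  tfms = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
  data = datasets.ImageFolder("train", transform=tfms)   # e.g. train/bevi, train/urn, train/arcade
  loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

  model = models.resnet34(pretrained=True)        # ImageNet weights
  for param in model.parameters():
      param.requires_grad = False                 # freeze the pretrained backbone
  model.fc = nn.Linear(model.fc.in_features, 3)   # new head: three office objects

  optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
  loss_fn = nn.CrossEntropyLoss()
  for epoch in range(4):
      for images, labels in loader:
          optimizer.zero_grad()
          loss = loss_fn(model(images), labels)
          loss.backward()
          optimizer.step()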

Making music with my arms (June 6, 2019)

The brilliant Imogen Heap performed in New York a few weeks ago, and I got to experience live how she crafts sounds with her arms and hands.

It was a great night of beautiful music and technology, both.

One mystery I couldn't solve from the audience was how her computer detected the position of her arms. Unlike in her early videos, I didn't see something akin to a Kinect on stage.

Now I think maybe I know.

That's because this week I took a workshop from Hannah Davis on using the ml5.js coding library, which touts itself as "friendly machine learning for the web," letting me use machine learning models in a browser. The class was part of the art+tech Eyeo Festival in Minneapolis.

One of the models Davis demonstrated was PoseNet (also here), which estimates the position of various body parts — elbows, wrists, knees, etc — in an image or video. I'd never seen PoseNet work before, let alone in JavaScript and in a browser.


Inspired by Heap, I set out to quickly code a music controller based on my arm movements, as seen by PoseNet through my laptop camera.

Try it yourself

It's pretty rough, but you can try it here. Just let the site use your camera, toggle the sound on, and try controlling the pitch by moving your right hand up and down in the camera frame!

I put it on Glitch, which means you can remix it. Or take a peek at the code on Github.

There are lots more ml5.js examples you can try. Just put the index.html, script.js, and models (if there's such a folder) someplace on the web where the files can be hosted. Or put them on your local machine and run a simple "localhost" server.
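If you have Python installed, its built-in web server is an easy way to do that. From the project folder, run:

  python3 -m http.server 8000

Then open http://localhost:8000 in your browser.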

My summer training program (June 6, 2019)

This summer is all about training. Yes, I'm trying to run regularly, but I'm actually talking about training machine-learning algorithms.

I've been trying to learn machine learning for about three years — only to feel hopelessly overwhelmed. It was as though someone said, "With a chicken, a cow, and a field of wheat, you can make a lovely soufflé!"  

I took online classes, read books, and tried to modify sample code. But unless I devoted myself to the computer version of animal husbandry, it seemed, I was stuck.

Then someone at work mentioned fast.ai. It's a machine-learning library for Python that got me to the eggs-milk-flour stage, and provided some great starter recipes. Thanks to free guides and videos, I was soon baking algorithms that actually worked.

Now I want to get good, and experiment with different flavors and styles.

So this summer, I'm setting out to train and use new machine learning models, at least one each week. I'll try several techniques, use different kinds of data, and solve a variety of problems. It's a little like my Make Every Week project, providing constraints to inspire and motivate me.

I'll share what I learn, both here and at qz.ai where the Quartz AI Studio is helping journalists use machine learning, and I get to practice machine learning at work. 

In the fall I'll be teaching a few workshops and classes that will incorporate, I hope, some of the things I've learned this summer. If you'd like to hear about those once they're announced, drop your email address into the signup box on this page and I'll keep you posted.

Time to train!

You can't make your NY Times subscription online-only online (March 21, 2019)

Our family believes in paying for good journalism, so we have a few subscriptions – including the New York Times.

When we signed up, we got online access along with physical papers delivered on the weekend. But we almost never read the paper version anymore, and thought it a waste. So today I went online to change my subscription to all-digital.

But you can't.

You must actually call the New York Times and speak to someone. I had to call two phone numbers and speak to two robots and two people. Altogether, it took me 15 minutes. Not forever, but the user experience was a C-minus at best.

Here's what I did:

Beginning as a practice: The movie (December 24, 2018)

Well, "the 5 minute video" at least.

Here is an Ignite talk at Newsgeist 2018 this past autumn in which I make the case for beginning repeatedly and intentionally, and even making it a practice.

By the way, the AI Studio, which I ask the audience to keep secret, has since been announced.

A bot now updates my Slack status (November 10, 2018)

One of my closest collaborators is a teammate far away — I'm in New York and Emily Withrow is in Chicago.

We stay connected chatting on Slack. But recently Emily asked if I could regularly update my Slack status to indicate what I was doing at the moment, like coding, meeting, eating. It's the kind of thing colleagues in New York know just by glancing toward my desk.

Changing my Slack status isn't hard; remembering to do it is. So I built a bot to change it for me.
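The bot's decision logic isn't shown here, but the status change itself comes down to one Slack Web API call, users.profile.set. A minimal Python sketch, assuming a user token with the users.profile:write scope is stored in SLACK_USER_TOKEN:

  import os
  import requests

  def set_slack_status(text, emoji):
      response = requests.post(
          "https://slack.com/api/users.profile.set",
          headers={"Authorization": "Bearer " + os.environ["SLACK_USER_TOKEN"]},
          json={"profile": {"status_text": text, "status_emoji": emoji}},
      )
      data = response.json()
      # Slack returns HTTP 200 even on failures; the "ok" field has the real answer.
      if not data.get("ok"):
          raise RuntimeError(data.get("error", "unknown Slack API error"))

  set_slack_status("coding", ":computer:")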

I Bought Civil Tokens Today. I think. (September 19, 2018)

I'm pretty sure I purchased Civil tokens today — literally buying into an experiment to put journalism on the blockchain.

After the sale, there were no tokens in my wallet and no indication my purchase was "on its way." Just a blank screen.

Unsettling, but I'm not actually worried.

New Kid on the Blockchain (September 14, 2018)

UPDATED at 7:45 pm ET on 9/17/2018 with new information. See the end of the post for details.

It's my time to go crypto.

I've followed blockchain technology, principles and trends for years without getting involved, but now I have a couple of reasons to get real: A new blockchain-based journalism project is about to launch, and my employer, Quartz, just launched a new cryptocurrency newsletter.

It also seemed perfect for my practice of beginning new things repeatedly.

The inspiration

Earlier this year, friends Manoush Zomorodi and Jen Poyant left their public radio jobs to join a new journalism … thing … called Civil. I had heard snippets about Civil, and started listening to Manoush's and Jen's podcast, ZigZag, part of which attempts to explain it.

After weeks of being pretty confused, I think I get it. Here's my attempt: Civil is a system designed to foster and reward quality journalism in a decentralized way, in contrast to platforms like Facebook and Google upon which so much journalism rests today.

The system’s backbone is the blockchain-based Civil token, abbreviated CVL. Holders of tokens can start news organizations in the system, challenge the membership of other news organizations in the system and/or cast votes when such challenges arise.

I have no idea if it will work. But I’m interested, and I’d rather participate than watch from the sidelines. So I’m willing to give it a whirl and okay with losing a little money in the process.

To participate, I just needed to buy some CVL ... though it turns out there's no just about it. But that's okay, too.

Beginning as a Practice (September 12, 2018)

[I recently presented this post as a 5-minute Ignite talk.]

On a morning flight some years back, the pilot's cheerful voice came over the speakers.

"I'm glad you're flying with us. This is the first time I've flown a Boeing 747,” the captain said with a pause. “Today."

We all laughed, of course. Who’d want to be on a pilot’s maiden flight?!

Not us. We want experts. Society counts on them. Companies pay them better. Spectators watch them play. Vacationers rely on their forecasts. We attend educational institutions and work long hours to become them — the qualified, the trusted, the best.

Nobody likes being a beginner.

Except that I do.

Eyeo Festival Videos: Check them out (September 11, 2018)

The Eyeo Festival just posted all of the videos from the 2018 festival, which is such a great service. Above is my talk on how we in the Quartz Bot Studio tell stories with conversational interfaces.

The festival had so many great speakers, and it's literally impossible to see them all live.

Here are some of my favorites I did see at the time, and highly recommend:

Check them out! 


Giving Better Weather to Alexa (April 1, 2018)

Nearly every day, someone in our family asks Alexa for the day's weather. The default response is fine -- high temp, low temp, sun or rain.

But given our three nor'easters, intense wind chills, and high-wind days, that wasn't enough. How much rain? When will it start? How much snow? How cold will it feel?

We needed something better.

Fortunately the US National Weather Service does a fantastic job writing up little descriptions of what's in store for every spot in the country. It's been my go-to source for years. Could I get Alexa to say that?

Short answer: Yes, I could. And now you can add "Better Weather" to your Alexa, too. For free. (In the US only, for now.)

For a longer description of how I made it, read on. 
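The skill's own code isn't shown here, but the heart of the idea is fetching the Weather Service's written forecast for a point from api.weather.gov. A rough Python sketch, with placeholder coordinates and User-Agent:

  import requests

  HEADERS = {"User-Agent": "better-weather-sketch (you@example.com)"}  # NWS asks for a contact

  def spoken_forecast(lat, lon):
      # Look up the forecast URL for this point, then read the first period's text.
      point = requests.get(
          "https://api.weather.gov/points/{},{}".format(lat, lon), headers=HEADERS
      ).json()
      forecast_url = point["properties"]["forecast"]
      periods = requests.get(forecast_url, headers=HEADERS).json()["properties"]["periods"]
      today = periods[0]
      return "{}: {}".format(today["name"], today["detailedForecast"])

  print(spoken_forecast(40.7831, -73.9712))  # placeholder point in Manhattan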

Chibimojis: A dad and his daughter walk into the App Store (July 26, 2017)

We made adorable manga faces you can add to iPhone messages!

They're iMessage "stickers," a fairly obscure feature of Apple's texting system that, it turns out, are pretty easy to make – and make public.
