If you're a journalist, you've probably done a story or two about AI. But did you know you can use machine learning, too?
I'll show you!
I'm teaching, or helping to teach, several upcoming workshops. Take a peek and see if any fit for you. It could help with your next investigation, and the experience will help you report about machine learning, too.
If you have questions, feel free to reach out to me at john [at] johnkeefe.net.
November 18 to December 15 • Knight Center for Journalism in the Americas • Online • $95
In this online video course, you will first learn how to use some off-the-shelf systems to get fast answers to basic questions: What’s in all of these images? What are these documents about? Then we’ll move to building custom machine learning models to help with a particular project, such as sorting documents into particular piles. Our work will be done with pre-written code, so you always start with a working base. You’ll then learn more by modifying it.
If you're going to ONA, get a practical, hands-on introduction to using machine learning to help pore through documents, images, and data records. This 90-minute training session by members of the Quartz AI Studio will give you the chance to use third-party tools and learn how to make custom machine-learning models. We'll walk you through pre-written code you can take home to your newsroom.
October 26 & 27, 2019 • Newmark Graduate School of Journalism • New York City
This will be a small-group, guided bootcamp where we'll spend the weekend working through practical machine-learning solutions for journalists. You'll learn to recognize cases when machine learning might help solve such reporting problems, to use existing and custom-made tools to tackle real-world issues, and to identify and avoid bias and error in your work. Students will get personalized instruction and hands-on experience for using these methods on any beat.
I'm working on a new neural network that tries to predict an outcome – true or false – based on 65 different variables in a table.
The tabular model I made with fast.ai is somewhat accurate at making those predictions (it's a small data set of just 5,000 rows). But even more interesting to me is determining which of the 65 features matter most.
I knew calculating this "feature importance" was possible with random forests, but could I do it with neural nets?
It turns out I can. The trick is, essentially, to try the model without each feature. The degree to which the model gets worse with that feature missing indicates its importance – or lack of importance.
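The same trick can be sketched in a few lines of plain Python: shuffle one column at a time (a common stand-in for removing it) and measure how much a scoring function drops. Everything here – the scoring function, the data, and the column count – is a toy stand-in, not my actual Quartz model:

```python
import numpy as np

def permutation_importance(score_fn, X, y, n_rounds=5, seed=0):
    """Rank features by how much shuffling each column hurts the score.

    score_fn(X, y) returns a number where higher is better (e.g. accuracy).
    Shuffling a column is a common stand-in for removing the feature.
    """
    rng = np.random.default_rng(seed)
    baseline = score_fn(X, y)
    importances = np.zeros(X.shape[1])
    for col in range(X.shape[1]):
        drops = []
        for _ in range(n_rounds):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, col])  # break this column's link to y
            drops.append(baseline - score_fn(X_shuffled, y))
        importances[col] = np.mean(drops)
    return importances

# Toy demo: the labels depend only on column 0, so it should dominate.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
score = lambda X, y: np.mean((X[:, 0] > 0).astype(int) == y)  # a "perfect" model
imp = permutation_importance(score, X, y)
print(imp.argmax())  # column 0 should rank highest
```

A feature whose shuffled score barely moves from the baseline contributes little; a big drop marks an important one.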
Unfortunately, because my project uses internal Quartz analytics, I can't share the data or the charts I'm playing with. But with the code above, I can now "see into" the neural network and get cool insights about what's going on.
This week I kept to my summer of training plan, but the model-building I did was for a Quartz project we're not ready to share. But! I learned something super useful in the process: how to quickly turn videos into many still images.
The fastest way to get lots of images of an object is to take a video of it. And a quick way to turn that video into images – called an "image sequence" – is ffmpeg. It seems to convert from many formats, like .mp4, .mov, and .avi, to lots of different image formats, such as .jpg and .png.
There's plenty more detail in the ffmpeg docs, but here's what I did that worked so quickly on my Mac:
brew install ffmpeg
I use Homebrew to put things on my Mac, so this went pretty quickly. I had to update my Xcode command line tools, but Homebrew is super helpful and told me exactly what I needed to do.
Next, I did this from the Terminal:
ffmpeg -i IMG_1019.MOV -r 15 coolname%04d.jpg
Here's what's going on:
-i means the next thing is the input file
IMG_1019.MOV is the movie I Airdropped from my phone to my laptop
-r is the flag for the sample rate, in frames per second.
15 is the rate I used. I wanted every other image, and my video plays at 30 frames per second, so 15 gave me every second frame. A rate of 1 would grab one frame every second; 0.25, one frame every 4 seconds.
coolname is just a prefix I picked for each image
%04d means each frame gets a zero-padded sequence number, starting with 0001 and going up to 9999 – so my image files are named coolname0001.jpg, coolname0002.jpg, coolname0003.jpg, etc.
.jpg is the image format I want. If I put .png instead, I get PNGs.
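Putting the rate and the naming pattern together, here's a quick sanity check of the arithmetic (the 10-second duration and the prefix are just examples):

```python
def expected_frames(duration_seconds, rate_fps):
    """Roughly how many images the -r flag will produce for a clip."""
    return int(duration_seconds * rate_fps)

def frame_name(prefix, n):
    """Mimic the coolname%04d.jpg zero-padded naming pattern."""
    return f"{prefix}{n:04d}.jpg"

print(expected_frames(10, 15))    # a 10-second clip at -r 15 -> 150 images
print(frame_name("coolname", 3))  # -> coolname0003.jpg
```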
In mere moments I had dozens of JPG files I could use for training. And that's pretty great.
Off-the-shelf services like Google's Vision API are trained to identify objects in general, like car, vehicle, and road in the image below.
But many of the journalism projects we're encountering in the Quartz AI Studio benefit from custom-built models that identify very specific items. I recently heard Meredith Broussard call this kind of work "artisanal AI," which cracked me up and also fits nicely.
So as an experiment, and as part of my summer training program, I trained an artisanal model to distinguish between the three objects at the top of this page, from the Quartz offices: a Bevi water dispenser, a coffee urn, and a Quartz Creative arcade game (don't you wish you had one of those?!)
I also made a little website where my colleagues and I can test the model. You can, too — though you'll have to come visit to get the best experience!
The model is 100% accurate at identifying the images I fed it — which probably is not all that surprising. It's based on an existing model called resnet34, which was trained on the ImageNet data set to distinguish between thousands of things. Using a technique called transfer learning, I taught that base model to use all of its existing power to distinguish between just three objects.
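To make the idea concrete, here's a toy numpy sketch of the transfer-learning mechanics: a frozen "backbone" whose weights never change, plus a small trainable head. The random projection stands in for resnet34's pretrained layers, and the labels are constructed so the demo can succeed – none of this is the actual fast.ai code I used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a FIXED feature extractor.
# In real transfer learning this would be resnet34's convolutional
# stack; here it's a frozen random projection, just to show the mechanics.
W_backbone = rng.normal(size=(20, 8)) * 0.5

def extract_features(X):
    return np.tanh(X @ W_backbone)  # frozen: never updated

# Toy "images" and labels. The labels are made learnable from the
# frozen features, purely so the demo can succeed.
X = rng.normal(size=(200, 20))
true_w = rng.normal(size=8)
y = (extract_features(X) @ true_w > 0).astype(float)

# Train ONLY a small logistic-regression "head" on top of the
# frozen features: that is the transfer-learning step.
feats = extract_features(X)
w, b = np.zeros(8), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(feats @ w + b)))  # sigmoid
    grad = p - y                            # gradient of logistic loss
    w -= 0.5 * feats.T @ grad / len(y)
    b -= 0.5 * grad.mean()

accuracy = ((feats @ w + b > 0) == (y == 1)).mean()
print(f"head accuracy: {accuracy:.2f}")
```

The point is the division of labor: the backbone's weights are never touched, and all the learning happens in the tiny head, which is why training on just a few photos of three objects can work.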
It was a great night of beautiful music and technology, both.
One mystery I couldn't solve from the audience was how her computer detected the position of her arms. Unlike in her early videos, I didn't see something akin to a Kinect on stage.
Now I think maybe I know.
That's because this week I took a workshop from Hannah Davis on using the ml5.js coding library, which touts itself as "friendly machine learning for the web," letting me use machine learning models in a browser. The class was part of the art+tech Eyeo Festival in Minneapolis.
Inspired by Heap, I set out to quickly code a music controller based on my arm movements, as seen by PoseNet through my laptop camera.
Try it yourself
It's pretty rough, but you can try it here. Just let the site use your camera, toggle the sound on, and try controlling the pitch by moving your right hand up and down in the camera frame!
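Stripped of the browser, PoseNet, and sound pieces, the heart of the controller is just a mapping from vertical hand position to pitch. Here's a sketch in Python, with the frame size and note range chosen arbitrarily for illustration (they're not the values my demo uses):

```python
def hand_y_to_pitch(y, frame_height=480, low_hz=220.0, high_hz=880.0):
    """Map a hand's vertical pixel position to a frequency in Hz.

    PoseNet-style coordinates put y=0 at the TOP of the frame,
    so a higher hand should mean a higher pitch.
    """
    y = min(max(y, 0), frame_height)     # clamp to the frame
    fraction_up = 1 - y / frame_height   # 0 at bottom, 1 at top
    # Interpolate geometrically so equal movements feel like equal
    # musical intervals (low_hz..high_hz spans two octaves here).
    return low_hz * (high_hz / low_hz) ** fraction_up

print(hand_y_to_pitch(480))  # hand at the bottom -> 220.0 Hz
print(hand_y_to_pitch(0))    # hand at the top    -> 880.0 Hz
```

In the live version, PoseNet supplies the y coordinate of the wrist keypoint on every frame, and the result feeds an oscillator's frequency.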
There are lots more ml5.js examples you can try. Just put the index.html, script.js, and models folder (if there is one) somewhere on the web where the files can be served. Or put them on your local machine and run a simple "localhost" server.
This summer is all about training. Yes, I'm trying to run regularly, but I'm actually talking about training machine-learning algorithms.
I've been trying to learn machine learning for about three years — only to feel hopelessly overwhelmed. It was as though someone said, "With a chicken, a cow, and a field of wheat, you can make a lovely soufflé!"
I took online classes, read books, and tried to modify sample code. But unless I devoted myself to the computer version of animal husbandry, it seemed, I was stuck.
Then someone at work mentioned fast.ai. It's a machine-learning library for Python that got me to the eggs-milk-flour stage, and provided some great starter recipes. Thanks to free guides and videos, I was soon baking algorithms that actually worked.
Now I want to get good, and experiment with different flavors and styles.
So this summer, I'm setting out to train and use new machine learning models, at least one each week. I'll try several techniques, use different kinds of data, and solve a variety of problems. It's a little like my Make Every Week project, providing constraints to inspire and motivate me.
I'll share what I learn, both here and at qz.ai where the Quartz AI Studio is helping journalists use machine learning, and I get to practice machine learning at work.
In the fall I'll be teaching a few workshops and classes that will incorporate, I hope, some of the things I've learned this summer. If you'd like to hear about those once they're announced, drop your email address into the signup box on this page and I'll keep you posted.
Our family believes in paying for good journalism, so we have a few subscriptions – including the New York Times.
When we signed up, we got online access along with physical papers delivered on the weekend. But we almost never read the paper version anymore, and thought it a waste. So today I went online to change my subscription to all-digital.
But you can't.
You must actually call the New York Times and speak to someone. I had to call two phone numbers and talk to two robots and two people. Altogether, it took me 15 minutes. Not forever, but the user experience was a C-minus at best.
One of my closest collaborators is a teammate far away — I'm in New York and Emily Withrow is in Chicago.
We stay connected chatting on Slack. But recently Emily asked if I could regularly update my Slack status to indicate what I was doing at the moment, like coding, meeting, eating. It's the kind of thing colleagues in New York know just by glancing toward my desk.
Changing my Slack status isn't hard; remembering to do it is. So I built a bot to change it for me.
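The core of such a bot is one call to Slack's users.profile.set Web API method. Here's a minimal Python sketch – the token handling and the status values are placeholders, not my bot's actual code:

```python
import json
import urllib.request

def build_status(text, emoji, expires=0):
    """Build the profile payload Slack's users.profile.set method expects."""
    return {"profile": {"status_text": text,
                        "status_emoji": emoji,
                        "status_expiration": expires}}

def set_slack_status(token, text, emoji):
    """POST the status update; the token needs the users.profile:write scope."""
    req = urllib.request.Request(
        "https://slack.com/api/users.profile.set",
        data=json.dumps(build_status(text, emoji)).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but don't send) a payload, e.g. for a "coding" status:
payload = build_status("Coding", ":computer:")
print(payload["profile"]["status_text"])
```

From there it's a matter of triggering set_slack_status on whatever schedule or signal you like, such as calendar events or a keyboard shortcut.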