Analytics, Data Visualization

The MIT Big Data Challenge: Visualizing Four Million Taxi Rides

The other day, I was showing a colleague how to use Python and Jupyter notebooks for some quick-and-dirty data visualization. It reminded me of some work I’d done while competing in the MIT Big Data Challenge. I meant to blog about it at the time but never got around to it. It’s never too late, though, so I’m starting with this post.

The Visualization

The main point of this post is the following animated visualization, which overlays a heatmap of the pickup locations of around 4.2 million taxi rides, over a period of about five months in 2012, on top of a map of downtown Boston. The interesting thing about the heatmap of pickup locations is that it reveals the streets of Boston and highlights popular hotspots. Overlaying it on the actual satellite map of Boston shows this more clearly:

Those familiar with Boston will see that main streets like Massachusetts Ave, Boylston St and Broadway are starkly delineated. Hotspots like Fenway Park, Prudential Center, the Waterfront and Mass General Hospital show up quite clearly. The three most popular hotspots appear to be Logan Airport, South Station, and Back Bay Station, which is very much to be expected. I remember several occasions when I’ve taken a cab from these places after a night out, or when returning home after a flight or an Amtrak ride, particularly on a freezing winter day!

The Challenge

Even though the challenge is a few years old (winter of 2013-2014), the context is still very much relevant today, perhaps even more so. The main goal of the challenge was to predict taxi demand in downtown Boston. Specifically, contestants were required to build a model that predicted the number of pickups within a certain radius of a location, given (i) the latitude and longitude of the location, (ii) a day, and (iii) a time period (typically spanning a few hours) in that day. In addition, information about weather and events around the city was also available. Such a model has obvious uses: drivers on services like Uber and Lyft could use it to tell where future demand is likely to be, and use that information to plan their driving and optimize their earnings. The services themselves could also use it to anticipate times and locations of high demand, and to meet that demand dynamically by incentivizing drivers well in advance through surge (err, I mean dynamic) pricing, leaving enough time to get drivers to the location by the time demand starts to pick up.

I was thrilled when, after a lot of perseverance, I finally managed to get on the leaderboard. My self-congratulations were brief, though, as I was soon blown away. Which was not surprising in the least; this challenge was organized at MIT’s Computer Science and Artificial Intelligence Lab, after all. Enough said.

Getting on the leaderboard, albeit briefly, was fun. But winning was never the goal. Rather, my goals for competing in the challenge were threefold. First, I wanted to get a deeper understanding of data science workflows and processes through a relatively complex project. Second, I wanted to expand my machine learning skills. Finally, I wanted to try using Python (and in particular scikit-learn). So far I’d only used R for building predictive models (recognizing handwritten digits and predicting the number of “useful” votes a Yelp review will receive). But I had recently learned Python and had used it extensively during my summer internship at Google, building analytical models for Google Express and Hotel Ads. One of the main drivers was that I was about to start a product management internship at a stealth-mode visual analytics startup called DataPad, founded by Wes McKinney and Chang She, the creators of Pandas (the super popular Python library that I had used extensively in my work at Google), and I wanted to be prepared with some product ideas.

In the end, the process was extremely educational and led to many insights in several areas of data science and machine learning. I even distilled some of the work for a Python for Data Science Bootcamp, which I conducted for the MIT Sloan Data Analytics Club along with my co-founder and co-president. I’ll write about some of the learnings and insights in future posts, but for this one I’d like to talk briefly about the heatmap visualization from the start of this post.

The Making Of

Inevitably, the first thing one does when confronted with a new dataset, particularly for a predictive challenge such as this, is try to understand the “shape” of the data. To help with this, it’s fairly typical to create visualizations using one or more variables from the data. So the first thing I did was fire up IPython Notebook (now known as Jupyter) and start looking at the pickups dataset provided.

After staring at it for a while, the proverbial light bulb went off in my head. Even though longitude and latitude are measured in degrees and indicate a point on the surface of the earth, which is a sphere, what if they could be used as Cartesian coordinates for a scatter plot on a plane, with longitude on the x-axis and latitude on the y-axis? I tried precisely that (for just one day’s rides) and, lo and behold, the map of downtown Boston was revealed:
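
Here is a minimal sketch of that first scatter plot (the file and column names are hypothetical):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the pickups (file and column names here are hypothetical).
pickups = pd.read_csv("pickups.csv", parse_dates=["pickup_datetime"])
one_day = pickups[pickups["pickup_datetime"].dt.date.astype(str) == "2012-05-01"]

# Treat longitude/latitude as Cartesian x/y coordinates on a plane.
plt.figure(figsize=(8, 8))
plt.scatter(one_day["longitude"], one_day["latitude"], s=1, alpha=0.3)
plt.gca().set_aspect("equal")  # keep the street grid from looking stretched
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.show()
```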

It became obvious that for a small area (relative to the size of the earth) like Boston, the curvature of the earth could be ignored. From here, creating a heatmap was fairly straightforward: I used matplotlib’s hexagonal binning plot (essentially, a 2-D histogram) with a logarithmic scale for the color map. If you’d like to understand it from the ground up, I made a simple step-by-step introductory tutorial on data visualization in Jupyter, ending with the generation of this heatmap, for the MIT Sloan Analytics Club “Python for Data Science” bootcamp.
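
A sketch of that hexbin heatmap (again with hypothetical file and column names):

```python
import pandas as pd
import matplotlib.pyplot as plt

pickups = pd.read_csv("pickups.csv")  # hypothetical file/column names

# Hexagonal binning is essentially a 2-D histogram of pickup locations.
# bins="log" puts the counts on a logarithmic color scale, so quiet side
# streets remain visible next to hotspots like Logan Airport.
plt.figure(figsize=(10, 10))
plt.hexbin(pickups["longitude"], pickups["latitude"],
           gridsize=400, bins="log", cmap="hot")
plt.axis("off")
plt.colorbar(label="log10(pickup count)")
plt.show()
```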

To create the visualization for this post, I redid the heatmap using all the pickup data available (around 4.2 million pickups over 5 months) and used a different color palette to end up with this:

The code can be found here, but the full dataset itself is not included because it’s over 300MB in size and GitHub has a limit of 100MB.

Then it was a matter of taking a satellite photo of Boston and messing about with GIMP to overlay the heatmap on top of it, create the animation by blending the two, and export it as an animated GIF. This was the first time I used GIMP in anger (I always thought it was Linux-only and didn’t realize there was a Mac app available), and I have to say it’s pretty awesome as a free alternative to Photoshop. It doesn’t quite feel like a native Mac app (the behavior and look of the menus and navigation are a little funky), but it got the job done really well for what I needed to do.

Bonus Interactive Visualization

While trying to figure out the best way to present the heatmap overlaid on the Boston map (and eventually settling on the simplicity and versatility of an animated GIF), I came across GitHub’s cool “onion skin” image comparison feature. Click on “Onion Skin” in the image comparison that shows up for this commit.

github1

You can use a slider to manually blend the two images and clearly see how the taxi-ride heatmap maps onto (pun intended!) the streets of Boston.

github2

Improvements

Even though I was relatively familiar with Boston, having lived there for two years, it was still not immediately obvious what some of the specific hotspots were. This could be addressed in a few ways:

Alternative “Static” Visualization

Create a similar animated GIF visualization, but using a street map with labels.

Dynamic Overlay on “Live” Interactive Map

A better approach would be to create an app that uses something like the Google Maps API to show a “live” interactive map view, retaining all the features of Google Maps (zooming, switching between street and satellite views, etc.). The app would let the user toggle the visibility of the heatmap overlay on top of the map, choose from a set of colormaps for the overlay (some would be more suitable for street views, others for satellite views), and use a slider to play with the overlay’s opacity (like GitHub’s onion skin tool).
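
As a rough sketch of this idea, using the open-source folium/Leaflet stack instead of the Google Maps API (file and column names are hypothetical):

```python
import folium
from folium.plugins import HeatMap
import pandas as pd

pickups = pd.read_csv("pickups.csv")  # hypothetical file/column names

# Center an interactive Leaflet map on downtown Boston.
m = folium.Map(location=[42.355, -71.065], zoom_start=13)

# HeatMap expects (lat, lon) pairs; sample to keep the page responsive.
sample = pickups[["latitude", "longitude"]].sample(100_000, random_state=0)
HeatMap(sample.values.tolist(), radius=6, blur=8, name="Pickups").add_to(m)

# LayerControl gives the user a checkbox to toggle the overlay on and off.
folium.LayerControl().add_to(m)
m.save("boston_pickups.html")
```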

Dynamic Overlay on 3D Map

The next logical step would be to take the dynamic overlay concept and apply it to a live 3D map view. Here is a “concept” of that idea:

3d_overlay_concept

Analytics, Data Visualization

Where Do Sloanies Go After They Graduate?

It’s been a year since I graduated from the full-time MBA program at MIT Sloan and moved to the San Francisco Bay Area to work as a Product Manager. I thought it would be interesting to see where Sloanies go after they graduate and, using data from a survey sent out shortly before graduation, came up with an interactive visualization: Sloanies Around the World.

Sloanies Around the World

Clicking on “USA” from the menu presents a clearer picture of where Sloanies ended up in the States:

Sloanies Around the World - USA

The top 4 cities where my classmates ended up are:

  1. Boston
  2. San Francisco Bay Area
  3. New York
  4. Seattle

It is interesting to see how post-MBA career choice determines location. I wanted to remain in software, and chose MIT because of its reputation in technology and entrepreneurship. In fact, technology is the second most popular career choice (after consulting) for Sloanies. Indeed, out of all the M7 business schools, Sloan had the highest proportion of graduates choosing technology (26%). (Source: The M7: The Super Elite Business Schools By The Numbers) For me, the Bay Area was the obvious choice even though it’s on the opposite coast. Many of my classmates echoed this sentiment, which helps explain San Francisco and Seattle being top post-MBA destinations.

While I’d expect this year’s graduating class to have a similar map, what would be most interesting is visualizations from other schools. I would expect the maps for schools with strong finance reputations, like Harvard, Booth, Wharton and Columbia, to be much more heavily skewed towards financial centers like New York.

Analytics

Predicting the Number of “Useful” Votes a Yelp Review Will Receive

A few months ago I wrote about creating a submission for the Digit Recognizer tutorial “competition” on Kaggle that could correctly recognize 95% of handwritten digits.

Enthused by this, I decided to participate in a real competition on Kaggle, and picked the Yelp Recruiting Competition which challenges data scientists to predict the number of “useful” votes a review on Yelp will receive. Good reviews on Yelp accumulate lots of Useful, Funny and Cool votes over time. “What if we didn’t have to wait for the community to vote on the best reviews to know which ones are high quality?”, was the question posed by Yelp as the motivation for the challenge.

I am pleased to report that I ended the competition in the top one-third of the leaderboard (110 out of 352). Although the final result was decent, there were many stumbling blocks along the way.

Data

The training data consisted of ~230,000 reviews along with data on users, businesses and checkins. The data was in JSON format, so the first step was to convert it to tab-delimited format, using a simple Python script, so that it could be easily loaded into R.
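
The conversion can be sketched like this (the Yelp files are line-delimited JSON; the file and field names here are illustrative):

```python
import json

# Convert line-delimited JSON reviews into a tab-separated file that
# loads cleanly into R. Field names here are illustrative.
fields = ["review_id", "user_id", "business_id", "stars", "text"]

with open("yelp_training_set_review.json") as src, \
        open("reviews.tsv", "w") as dst:
    dst.write("\t".join(fields) + "\n")
    for line in src:
        record = json.loads(line)
        # Flatten each record and strip tabs/newlines from free text.
        row = [str(record.get(f, "")).replace("\t", " ").replace("\n", " ")
               for f in fields]
        dst.write("\t".join(row) + "\n")
```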

Visualization

Next, I tried to understand the data by visualizing it. Here is a distribution of the number of useful votes:

NewImage

Evaluation

Because Kaggle only allows two submissions per day, I created a function to evaluate predictions before submission, replicating the metric Kaggle uses to score results, the Root Mean Squared Logarithmic Error (“RMSLE”); a small code sketch of the metric follows the definitions below:

$$\epsilon = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \log(p_i + 1) - \log(a_i + 1) \right)^2}$$

where:

  • $\epsilon$ is the RMSLE value (score)
  • $n$ is the total number of reviews in the data set
  • $p_i$ is the predicted number of useful votes for review $i$
  • $a_i$ is the actual number of useful votes for review $i$
  • $\log(x)$ is the natural logarithm of $x$
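
My actual implementation was in R, but the same metric can be sketched in Python/NumPy as:

```python
import numpy as np

def rmsle(predicted, actual):
    """Root Mean Squared Logarithmic Error, as defined above."""
    p = np.asarray(predicted, dtype=float)
    a = np.asarray(actual, dtype=float)
    return np.sqrt(np.mean((np.log(p + 1) - np.log(a + 1)) ** 2))
```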

Refining the Model

Next, I split the data into training and validation sets in a 70:30 ratio and built a linear regression using just two independent variables: star rating and length of review. This model resulted in an error of ~0.67 on the test data, i.e., after submission.
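
As a sketch, that baseline looks roughly like this in scikit-learn (my actual model was built in R; file and column names are illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

reviews = pd.read_csv("reviews.tsv", sep="\t")  # illustrative file/columns
reviews["review_length"] = reviews["text"].str.len()

X = reviews[["stars", "review_length"]]
y = reviews["useful_votes"]  # illustrative target column name

# 70:30 train/validation split.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = LinearRegression().fit(X_train, y_train)
preds = np.clip(model.predict(X_val), 0, None)  # votes cannot be negative

# Score with the RMSLE metric defined earlier.
print(np.sqrt(np.mean((np.log(preds + 1) - np.log(y_val + 1)) ** 2)))
```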

Next, I hypothesized that good reviews were written by good reviewers, so for each review I calculated the average number of useful votes that the review’s author received across all of his/her other reviews. Including this variable reduced the error dramatically, to ~0.55.
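
A pandas sketch of this leave-one-out average (my actual implementation was in R; column names are illustrative):

```python
import pandas as pd

reviews = pd.read_csv("reviews.tsv", sep="\t")  # illustrative columns

grp = reviews.groupby("user_id")["useful_votes"]
user_sum, user_count = grp.transform("sum"), grp.transform("count")

# Leave-one-out average: exclude the current review's own votes so the
# feature doesn't leak the value we are trying to predict.
reviews["user_avg_useful"] = (
    (user_sum - reviews["useful_votes"]) / (user_count - 1)
).fillna(0)  # users with a single review get a neutral default of 0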

Next, I incorporated more user data: the number of reviews written by the user, the number of funny/useful/cool votes given, and the average star rating. None of these variables proved predictive of the number of useful votes with linear regression, so I tried random forests, but to no avail.

Next, I incorporated business data to see if the type of business, the star rating or number of reviews received would increase the predictive power of the model. But again, these failed to reduce the error.

Next, I incorporated checkin data to see if the number of checkins would improve the model. Again, this failed to reduce the error.

Having exhausted all the easy options, I turned to text mining to analyze the actual content of the reviews. I split the reviews into two categories: ham (good reviews, with more than five useful votes) and spam (bad reviews, with five useful votes or fewer). For each category, I created a “term document matrix”, i.e., a matrix with terms as columns, documents (review texts) as rows, and the frequency of each term in each document as cells. I then created a list of the most frequent terms in each category that were distinct, i.e., that appeared in only one category or the other. To the model I added variables for the frequencies of each of these words, plus the frequencies of the exclamation mark (!) and the comma (,). The final list of words for which I created frequency variables was (a sketch of building these features in code follows the list):

  • , (comma)
  • !
  • nice
  • little
  • time
  • chicken
  • good
  • people
  • pretty
  • you
  • service
  • wait
  • cheese
  • day
  • hot
  • night
  • salad
  • sauce
  • table
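
Roughly, in scikit-learn terms (my actual term-document matrix was built in R; column names are illustrative):

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

reviews = pd.read_csv("reviews.tsv", sep="\t")  # illustrative columns

# The distinctive frequent words from the list above.
words = ["nice", "little", "time", "chicken", "good", "people", "pretty",
         "you", "service", "wait", "cheese", "day", "hot", "night",
         "salad", "sauce", "table"]

# Restrict the term-document matrix to just those words.
vectorizer = CountVectorizer(vocabulary=words)
word_counts = vectorizer.fit_transform(reviews["text"].fillna(""))

# The default tokenizer drops punctuation, so count "!" and "," directly.
reviews["n_exclaim"] = reviews["text"].str.count("!")
reviews["n_comma"] = reviews["text"].str.count(",")
```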

The frequency variables improved the predictive power of the model significantly and resulted in an error of ~0.52.

Visualization of Final Model

Here is a heatmap of predicted (x-axis) vs actual (y-axis) useful votes:

NewImage

For lower numbers of useful votes (up to ~8), there is a relatively straight diagonal line, indicating that by and large the predicted and actual values coincide. Beyond this, the model starts to falter and there is a fair amount of scatter.

Improvements

I couldn’t find the time to improve the model further, but I am fairly confident that additional text mining approaches, such as stemming and natural language processing, would do so.
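
For example, stemming (shown here with NLTK’s Porter stemmer, as one possible tool) collapses inflected forms onto a single stem, so a frequency feature for “wait” would also capture “waits”, “waited” and “waiting”:

```python
from nltk.stem import PorterStemmer

# Stemming maps inflected forms onto a common stem.
stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ["wait", "waits", "waited", "waiting"]])
# -> ['wait', 'wait', 'wait', 'wait']
```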

Analytics, MBA

Amateur Data Scientist?: How I Built a Handwritten Digit Recognizer with 95% Accuracy

Almost two years ago, I wrote a post entitled Stats are Sexy, in which I mentioned the emerging discipline of data science. Soon after, I discovered the amazing platform Kaggle, which lets companies host Netflix Prize-style competitions where data scientists from all over the world compete to come up with the best predictive models. Fascinated, I really wanted to learn machine learning by competing in the “easy” Digit Recognizer competition, which requires taking an image of a handwritten digit and determining what that digit is, but I struggled to gather the know-how in the limited free time that I had. Instead, I quenched my desire to do innovative work with data by building data visualization showcases as a Technical Evangelist for Infragistics, my employer at the time: Population Explosion and Flight Watcher.

Now, as an MBA student at the Massachusetts Institute of Technology, I am taking a class called The Analytics Edge, which I am convinced is one of the most important classes I will take during my time at business school. More on that later. Part of my motivation for taking the class was to learn R, the most widely used tool (by a long margin) among data scientists competing on Kaggle.

After several lectures, I had some basic knowledge of how to identify and solve the three broad types of data mining problems – regression, classification and clustering. So, I decided to see if I could apply what I had learnt so far by revisiting the Digit Recognizer competition on Kaggle, and signed up as a competitor.

I recognized this as a classification problem: given input data (an image), determine which class (a number from 0 to 9) it belongs to. I decided to try CART (Classification and Regression Trees). I used 70% of the data (which contains 42,000 handwritten digits, along with labels identifying what each digit actually is) to build a predictive model, and 30% to test its accuracy. The CART model was only about 62% accurate in recognizing digits, so I tried Random Forests, which to my surprise turned out to be ~90% accurate! I downloaded the “real” test set, which contains 28,000 handwritten digits, ran it through the model, and created a file that predicted what each of the digits actually was.
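
My model was built in R (the full code is linked below), but the same pipeline can be sketched with scikit-learn, assuming Kaggle’s train.csv layout of a label column plus 784 pixel columns:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Kaggle's train.csv: a "label" column plus 784 pixel-intensity columns.
train = pd.read_csv("train.csv")
X, y = train.drop(columns="label"), train["label"]

# 70:30 train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# More trees generally means higher accuracy, at the cost of training time.
clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42)
clf.fit(X_train, y_train)

print(accuracy_score(y_test, clf.predict(X_test)))
```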

I uploaded my prediction file and to my surprise it turned out that the accuracy was 93%. I increased the number of trees in the random forest to see if I could do better, and it indeed worked, bumping the accuracy up to 95% and moving me up 43 positions on the leaderboard:

 Digit submission

Here is the code in its entirety: DigitRecognizer.r

I was amazed that I was able to build something like this in a couple of hours in under 30 lines of code (including sanity checks). (Of course, I didn’t have to clean and normalize the data, which can be painful and time-consuming.) Next up: a somewhat ambitious project to recognize gestures made by moving smartphones around in the air. Updates to follow in a future post.

What’s really exciting from a business standpoint is that predictive analytics can be applied in a large number of business scenarios to gain actionable insights or to create economic value. Have a look at the competitions on Kaggle to get an idea.

It has been clear for some time that companies can obtain a significant competitive advantage through data analytics, and this is not limited to specific industries. A few excerpts from the MIT Sloan Management Review 2013 Data & Analytics Global Executive Study and Research Project, published only a few days ago, hint at the scale of “The Analytics Revolution”:

How organizations capture, create and use data is changing the way we work and live. This big idea, which is gaining currency among executives, academics and business analysts, reflects a growing belief that we are on the cusp of an analytics revolution that may well transform how organizations are managed, and also transform the economies and societies in which they operate.

Fully 67% of survey respondents report that their companies are gaining a competitive edge from their use of analytics. Among this group, we identified a set of companies that are relying on analytics both to gain a competitive advantage and to innovate. These Analytical Innovators constitute leaders of the analytics revolution. They exist across industries, vary in size and employ a variety of business models. 

If I was enthused about the power of analytics before, I am even more convinced now, which is why I consider classes like The Analytics Edge, which teach hard analytics skills, to be extremely valuable for managers in the current global business environment.

Will you be an Analytical Innovator?
