Build a Data Pipeline with Qubole

We get many questions about how Qubole makes data science easy. The one-line answer: we have developed a number of features to help with all stages of a data pipeline's lifecycle, from importing data through analysis and visualization to deploying to production. However, the best way to answer these questions is to let the product do the talking. We have created a demo to showcase how Qubole lowers the barrier to leveraging the power of Apache Hive with Hadoop.

Wikimedia, the organization behind Wikipedia, publishes the page views of each topic for every hour of the day. From these page views, trending topics, i.e. topics with sudden jumps in page views, can be identified. We are not the first to attempt such an exercise; Data Wrangling has documented a previous attempt here.

We have also deployed a Rails app to visualize the results at http://demotrends.qubole.com. The code for the Rails app is available on GitHub.


Requirements to run the demo

  1. Qubole Trial Plan

OR

  1. An Amazon Web Services account
  2. Qubole Free Plan

 

PageCount Data

On the Wikimedia website, the data is partitioned by year and then by month. There is one directory per year, starting from 2007, and each year's directory contains one directory per month. For example, the year 2013 contains directories named

  • 2013-01
  • 2013-02
  • 2013-03
  • 2013-04
  • 2013-05
  • 2013-06
  • 2013-07

 

Each month directory contains one file for every hour of every day, with each file named after its date and hour.

Let's take a peek at the page count data feed. Each row ends with a newline ('\n') and contains four columns separated by a single space.

Column       Description
Group        A category chosen by Wikipedia
Title        Name of the topic
Page Views   Number of views of the topic in that hour
Bytes Sent   Number of bytes served by the server in that hour

 

A few sample rows:

Group  Title                      Page Views  Bytes Sent
En     India                      1265        155611676
En     Al_Hind                    1           745733
Af     2007_Super_14-seisoen      14          91226
Af     2008_Super_14-seison       14          27131
En     Ubuntu_(Operating_System)  19          366003
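Since each row is just four space-separated fields terminated by a newline, the feed maps naturally onto a Hive table. Here is a minimal HiveQL sketch of such a definition; the table name, column names, and S3 location are hypothetical placeholders, not the demo's actual schema:

    -- "group" is a reserved word in HiveQL, hence the backticks
    CREATE EXTERNAL TABLE pagecounts (
      `group`    STRING,
      title      STRING,
      page_views BIGINT,
      bytes_sent BIGINT
    )
    ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ' '
      LINES TERMINATED BY '\n'
    LOCATION 's3://your-bucket/pagecounts/';  -- hypothetical location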

Resolve Synonyms

The page titles in different rows can be synonyms. For example, "Bhart", "Al_Hind", and "India" all redirect to the same article about India on Wikipedia, i.e. these three titles are synonyms, and Wikipedia keeps track of them. Wikipedia calls such synonym page titles "Redirect Titles".


The page visit count data discussed above reports visits for "Bhart", "Al_Hind", and "India" separately. To get the actual page visit count for "India", we need to aggregate the data across all of its synonyms/redirect titles. Wikipedia publishes the synonym data in the form of two dimension tables.

Page table:

Page Id  Page Title
12345    India
67567    Bhart
56765    Al_Hind

Redirect table:

Redirect from  Page Title
67567          India
56765          India

The Page table has an entry for every page that exists on Wikipedia, while the Redirect table has entries only for redirect page ids. We need to join these two tables to build a lookup table:

Redirect Id  Redirect Title  True Title  True Id
12345        India           India       12345
67567        Bhart           India       12345
56765        Al_Hind         India       12345

We can then use this lookup table to find the true title for any redirect title.
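As a rough illustration, such a lookup table could be built in Hive with a pair of joins. A minimal sketch, assuming the dumps have been loaded as page(page_id, page_title) and redirect(rd_from, rd_title); these are illustrative names, not the demo's actual schema:

    -- Map every page to its true title: pages that are redirects resolve
    -- to their target page; all other pages resolve to themselves.
    CREATE TABLE lookup AS
    SELECT
      p.page_id                            AS redirect_id,
      p.page_title                         AS redirect_title,
      COALESCE(t.page_title, p.page_title) AS true_title,
      COALESCE(t.page_id, p.page_id)       AS true_id
    FROM page p
    LEFT OUTER JOIN redirect r ON p.page_id = r.rd_from
    LEFT OUTER JOIN page t     ON r.rd_title = t.page_title;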

Final Output

The final output exported to the web app looks like this:

Date        Title  Monthly Trend  Daily Trend
2013-07-13  India  3456           129908.6

Flowchart


The data pipeline is visualized in the flowchart above. There are four distinct parts:

  1. Import and clean up the pagecount data.
  2. Import and join the Pages & Redirect tables to create a lookup table.
  3. Resolve synonyms and calculate trend ranks for each topic (see the sketch after this list).
  4. Export the results to the webapp database.
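For step 3, resolving synonyms amounts to joining the pagecount data against the lookup table and summing views per true title, so that views of "Bhart" and "Al_Hind" count toward "India". A minimal sketch reusing the hypothetical pagecounts and lookup tables from above (the actual trend-rank computation is covered later in the series):

    -- Aggregate page views onto each topic's true title.
    SELECT
      l.true_title,
      SUM(pc.page_views) AS total_views
    FROM pagecounts pc
    JOIN lookup l ON pc.title = l.redirect_title
    GROUP BY l.true_title;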

In the second part of the blog series, we’ll describe the ETL steps required to process the raw data.

In the third part, we'll describe the analysis and export steps.

