Big + Far Math Challenge @ ICC

April 22, 2017

Hello,

Recently I participated in and won first prize in the Big + Far Math Challenge hosted by ICC. The challenge description can be found here – http://big-far.webflow.io/

Participating in it was quite an exciting learning experience for me. I got to explore different technical areas while gathering data and preparing visualizations from it.

I have shared the source code and a static version of the visualization on GitHub. The dynamic version was backed by Apache Solr running on my local desktop.

You can visit the project page @ https://amitlondhe.github.io/bigfarmathchallenge/ from which you can navigate to the visualizations that I came up with.

I am also sharing the presentation given to the judges as part of the assessment, in case you are looking for more details.

– Amit


Running Apache Spark on Windows

July 10, 2016

Running Hadoop on Windows is not trivial; however, running Apache Spark on Windows proved not too difficult. I came across a couple of blogs and a Stack Overflow discussion that made this possible. My notes below are the outcome of that reference material.

  1. Download http://d3kbcqa49mib13.cloudfront.net/spark-1.6.0-bin-without-hadoop.tgz ( http://spark.apache.org/downloads.html )
  2. Download Hadoop distribution for Windows from http://www.barik.net/archive/2015/01/19/172716/
  3. Create hadoop-env.cmd in the {HADOOP_INSTALL_DIR}/conf directory with the following content:
    SET JAVA_HOME=C:\Progra~1\Java\jdk1.7.0_80
    
  4. In a new command window, run hadoop-env.cmd followed by {HADOOP_INSTALL_DIR}/bin/hadoop classpath.
    The output of this command is used to initialize SPARK_DIST_CLASSPATH in spark-env.cmd (created in the next step).
  5. Create spark-env.cmd in {SPARK_INSTALL_DIR}/conf:
     REM spark-env.cmd content
     SET HADOOP_HOME=C:\amit\hadoop\hadoop-2.6.0
     SET HADOOP_CONF_DIR=%HADOOP_HOME%\conf
     SET SPARK_DIST_CLASSPATH=<Output of hadoop classpath>
     SET JAVA_HOME=C:\Progra~1\Java\jdk1.7.0_80
    
  6. Now run the examples or spark-shell from the {SPARK_INSTALL_DIR}/bin directory. Please note that you may have to run spark-env.cmd explicitly prior to running the examples or spark-shell. A quick smoke test is sketched below.
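
Once the setup works, a small program can confirm the classpath wiring. Below is a minimal sketch, assuming Spark 1.6's Java API (kept Java 7 compatible to match the JDK above); the class name is mine, not part of any distribution. Compile it against the spark-core jar and launch it with spark-submit.cmd from {SPARK_INSTALL_DIR}/bin.

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function2;

    // Minimal smoke test: sums 1..5 on a local Spark master.
    public class SparkSmokeTest {
        public static void main(String[] args) {
            // local[*] keeps everything inside the current JVM, no cluster needed.
            SparkConf conf = new SparkConf().setAppName("SparkSmokeTest").setMaster("local[*]");
            JavaSparkContext sc = new JavaSparkContext(conf);
            Integer sum = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5))
                    .reduce(new Function2<Integer, Integer, Integer>() {
                        public Integer call(Integer a, Integer b) {
                            return a + b;
                        }
                    });
            // If this prints "Sum = 15", Spark is wired up correctly.
            System.out.println("Sum = " + sum);
            sc.stop();
        }
    }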



Big Data For Social Good Challenge

March 16, 2015

Hello,

This winter, I participated in the Big Data For Social Good Challenge, which I stumbled upon while searching for something else.

This challenge was about using IBM Bluemix’s “Analytics for Hadoop” service to process a data set that is at least 500 MB in size.

This was a wonderful opportunity to get some hands-on experience with IBM Bluemix (IBM gives participants extended trial access). Apart from this, I was also keen to build a data visualization app of my own.

I selected CitiBike data for one year (2013-2014). Initially I did not have a clue about what insights I could gather from the dataset, but as soon as I ran some Apache Pig scripts and started looking at the output, I could see more and more use cases around it. I could not address all the use cases I had thought of, as I soon ran into deadline pressure; I still had to finish the video demonstration and put together a write-up about the project.
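
For a flavor of what those scripts looked like, here is a minimal sketch of one such aggregation – trips per start station – written against Pig's embedded PigServer API in local mode. The file name, the two-column schema, and the class name are hypothetical simplifications (the real CitiBike schema is much wider), not my actual challenge code.

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    // Sketch: count CitiBike trips per start station with embedded Pig.
    public class StationTripCounts {
        public static void main(String[] args) throws Exception {
            // Local mode runs against the local file system, no cluster needed.
            PigServer pig = new PigServer(ExecType.LOCAL);
            // Hypothetical two-column layout of the trip data.
            pig.registerQuery("trips = LOAD 'citibike-trips.csv' USING PigStorage(',') "
                    + "AS (duration:int, start_station:chararray);");
            pig.registerQuery("grouped = GROUP trips BY start_station;");
            pig.registerQuery("counts = FOREACH grouped GENERATE group AS station, COUNT(trips) AS num_trips;");
            // Writes part files under ./trip-counts
            pig.store("counts", "trip-counts");
        }
    }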

Overall it was a very enriching experience as I did so many things for the very first time.

Listing some of them below

  • IBM BigSheets and Big SQL
  • Using the Chart.js library
  • Using the Google Maps JavaScript APIs – it was remarkably simpler than I expected. Much appreciation to Google for these APIs.
  • Creating a custom map icon – I never realized it would be this difficult
  • HTML5/CSS challenges when putting up the UI
  • Last but not least, GitHub’s easy way to publish your work online

Now that the challenge is in its public voting and judging phase, I would appreciate it if you could take a look at

http://ibmhadoop.challengepost.com/submissions/33509-citibike-looking-back-in-a-year

and provide your feedback and vote if you like it.


Introduction to Apache Pig

September 28, 2014

Hello,

I created this presentation as an introduction to Apache Pig. I hope you find it useful for understanding the basics of Apache Pig.

Introduction to Apache Pig

Thanks,
Amit


Viewing Log statements in Hadoop Map-reduce jobs

August 23, 2014

Hello,

Anyone new to the Hadoop world often ends up frantically searching for the debug log statements they added in a mapper or reducer function. At least that was the case with me when I started working on Hadoop, so I thought it would be a good idea to write this post.

The mapper and reducer do not execute locally, so you cannot find their logs on the local file system. They run on the Hadoop cluster, and the cluster is where you should look for them. To do that, you need to know the JobTracker URL for your cluster.
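
To make this concrete, below is a minimal sketch of a word-count mapper with debug log statements, assuming the org.apache.hadoop.mapreduce API and commons-logging; the class name is hypothetical. Everything it writes via the Log or System.out lands in the task logs on the cluster node (syslog and stdout respectively), which you reach through the JobTracker UI as described next.

    import java.io.IOException;

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Word-count mapper sketch with debug logging.
    public class LoggingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final Log LOG = LogFactory.getLog(LoggingMapper.class);
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Both statements end up in the task logs on the cluster node:
            // LOG.info(...) in the task's syslog, System.out in its stdout.
            LOG.info("Processing input line at offset " + key.get());
            System.out.println("DEBUG: line = " + value);
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }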

  1. Access the JobTracker. The default URL is “http://{hostname}:50030/jobtracker.jsp”.
  2. This simple UI lists the different Map-reduce jobs and their states (namely “running”, “completed”, “failed”, and “retired”).
  3. Locate the Map-reduce job that you started (refer to the attached JobTracker screenshot).
  4. There are various ways to identify your Map-reduce job.
    1. If you ran a Pig script, the Pig client logs the job ID, which you can then search for in the JobTracker. The screenshot shows the Map-reduce job corresponding to a Pig script.
    2. If you are running a plain Map-reduce application, then once you submit the job you can search for it either by the user ID used to submit it or by the application name.
  5. Clicking on the job number reveals the job details, as shown in the “Map-reduce job details” screenshot.
  6. The final screenshot, “Map-reduce log statements”, shows how to navigate to the log statements; note that it comprises three steps.

I hope you find this post useful while learning Hadoop.

Thanks,

Amit