Friday, October 11, 2013

Launching a Cascading job from Apache Oozie

The Cascading framework has its own workflow management system embedded in it, so when I went looking for information on how to launch a Cascading job from the Apache Oozie workflow scheduler, I found a dearth of it online.

In fact, when I asked on the oozie-users mailing list how to do it, the only response I got back was to write an Oozie extension to run Cascading jobs. That may be the right solution long term (don't know enough yet), but I did find a way to get it working with what Oozie provides today.

/*---[ Failed attempts ]---*/

I tried unsuccessfully to use the map-reduce action and the shell action. The former won't work for two reasons. First, it wants you to specify the Mapper and Reducer classes explicitly, which doesn't make sense for a Cascading job - you launch your main Cascading class and it auto-generates all the mappers and reducers for you. Second, while you can use the oozie.launcher.action.main.class property to specify your main Cascading class instead, there seems to be no way to pass arguments to it.

I'm not sure why I couldn't get the shell action to work. I made the exec property /usr/bin/hadoop in order to run it as hadoop jar myjar.jar com.mycompany.MyClass arg1 arg2 argN, but several attempts to make that work failed. There probably is a way to make it work, however.
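For reference, here is roughly the shape of the shell action I was attempting - a sketch only, since as noted I never got this variant to work, and the argument values are placeholders:

```xml
<!-- Sketch of the failed shell-action attempt: invoke "hadoop jar" directly -->
<action name='stage1'>
  <shell xmlns="uri:oozie:shell-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <exec>/usr/bin/hadoop</exec>
    <argument>jar</argument>
    <argument>myjar.jar</argument>
    <argument>com.mycompany.MyClass</argument>
    <argument>arg1</argument>
    <argument>arg2</argument>
    <!-- ship the jar to the task's working directory -->
    <file>lib/myjar.jar#myjar.jar</file>
  </shell>
  <ok to="end" />
  <error to="fail" />
</action>
```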

/*---[ Solution: use the java action ]---*/

In order to launch Cascading jobs, we build an uber-jar (which Maven annoyingly calls a shaded jar) that bundles our specific Cascading code and supporting classes together with the Cascading library itself. But that's not enough, since all of that depends on the myriad Hadoop jars. Outside of Oozie we therefore use the hadoop jar invocation shown above, because it puts all the Hadoop jars on the classpath.
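For reference, the shaded jar can be built with a maven-shade-plugin configuration along these lines (a sketch - the main class name and plugin version are placeholders, and Hadoop dependencies should be scoped as provided so they are not shaded in):

```xml
<!-- pom.xml fragment: bundle our code plus the Cascading jars into one artifact.
     Hadoop dependencies are declared with scope 'provided' elsewhere in the POM,
     so they are NOT pulled into the shaded jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.mycompany.MyClass</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```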

I didn't think using the Oozie java action would work unless I built a massive uber-jar with all the Hadoop dependencies, which would then have to get farmed around the Hadoop cluster each time you run it -- a great waste.

But I was happily surprised to notice that Oozie sets up the classpath for java (and map-reduce) tasks with all the Hadoop jars present.

So, here's the workflow.xml file that works:

<workflow-app xmlns='uri:oozie:workflow:0.2' name='cascading-wf'>
  <start to='stage1' />
  <action name='stage1'>
    <java>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <main-class>com.mycompany.MyClass</main-class>
      <arg>arg1</arg>
      <arg>arg2</arg>
      <file>lib/${EXEC}#${EXEC}</file>
      <capture-output />
    </java>
    <ok to="end" />
    <error to="fail" />
  </action>
  <kill name="fail">
    <message>FAIL: Oh, the huge manatee!</message>
  </kill>
  <end name="end"/>
</workflow-app>

The parameterized variables, such as ${EXEC}, are defined in a properties file in the same directory as the workflow.xml file. The shaded jar is in a lib subdirectory as indicated.
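A minimal properties file for this workflow might look like the following - the host names, ports, and paths are placeholders, not values from my cluster:

```properties
# Placeholder cluster endpoints - substitute your own
nameNode=hdfs://namenode-host:8020
jobTracker=jobtracker-host:8021

# Name of the shaded jar sitting in the lib/ subdirectory
EXEC=myjar.jar

# HDFS location of the directory containing workflow.xml
oozie.wf.application.path=${nameNode}/user/${user.name}/cascading-wf
```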


Let me know if you find another way to launch a Cascading job from Oozie or find any problems with this solution.

