Rogue Logic

We love to be Rogue in our Logic . . . confused?

Rogue Logic is being unpredictable and yet logical, both at the same time. Research (no idea why they do it) says 93% of what humans do is predictable, because every human move is logical. Maybe being predictable is the normal expectation; however, we believe the world's best creations were completely unpredictable and surprised us with their capability. Unpredictable solutions have often been simple solutions to complex problems. Out of this basic notion, Rogue Logic was born.

PRODUCTS

These are just a few we have ready . . . more in the pipeline.

We have built products that make life easier for data-driven organizations, which nowadays means almost everyone. Our products collect, collate, and link data across various platforms and sources. Every organization is growing more complex, whether from increasing data, system upgrades, or responding to different events. This is where our products empower you to cope with the challenge of integrating similar data, spread across various formats, into a single ecosystem. Why not make data collection for reporting and analytics simple for everyone? We decided to just do it.


Random Name

Data Governance

Data Linkage

Random Name

Meta Reports

Data Staging

Random Name

Data Locator

Data Storage Director

ABOUT US

Here we go bragging again . . . remind us when it's too much.

A bunch of programmers with the skills to design and develop software in the world's most complex fields. Don't get us wrong, we are not just geeks but also leaders in business and management. But again, as engineers, the most advanced side of our brain loves to build cool things.


London

New York

Pune, India

Contact

We love to talk . . . stop by for a coffee.

Information? Drop a note.

New York, US

Phone: +00 1515151515

Email: contact@roguelogic.com


From The Blog

Rob, Geek

Man, we've been on the road for some time now. Looking forward to talking to you.

It's fitting that my first article on Big Data would be titled the "Master Map-Reduce Job". I believe it truly is the one and only Map-Reduce job you will ever have to write, at least for ETL (Extract, Transform and Load) processes. I have been working with Big Data, and specifically with Hadoop, for about two years now, and as of the writing of this post I achieved my Cloudera Certified Developer for Apache Hadoop (CCDH) certification almost a year ago.

So what is the Master Map-Reduce Job? It is a concept I started to architect that would become a framework-level Map-Reduce job implementation: by itself it is not a complete job, but it uses Dependency Injection (AKA a plugin-like framework) to configure a Map-Reduce job specifically for ETL load processes. Like most frameworks, you can write your process without it; what the Master Map-Reduce Job (MMRJ) does is break certain critical sections of the standard Map-Reduce program into plugins named more specifically for ETL processing, which makes the jump from non-Hadoop ETL to Hadoop-based ETL easier for non-Hadoop-initiated developers. I think this job is also extremely useful for the Map-Reduce pro who is implementing ETL jobs, or for groups of ETL developers who want to create consistent Map-Reduce based loaders, and that is the real point of the MMRJ: to create a framework that enables developers to build robust, consistent, and easily maintainable Map-Reduce based loaders. It follows my SFEMS (Stable, Flexible, Extensible, Maintainable, Scalable) development philosophy.

The point of the Master Map-Reduce concept framework is to break down the Driver, Mapper, and Reducer into parts that non-Hadoop/Map-Reduce programmers are already familiar with, especially in the ETL world. It is easy for Java developers who build loaders for a living to understand vocabulary like Validator, Transformer, Parser, and OutputFormatter. They can focus on writing business-specific logic and do not have to worry about the finer points of Map-Reduce. As a manager, you can now hire a single senior Hadoop/Map-Reduce developer and fill the rest of your team with core Java developers, or better yet reuse your existing team: the one senior Hadoop developer maintains your version of the Master Map-Reduce Job framework code, while the rest of your developers focus on developing feed-level loader processes using the framework. In the end all developers can learn Map-Reduce, but with this framework you do not need to know Map-Reduce to get started writing loaders that will run on the Hadoop cluster. The design is simple and can be shown in a single diagram.

Notice: Please note that all designs, suggestions, code, and other writing on this web site are of my personal opinion and not the opinion of my employers, and all the Intellectual Property and other information is from my Personal Projects and experiences outside of my professional work.
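To make the plugin idea concrete, here is a minimal sketch of what the framework-level Mapper could look like. The interface names (RecordParser, RecordValidator, RecordTransformer) and the "mmrj.*" configuration keys are illustrative placeholders I am using for this article, not the finished MMRJ API:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.util.ReflectionUtils;

// Plugin contracts an ETL developer implements without touching Map-Reduce.
interface RecordParser { String[] parse(String rawLine); }
interface RecordValidator { boolean isValid(String[] fields); }
interface RecordTransformer { String transform(String[] fields); }

// The framework-level Mapper: it owns the Hadoop plumbing and delegates
// each record to the injected plugins.
public class MasterEtlMapper extends Mapper<LongWritable, Text, Text, Text> {

    private RecordParser parser;
    private RecordValidator validator;
    private RecordTransformer transformer;

    @Override
    protected void setup(Context context) {
        // Dependency injection: plugin classes are named in the job
        // configuration (the "mmrj.*" keys are placeholders) and are
        // instantiated reflectively when the task starts.
        Configuration conf = context.getConfiguration();
        parser = (RecordParser) ReflectionUtils.newInstance(
                conf.getClass("mmrj.parser.class", null), conf);
        validator = (RecordValidator) ReflectionUtils.newInstance(
                conf.getClass("mmrj.validator.class", null), conf);
        transformer = (RecordTransformer) ReflectionUtils.newInstance(
                conf.getClass("mmrj.transformer.class", null), conf);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = parser.parse(value.toString());
        if (!validator.isValid(fields)) {
            // Count and skip bad records instead of failing the whole load.
            context.getCounter("MMRJ", "REJECTED_RECORDS").increment(1);
            return;
        }
        // Keying by the first field is purely illustrative; a real loader
        // would let another plugin choose the output key.
        context.write(new Text(fields[0]),
                new Text(transformer.transform(fields)));
    }
}

A feed-level developer would then write, say, a CsvParser that implements RecordParser and set the three plugin classes in the Driver's Configuration; all of the Map-Reduce mechanics stay inside the framework class.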