International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 129 - Issue 15
Published: November 2015
Authors: Arpit Gupta, Rajiv Pandey, Komal Verma
Arpit Gupta, Rajiv Pandey, Komal Verma. Analysing Distributed Big Data through Hadoop Map Reduce. International Journal of Computer Applications. 129, 15 (November 2015), 26-31. DOI=10.5120/ijca2015907156
@article{ 10.5120/ijca2015907156, author = { Arpit Gupta and Rajiv Pandey and Komal Verma }, title = { Analysing Distributed Big Data through Hadoop Map Reduce }, journal = { International Journal of Computer Applications }, year = { 2015 }, volume = { 129 }, number = { 15 }, pages = { 26-31 }, doi = { 10.5120/ijca2015907156 }, publisher = { Foundation of Computer Science (FCS), NY, USA } }
%0 Journal Article %D 2015 %A Arpit Gupta %A Rajiv Pandey %A Komal Verma %T Analysing Distributed Big Data through Hadoop Map Reduce %J International Journal of Computer Applications %V 129 %N 15 %P 26-31 %R 10.5120/ijca2015907156 %I Foundation of Computer Science (FCS), NY, USA
This term paper focuses on how big data is analysed in a distributed environment through Hadoop Map Reduce. Big data resembles “small data” but is far larger in volume, and must therefore be approached differently. Storing big data requires analysing the characteristics of the data, and the data can then be processed with Hadoop Map Reduce. Map Reduce is a programming model that processes large data sets in parallel across large clusters. Hadoop Map Reduce follows a set of design principles and addresses the challenges of cluster computing by hiding complexity and minimizing data movement.
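The programming model the abstract describes splits a job into a map phase, a shuffle that groups intermediate results by key, and a reduce phase. As an illustration only (this is not code from the paper), the canonical MapReduce word count can be sketched in plain Python, with the three phases run sequentially to show the data flow that Hadoop distributes across a cluster:

```python
# Minimal sketch of the MapReduce model: word count with explicit
# map, shuffle, and reduce phases, simulated locally in one process.
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in a document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group intermediate values by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: sum the counts collected for one word."""
    return key, sum(values)

def word_count(documents):
    # In Hadoop, map tasks run in parallel on the nodes that already
    # hold the data (minimizing data movement); here the phases run
    # one after another only to make the data flow visible.
    intermediate = []
    for doc in documents:
        intermediate.extend(map_phase(doc))
    grouped = shuffle(intermediate)
    return dict(reduce_phase(k, v) for k, v in grouped.items())

docs = ["big data big clusters", "data moves less than code"]
print(word_count(docs))
# → {'big': 2, 'data': 2, 'clusters': 1, 'moves': 1, 'less': 1, 'than': 1, 'code': 1}
```

Because each map call depends only on its own document and each reduce call only on one key's values, both phases parallelize naturally, which is exactly the property Hadoop exploits on large clusters.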