Research Article

Analysing Distributed Big Data through Hadoop Map Reduce

by Arpit Gupta, Rajiv Pandey, Komal Verma
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 129 - Issue 15
Published: November 2015
DOI: 10.5120/ijca2015907156

Arpit Gupta, Rajiv Pandey, Komal Verma. Analysing Distributed Big Data through Hadoop Map Reduce. International Journal of Computer Applications 129, 15 (November 2015), 26-31. DOI=10.5120/ijca2015907156

                        @article{10.5120/ijca2015907156,
                          author    = { Arpit Gupta and Rajiv Pandey and Komal Verma },
                          title     = { Analysing Distributed Big Data through Hadoop Map Reduce },
                          journal   = { International Journal of Computer Applications },
                          year      = { 2015 },
                          volume    = { 129 },
                          number    = { 15 },
                          pages     = { 26-31 },
                          doi       = { 10.5120/ijca2015907156 },
                          publisher = { Foundation of Computer Science (FCS), NY, USA }
                        }
                        %0 Journal Article
                        %D 2015
                        %A Arpit Gupta
                        %A Rajiv Pandey
                        %A Komal Verma
                        %T Analysing Distributed Big Data through Hadoop Map Reduce
                        %J International Journal of Computer Applications
                        %V 129
                        %N 15
                        %P 26-31
                        %R 10.5120/ijca2015907156
                        %I Foundation of Computer Science (FCS), NY, USA
Abstract

This term paper focuses on how big data is analysed in a distributed environment through Hadoop Map Reduce. Big Data resembles "small data" but is far larger in volume, so it must be approached differently: storing it requires analysing the characteristics of the data, and it can be processed through Hadoop Map Reduce. Map Reduce is a programming model that processes large data sets in parallel across large clusters, and Hadoop Map Reduce follows a set of principles discussed in this paper. It also addresses the challenges of cluster computing by hiding the complexity of distribution and minimizing the movement of data.
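The map/shuffle/reduce data flow the abstract describes can be illustrated with the canonical word-count example from Dean and Ghemawat's MapReduce paper. The sketch below is plain Python, not the Hadoop API; the function names are illustrative only, and the framework concerns Hadoop actually handles (splitting input, distributing tasks, fault tolerance) are omitted so only the three logical phases remain visible.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit an intermediate (word, 1) pair for every word in the input split.
    return [(word, 1) for word in document.split()]

def shuffle_phase(pairs):
    # Shuffle: group intermediate values by key, as the framework does
    # between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine the grouped values per key; here, sum the counts.
    return {word: sum(counts) for word, counts in groups.items()}

def word_count(documents):
    # Run all three phases over a list of input documents (splits).
    pairs = [pair for doc in documents for pair in map_phase(doc)]
    return reduce_phase(shuffle_phase(pairs))
```

Because each document is mapped independently and each key is reduced independently, both phases parallelize naturally across a cluster, which is the property the model exploits.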

References
  • Laney, Douglas. "The Importance of 'Big Data': A Definition". Gartner. Retrieved 21 June 2012. http://www.gartner.com/resId=2057415
  • M.H. Padgavankar, Dr. S.R. Gupta. "Big Data Storage and Challenges". International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 5(2), 2014. http://www.ijcsit.com/docs/Volume%205/vol5issue02/ijcsit20140502284.pdf
  • Jeffrey Dean and Sanjay Ghemawat. "MapReduce: Simplified Data Processing on Large Clusters". Google, Inc. http://static.googleusercontent.com/media/research.google.com/es/us/archive/mapreduce-osdi04.pdf
  • http://www.websitemagazine.com/content/blogs/posts/archive/2012/08/04/thefuturelooksbrightforhadoopmapreduce.aspx
Index Terms
Computer Science
Information Sciences
Keywords

Map Reduce, Hadoop
