Analysis of MapReduce Model on Big Data Processing within Cloud Computing

Abstract: Cloud computing is becoming a popular platform for big data processing. Google created the MapReduce model to simplify the complex computation of big data processing by splitting the data into key/value pairs that are processed in parallel, usually across a network of computers, and then merging the results. However, the MapReduce model has its limitations. Researchers have been trying to improve it, resulting in newer models such as Mantri, Camdoop, Sudo, and Nectar. Each model exploits different characteristics of the MapReduce model to create improvements in different ways and for different cases. Challenges and potential improvements still remain within these enhanced models, which opens new areas of research.
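
The abstract's description of MapReduce (split the input into key/value pairs, process the splits in parallel, then merge the results) can be illustrated with a minimal word-count sketch. This is not code from the paper; the function names map_phase and reduce_phase are hypothetical, and the example only mimics the map/shuffle/reduce flow on a single machine using Python's multiprocessing pool.

    # Minimal illustrative sketch of the MapReduce flow described above.
    from collections import defaultdict
    from multiprocessing import Pool

    def map_phase(document):
        # Map step: emit a (word, 1) pair for every word in one input split.
        return [(word, 1) for word in document.split()]

    def reduce_phase(key, values):
        # Reduce step: merge all partial counts emitted for the same key.
        return key, sum(values)

    if __name__ == "__main__":
        documents = ["big data processing", "cloud computing big data"]

        # Process the input splits in parallel, as MapReduce would across worker nodes.
        with Pool() as pool:
            mapped = pool.map(map_phase, documents)

        # Shuffle step: group intermediate values by key.
        groups = defaultdict(list)
        for pairs in mapped:
            for key, value in pairs:
                groups[key].append(value)

        # Merge the grouped values into the final results.
        results = dict(reduce_phase(k, v) for k, v in groups.items())
        print(results)  # e.g. {'big': 2, 'data': 2, 'processing': 1, ...}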
Keywords: cloud computing, MapReduce, data processing, distributed model
Author: Marcellinus Ferdinand Suciadi
Journal Code: jptinformatikagg150004
