MapReduce is one of the most popular programming models. It was developed by Google to process large amounts of data. In the MapReduce programming model we specify two basic functions: 1. function_map, 2. function_reduce. MapReduce supports many real-world applications, such as building large-scale computer systems for data-intensive computing with high processing and storage capacity, managing large amounts of geographic data, maintaining bank customer details, etc. MapReduce operations run on a set of machines in parallel. In this mode of execution the machines take less time to process the data, and the speed and performance are comparatively high compared with other modes of execution such as sequential processing. In MapReduce, the runtime system plays the major role of partitioning the large input data, detecting and recovering from system failures, managing the execution of programs, handling communication between machines, and processing the output data. MapReduce allows a programmer who has no experience with distributed and parallel systems to use the resources of a distributed system easily. MapReduce programs have been developed in many different languages, such as C++, Java, Python, C#, and other programming languages. To process terabytes of data in many real-world applications, many different programs have been developed by programmers. In this article, we present the realisation of MapReduce, including the SQL and Lisp functional-programming models.
Over the past ten years, Google programmers and others have been developing different computational methods to process large amounts of data, such as World Wide Web (WWW) documents, geographic data, etc. These real-world tasks have large amounts of input data, so the input must be partitioned and processed on a set of machines in parallel. Other computational methods have highly complex computational steps for processing such data. Basically, processing a large amount of data requires many tasks: partitioning the input data, processing it on a set of machines, scheduling the work for each machine, handling communication between machines, and detecting and recovering from machine failures. These computational steps must be performed exactly, or the whole job becomes a time-consuming process. (Dean et al, 2004, P1)
We describe the most popular computational model, MapReduce, for processing these large amounts of data. It is an easier way to process such documents than other computational methods, and nowadays most real-world applications use the MapReduce concept. The Map operation assigns a set of intermediate key/value pairs to each input record. Ex: Operation_Map(key, value). The Reduce operation processes all the values that share a key and generates the output data. Ex: Operation_Reduce(key, list(value)). MapReduce does not have complex computational steps, so we can easily parallelise the processing of large input data and produce the output. MapReduce is well suited to processing homogeneous datasets, but it is not well suited to heterogeneous datasets (a drawback): additional computational steps are needed to process heterogeneous data. (Buyya, 2008, P2 & Dean, 2004, P1)
Section 2 explains the MapReduce basics, the functional model, generating key/value pairs, MapReduce execution, and similar MapReduce models. Section 3 explains parallel databases. Section 4 explains the use of indexes. Section 5 explains fault tolerance. Section 6 explains performance issues.
MapReduce is a programming model for processing a collection of data with the help of hundreds of systems. During processing, the data is stored in local and global storage files. The storage devices are databases or files. Databases use indexes to retrieve data, so a database is a structured storage system.
Diagram 1: generating geographic data. (Brumitt, 2007, P2)
Map Phase:

The master takes the large input documents, divides them into smaller documents, and allocates these documents to its slaves for processing. Each slave may perform this step again, so the machines may form a tree. The slaves process all the documents and return the produced output to the master. (3)

Reduce Phase:

The master collects the results from all the slaves, processes these results, and produces the reduce-phase output. (3)
Functional Model:

Let us discuss one example: counting the number of occurrences of each error while running a program. We use the MapReduce function concept to produce the output.

Pseudo-code:
Function_Map(String key, String value):
  // key: program name
  // value: program contents
  for each error E in value:
    EmitIntermediate(E, "1");

Function_Reduce(String key, Iterator values):
  // key: an error name
  // values: a list of counts
  int O = 0;
  for each E in values:
    O = O + ParseInt(E);
  Emit(AsString(O));
  // O: output

(Dean, 2004, P2)
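The pseudo-code above can be turned into a runnable sketch. Here is a minimal in-memory Python version; the log contents, the `ERROR`-line convention, and the function names `map_func`/`reduce_func` are illustrative assumptions, and the shuffle stage that a real MapReduce runtime provides is simulated with a dictionary:

```python
from collections import defaultdict

def map_func(program_name, program_output):
    """Emit an (error, 1) pair for every error line found in the output."""
    for line in program_output.splitlines():
        if line.startswith("ERROR"):
            yield (line, 1)

def reduce_func(error, counts):
    """Sum the occurrence counts for one error key."""
    return sum(counts)

# Illustrative input: program name -> program output.
logs = {
    "job1": "ERROR disk full\nok\nERROR disk full",
    "job2": "ERROR disk full",
}

# Simulated shuffle stage: group intermediate values by key.
groups = defaultdict(list)
for name, output in logs.items():
    for key, value in map_func(name, output):
        groups[key].append(value)

result = {key: reduce_func(key, values) for key, values in groups.items()}
print(result)  # {'ERROR disk full': 3}
```

The same structure scales to the distributed case because `map_func` and `reduce_func` never share state: each call depends only on its own arguments.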
Diagram 2: generating a key/value pair. (Lammel, 2007, P15)
Generating key/value pairs:
MapReduce operations have been written in functional languages such as Lisp. The Map operation takes the input data and generates key/value pairs, structured as (key, value). The output pairs belong to a different domain, and the Map invocations run in parallel.

Operation_Map (key1, value1) -> list (key2, value2)

The MapReduce framework then gathers all the values generated for each key by the Map operation into a single group.

The Reduce operation takes these grouped key/value pairs as input and produces a new, reduced set of values as output.

Operation_Reduce (key2, list (value2)) -> list (value3). (Buyya, 2008, P2)
MapReduce Execution:
The Map phase splits the input data into X pieces, processes them on a set of machines in parallel, and generates key/value pairs for every piece of data. The Reduce phase splits the intermediate keys into Y pieces with the help of a partitioning function and allocates these pieces to the master and slaves to produce the output. Ex: (hash(k) mod Y). (Dean, 2004, P3)
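The hash(k) mod Y partitioning scheme can be sketched in Python. CRC32 stands in for the hash function here, since the scheme above does not fix a particular one; Python's built-in `hash()` is avoided because it is randomised per process for strings:

```python
import zlib

def partition(key, num_reducers):
    """Deterministically assign an intermediate key to one of
    num_reducers reduce tasks: the hash(k) mod Y scheme."""
    return zlib.crc32(key.encode("utf-8")) % num_reducers

# Every record with the same key lands on the same reducer,
# which is what lets a reducer see all values for its keys.
assert partition("url-a", 4) == partition("url-a", 4)
print(partition("url-a", 4) in range(4))  # True
```

The determinism matters for fault tolerance too: a re-executed map task sends each key to the same reduce partition as the original attempt did.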
Let us discuss the basic MapReduce execution with the help of the following diagram.

Diagram 3: MapReduce execution. (Dean, 2004, P3)
Step 1: The input data is divided into many pieces, i.e. 16 MB to 64 MB per piece, and passed to a set of machines for processing.

Step 2: There are X Map tasks and Y Reduce tasks. The master takes the lead and allocates these Map and Reduce tasks to the slaves.

Step 3: Each Map-phase slave reads its input data and parses out key/value pairs. The Map task stores the generated intermediate key/value pairs in temporary (local) storage.

Step 4: The slaves send messages to the master that include the storage locations of the output data. The master then forwards these locations to the Reduce-phase slaves.

Step 5: Each Reduce-phase slave reads the corresponding output data from temporary storage with the help of Remote Procedure Calls (RPC). Then, using an internal sort such as merge sort or heap sort, it groups all the values by key. If the data volume is high it prefers an external sort such as a polyphase merge.

Step 6: The Reduce-phase slave then forwards the output key/value pairs to the Reduce task, which produces the final output. (Dean, 2004, P3)
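The sort-then-group part of the fifth step above can be sketched in Python. The intermediate pairs are made-up illustrative data, and an in-memory sort stands in for the external sort used when data is large:

```python
from itertools import groupby
from operator import itemgetter

# Intermediate (key, value) pairs, as a reduce slave might fetch them via RPC.
pairs = [("b", 1), ("a", 1), ("b", 1), ("a", 1), ("c", 1)]

# Sort by key first; groupby only merges *adjacent* equal keys,
# so the sort is what makes the grouping correct.
pairs.sort(key=itemgetter(0))
grouped = {key: [v for _, v in group]
           for key, group in groupby(pairs, key=itemgetter(0))}
print(grouped)  # {'a': [1, 1], 'b': [1, 1], 'c': [1]}
```

Each (key, value-list) entry of `grouped` is then handed to one Reduce invocation.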
Similar MapReduce Models:

Some other MapReduce computations are shown below.
Grep Method:

The Map function identifies the lines of a document that match a given pattern and produces them as output. The Reduce function simply copies each line to the output. (Dean, 2004, P3)
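A sketch of the grep computation in Python; the document name, contents, and pattern are illustrative, and the Reduce step is omitted because it is the identity function:

```python
import re

def map_grep(doc_name, text, pattern):
    """Map: emit every line of the document that matches the pattern.
    Reduce for grep is the identity, so these pairs are the final output."""
    regex = re.compile(pattern)
    for line in text.splitlines():
        if regex.search(line):
            yield (doc_name, line)

matches = list(map_grep("doc1", "foo\nbar\nfoobar", "foo"))
print(matches)  # [('doc1', 'foo'), ('doc1', 'foobar')]
```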
Frequently Accessed URL Count:

The Map operation emits a count of one for each request to a particular URL.

Operation_Map (web address, 1)

The Reduce operation adds together all the values for the same web address and gives the final output.

Operation_Reduce (web address, total count)

(Dean, 2004, P3)
Web Link:

The Map operation generates a (URL, page name) pair whenever the URL is found in a given page.

Operation_Map (URL, page name)

The Reduce operation collects all the pages that contain the specified URL and produces the output.

Operation_Reduce (URL, list (page name))

(Dean, 2004, P3)
Indexing Method:

The Map operation reads the input and emits (word, record name) for each record that contains the given word.

Operation_Map (word, record name)

The Reduce operation collects all the record names for each word and generates the output.

Operation_Reduce (word, list (record name))

(Dean, 2004, P3)
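The indexing method might look like this in Python; the documents and the helper names `map_index`/`reduce_index` are illustrative, and the shuffle is again simulated with a dictionary:

```python
from collections import defaultdict

def map_index(doc_id, text):
    """Map: emit (word, doc_id) for each distinct word in the document."""
    for word in set(text.split()):
        yield (word, doc_id)

def reduce_index(word, doc_ids):
    """Reduce: produce the sorted list of documents containing the word."""
    return sorted(doc_ids)

docs = {"d1": "big data systems", "d2": "big clusters"}

postings = defaultdict(list)          # simulated shuffle stage
for doc_id, text in docs.items():
    for word, d in map_index(doc_id, text):
        postings[word].append(d)

index = {w: reduce_index(w, ids) for w, ids in postings.items()}
print(index["big"])  # ['d1', 'd2']
```

The result is an inverted index: each word maps to the list of records that contain it.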
Sort Method:

The Map operation emits the document ID for every document.

Operation_Map (ID, document name)

The Reduce operation produces the output without making any changes to the Map operation's output.

Operation_Reduce (ID, document name)

Normally it uses an internal sort; if the data volume is high it uses an external sort. (Dean, 2004, P3)
Parallel Databases:

Parallel databases were created about 15 years ago. Nowadays parallel databases are used in many real-world applications that have large amounts of data. A large dataset is split into small parts that are stored on different disks. This follows horizontal partitioning, and every disk has a search key, so we can retrieve data from the database easily. (Silberschatz, 2006)

Execution speed is high, processing finishes in less time compared with other database models, and it is inexpensive. Parallel databases have two types of parallelism:
Intra-operation parallelism:

A single operation executes at a time, spread across different disks and processors. Ex: sort. (Silberschatz, 2006)

Inter-operation parallelism:

Different operations are performed at the same time, in parallel with one another. Both types of parallelism increase the system's execution speed. (Silberschatz, 2006)
Uses of Indexes:

Indexes are used to retrieve a record from the database. They play a role similar to the index of a book or a library catalogue. In large databases the index itself becomes large, so retrieving data from the database takes much time. We use indexing techniques to retrieve data from the database easily. (Silberschatz, 2006)

There are two types of indexing techniques:
Ordered indexes:

Records are retrieved from the database using ordered search keys; each index entry contains a search key for its records. If the search keys are in the same sequential order as the records we call it a clustering index; otherwise it is a non-clustering index. One drawback is that to retrieve data we must access the index file or perform a binary search, so many I/O operations are needed. (Silberschatz, 2006)
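The binary search over an ordered index can be sketched with Python's `bisect` module; the account keys and disk-block addresses below are made up for illustration:

```python
import bisect

# An ordered index: search keys kept sorted, each paired with a pointer
# (here, a hypothetical disk-block address for the record).
keys = ["acct-01", "acct-03", "acct-07", "acct-09"]
pointers = [100, 220, 310, 450]

def lookup(key):
    """Binary-search the ordered index; return the record pointer or None."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return pointers[i]
    return None

print(lookup("acct-07"))  # 310
```

On disk each probe of the search costs an I/O, which is the drawback noted above: a lookup takes O(log n) block reads rather than one.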
Hash indexes:

To avoid these extra I/O operations, we use another technique called hash indexing. To apply this technique we create a function called a hash function. In this method, a group of records is stored in a bucket chosen by a search key, such as a bank branch name. A bucket is typically the size of a disk block, though it can be smaller or larger than that. Let us define the hash function:

Hash function = Hash(K)

K denotes the search key, B denotes the set of buckets, and the hash function maps each K to a bucket in B. (Silberschatz, 2006)
Consider an example: maintaining university student details. Take the department name as the search key (search_key_dept). Based on the department name, the student details are stored in different buckets. If a bucket has free space we can store more student details in it at any time, and we can also delete a record from it. Now, using search_key_dept, we can access the details easily without extra I/O operations. (Silberschatz, 2006)
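A toy hash index along these lines, using the department-name search key from the example; the hash function, bucket count, and records are all illustrative assumptions:

```python
from collections import defaultdict

def hash_bucket(search_key, num_buckets):
    """An illustrative hash function h(K): map a search key to a bucket."""
    return sum(ord(c) for c in search_key) % num_buckets

buckets = defaultdict(list)  # bucket number -> records stored in that bucket

def insert(record, search_key, num_buckets=8):
    """Store the record in the bucket its search key hashes to."""
    buckets[hash_bucket(search_key, num_buckets)].append(record)

def lookup(search_key, num_buckets=8):
    """Fetch only the one bucket for this key -- a single I/O, not a scan."""
    b = hash_bucket(search_key, num_buckets)
    return [r for r in buckets[b] if r["dept"] == search_key]

insert({"name": "Asha", "dept": "CS"}, "CS")
insert({"name": "Ravi", "dept": "EE"}, "EE")
print(lookup("CS"))  # [{'name': 'Asha', 'dept': 'CS'}]
```

The filter inside `lookup` handles collisions: two departments may hash to the same bucket, so the bucket's records are checked against the key.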
Fault Tolerance:

Fault tolerance is an important part of a MapReduce task. It keeps the operation working even when a machine failure occurs. There are two types of system failure, each with its own recovery. (Dean, 2004, P4)
Failure of a slave:

The master node continuously checks the status of each slave. If there is no response from a slave within a particular amount of time, the master marks the slave as failed. When a slave finishes its work, the master resets it to its initial state, after which it is eligible for other work. If a MapReduce operation was running on the failed machine, that work must likewise be returned to its initial state before it is eligible to be rescheduled. (Dean, 2004, P4)

Map operation output is stored on local disk (temporary storage). So if a failure occurs, the Map operation must be re-executed; otherwise we cannot access its output. However, a completed Reduce operation does not need to be re-executed after a failure, and its output remains accessible, because that output is stored in global storage. If failures occur on many slaves at the same time, the master re-executes those slaves' work and produces the output. Suppose Map task 1 runs on slave 1 and slave 1 fails: task 1 is then moved to slave 2, which re-executes it and generates the output. This move is announced to all slaves in the Reduce phase, so any slave that has not yet read the output of slave 1 will read the output of slave 2 instead. (Dean, 2004, P4)
Failure of the master:

There is only one master in a MapReduce process, and failures of the master node are rare. A master failure is easy to detect. If the master fails, a new master starts processing the remaining work that was not completed by the old master; the new master does not need to start the process from the beginning. (Dean, 2004, P4)
Analysis in the presence of failures:

After finishing its operation, each Map slave creates temporary storage files (X files) and stores its output in them. When the Map operation completes its task, the slave sends a message to the master. The message includes the temporary storage file names and locations. The master checks the message: if it is new, the master records the temporary storage files in its data structures; if the message has already been received, it simply discards it. (Dean, 2004, P4)

The Reduce operation produces just one temporary storage file (a Y file). When a Reduce slave finishes its work, it renames its temporary storage file to the final output file. If the same operation runs on many slaves, many rename calls will be executed against the same output file. (Dean, 2004, P4)
Performance Issues:

We measure the performance of MapReduce in this section, using two user-defined programs that process large datasets. The first program searches the large dataset for records matching a given pattern; the second sorts the data, grouping the output and generating the final result. In other words, one part scans through the large dataset, and the other retrieves a small amount of data from it. (Dean, 2004, P8)
Cluster performance:

MapReduce computations are performed by a set of machines, called a cluster. Through the master/slave relationship, the machines may be structured like a tree. Every machine has its own speed and memory, and the cluster's performance is measured from the performance of its machines. (Dean, 2004, P8)
Grep performance:

Consider 10^10 100-byte records as input. The input is split into many pieces of 64 MB each, 15,000 pieces in total. Searching for a three-character pattern yields 92,337 matching records, and the output is stored in one file. MapReduce takes about 150 seconds to complete this operation. (Dean, 2004, P8)
Sort performance:

The sort benchmark uses at most 50 lines of program code to process the large dataset. Three of those lines form the Map program, which extracts the sort key from a line of text. The Map function generates an intermediate key/value pair of the key and the original text. The Reduce operation emits the intermediate key/value pairs unchanged as the output key/value pairs. (Dean, 2004, P9)
System failure:

Consider 1,746 machines processing a MapReduce operation. When failures occur on 200 machines at the same time, the master immediately schedules new machines to redo the work. The operation takes 933 seconds to complete, about 5% more than the normal time, because of the 200 machine failures. (Dean, 2004, P8)
The MapReduce model has been successfully applied in many real-world applications. The three major reasons for using the MapReduce programming model in so many real-world tasks are given below.

MapReduce has simple computational steps and is easy to use. It gives a programmer who has no experience with distributed and parallel systems easy access to distributed system resources.

MapReduce is used to process large amounts of data easily, e.g. generating geographic data or maintaining bank details, since it hides many of the processing details.

MapReduce runs on a set of machines in parallel. It utilises system resources efficiently and increases system execution speed and performance.
This article has illustrated many of the ideas behind MapReduce: parallelising large amounts of distributed data with the MapReduce programming model, analysing the fault tolerance of the system, and conserving network bandwidth by storing a single copy of intermediate data on the temporary local disk instead of storing many copies. We have also seen how redundant data supports fault tolerance.
Dean, Jeff and Ghemawat, Sanjay. MapReduce: Simplified Data Processing on Large Clusters. http://labs.google.com/papers/mapreduce-osdi04.pdf

Lammel, Ralf. Google's MapReduce Programming Model Revisited. http://www.cs.vu.nl/~ralf/MapReduce/paper.pdf

Google Code University. MapReduce Tutorial. http://code.google.com/edu/parallel/mapreduce-tutorial.html

Barry Brumitt. MapReduce Design Patterns.

Alexander Horch, Friedrun Heiber. On Measuring Control Performance on Large Data Sets. http://www05.abb.com/global/scot/scot267.nsf/veritydisplay/24d56839eca60be285256f9b0057aa2f/$File/Dycops_Horch.pdf

Michael Beynon, Chialin Chang, Umit Catalyurek, Tahsin Kurc, Alan Sussman, Henrique Andrade, Renato Ferreira, Joel Saltz. Processing Large-scale Multi-dimensional Data in Parallel and Distributed Environments.

Chao Jin and Rajkumar Buyya. MapReduce Programming Model for .NET-based Cloud Computing. http://www.buyya.com/gridbus/reports/MapReduce-NET-2008.pdf