My Blog

November 24, 2008

MapReduce

Filed under: Miscellaneous — by enotebook @ 11:15 am

Copied from: http://en.wikipedia.org/wiki/MapReduce
MapReduce is a software framework introduced by Google to support parallel computations over large (multiple petabyte[1]) data sets on clusters of computers. This framework is largely taken from map and reduce functions commonly used in functional programming,[2] although the actual semantics of the framework are not the same.[3]
MapReduce implementations have been written in C++, Java, Python and other languages.

Logical view
The Map and Reduce functions of MapReduce are both defined with respect to data structured in (key, value) pairs. Map takes one pair of data with a type in one data domain, and returns a list of pairs in a different domain:
Map(k1,v1) -> list(k2,v2)
The map function is applied in parallel to every item in the input dataset. This produces a list of (k2,v2) pairs for each call. After that, the MapReduce framework collects all pairs with the same key from all lists and groups them together, thus creating one group for each one of the different generated keys.
The Reduce function is then applied in parallel to each group, which in turn produces a collection of values in the same domain:
Reduce(k2, list(v2)) -> list(v2)
Each Reduce call typically produces either one value v2 or an empty return, though one call is allowed to return more than one value. The returns of all calls are collected as the desired result list.
Thus the MapReduce framework transforms a list of (key, value) pairs into a list of values. This behavior is different from the functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combines all the values returned by map.
It is necessary but not sufficient to have implementations of the map and reduce abstractions in order to implement MapReduce. Furthermore, effective implementations of MapReduce require a distributed file system to connect the processes performing the Map and Reduce phases.
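To make the logical view concrete, here is a minimal single-machine sketch of the Map, group-by-key, and Reduce phases in Python. It only illustrates the semantics described above; the function names and driver are invented for this sketch, not part of any real framework:

from collections import defaultdict

def map_reduce(inputs, map_fn, reduce_fn):
    # Map phase: apply map_fn to every (k1, v1) pair, collecting (k2, v2) pairs.
    intermediate = []
    for k1, v1 in inputs:
        intermediate.extend(map_fn(k1, v1))
    # Shuffle: group all values that share the same intermediate key k2.
    groups = defaultdict(list)
    for k2, v2 in intermediate:
        groups[k2].append(v2)
    # Reduce phase: apply reduce_fn to each (k2, list(v2)) group.
    result = []
    for k2, values in groups.items():
        result.extend(reduce_fn(k2, values))
    return result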

Example
The canonical example application of MapReduce is a process to count the appearances of each different word in a set of documents:
map(String name, String document):
  // key: document name
  // value: document contents
  for each word w in document:
    EmitIntermediate(w, 1);

reduce(String word, Iterator partialCounts):
  // key: a word
  // values: a list of aggregated partial counts
  int result = 0;
  for each v in partialCounts:
    result += ParseInt(v);
  Emit(result);

Here, each document is split into words, and each word is counted initially with a “1” value by the Map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to Reduce, so this function just needs to sum all of its input values to find the total appearances of that word.
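For illustration, the same example can be run on the map_reduce sketch from the Logical view section above; the helper names are again invented:

def wc_map(name, document):
    # Emit (word, 1) for every word in the document.
    return [(word, 1) for word in document.split()]

def wc_reduce(word, partial_counts):
    # Sum the partial counts gathered for one word.
    return [(word, sum(partial_counts))]

docs = [("doc1", "a rose is a rose"), ("doc2", "a daisy")]
print(map_reduce(docs, wc_map, wc_reduce))
# [('a', 3), ('rose', 2), ('is', 1), ('daisy', 1)]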

Dataflow
The frozen part of the MapReduce framework is a large distributed sort. The hot spots, which the application defines, are:
* an input reader
* a Map function
* a partition function
* a compare function
* a Reduce function
* an output writer
Input reader
The input reader divides the input into 16MB to 128MB splits and the framework assigns one split to each Map function. The input reader reads data from stable storage (typically a distributed file system like Google File System) and generates key/value pairs.
A common example reads a directory full of text files and returns each line as a record.
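A sketch of such a reader in Python, with the directory layout assumed for illustration:

import os

def read_records(directory):
    # Yield one (filename, line) key/value record per line of each text file.
    for filename in sorted(os.listdir(directory)):
        path = os.path.join(directory, filename)
        with open(path) as f:
            for line in f:
                yield (filename, line.rstrip("\n"))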
Map function
Each Map function takes a series of key/value pairs, processes each, and generates zero or more output key/value pairs. The input and output types of the map can be (and often are) different from each other.
If the application is doing a word count, the map function would break the line into words and output the word as the key and “1” as the value.
Partition function
The output of all of the maps is allocated to particular reduces by the application’s partition function. The partition function is given the key and the number of reduces and returns the index of the desired reduce.
A typical default is to hash the key and take it modulo the number of reduces.
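That default can be sketched in one line of Python; note that a real framework would use a stable hash rather than Python's per-process salted hash():

def partition(key, num_reduces):
    # Route a key to the index of the reduce task that will receive it.
    return hash(key) % num_reduces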
Comparison function
The input for each reduce is pulled from the machine where the map ran and sorted using the application’s comparison function.
Reduce function
The framework calls the application’s reduce function once for each unique key in the sorted order. The reduce can iterate through the values that are associated with that key and output zero or more key/value pairs.
In the word count example, the reduce function takes the input values, sums them and generates a single output of the word and the final sum.
Output writer
The Output Writer writes the output of the reduce to stable storage, usually a distributed file system, such as Google File System.

Distribution and reliability
MapReduce achieves reliability by parceling out a number of operations on the set of data to each node in the network; each node is expected to report back periodically with completed work and status updates. If a node falls silent for longer than that expected interval, the master node (similar to the master server in the Google File System) records the node as dead, and sends out the node’s assigned work to other nodes. Individual operations use atomic operations for naming file outputs as a double check to ensure that there are no parallel conflicting threads running; when files are renamed, it is possible to also copy them to another name in addition to the name of the task (allowing for side-effects).
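The atomic-rename pattern can be sketched as follows; the file names and helper are illustrative, but os.rename itself is atomic within a POSIX filesystem, which is the property being relied on:

import os

def commit_output(data, final_path, attempt_id):
    # Each speculative attempt writes to its own temporary name ...
    tmp_path = "%s.tmp.%s" % (final_path, attempt_id)
    with open(tmp_path, "w") as f:
        f.write(data)
    # ... and only a completed attempt renames into place, so final_path
    # is always either absent or a single attempt's whole output.
    os.rename(tmp_path, final_path)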
The reduce operations operate much the same way, but because of their inferior properties with regard to parallel operations, the master node attempts to schedule reduce operations on the same node, or in the same rack as the node holding the data being operated on; this property is desirable as it conserves bandwidth across the backbone network of the datacenter.
Implementations may not be highly available; in Hadoop, for example, the NameNode is a single point of failure for the distributed filesystem, and if the JobTracker fails, all outstanding work is lost.

Uses
MapReduce is useful in a wide range of applications, including: “distributed grep, distributed sort, web link-graph reversal, term-vector per host, web access log stats, inverted index construction, document clustering, machine learning, statistical machine translation…” Most significantly, when MapReduce was finished, it was used to completely regenerate Google’s index of the World Wide Web, and replaced the old ad hoc programs that updated the index and ran the various analyses. [4]
MapReduce’s stable inputs and outputs are usually stored in a distributed file system. The transient data is usually stored on local disk and fetched remotely by the reduces.

Criticism
David DeWitt and Michael Stonebraker, pioneering experts in parallel databases and shared-nothing architectures, have made some controversial assertions about the breadth of problems that MapReduce can be used for. They called its interface too low-level, and questioned whether it really represents the paradigm shift its proponents have claimed it is.[5] They challenge the MapReduce proponents’ claims of novelty, citing Teradata as an example of prior art that has existed for over two decades; they compared MapReduce programmers to Codasyl programmers, noting both are “writing in a low-level language performing low-level record manipulation”.[5] MapReduce’s use of input files and lack of schema support prevents the performance improvements enabled by common database system features such as B-trees and hash partitioning, though projects such as PigLatin and Sawzall are starting to address these problems.[6]

Implementations
* The Google MapReduce framework is implemented in C++ with interfaces in Python and Java.
* The Hadoop project is a free open source Java MapReduce implementation.
* Greenplum is a commercial MapReduce implementation, with support for Python, Perl, SQL and other languages.[7]
* Phoenix is a shared-memory implementation of MapReduce written in C.
* MapReduce has also been implemented for the Cell Broadband Engine, also in C.
* MapReduce has been implemented on NVIDIA GPUs (graphics processors) using CUDA.
* Qt Concurrent is a simplified version of the framework, implemented in C++, used for distributing a task between multiple processor cores.
* CouchDB uses a MapReduce framework for defining views over distributed documents.
* Skynet is an open source Ruby implementation of Google’s MapReduce framework.
* Disco is an open source MapReduce implementation by Nokia. Its core is written in Erlang and jobs are normally written in Python.
* Aster Data Systems nCluster In-Database MapReduce implements MapReduce inside the database.
* Hive is an open-source framework from Facebook that provides a SQL-like language over files, layered on the open-source Hadoop MapReduce engine.

References
Specific references:
1. ^ Google spotlights data center inner workings | Tech news blog – CNET News.com
2. ^ “Our abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages.” -“MapReduce: Simplified Data Processing on Large Clusters”, by Jeffrey Dean and Sanjay Ghemawat; from Google Labs
3. ^ “Google’s MapReduce Programming Model — Revisited” — paper by Ralf Lammel; from Microsoft
4. ^ “How Google Works”. baselinemag.com. “As of October, Google was running about 3,000 computing jobs per day through MapReduce, representing thousands of machine-days, according to a presentation by Dean. Among other things, these batch routines analyze the latest Web pages and update Google’s indexes.”
5. ^ a b David DeWitt; Michael Stonebraker. “MapReduce: A major step backwards”. databasecolumn.com. Retrieved on 2008-08-27.
6. ^ David DeWitt; Michael Stonebraker. “MapReduce II”. databasecolumn.com. Retrieved on 2008-08-27.
7. ^ Parallel Programming in the Age of Big Data
General references:
* Dean, Jeffrey & Ghemawat, Sanjay (2004). “MapReduce: Simplified Data Processing on Large Clusters”. Retrieved Apr. 6, 2005.
* “High-performance analytics”.

September 17, 2008

Funny math

Filed under: Miscellaneous — by enotebook @ 12:56 pm


April 23, 2008

VMware Server errors out when booting two virtual Linux boxes

Filed under: Miscellaneous — by enotebook @ 10:04 am

Error:
Cannot open the disk ‘FileLocation.vmdk’ or one of the snapshot disks it depends on.
Reason: Failed to lock the file.

Action:
Add disk.locking = "FALSE" to the configuration file (.vmx).
In my experience, this also fixes some otherwise unexplained failures when booting two or more boxes.

April 10, 2008

Linux hangs on “Enabling swap space” at bootup

Filed under: Miscellaneous — by enotebook @ 2:05 pm

Problem:
While booting into run level 3, Red Hat Linux suddenly gets stuck at enabling swap space.

Solution:
Boot into single-user mode and make sure /etc/fstab has correct entries. You can also reformat the swap partition.

Before formatting the swap partition, disable it first. In single-user mode, run:

#swapoff /dev/hda5 (the device from your /etc/fstab swap entry)
To create a new swap partition, do the following:
step 1:
Create a new partition using fdisk, say /dev/hdaX. (Deleting the old swap partition and recreating it in the original space also works.)
step 2:
Write a swap signature on that partition:
#mkswap -v1 /dev/hdaX
step 3:
Enable the swap partition:
#swapon /dev/hdaX
step 4:
Check the status of the swap partition:
#swapon -s
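If the new partition should be enabled automatically at the next boot, make sure the /etc/fstab swap line points at it; a typical entry looks like this (device name assumed):

/dev/hdaX    swap    swap    defaults    0 0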

December 14, 2007

Great article about RAID

Filed under: Miscellaneous — by enotebook @ 2:58 pm

Redundant Array of Independent Disks (RAID) Original URL: http://www.commodore.ca/windows/raid5/raid5.htm

RAID is an acronym for Redundant Array of Independent Disks. The term was coined in 1988 in a paper describing array configuration and application by researchers and authors Patterson, Gibson and Katz of the University of California at Berkeley.

In the past, computer systems were often restricted to writing information to a single disk. This disk was usually expensive and prone to failure. Hard disks have always been the weakest link in computer systems, because the devices are the only mechanical member of an otherwise all-electronic system. The disk drive contains a mass of moving, mechanical parts operating at high speed. The question is not whether the hard drive will fail, but when a hard drive will fail.

RAID was designed to revolutionize the way computers managed and accessed mass storage of data by providing an inexpensive and redundant system of disks. Instead of writing to one Single Large Expensive Disk (SLED), RAID wrote to multiple inexpensive disks. The name originally stood for Redundant Array of Inexpensive Disks, but “Inexpensive” has since been revised to “Independent”.

RAID Fundamentals
RAID accomplishes its goals of redundancy and fault tolerance by doing two things: one is striping and the other is parity checking. Striping means that files are written a block at a time over multiple disks. The striping technique divides data across many drives and improves data transfer rates and total disk transaction times. Such systems are good for transaction processing, but suffer from poor reliability because the system is only as reliable as the weakest individual drive.

Parity checking ensures that the data is valid by performing a redundancy check on all data following a transmission. With parity, one of the disks on a RAID system can fail and the other disks have the ability to rebuild the failed disk. In both cases, these functions are transparent to the operating system. The Disk Array Controller (DAC) handles both striping and parity control.

Components
The major components in RAID are the Disk Array Controller (DAC) and a rank of five disks. In a RAID-5 configuration, for example, data is striped across all five disks and the parity is used to recover a failed disk. There are many different RAID levels. Some RAID levels are designed for speed, some for protection, and some, like RAID-5, provide a combination of both. We will discuss them all.

Striping the Data
In the past a computer would write a file to a single disk. Striping allows you to break up a file and write different pieces to multiple disks concurrently. If you have 5 blocks of data in a file and stripe them across 5 disks, each block would be written to a separate disk simultaneously. If you had 5 OLTP transactions, each containing less than one block, you could process 5 different transactions concurrently.

Most RAID levels stripe at the block level but RAID can stripe at the bit or byte level. The size of the block is determined by the system administrator and is referred to as the stripe depth.

To maximize a disk array subsystem’s transaction processing capabilities, data must be written and read concurrently to and from multiple drives. To accomplish this, blocks of user data are striped across the array of drives. A stripe consists of a row of sectors (a sector consists of 512 bytes) located in the same position on each disk across the width of the array. Stripe depth, or the number of sectors in each data block, is defined by the subsystem software.

Stripe depth directly affects performance in that a too-shallow depth requires the system to execute more I/O commands than are needed. If the specified depth is too large, the processor’s multi-tasking capabilities and the advantages provided by multiple drives and actuators may be negated.

In an ideal transaction environment, each request from the host involves only one drive, allowing multiple concurrent transactions to multiple drives.
The process of striping data to the array drives resolves the problem described earlier of overloading one system drive while another sits idle. Data striping eliminates the use of dedicated drives and ensures that the data processing load is balanced among the available drives, while increasing performance by writing multiple blocks concurrently.
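Ignoring parity placement, the round-robin placement arithmetic is simple; a sketch in Python with invented names:

def locate_block(block_index, num_disks):
    # Consecutive blocks land on consecutive disks; each full pass over
    # the array moves down one stripe (row).
    disk = block_index % num_disks
    stripe = block_index // num_disks
    return disk, stripe

# On a 5-disk array, blocks 0..4 of a file land on disks 0..4 in stripe 0,
# so all five can be written concurrently.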

Parity
People often confuse parity with mirroring (or shadowing). Mirroring involves the creation of a duplicate copy of a disk. Mirroring is a technique where the data is written simultaneously to a pair of drives. These systems offer excellent reliability and have very good transaction processing results because the work can be carried out by either drive in the pair. The penalty paid is that two drives must be purchased to get the capacity of only one. The overhead of mirroring is 100% or double the disk space. When a disk fails the mirrored disk is used in its place.

Parity provides the same general protection as mirroring, but has less overhead. If a user has an array of five disks, four are used for data and one is used for parity. The overhead is only 20%. This is quite an advantage when cost is a concern.

A computer writes only zeros or ones to represent data. A method to generate parity is called eXclusive OR (XOR). A bit is taken (either a 0 or a 1) from each disk and totaled. If the total is even, the parity bit is set to 0. If the total is odd, the parity bit is set to 1. For example, if the first four bits (one from each data disk) add up to 2, the even total makes the parity bit 0; if the next four bits add up to 3, the odd total makes the parity bit 1.
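The computation and the rebuild can both be sketched in Python; this is pure illustration, since a real Disk Array Controller does the XOR in hardware over whole sectors:

from functools import reduce

def parity(blocks):
    # XOR the blocks together byte by byte to form the parity block.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"\x01\x02", b"\x01\x01", b"\x00\x03", b"\x01\x00"]  # four data disks
p = parity(data)                                             # the parity disk

# Rebuilding a failed disk is the same XOR over the survivors plus parity.
rebuilt = parity([data[0], data[2], data[3], p])
assert rebuilt == data[1]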

Depending on the RAID level the parity will either be on one disk or be spread among all the disks. Either way it is 1/5th or 20% of the space when you are utilizing five disks. Parity is 1/4th or 25% of the space when utilizing four disks, and 1/3rd or 33% when utilizing three disks.

When a failed disk is replaced, the Disk Array Controller rebuilds its previous contents from the surviving disks and the parity.

RAID Configuration Levels
The industry currently has agreed upon six RAID configuration levels and designated them as RAID 0 through RAID 5. Each RAID level is designed for speed, data protection, or a combination of both. The RAID levels are:
* RAID-0: Data Striping Array
* RAID-1: Mirrored Disk Array
* RAID-2: Parallel Array, Hamming Code
* RAID-3: Parallel Array with Parity
* RAID-4: Independent Actuators with a dedicated Parity Drive
* RAID-5: Independent Actuators with parity spread across all drives
The most popular RAID levels are RAID-0, RAID-1, and RAID-5. These are described next in more detail.

RAID – 0 Data Striping Array
RAID-0 stripes the data across all the drives, but doesn’t utilize parity. If one of the disks fails, the data must be restored on all five drives from backups. This RAID is designed for speed and is the fastest of all the RAIDs, but provides the least protection.

RAID – 1 Transparent or Striped Mirroring
The RAID-1 technology requires that each primary data disk have a mirrored disk. The contents of the primary disk and the mirror disk are identical. RAID-1 provides the best data protection, but is slower than RAID 0 or RAID 5.

When data is written on the primary disk, a write also occurs on the mirror disk. The mirroring process is invisible to the user. For this reason, RAID-1 is also called transparent mirroring. The user can set up RAID-1 to write to a single disk and have that disk mirrored, or can stripe to a number of disks with each of the striped disks also having a mirrored copy. The latter is called striped mirroring, RAID 1+0, RAID 10, or in some cases RAID 6.

RAID – 5 Independent Actuators, Parity Spread
RAID-5 stripes data at the block level and also utilizes parity. With the RAID-5 technology, user information and parity are combined on every disk in the array. Independent and/or parallel data read and write operations are performed. This RAID is the most popular of all RAID levels. RAID-5 is not as fast as RAID-0 and does not provide as much protection as RAID-1 mirroring, but it provides good speed and good protection. This is why it is often the RAID level of choice.

RAID Disk Array Components
The major components of RAID Disk Arrays are the Disk Array Controllers, five SCSI channels, and one or more ranks of disks. There are usually two Disk Array Controllers (DACs) working as a team. The implementation used to consist of one active DAC and one passive DAC in case of a DAC failure. Most implementations today utilize two active DACs. They both share responsibility for controlling the disks, but either one will control all ranks if the other DAC should fail. For example, two DACs might share responsibility for four ranks. You can configure the disks with any supported RAID level. You can even break up the disks to configure multiple RAIDs within the same rank.

Internal/External Disk Arrays
In the past, Disk Arrays were exclusively connected to the main computer through a cable and were always in an external box. There is a SCSI length limitation for external Disk Arrays of around 80 feet (25 meters). A repeater could be used for an additional 25 meters, but that would result in a five-percent loss in performance.

Many computers today have internal RAID. The CPUs communicate with the disks internally, but the same fundamentals still exist. Whether internal or external, the disk array will have ranks of disks that are controlled by one or two disk array controllers.

Key Points to Remember
* RAID was created to enhance data performance, reliability and availability.
* Striping, parity checking and mirroring are three primary functions of RAID systems.
* RAID performs its functions transparently to the operating system.
* Systems are typically defined by ranks consisting of five disks each, connected to one or two Disk Array Controllers.
* Different RAID levels provide varying degrees of speed and data protection.
