What is Big Data?
Big Data is not a replacement for existing analytical systems such as cubes and data warehouses. Big Data processes both remake and complement existing analytic workflows by simplifying the production of structured information from emerging "ambient" data sources.
When you have non-traditional data sources such as social media, IoT devices, or automated robotics, Big Data lets you turn this unstructured or semi-structured data into sensible analytical data. So here are the key points:
- Enabling rapid sense-making over un-enriched and un-modeled data
- Enabling analytics at scale over ambient data
- Enabling creation of ambient data driven models
- Existing systems enable sense-making over modeled data
- There is tremendous potential value in making sense of ambient data
Comparison Chart between an RDBMS System and a Big Data MapReduce System
As you process more and more data, and you want interactive response times, in most cases you need more expensive hardware to support the infrastructure. Failures at the disk and network level can be quite problematic, and maintaining ACID (atomicity, consistency, isolation, durability) guarantees can be a challenge.
You can work around this problem with more expensive hardware and systems, such as purchasing database appliances from Oracle or Microsoft (Essbase, PDW), but adoption would be limited due to the high costs.
With Big Data and Hadoop, we use commodity hardware without the need for specialized, expensive network and disk infrastructure. We give up strict ACID guarantees, but we get BASE (basically available, soft state, eventually consistent).
MapReduce (Split, Shuffle)
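To make the split/shuffle idea concrete, here is a minimal sketch of the classic word-count job in plain Python. This only mimics the three phases Hadoop runs at scale (map, shuffle, reduce); the function names and sample lines are my own, not part of any Hadoop API.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key, as Hadoop does
    # between the map and reduce phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values for each key
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data is big", "data at scale"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'is': 1, 'at': 1, 'scale': 1}
```

In a real cluster, the input is split across many mappers and the shuffle moves data over the network to many reducers; the logic per phase, however, is exactly this simple.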
The Hadoop Ecosystem
What is NoSQL?
Broadly put, NoSQL is analogous to OLTP if you imagine Hadoop as a BI system. NoSQL systems comprise many components:
- memcached, and more.
Implementations of Google's Bigtable – a distributed storage system for managing structured data at very large scale
A Bigtable is a sparse, distributed, persistent multidimensional sorted map. The map is indexed by a row key, column key, and a timestamp; each value in the map is an uninterpreted array of bytes.
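That definition maps naturally onto a plain dictionary. Below is a toy model of the Bigtable data model in Python: a map from (row key, column key, timestamp) to uninterpreted bytes. The table contents and the `latest` helper are invented for illustration; they are not any real Bigtable or HBase API.

```python
# Toy Bigtable: (row key, column key, timestamp) -> uninterpreted bytes.
# Row/column names and cell values below are made up for illustration.
table = {
    ("com.example/index", "anchor:about", 2): b"About us",
    ("com.example/index", "anchor:about", 1): b"About",
    ("com.example/index", "contents:html", 1): b"<html>...</html>",
}

def latest(table, row, column):
    # Return the most recent version of a cell (highest timestamp),
    # or None if the cell does not exist -- the map is sparse
    versions = [(ts, val) for (r, c, ts), val in table.items()
                if r == row and c == column]
    return max(versions)[1] if versions else None

print(latest(table, "com.example/index", "anchor:about"))  # b'About us'
```

Note that multiple timestamped versions of the same cell coexist, which is exactly how Bigtable (and HBase) keep history.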
What is HBase?
- Efficient at Random Reads/Writes
- Distributed, large scale data store
- Utilizes Hadoop for persistence
- Both HBase and Hadoop are distributed
Cassandra Implementation at Netflix
Where did Cassandra originate?
Cassandra was originally created at Facebook. Interestingly, for Facebook Messaging, Facebook decided to use HBase instead.
What are Hive and Hive Queries?
Hive is the favourite method for most SQL professionals because it uses SQL-like queries to query data from Hadoop. It is a data warehouse system for Hadoop.
With Hive, you can do the following:
- Analysis of large datasets stored in HDFS
- SQL-like interface
- No Java programming needed
- Ad-hoc queries via HiveQL (translated into MapReduce jobs)
You can connect from Power Query, Power BI, or PowerPivot for Excel using the Hive ODBC driver or native connectors.
Example HIVE Query:
-- Managed table: Hive owns the data; dropping the table deletes it
CREATE TABLE indro_managed (bar INT);
LOAD DATA INPATH '/user/larar/data.txt'
INTO TABLE indro_managed;

-- External table: dropping the table leaves the underlying data in place
CREATE EXTERNAL TABLE indro_external (bar INT);
LOAD DATA INPATH '/user/larar/data.txt'
INTO TABLE indro_external;
Comparison Table for RDBMS and Hive
What is Mahout?
It is a scalable machine learning library that leverages the Hadoop infrastructure.
Key Use Cases:
- Recommendation mining: Examine user behavior, build recommendation model
- Clustering: Grouping data into related topics
- Classification: Learn from classified documents to assign categories to unlabeled data
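To give a feel for the first use case, here is the core idea behind recommendation mining, item co-occurrence counting, sketched in plain Python. This is a conceptual illustration only, not Mahout's Java API; the basket data is invented.

```python
from collections import Counter
from itertools import combinations

# User purchase histories (invented sample data)
baskets = [
    {"milk", "bread", "butter"},
    {"milk", "bread"},
    {"bread", "butter"},
]

# Count how often each pair of items appears in the same basket
cooccur = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        cooccur[(a, b)] += 1

def recommend(item):
    # Recommend items ranked by how often they co-occur with `item`
    scores = Counter()
    for (a, b), n in cooccur.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [i for i, _ in scores.most_common()]

print(recommend("milk"))  # bread ranks ahead of butter
```

Mahout runs this kind of counting as distributed MapReduce jobs over millions of users, but the underlying model, "people who bought X also bought Y", is this simple.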
What is R Programming?
- Statistical computing and graphing programming language
- RHIPE: R and Hadoop Integration
- Open source GNU Project
What is Sqoop?
It is a data connector system between Hadoop and relational databases. It supports:
- Importing RDBMS data to files (delimited or sequence) in HDFS, or tables in Hive
- Importing RDBMS query results to files (delimited or sequence) in HDFS, or tables in Hive
- Exporting files and Hive tables to RDBMS tables
- Executes MapReduce jobs to transfer data in parallel with fault tolerance
What is Pig?
It is a data-flow platform to transform and analyze HDFS data. It has the following benefits:
- Scripting – No Java Programming Needed!
- Focus on semantics, not on implementation
- Extensible through user defined functions and methods
- Pig can operate on data whether it has metadata or not.
- Pig is not tied to one particular parallel framework.
- Pig is designed to be easily controlled and modified by its users.
- Pig processes data quickly.
Read more about Pig's philosophy here: http://pig.apache.org/philosophy.html
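A Pig Latin script expresses a pipeline of transformations: load, filter, group, aggregate. As a loose analogy (this is Python, not Pig Latin), the same data flow looks like this; the event records and field positions are invented for illustration.

```python
# Loose Python analogy of a Pig data flow: LOAD -> FILTER -> GROUP -> COUNT.
records = [
    ("alice", "click"), ("bob", "click"),
    ("alice", "buy"), ("alice", "click"),
]

# FILTER: keep only click events
clicks = (r for r in records if r[1] == "click")

# GROUP BY user, then COUNT per group
counts = {}
for user, _ in clicks:
    counts[user] = counts.get(user, 0) + 1

print(counts)  # {'alice': 2, 'bob': 1}
```

The point of Pig is that you write only this kind of declarative flow, and the platform compiles it into parallel MapReduce jobs for you.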
Workflow, Management & Monitoring with Oozie, Ambari, & ZooKeeper
What is Oozie?
It is a workflow processing system. Users define a series of jobs, written in multiple languages, and link them to one another; for example, a particular query can be set to start only after the previous jobs it relies on for data have completed.
What is Ambari?
It is a management system for provisioning, managing, and monitoring a Hadoop cluster. With Ambari you can:
- Install: Wizard for installing Hadoop services across any number of nodes
- Manage: Central management for starting, stopping, and reconfiguring Hadoop services across the entire cluster
- Monitor: Dashboard for monitoring health and status of the Hadoop cluster. Sends email alerts when your attention is needed (e.g., a node goes down, remaining disk space is low, etc)
What is ZooKeeper?
It is a centralized service for maintaining configuration information and naming. Its key capabilities include:
- Providing distributed synchronization
- Providing group services
- High throughput, low latency, highly available, strictly ordered access
What is HCatalog?
It provides centralized metadata management for Hadoop: a shared schema, shared data types, and a table storage abstraction.
- Notifications via Java Message Service (JMS)
- Works across Pig, MapReduce, and Hive
I hope this introductory post gave you a good understanding of what Big Data is all about. If this was helpful, do not forget to give your feedback in the comments section. Cheers!