How to Import Data From MySQL to Hadoop Using Sqoop

Sqoop is a basic data transfer tool used to import and export data between relational databases and Hadoop. Sqoop can also import from Teradata and other JDBC databases. Sqoop must be installed on the Hadoop cluster before integration, so install Sqoop first. Before Sqoop can access MySQL, you have to make two changes in the MySQL database.

1. Enable remote access for the database:
Step 1: Open the configuration file: vim /etc/mysql/my.cnf
Step 2: Change the bind address from localhost to the server's IP address: bind-address = <IP Address>
Step 3: Restart MySQL: /etc/init.d/mysqld restart

2. Create a user for all nodes:
Step 1: Connect with the root password: mysql -uroot -p<root_pass>
Step 2: Create the user: CREATE USER 'hadoop'@'<ip of master>' IDENTIFIED BY 'hadoop'; (the first 'hadoop' is the username and the second 'hadoop' is the password)

Sqoop Installation:
Step 1: Download Sqoop from Apache Sqoop. Link – http://www.apache.org/d
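Once MySQL remote access and the hadoop user are set up as above, a first import might look like the following sketch. The database name testdb, the table employees, and the HDFS paths are placeholders, not names from the original post:

```shell
# Grant the new 'hadoop' user access to the source database
# ('testdb' is a placeholder database name)
mysql -uroot -p -e "GRANT ALL PRIVILEGES ON testdb.* TO 'hadoop'@'<ip of master>'; FLUSH PRIVILEGES;"

# Import one table from MySQL into HDFS with a single mapper
# ('employees' and the target directory are placeholders)
sqoop import \
  --connect jdbc:mysql://<IP Address>/testdb \
  --username hadoop \
  --password hadoop \
  --table employees \
  --target-dir /user/hadoop/employees \
  -m 1
```

The -m 1 flag runs a single map task, which avoids the need for a --split-by column on tables without a primary key.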
Showing posts from July, 2017
Apache Sqoop Tutorial – Basic Sqoop Import and Export Operations

What is Sqoop?
Sqoop is a tool used to transfer data between an RDBMS and HDFS: it imports data from datastores into HDFS and exports it back. It uses MapReduce to process and move large amounts of data. Sqoop only works with relational databases, and it is an open source tool originally written at Cloudera.

Main Functions of Sqoop:
Import single or selected tables.
Import a complete database.
Filter out selected columns and rows from any table.

Workflow of Sqoop:
Sqoop Import – Imports an individual table from an RDBMS into HDFS. Each row of the table becomes one record, stored as text files or sequence files.
Sqoop Export – Exports files from HDFS back to an RDBMS; each record in the file becomes a row in the target table.

Some Sqoop Import Operations:
1. General Syntax:
$ sqoop import (generic args) (import args)
$ sqoop-import (generic args) (import args)
2. How to import Tab
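The import and export workflow above can be sketched as concrete commands. The database testdb, the tables orders and orders_summary, and the HDFS directories are assumed placeholder names for illustration:

```shell
# Import: each row of the RDBMS table becomes a record in HDFS
# ('testdb' and 'orders' are placeholder names)
sqoop import \
  --connect jdbc:mysql://localhost/testdb \
  --username hadoop --password hadoop \
  --table orders \
  --target-dir /user/hadoop/orders

# Filter selected columns and rows from a table during import
sqoop import \
  --connect jdbc:mysql://localhost/testdb \
  --username hadoop --password hadoop \
  --table orders \
  --columns "id,total" \
  --where "total > 100" \
  --target-dir /user/hadoop/orders_filtered

# Export: records in the HDFS files become rows in the RDBMS table
sqoop export \
  --connect jdbc:mysql://localhost/testdb \
  --username hadoop --password hadoop \
  --table orders_summary \
  --export-dir /user/hadoop/orders
```

Note that sqoop export requires the target table to already exist in the database; Sqoop does not create it.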
Main Features of Apache Hadoop

Hadoop is an open source framework and a popular data storage system. Hadoop is used to store large sets of structured, semi-structured and unstructured data. Here we discuss the main features of Apache Hadoop.

1. Hadoop is Open Source: The Hadoop framework is open source, so we can change the project code according to business requirements.
2. Fault Tolerant: In Hadoop, all data is stored in HDFS, and each block of data is replicated two or more times. The block replicas are distributed across the Hadoop cluster. If any block fails or goes out of service, the system automatically assigns the work to another replica, so data processing continues without interruption.
3. Scalable: Hadoop runs on commodity hardware. New nodes can be added without any downtime. Hadoop provides horizontal scalability, so new nodes can be added to the system on the fly. Hadoop applications can run on clusters of more than a thousand nodes.
4. Cost Effective: Hadoop also offers cost le
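The fault tolerance described in point 2 can be observed and tuned per file with the HDFS shell. The file path below is a placeholder for illustration:

```shell
# Show the current replication factor of a file in HDFS
# (the path is a placeholder)
hdfs dfs -stat "replication=%r" /user/hadoop/sample.txt

# Raise the replication factor of the file to 3 and wait
# for the extra replicas to be created
hdfs dfs -setrep -w 3 /user/hadoop/sample.txt
```

A higher replication factor means more block failures can be tolerated, at the cost of extra storage across the cluster.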