1. Make sure the Hadoop cluster is up and running
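Before touching HBase, it is worth verifying that HDFS is healthy, since HBase stores all of its data on it. A minimal check from hadoop-master, assuming the hadoop binaries are on PATH:

# jps should list NameNode (and ResourceManager) on the master;
# the dfsadmin report should show the slave datanodes as live.
jps
hdfs dfsadmin -report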
2. Check HBase's compatibility with Hadoop and download a matching release (if you plan to follow the later articles in this series, hadoop-2.5.2, hbase-1.1.2, hive-1.2.1 and spark-2.0.0 are recommended)
S: supported
X: not supported
NT: not tested
Hadoop version | HBase-0.94.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x | HBase-1.2.x |
Hadoop-1.0.x | X | X | X | X | X |
Hadoop-1.1.x | S | NT | X | X | X |
Hadoop-0.23.x | S | X | X | X | X |
Hadoop-2.0.x-alpha | NT | X | X | X | X |
Hadoop-2.1.0-beta | NT | X | X | X | X |
Hadoop-2.2.0 | NT | S | NT | NT | NT |
Hadoop-2.3.x | NT | S | NT | NT | NT |
Hadoop-2.4.x | NT | S | S | S | S |
Hadoop-2.5.x | NT | S | S | S | S |
Hadoop-2.6.0 | X | X | X | X | X |
Hadoop-2.6.1+ | NT | NT | NT | NT | S |
Hadoop-2.7.0 | X | X | X | X | X |
Hadoop-2.7.1+ | NT | NT | NT | NT | S |
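To see which versions are actually installed before consulting the matrix, both projects print their version from the command line:

# Compare the reported versions against the matrix above
hadoop version
bin/hbase version   # run from the unpacked HBase directory (see step 4)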
3. Node layout
Hostname | IP |
hadoop-master | 172.16.172.13 |
hadoop-slave01 | 172.16.172.14 |
hadoop-slave02 | 172.16.172.15 |
Host | 172.16.172.1 |
Gateway | 172.16.172.2 |
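HBase addresses region servers and ZooKeeper peers by hostname, so all three machines need consistent name resolution. Assuming there is no DNS server on this network, each node's /etc/hosts should contain the mappings from the table above:

172.16.172.13   hadoop-master
172.16.172.14   hadoop-slave01
172.16.172.15   hadoop-slave02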
4. Extract the archive
cd /home/hadoop/Deploy
tar -zxvf hbase-1.1.2-bin.tar.gz
5. Edit the configuration file conf/hbase-env.sh
#...
# The java implementation to use. Java 1.7+ required.
export JAVA_HOME=/usr/java/jdk1.7.0_79/
#...
# Tell HBase whether it should manage its own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true
#...
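With HBASE_MANAGES_ZK=true, start-hbase.sh will itself start a ZooKeeper quorum (HQuorumPeer processes) on the hosts listed in hbase.zookeeper.quorum, so no separate ZooKeeper installation is needed. It is also worth confirming that JAVA_HOME points at a real Java 1.7+ install (the jdk1.7.0_79 path is this cluster's layout; adjust to yours):

# Should print a java version of 1.7 or higher
/usr/java/jdk1.7.0_79/bin/java -version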
6. Edit the configuration file conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop-master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hadoop-master:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop-master,hadoop-slave01,hadoop-slave02</value>
  </property>
</configuration>
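The hdfs://hadoop-master:9000 prefix of hbase.rootdir must match fs.defaultFS in Hadoop's core-site.xml, or HBase will not be able to reach the filesystem. A quick check, assuming Hadoop is unpacked next to HBase under /home/hadoop/Deploy:

# The <value> following fs.defaultFS should read hdfs://hadoop-master:9000
grep -A1 'fs.defaultFS' /home/hadoop/Deploy/hadoop-2.5.2/etc/hadoop/core-site.xml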
7. Edit the configuration file conf/regionservers (one hostname per line)
hadoop-master
hadoop-slave01
hadoop-slave02
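The configured HBase directory must be present on every node listed above. Assuming passwordless SSH between the nodes (already required by Hadoop itself) and the same /home/hadoop/Deploy layout everywhere, one way to push it out:

# Copy the configured tree to both slaves
scp -r /home/hadoop/Deploy/hbase-1.1.2 hadoop@hadoop-slave01:/home/hadoop/Deploy/
scp -r /home/hadoop/Deploy/hbase-1.1.2 hadoop@hadoop-slave02:/home/hadoop/Deploy/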
8. Start Hadoop first, then start HBase
bin/start-hbase.sh
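If startup succeeded, jps on the master should now also show HMaster, HRegionServer and HQuorumPeer (the master is in both regionservers and the ZooKeeper quorum), and each slave should show HRegionServer and HQuorumPeer. The master web UI is served on port 16010 in HBase 1.1:

# Run on each node to check the new processes
jps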
9. Connect to HBase
bin/hbase shell
> list
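A quick smoke test inside the shell, using a hypothetical table t1 with one column family cf:

> create 't1', 'cf'
> put 't1', 'row1', 'cf:a', 'value1'
> scan 't1'
> list
> exit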