First, a note: if all you need are the Linux native libraries, Hadoop already ships with them.
Second, if you do want to compile them, I recommend building from the Hadoop source tree following the official instructions, rather than doing it by hand like I did...
If you enjoy tinkering, read on:
1. Copy the files and directories below, preserving the source-tree layout
hadoop-2.5.2-src\hadoop-common-project\hadoop-common\src\main\native
hadoop-2.5.2-src\hadoop-common-project\hadoop-common\src\CMakeLists.txt
hadoop-2.5.2-src\hadoop-common-project\hadoop-common\src\config.h.cmake
hadoop-2.5.2-src\hadoop-common-project\hadoop-common\src\JNIFlags.cmake
hadoop-2.5.2-src\hadoop-hdfs-project\hadoop-hdfs\src\main\native
hadoop-2.5.2-src\hadoop-hdfs-project\hadoop-hdfs\src\CMakeLists.txt (you may need to adjust the relative path to JNIFlags.cmake, which it depends on)
hadoop-2.5.2-src\hadoop-hdfs-project\hadoop-hdfs\src\config.h.cmake
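The copy step above can be scripted; here is a sketch (the function name and destination directory are my own, not from the build docs), which recreates the relative layout under a working directory:

```shell
# Copy the native sources and CMake files out of an unpacked
# hadoop-2.5.2-src tree, preserving the relative directory layout.
copy_native_sources() {
    src="$1"; dst="$2"
    for p in \
        hadoop-common-project/hadoop-common/src/main/native \
        hadoop-common-project/hadoop-common/src/CMakeLists.txt \
        hadoop-common-project/hadoop-common/src/config.h.cmake \
        hadoop-common-project/hadoop-common/src/JNIFlags.cmake \
        hadoop-hdfs-project/hadoop-hdfs/src/main/native \
        hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt \
        hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
    do
        mkdir -p "$dst/$(dirname "$p")"   # recreate the parent directory
        cp -r "$src/$p" "$dst/$p"         # copy a single file or a whole dir
    done
}
# usage: copy_native_sources ~/Build/hadoop-2.5.2-src ~/Build/native-work
```

Remember the note above about fixing the relative path to JNIFlags.cmake inside the copied hadoop-hdfs CMakeLists.txt afterwards.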
2. Build libhadoop
2.1 Check and install the dependencies
# gcc, make and a JDK are needed; most people already have these
# zlib is needed
apt-get install zlib1g-dev
# cmake is needed
apt-get install cmake
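As a convenience (my own addition, not part of the original steps), a quick pre-flight check confirms the build tools are actually on PATH before you start:

```shell
# Print which required tools are present; return non-zero if any is missing.
check_tools() {
    missing=0
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "found:   $tool"
        else
            echo "MISSING: $tool"
            missing=1
        fi
    done
    return "$missing"
}
check_tools gcc make cmake javah || echo "install the missing tools first"
```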
2.2 Generate the Makefile with cmake
cmake ./src/ -DGENERATED_JAVAH=~/Build/hadoop-2.5.2-src/build/hadoop-common-project/hadoop-common/native/javah -DJVM_ARCH_DATA_MODEL=64 -DREQUIRE_BZIP2=false -DREQUIRE_SNAPPY=false
2.3 Generate the JNI header files with javah
Three jar files are needed on the classpath: hadoop-common, hadoop-annotations, and guava.
javah org.apache.hadoop.io.compress.lz4.Lz4Compressor
javah org.apache.hadoop.io.compress.lz4.Lz4Decompressor
javah org.apache.hadoop.io.compress.zlib.ZlibCompressor
javah org.apache.hadoop.io.compress.zlib.ZlibDecompressor
javah org.apache.hadoop.io.nativeio.NativeIO
javah org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory
javah org.apache.hadoop.net.unix.DomainSocket
javah org.apache.hadoop.net.unix.DomainSocketWatcher
javah org.apache.hadoop.security.JniBasedUnixGroupsMapping
javah org.apache.hadoop.security.JniBasedUnixNetgroupsMapping
javah org.apache.hadoop.util.NativeCrc32
Copy the generated header files into the corresponding C source directories.
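The classpath wiring is the easy part to get wrong here. This sketch (jar file names are illustrative; point CP at wherever your three jars actually live) prints the full javah invocations for a couple of the classes; drop the echo to really generate the headers:

```shell
# Dry-run helper: print the javah commands with the classpath applied.
print_javah_cmds() {
    cp="$1"; shift
    for cls in "$@"; do
        # echo makes this a dry run; remove it to generate the .h files
        echo javah -classpath "$cp" -d generated-headers "$cls"
    done
}
print_javah_cmds "hadoop-common-2.5.2.jar:hadoop-annotations-2.5.2.jar:guava.jar" \
    org.apache.hadoop.io.compress.lz4.Lz4Compressor \
    org.apache.hadoop.util.NativeCrc32
```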
2.4 Build
make
3. Build libhdfs
3.1 Generate the Makefile with cmake
cmake ./src/ -DGENERATED_JAVAH=~/Build/hadoop-2.5.2-src/build/hadoop-common-project/hadoop-common/native/javah -DJVM_ARCH_DATA_MODEL=64 -DREQUIRE_LIBWEBHDFS=false -DREQUIRE_FUSE=false
3.2 Build
make
4. Copy the generated libraries into HADOOP_HOME/lib/mynative
5. Edit /etc/profile and add the line below
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/mynative"
6. Reload the configuration
source /etc/profile
Done!
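To sanity-check the result, a small helper (my own addition) confirms the library landed where HADOOP_OPTS points, and shows what `file` makes of it when available:

```shell
# Check that libhadoop.so exists in the given directory; return 1 if not.
check_native_lib() {
    libdir="$1"
    if [ -f "$libdir/libhadoop.so" ]; then
        echo "found $libdir/libhadoop.so"
        # If the 'file' tool is present, it should report a 64-bit ELF
        # shared object, matching -DJVM_ARCH_DATA_MODEL=64 used above.
        file "$libdir/libhadoop.so" 2>/dev/null || true
    else
        echo "libhadoop.so not found in $libdir"
        return 1
    fi
}
# usage: check_native_lib "$HADOOP_HOME/lib/mynative"
```

Hadoop also has a `hadoop checknative -a` command in the 2.x line that lists which native codecs were actually loaded; if your build works, zlib and lz4 should show up as true.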