Java Example: Connecting to HDFS HA and Invoking a MapReduce JAR
Connecting to HDFS HA with the Java API

The code is as follows:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHaDemo { // wrapper class so the snippet compiles as-is
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Logical nameservice ID used in place of a single NameNode host
        conf.set("fs.defaultFS", "hdfs://hadoop2cluster");
        conf.set("dfs.nameservices", "hadoop2cluster");
        conf.set("dfs.ha.namenodes.hadoop2cluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn1", "10.0.1.165:8020");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn2", "10.0.1.166:8020");
        // Proxy provider that lets the client fail over between the two NameNodes
        conf.set("dfs.client.failover.proxy.provider.hadoop2cluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        FileSystem fs = null;
        try {
            fs = FileSystem.get(conf);
            FileStatus[] list = fs.listStatus(new Path("/"));
            for (FileStatus file : list) {
                System.out.println(file.getPath().getName());
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (fs != null) { // guard against NPE when FileSystem.get() failed
                try {
                    fs.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
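Hard-coding every HA property in the client works, but the same settings can also be loaded from the cluster's core-site.xml and hdfs-site.xml. A minimal sketch, assuming local copies of those files under /etc/hadoop/conf (the path and class name are assumptions for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHaFromXml { // hypothetical demo class
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed location of the cluster config files; adjust for your client host
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        // try-with-resources closes the FileSystem automatically
        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus file : fs.listStatus(new Path("/"))) {
                System.out.println(file.getPath().getName());
            }
        }
    }
}

Keeping the HA properties in the XML files means the client picks up NameNode changes without a recompile.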
Invoking a MapReduce Program with the Java API

The code is as follows:
import org.apache.hadoop.util.RunJar;

public class RunWordCount { // wrapper class so the snippet compiles as-is
    public static void main(String[] cmd) throws Throwable {
        // Arguments mirror "hadoop jar": jar path, program name,
        // -D property=value pairs, then the input and output paths
        String[] args = new String[24];
        args[0] = "/usr/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar";
        args[1] = "wordcount";
        args[2] = "-D";
        args[3] = "yarn.resourcemanager.address=10.0.1.165:8032";
        args[4] = "-D";
        args[5] = "yarn.resourcemanager.scheduler.address=10.0.1.165:8030";
        args[6] = "-D";
        args[7] = "fs.defaultFS=hdfs://hadoop2cluster/";
        args[8] = "-D";
        args[9] = "dfs.nameservices=hadoop2cluster";
        args[10] = "-D";
        args[11] = "dfs.ha.namenodes.hadoop2cluster=nn1,nn2";
        args[12] = "-D";
        args[13] = "dfs.namenode.rpc-address.hadoop2cluster.nn1=10.0.1.165:8020";
        args[14] = "-D";
        args[15] = "dfs.namenode.rpc-address.hadoop2cluster.nn2=10.0.1.166:8020";
        args[16] = "-D";
        args[17] = "dfs.client.failover.proxy.provider.hadoop2cluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider";
        args[18] = "-D";
        args[19] = "fs.hdfs.impl=org.apache.hadoop.hdfs.DistributedFileSystem";
        args[20] = "-D";
        args[21] = "mapreduce.framework.name=yarn";
        args[22] = "/input";  // HDFS input directory
        args[23] = "/out01";  // HDFS output directory (must not already exist)
        RunJar.main(args);    // same effect as invoking "hadoop jar" from the shell
    }
}
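One practical caveat: MapReduce refuses to start if the output directory (/out01 above) already exists on HDFS. A small sketch of a cleanup step to run before resubmitting the job; the class and method names are assumptions for illustration:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OutputCleaner { // hypothetical helper class
    // Delete the output path if present so the same job can be rerun
    public static void deleteIfExists(Configuration conf, String dir) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path out = new Path(dir);
        if (fs.exists(out)) {
            fs.delete(out, true); // true = recursive delete
        }
    }
}

Calling OutputCleaner.deleteIfExists(conf, "/out01") with the same HA-configured Configuration before RunJar.main(args) avoids the failure caused by a pre-existing output directory.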