I. Plugin Installation
git clone https://github.com/fangyuzhong2016/HadoopIntellijPlugin.git
Note: the source code downloaded from GitHub must be compiled before it can be used.
- Compilation
① The Intellij plugin for hadoop source is currently built and packaged with Maven, so make sure JDK 1.8 and Maven 3 or later are installed before compiling.
② The plugin was developed against IntelliJ IDEA Ultimate 2017.2, so IntelliJ IDEA Ultimate 2017 or later is required.
③ Go to the source directory ../HadoopIntellijPlugin/ and edit pom.xml, mainly adjusting the Hadoop version and the IntelliJ IDEA installation path in the properties section, as follows:
<!-- set the Hadoop version -->
<hadoop.2.version>3.0.0-alpha2</hadoop.2.version>
<!-- set the IntelliJ IDEA installation path -->
<IntellijIde.dir>C:\Program Files\JetBrains\IntelliJ IDEA 2018.2</IntellijIde.dir>
④ Run the Maven commands. First run:
C:\Users\Administrator>d:
D:\>cd HadoopIntellijPlugin
D:\HadoopIntellijPlugin>mvn clean
mvn clean
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 40.727 s
[INFO] Finished at: 2019-11-21T13:54:47+08:00
[INFO] ------------------------------------------------------------------------
Then run:
D:\HadoopIntellijPlugin>mvn assembly:assembly
[INFO] Reading assembly descriptor: assembly.xml
[INFO] artifact net.minidev:json-smart: checking for updates from aliyun-repos
[INFO] artifact net.minidev:json-smart: checking for updates from central
[INFO] artifact net.minidev:json-smart: checking for updates from dynamodb-local-oregon
[INFO] artifact net.minidev:json-smart: checking for updates from apache.snapshots.https
[INFO] artifact net.minidev:json-smart: checking for updates from repository.jboss.org
[INFO] HadoopIntellijPlugin/lib/HadoopIntellijPlugin-1.0.jar already added, skipping
[INFO] Building zip: D:\HadoopIntellijPlugin\target\HadoopIntellijPlugin-1.0.zip
[INFO] HadoopIntellijPlugin/lib/HadoopIntellijPlugin-1.0.jar already added, skipping
[INFO]
[INFO] <<< maven-assembly-plugin:2.2-beta-5:assembly (default-cli) < package @ HadoopIntellijPlugin <<<
[INFO]
[INFO]
[INFO] --- maven-assembly-plugin:2.2-beta-5:assembly (default-cli) @ HadoopIntellijPlugin ---
[INFO] Reading assembly descriptor: assembly.xml
[INFO] HadoopIntellijPlugin/lib/HadoopIntellijPlugin-1.0.jar already added, skipping
[INFO] Building zip: D:\HadoopIntellijPlugin\target\HadoopIntellijPlugin-1.0.zip
[INFO] HadoopIntellijPlugin/lib/HadoopIntellijPlugin-1.0.jar already added, skipping
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:00 min
[INFO] Finished at: 2019-11-21T16:28:22+08:00
[INFO] ------------------------------------------------------------------------
After the build finishes, .../target/HadoopIntellijPlugin-1.0.zip (here under D:\HadoopIntellijPlugin\target) is the plugin installation package; install it into IntelliJ via Settings → Plugins → Install Plugin from Disk.
- Install HadoopIntellijPlugin
- Modify the GUI Designer setting
Open IDEA again. Because the plugin is built on IDEA's GUI framework, the UI code-generation option needs to be changed so the forms are generated dynamically; this is configured in Settings under GUI Designer.
- HDFS settings
Parameters: only the HDFS address needs to be filled in.
Note: the connection test appears to be incomplete; it may report a connection failure, but clicking OK anyway lets the plugin work normally.
Every file change may require the corresponding user permissions on HDFS, which is inconvenient. This can be relaxed in hdfs-site.xml (a programmatic alternative is sketched after the snippet):
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
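Alternatively, instead of disabling permission checks cluster-wide, a client program can identify itself as the HDFS superuser when it connects. A minimal sketch, assuming the root user and the hdfs://master:9000 address taken from the run examples later in this article:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAsRoot {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Open the file system explicitly as "root" (assumed cluster user);
        // the NameNode address matches the hdfs://master:9000 examples below.
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf, "root");
        System.out.println(fs.exists(new Path("/")));  // simple call to verify the connection
        fs.close();
    }
}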
Common issues:
- The connection test reports failure, but the plugin still works normally. The relevant logic is in the com.fangyuzhong.intelliJ.hadoop.fsconnection.ConnectionManager class of the source code.
II. Intellij plugin for hadoop: Configuration and Source Code Notes
1. Source code overview
① core package: the plugin's core package and shared component library, including common UI components, multithreading helpers, base classes for Hadoop connection settings, generic Hadoop file system operation classes, generic plugin settings classes, and other utilities.
② fsconnection package: implementation classes for the Hadoop file system connection and its configuration.
③ fsobject package: implementation of the file system object model (for HDFS, how directory-tree and file-tree nodes are organized).
④ fsbrowser package: the plugin's main UI, including reading and displaying HDFS file system data and creating, downloading, deleting, uploading, and other operations on file system objects.
⑤ globalization package: multi-language support classes.
⑥ options package: plugin settings classes.
⑦ mainmenu package: main menu action classes.
2. Plugin configuration
The plugin configuration lives under .../resources/ and includes HadoopNavigator_en_US.properties, HadoopNavigator_zh_CN.properties, and plugin.xml.
HadoopNavigator_en_US.properties holds the English UI strings.
HadoopNavigator_zh_CN.properties holds the Simplified Chinese UI strings.
The plugin UI currently supports only Simplified Chinese and English; other languages require building your own language pack. The initial default language follows the operating system's default language.
III. Using the Plugin
- Create a directory (did not work well in testing; a programmatic alternative using the HDFS Java API is sketched after this list)
- Download files
- Upload files
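Since directory creation through the plugin was unreliable in testing, the same operations can also be performed through the HDFS Java API. A minimal sketch, assuming the hdfs://master:9000 address and root user from the examples in this article (the local Windows paths are placeholders):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBasicOps {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf, "root");

        // create a directory
        fs.mkdirs(new Path("/wordcount"));
        // upload a local file (placeholder path) into the new directory
        fs.copyFromLocalFile(new Path("D:/data/words.txt"), new Path("/wordcount/words.txt"));
        // download a file from HDFS back to the local disk
        fs.copyToLocalFile(new Path("/wordcount/words.txt"), new Path("D:/data/words-copy.txt"));

        fs.close();
    }
}

When running this from a Windows workstation, the local file operations typically also require HADOOP_HOME to point at a local Hadoop distribution containing bin\winutils.exe.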
IV. Hadoop Programming Example
- Create the project
- Add the Maven dependencies:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.xtsz</groupId>
<artifactId>hadoop-exercise</artifactId>
<version>1.0.0</version>
<name>hadoop-exercise</name>
<url>hadoop-exercise</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<version.hadoop>2.9.2</version.hadoop>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>${version.hadoop}</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>${version.hadoop}</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>${version.hadoop}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<appendAssemblyId>false</appendAssemblyId>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<!-- specify the class containing the main() entry point -->
<mainClass>com.xtsz.WordCount</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>assembly</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
- Write the code
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class WordCount {
/**
* Mapper
*/
public static class TokenizerMapper
extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
/**
* Reducer
*/
public static class IntSumReducer
extends Reducer<Text, IntWritable, Text, IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
- Package an executable jar
Use the following plugin (a sample build command follows the configuration):
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<appendAssemblyId>false</appendAssemblyId>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<!-- specify the class containing the main() entry point -->
<mainClass>com.xtsz.WordCount</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>assembly</goal>
</goals>
</execution>
</executions>
</plugin>
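With this plugin in the pom, the executable jar is produced by an ordinary Maven build; the jar name follows from the artifactId and version declared above:

# run from the project root; produces target/hadoop-exercise-1.0.0.jar
mvn clean package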
- Upload and run (preparing the /wordcount input directory is sketched after the sample output)
root@master:~# hadoop jar hadoop-exercise-1.0.0.jar hdfs://master:9000/wordcount hdfs://master:9000/output
hello 7
jerry 1
jone 1
kitty 1
marquis 1
tom 2
world 1
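For reference, the /wordcount input directory must already exist and contain text files before the job is submitted, while the /output directory must not exist yet. One possible preparation and result check (words.txt is a placeholder file name):

root@master:~# hdfs dfs -mkdir -p /wordcount
root@master:~# hdfs dfs -put words.txt /wordcount
root@master:~# hdfs dfs -cat /output/part-r-00000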
V. Importing the JARs Without Maven
1. Create a new Java project
2. Import the jar files
The jars can be found under Hadoop's share/hadoop directory. In Project Structure, select the module, click the small plus sign on the right, choose JARs or directories…, and add the following directories:
- common
- common/lib
- hdfs
- mapreduce
- yarn
3. Write the code
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
import java.util.StringTokenizer;
public class WordCountTest {
/**
* Mapper
*/
public static class TokenizerMapper
extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
/**
* Reducer
*/
public static class IntSumReducer
extends Reducer<Text, IntWritable, Text, IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCountTest.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.setInputPaths(job, new Path("hdfs://192.168.71.130:9000/wordcount"));
FileOutputFormat.setOutputPath(job, new Path("hdfs://192.168.71.130:9000/result"));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
4. Run and test
VI. Common Issues
- Unable to import maven project: See logs for details
Switch the Maven version to 3.5.4.
- Clear the logs
root@master:/usr/local/hadoop-2.9.2/logs# echo "">hadoop-root-namenode-master.log
- Packaging plugin (copies the dependency jars; an execution binding is sketched after the configuration):
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<configuration>
<!-- include transitive dependencies -->
<excludeTransitive>false</excludeTransitive>
<!-- strip the version number from the copied jar file names -->
<stripVersion>true</stripVersion>
<!-- directory the copied libraries are placed in -->
<outputDirectory>./lib</outputDirectory>
</configuration>
</plugin>
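As written, this configuration only takes effect when the goal is invoked explicitly (mvn dependency:copy-dependencies). To have the jars copied automatically during the build, an execution can be added; a minimal sketch binding copy-dependencies to the package phase (the execution id is arbitrary):

<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>copy-libs</id>
<phase>package</phase>
<goals>
<!-- copy-dependencies copies the project's dependency jars into outputDirectory -->
<goal>copy-dependencies</goal>
</goals>
</execution>
</executions>
</plugin>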
- Executable jar plugin
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<appendAssemblyId>false</appendAssemblyId>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<!-- specify the class containing the main() entry point -->
<mainClass>com.xtsz.WordCount</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>assembly</goal>
</goals>
</execution>
</executions>
</plugin>
- java.lang.InterruptedException
When a DFSStripedOutputStream is closed and flushing data back to the data/parity blocks fails, the streamer threads are not shut down. The same problem exists in DFSOutputStream#closeImpl, which always force-closes the threads and can therefore raise an InterruptedException.
- Missing tools.jar
<dependency>
<groupId>jdk.tools</groupId>
<artifactId>jdk.tools</artifactId>
<version>1.8</version>
<scope>system</scope>
<systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
</dependency>