Option 1: Pass dependencies via --jars
Pass the dependency jars on the command line with the --jars option, e.g. spark-submit --jars a.jar,b.jar,c.jar. The maven-dependency-plugin configuration below copies all project dependencies into ${project.build.directory}/jars during the package phase, so they can be gathered in one place for submission.
pom.xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
        <execution>
            <id>copy</id>
            <phase>package</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <outputDirectory>
                    ${project.build.directory}/jars
                </outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>
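After running mvn package, the copied jars can be joined into a comma-separated list instead of being typed out by hand. A minimal sketch, assuming the application jar name (myapp-1.0.jar) and main class (com.example.Main), which are hypothetical:

# Join the jars copied by maven-dependency-plugin into a comma-separated list
JARS=$(ls target/jars/*.jar | tr '\n' ',' | sed 's/,$//')

# Submit the job; class name and application jar below are assumptions
spark-submit \
  --class com.example.Main \
  --master yarn \
  --jars "$JARS" \
  target/myapp-1.0.jar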
Option 2: extraClassPath
spark.executor.extraClassPath=/home/hadoop/wzq_workspace/lib/*
spark.driver.extraClassPath=/home/hadoop/wzq_workspace/lib/*
Note that the directory must exist on every machine that might run Spark tasks, and the jars must be copied to all of those machines. The advantage is that you no longer have to list a long string of jars when submitting; the drawback is that every jar has to be copied to every node.
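The two settings above can be placed in conf/spark-defaults.conf, or passed per job with --conf. A minimal sketch, assuming the same lib directory already exists on all nodes; the main class and application jar names are hypothetical:

# Class name and application jar are assumptions; the lib directory must
# already contain the dependency jars on every node.
spark-submit \
  --class com.example.Main \
  --master yarn \
  --conf spark.driver.extraClassPath=/home/hadoop/wzq_workspace/lib/* \
  --conf spark.executor.extraClassPath=/home/hadoop/wzq_workspace/lib/* \
  myapp-1.0.jar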
Option 3: Package everything into a single fat jar and upload it
pom.xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>2.2-beta-5</version>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <!--<archive>
            <manifest>
                <mainClass>com.jdcloud.stream.Main</mainClass>
            </manifest>
        </archive>-->
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>
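With the jar-with-dependencies descriptor, mvn package produces an artifact named <artifactId>-<version>-jar-with-dependencies.jar under target/, and that single jar can be submitted on its own. A minimal sketch; the artifact name and main class below are hypothetical:

# Artifact name and main class are assumptions; all dependencies are
# already bundled inside the fat jar, so no --jars list is needed.
spark-submit \
  --class com.example.Main \
  --master yarn \
  target/myapp-1.0-jar-with-dependencies.jar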