In real-world development, a single project often needs more than one data source, so in this post I will share how I integrated MyBatis with multiple data sources in Spring Boot.
Add the required dependencies:
<!-- MySQL JDBC driver -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>
<!-- tk.mybatis common Mapper starter -->
<dependency>
    <groupId>tk.mybatis</groupId>
    <artifactId>mapper-spring-boot-starter</artifactId>
    <version>1.1.0</version>
</dependency>
<!-- Druid connection pool -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.0.25</version>
</dependency>
To keep the demo simple I use the MyBatis common Mapper plugin (tk.mybatis). It only covers single-table CRUD, but it avoids XML mapping files and hand-written SQL entirely, which is very convenient; you can of course test with conventional MyBatis mappers instead, which I won't go into here. A sketch of such a mapper follows.
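The mapper and entity names below (CityMapper, City, and the cn.jj.entity package) are hypothetical, chosen only to line up with the test class later in this post; the point is that extending tk.mybatis's generic Mapper interface gives you the single-table CRUD methods without any XML:

package cn.jj.mapper.master;

import cn.jj.entity.City;
import tk.mybatis.mapper.common.Mapper;

// No XML and no hand-written SQL: single-table CRUD methods such as selectAll(),
// selectByPrimaryKey(), insert(), updateByPrimaryKey() and delete() are inherited from Mapper<City>.
public interface CityMapper extends Mapper<City> {
}

With that in place, set up the two data sources in the application.properties file: one as the master (primary) data source and one as the cluster (slave) data source, along with the Druid connection pool settings.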
# master (primary) data source
master.datasource.url=jdbc:mysql://localhost/master?useUnicode=true&characterEncoding=utf8
master.datasource.username=root
master.datasource.password=root
master.datasource.driverClassName=com.mysql.jdbc.Driver
# cluster (slave) data source
cluster.datasource.url=jdbc:mysql://localhost/cluster?useUnicode=true&characterEncoding=utf8
cluster.datasource.username=root
cluster.datasource.password=root
cluster.datasource.driverClassName=com.mysql.jdbc.Driver
# Druid connection pool settings
## initial, minimum, and maximum pool size
spring.druid.initialSize=5
spring.druid.minIdle=5
spring.druid.maxActive=20
## maximum time to wait for a connection, in milliseconds
spring.druid.maxWait=60000
# how often the eviction thread runs to check for idle connections to close, in milliseconds
spring.druid.timeBetweenEvictionRunsMillis=60000
# minimum time a connection must sit idle in the pool before it can be evicted, in milliseconds
spring.druid.minEvictableIdleTimeMillis=300000
spring.druid.validationQuery=SELECT 1 FROM DUAL
spring.druid.testWhileIdle=true
spring.druid.testOnBorrow=false
spring.druid.testOnReturn=false
spring.druid.poolPreparedStatements=true
spring.druid.maxPoolPreparedStatementPerConnectionSize=20
# monitoring filters; removing 'stat' breaks SQL statistics in the Druid console, 'wall' enables the SQL firewall
spring.druid.filters=stat,wall,log4j
# enable mergeSql and slow-SQL logging via connectProperties
spring.druid.connectionProperties=druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
Master data source configuration class:
@Configuration
// Scan the master mapper interfaces and bind them to the master SqlSessionFactory
@MapperScan(basePackages = MasterDataSourceConfig.MASTER_PACKAGE, sqlSessionFactoryRef = "masterSqlSessionFactory")
public class MasterDataSourceConfig {

    // Package containing the master data source mapper interfaces
    static final String MASTER_PACKAGE = "cn.jj.mapper.master";
    // Location of the master data source XML mapper files
    static final String MAPPER_LOCATION = "classpath:mapper/master/*.xml";

    @Value("${master.datasource.url}")
    private String url;
    @Value("${master.datasource.username}")
    private String user;
    @Value("${master.datasource.password}")
    private String password;
    @Value("${master.datasource.driverClassName}")
    private String driverClass;

    @Bean(name = "masterDataSource")
    @Primary
    public DataSource masterDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setDriverClassName(driverClass);
        dataSource.setUrl(url);
        dataSource.setUsername(user);
        dataSource.setPassword(password);
        return dataSource;
    }

    @Bean(name = "masterTransactionManager")
    @Primary
    public DataSourceTransactionManager masterTransactionManager() {
        return new DataSourceTransactionManager(masterDataSource());
    }

    @Bean(name = "masterSqlSessionFactory")
    @Primary
    public SqlSessionFactory masterSqlSessionFactory(@Qualifier("masterDataSource") DataSource masterDataSource)
            throws Exception {
        final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(masterDataSource);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(MasterDataSourceConfig.MAPPER_LOCATION));
        return sessionFactory.getObject();
    }
}
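One thing to watch out for: the spring.druid.* keys use a custom prefix, so Spring Boot does not bind them to this hand-built DruidDataSource automatically; as written, only the URL, credentials, and driver class are applied. If you want the pool settings to take effect, apply them yourself. A minimal sketch of how masterDataSource() could do that (the injected field names here are my own, the property keys come from application.properties above; setFilters declares SQLException, so the bean method needs to declare or handle it):

    @Value("${spring.druid.initialSize}")
    private int initialSize;
    @Value("${spring.druid.minIdle}")
    private int minIdle;
    @Value("${spring.druid.maxActive}")
    private int maxActive;
    @Value("${spring.druid.maxWait}")
    private long maxWait;
    @Value("${spring.druid.validationQuery}")
    private String validationQuery;
    @Value("${spring.druid.testWhileIdle}")
    private boolean testWhileIdle;
    @Value("${spring.druid.filters}")
    private String filters;

    @Bean(name = "masterDataSource")
    @Primary
    public DataSource masterDataSource() throws SQLException {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setDriverClassName(driverClass);
        dataSource.setUrl(url);
        dataSource.setUsername(user);
        dataSource.setPassword(password);
        // apply the pool settings from application.properties explicitly
        dataSource.setInitialSize(initialSize);
        dataSource.setMinIdle(minIdle);
        dataSource.setMaxActive(maxActive);
        dataSource.setMaxWait(maxWait);
        dataSource.setValidationQuery(validationQuery);
        dataSource.setTestWhileIdle(testWhileIdle);
        dataSource.setFilters(filters); // requires import java.sql.SQLException
        return dataSource;
    }

The clusterDataSource bean below can apply the same settings in the same way.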
Slave (cluster) data source configuration class:
@Configuration
// Scan the cluster mapper interfaces and bind them to the cluster SqlSessionFactory
@MapperScan(basePackages = ClusterDataSourceConfig.CLUSTER_PACKAGE, sqlSessionFactoryRef = "clusterSqlSessionFactory")
public class ClusterDataSourceConfig {

    // Package containing the cluster (slave) data source mapper interfaces
    static final String CLUSTER_PACKAGE = "cn.jj.mapper.cluster";
    // Location of the cluster data source XML mapper files
    static final String MAPPER_LOCATION = "classpath:mapper/cluster/*.xml";

    @Value("${cluster.datasource.url}")
    private String url;
    @Value("${cluster.datasource.username}")
    private String user;
    @Value("${cluster.datasource.password}")
    private String password;
    @Value("${cluster.datasource.driverClassName}")
    private String driverClass;

    @Bean(name = "clusterDataSource")
    public DataSource clusterDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setDriverClassName(driverClass);
        dataSource.setUrl(url);
        dataSource.setUsername(user);
        dataSource.setPassword(password);
        return dataSource;
    }

    @Bean(name = "clusterTransactionManager")
    public DataSourceTransactionManager clusterTransactionManager() {
        return new DataSourceTransactionManager(clusterDataSource());
    }

    @Bean(name = "clusterSqlSessionFactory")
    public SqlSessionFactory clusterSqlSessionFactory(@Qualifier("clusterDataSource") DataSource clusterDataSource)
            throws Exception {
        final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(clusterDataSource);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(ClusterDataSourceConfig.MAPPER_LOCATION));
        return sessionFactory.getObject();
    }
}
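Routing between the two data sources is purely package-based: mapper interfaces under cn.jj.mapper.master go through masterSqlSessionFactory, and those under cn.jj.mapper.cluster go through clusterSqlSessionFactory. Transactions are a separate concern: because two DataSourceTransactionManager beans exist and the master one is marked @Primary, a transaction against the cluster database has to name its manager explicitly. A sketch, using a hypothetical UserMapper and service method:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    @Autowired
    private UserMapper userMapper; // lives in cn.jj.mapper.cluster, so it uses clusterSqlSessionFactory

    // Without a qualifier, @Transactional uses the @Primary masterTransactionManager;
    // name the cluster manager explicitly for writes against the cluster database.
    @Transactional("clusterTransactionManager")
    public void saveUser(User user) {
        userMapper.insert(user);
    }
}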
Project structure:
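The original screenshot of the layout is not reproduced here, but a structure consistent with the constants in the two configuration classes would look roughly like this:

src/main/java/cn/jj/mapper/master/       <- master mapper interfaces (MASTER_PACKAGE)
src/main/java/cn/jj/mapper/cluster/      <- cluster mapper interfaces (CLUSTER_PACKAGE)
src/main/resources/mapper/master/        <- master XML mapper files (MAPPER_LOCATION)
src/main/resources/mapper/cluster/       <- cluster XML mapper files (MAPPER_LOCATION)
src/main/resources/application.properties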
At this point the MyBatis multi-data-source configuration is in place. Note that the master and cluster data sources must each be configured correctly; in particular, the mapper packages and XML mapper locations have to match this project layout exactly.
Database contents
Master database:
Cluster (slave) database:
Write a test class to verify both data sources:
@RunWith(SpringRunner.class)
@SpringBootTest
public class MapperTest2 {

    @Autowired
    private CityService cityService;
    @Autowired
    private UserService userService;

    // Query the master database
    @Test
    public void selectCity() {
        List<City> cities = cityService.selectAll();
        System.out.println("Rows in the master database:");
        for (City city : cities) {
            System.out.println(
                    "id:" + city.getId() + "\t" + "name:" + city.getName() + "\t" + "state:" + city.getState());
        }
    }

    // Query the cluster (slave) database
    @Test
    public void selectUser() {
        try {
            List<User> users = userService.searchAll();
            System.out.println("Rows in the cluster database:");
            for (User user : users) {
                System.out.println("id:" + user.getId() + "\t" + user.getGender() + "\t" + user.getName());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Console output:
That completes the Spring Boot + MyBatis multi-data-source configuration.
(End)