Outline of this article
- 1. kafka-producer
- 2. kafka-consumer
- 3. Spring Boot integration
This project depends on psyche; the Kafka components are placed as modules under fast-plugins.

Runtime environment: Spring Boot + Kafka 2.11
1. Prerequisites

This article assumes you already have a basic understanding of Spring Boot and Kafka: you know that Kafka is an MQ component and are familiar with the producer and consumer concepts.
- Kafka installation tutorial
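Depending on the broker's auto.create.topics.enable setting, you may need to create the test topic used later in this article by hand. A minimal sketch, assuming a local single-broker install with ZooKeeper on its default port:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test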
The overall project structure is as follows: under the fast-plugins module, create fast-data-kafka, which in turn contains two submodules, consumer and producer. The layout of the web project is shown in the figure below; web depends on the kafka and base projects.
- pom.xml dependencies

Since these dependencies are shared, they all go into the parent fast-data-kafka project:
<modules>
    <module>fast-data-kafka-consumer</module>
    <module>fast-data-kafka-producer</module>
</modules>

<dependencies>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-configuration-processor</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>22.0</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-autoconfigure</artifactId>
    </dependency>
</dependencies>
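Each submodule then only needs to declare the parent to inherit these shared dependencies. A sketch of the producer submodule's pom header, assuming the parent's coordinates match the versions that fast-rest references below:

<parent>
    <groupId>com.liangliang</groupId>
    <artifactId>fast-data-kafka</artifactId>
    <version>1.0-SNAPSHOT</version>
</parent>
<artifactId>fast-data-kafka-producer</artifactId>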
2. kafka-producer

The producer module consists of two classes:
- KafkaProducerProperties: the bean bound to the configuration file
// Registered as a bean via @EnableConfigurationProperties on the
// auto-configuration class below, so no @Component annotation is needed here.
@ConfigurationProperties(prefix = KafkaProducerProperties.KAFKA_PRODUCER_PREFIX)
public class KafkaProducerProperties {

    public static final String KAFKA_PRODUCER_PREFIX = "kafka";

    private String brokerAddress;

    public String getBrokerAddress() {
        return brokerAddress;
    }

    public void setBrokerAddress(String brokerAddress) {
        this.brokerAddress = brokerAddress;
    }
}
This class maps to the kafka.brokerAddress property in the configuration file.
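For reference, a minimal application.properties sketch that this prefix binds to (the broker address is a placeholder):

kafka.brokerAddress=localhost:9092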
- KafkaProducerAutoConfiguration: depends on the KafkaProducerProperties configuration bean
@Configuration
@EnableKafka
@EnableConfigurationProperties(KafkaProducerProperties.class)
@ConditionalOnClass(value = org.apache.kafka.clients.producer.KafkaProducer.class)
public class KafkaProducerAutoConfiguration {

    private final KafkaProducerProperties kafkaProducerProperties;

    public KafkaProducerAutoConfiguration(KafkaProducerProperties kafkaProducerProperties) {
        this.kafkaProducerProperties = kafkaProducerProperties;
    }

    public Map<String, Object> producerConfigs() {
        String brokers = kafkaProducerProperties.getBrokerAddress();
        if (StringUtils.isEmpty(brokers)) {
            throw new RuntimeException("kafka broker address is empty");
        }
        Map<String, Object> props = Maps.newHashMap();
        // list of host:port pairs used for establishing the initial connections
        // to the Kafka cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // how long send() may block before throwing a TimeoutException
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, 1);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        return props;
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
This class holds the basic kafka-producer settings and creates the KafkaTemplate bean. That completes the kafka-producer configuration.
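Note that a project that merely depends on this module (such as fast-rest below) will only pick up KafkaProducerAutoConfiguration if the class is either component-scanned or registered as an auto-configuration. A sketch of the usual registration in META-INF/spring.factories, assuming the standard Spring Boot 2.x mechanism; the package name here is hypothetical:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.liangliang.kafka.producer.KafkaProducerAutoConfiguration

The consumer module would register its KafkaConsumerAutoConfiguration the same way.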
3. kafka-consumer

As with the producer, this module contains two classes:
- KafkaConsumerProperties: the bean bound to the configuration file
/**
 * Configuration properties for the Kafka consumer.
 *
 * @author sunliang
 * @since 2019/06/10
 */
@ConfigurationProperties(prefix = KafkaConsumerProperties.KAFKA_CONSUMER_PREFIX)
public class KafkaConsumerProperties {

    public static final String KAFKA_CONSUMER_PREFIX = "kafka";

    private String brokerAddress;
    private String groupId;

    public String getBrokerAddress() {
        return brokerAddress;
    }

    public void setBrokerAddress(String brokerAddress) {
        this.brokerAddress = brokerAddress;
    }

    public String getGroupId() {
        return groupId;
    }

    public void setGroupId(String groupId) {
        this.groupId = groupId;
    }
}
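The consumer side binds two properties. A minimal application.properties sketch (values are placeholders):

kafka.brokerAddress=localhost:9092
kafka.groupId=fast-rest-group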
- KafkaConsumerAutoConfiguration: the auto-configuration class
/**
 * Auto-configuration for the Kafka consumer.
 *
 * @author sunliang
 * @since 2019/06/10
 */
@EnableKafka
@Configuration
@EnableConfigurationProperties(KafkaConsumerProperties.class)
@ConditionalOnClass(value = org.apache.kafka.clients.consumer.KafkaConsumer.class)
public class KafkaConsumerAutoConfiguration {

    protected final Logger logger = LoggerFactory.getLogger(this.getClass());

    private final KafkaConsumerProperties kafkaConsumerProperties;

    public KafkaConsumerAutoConfiguration(KafkaConsumerProperties kafkaConsumerProperties) {
        logger.info("KafkaConsumerAutoConfiguration kafkaConsumerProperties:{}",
                JSON.toJSONString(kafkaConsumerProperties));
        this.kafkaConsumerProperties = kafkaConsumerProperties;
    }

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>>
            kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(3);
        factory.getContainerProperties().setPollTimeout(1000);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        String brokers = kafkaConsumerProperties.getBrokerAddress();
        if (StringUtils.isEmpty(brokers)) {
            throw new RuntimeException("kafka broker address is empty");
        }
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaConsumerProperties.getGroupId());
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true); // auto-commit offsets
        propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100"); // interval between automatic offset commits
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000"); // consumer liveness (session) timeout
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // where to start when there is no committed offset: latest = newest messages only,
        // earliest = from the beginning, none = throw an exception; the default is latest
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        return propsMap;
    }
}
That completes the consumer configuration. Next, let's build a Boot application to test Kafka.
4. fast-rest
- pom.xml
<dependencies>
    <dependency>
        <groupId>com.liangliang</groupId>
        <artifactId>fast-base</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>com.liangliang</groupId>
        <artifactId>fast-data-kafka-consumer</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>com.liangliang</groupId>
        <artifactId>fast-data-kafka-producer</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
</dependencies>
- KafkaUtils: a utility class for Kafka
/**
 * Utility class that wraps KafkaTemplate for sending messages.
 *
 * @author sunliang
 * @since 2019/06/11
 */
@Slf4j
@Component
public class KafkaUtils {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendMessage(String topic, String data) {
        log.info("kafka sendMessage start");
        // send() is asynchronous; the callback fires once the broker acknowledges
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, data);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onFailure(Throwable ex) {
                log.error("kafka sendMessage error, topic = {}, data = {}", topic, data, ex);
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                log.info("kafka sendMessage success topic = {}, data = {}", topic, data);
            }
        });
        log.info("kafka sendMessage end");
    }
}
- listener: the consumer listener
@Slf4j
@Component // enclosing listener component (class name assumed; the source showed only the method)
public class KafkaConsumerListener {
    @KafkaListener(topics = {"test"})
    public void listen(ConsumerRecord<String, String> record) {
        String json = record.value();
        log.info("kafka consumer sessionListener session json:{}", json);
    }
}
It listens on the test topic and logs each message.
- controller: acts as the Kafka producer; it takes a parameter from the web and writes it to Kafka
/**
 * REST controller that publishes the incoming message to Kafka.
 *
 * @author sunliang
 * @since 2019/06/11
 */
@Slf4j
@RestController
public class KafkaProducerController {

    @Autowired
    private KafkaUtils kafkaUtils;

    @GetMapping("/chat/{msg}")
    public RestResult area(HttpServletResponse response, @PathVariable("msg") String msg) {
        response.setHeader("Access-Control-Allow-Origin", "*");
        log.info(">>>>>msg = {}", msg);
        kafkaUtils.sendMessage("test", msg);
        return RestResultBuilder.builder().data(msg).success().build();
    }
}
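To smoke-test the whole chain (assuming the application runs on the default port 8080):

curl http://localhost:8080/chat/hello

The controller publishes the message to the test topic, and the listener's log line should appear almost immediately.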
This completes the integration of Kafka with Spring Boot.