1 Console producer API
1.1 Asynchronous write (the default mode)
bin/kafka-console-producer.sh --topic test1 --bootstrap-server localhost:9092
1.2 Synchronous write (--sync sends one request at a time)
bin/kafka-console-producer.sh --topic test1 --bootstrap-server localhost:9092 --sync
1.3 Other features
Ordering guarantees, retries, timeouts, and compression codecs, for example (see the command below):
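For instance, retries, timeouts, acks, and compression map onto command-line flags (the flag values here are illustrative; the flags themselves are documented in the help output below):
bin/kafka-console-producer.sh --topic test1 --bootstrap-server localhost:9092 --compression-codec lz4 --message-send-max-retries 5 --request-timeout-ms 3000 --request-required-acks -1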
➜ kafka_2.13-3.0.0 bin/kafka-console-producer.sh
# Number of messages sent in a single batch
--batch-size <Integer: size>  Number of messages to send in a single batch if they are not being sent synchronously. (default: 200)
# Broker address(es) to connect to
--bootstrap-server <String: server to connect to>  REQUIRED unless --broker-list (deprecated) is specified. The server(s) to connect to. The broker list string in the form HOST1:PORT1,HOST2:PORT2.
# Broker address(es) to connect to (deprecated)
--broker-list <String: broker-list>  DEPRECATED, use --bootstrap-server instead; ignored if --bootstrap-server is specified. The broker list string in the form HOST1:PORT1,HOST2:PORT2.
# Compression codec
--compression-codec [String: compression-codec]  The compression codec: either 'none', 'gzip', 'snappy', 'lz4', or 'zstd'. If specified without value, then it defaults to 'gzip'.
--help  Print usage information.
--line-reader <String: reader_class>  The class name of the class to use for reading lines from standard in. By default each line is read as a separate message. (default: kafka.tools.ConsoleProducer$LineMessageReader)
--max-block-ms <Long: max block on send>  The max time that the producer will block for during a send request. (default: 60000)
--max-memory-bytes <Long: total memory in bytes>  The total memory used by the producer to buffer records waiting to be sent to the server. (default: 33554432)
--max-partition-memory-bytes <Long: memory in bytes per partition>  The buffer size allocated for a partition. When records are received which are smaller than this size the producer will attempt to optimistically group them together until this size is reached. (default: 16384)
--message-send-max-retries <Integer>  Brokers can fail receiving the message for multiple reasons, and being unavailable transiently is just one of them. This property specifies the number of retries before the producer gives up and drops this message. (default: 3)
--metadata-expiry-ms <Long: metadata expiration interval>  The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any leadership changes. (default: 300000)
--producer-property <String: producer_prop>  A mechanism to pass user-defined properties in the form key=value to the producer.
--producer.config <String: config file>  Producer config properties file. Note that [producer-property] takes precedence over this config.
--property <String: prop>  A mechanism to pass user-defined properties in the form key=value to the message reader. This allows custom configuration for a user-defined message reader. Default properties include: parse.key=true|false, key.separator=<key.separator>, ignore.error=true|false.
--request-required-acks <String: request required acks>  The required acks of the producer requests. (default: 1)
--request-timeout-ms <Integer: request timeout ms>  The ack timeout of the producer requests. Value must be non-negative and non-zero. (default: 1500)
--retry-backoff-ms <Integer>  Before each retry, the producer refreshes the metadata of relevant topics. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. (default: 100)
--socket-buffer-size <Integer: size>  The size of the tcp RECV size. (default: 102400)
--sync  If set, message send requests to the brokers are sent synchronously, one at a time as they arrive.
--timeout <Integer: timeout_ms>  If set and the producer is running in asynchronous mode, this gives the maximum amount of time a message will queue awaiting sufficient batch size. The value is given in ms. (default: 1000)
--topic <String: topic>  REQUIRED: The topic id to produce messages to.
--version  Display Kafka version.
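The --property flags above, for instance, let the console producer read keyed messages from stdin (the ':' separator is an arbitrary choice):
bin/kafka-console-producer.sh --topic test1 --bootstrap-server localhost:9092 --property parse.key=true --property key.separator=: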
2 Go producer client
```golang
package main

import (
	"fmt"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	// RequiredAcks = -1 (WaitForAll): a send only counts as complete once
	// all in-sync replicas have acknowledged the message.
	config.Producer.RequiredAcks = sarama.WaitForAll
	// Write each message to a randomly chosen partition.
	config.Producer.Partitioner = sarama.NewRandomPartitioner
	// SyncProducer requires Return.Successes to be true.
	config.Producer.Return.Successes = true

	// The message to produce.
	msg := &sarama.ProducerMessage{}
	msg.Topic = "nginx_log"
	msg.Value = sarama.StringEncoder("this is a good test")

	client, err := sarama.NewSyncProducer([]string{"47.92.71.173:9092"}, config)
	if err != nil {
		fmt.Println("producer close err, ", err)
		return
	}
	defer client.Close()

	pid, offset, err := client.SendMessage(msg)
	if err != nil {
		fmt.Println("send message failed, ", err)
		return
	}
	fmt.Printf("partition:%v, offset:%v\n", pid, offset)
}
```
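Note that this is sarama's synchronous producer; sarama also provides NewAsyncProducer, the analogue of the console producer's default asynchronous mode.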
Questions to think about:
1. How do we guarantee that a message is delivered 100% of the time?
-- Producer side: require acknowledgement from all in-sync replicas (acks=all / RequiredAcks=-1), enable retries, and use idempotent sends so retries are safe; see the sketch below.
-- Consumer side: commit offsets only after a message has actually been processed, so that an unprocessed message is delivered again after a failure.
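A minimal sketch of those producer-side settings with the same Shopify/sarama library used above; the Version and MaxOpenRequests lines are what sarama itself requires before it accepts Idempotent=true:
```golang
package main

import "github.com/Shopify/sarama"

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V0_11_0_0                // idempotent producing needs Kafka >= 0.11
	config.Producer.RequiredAcks = sarama.WaitForAll // all in-sync replicas must acknowledge
	config.Producer.Retry.Max = 10                   // retry transient broker failures
	config.Producer.Return.Successes = true          // required by SyncProducer
	config.Producer.Idempotent = true                // retries cannot introduce duplicates
	config.Net.MaxOpenRequests = 1                   // required when Idempotent is enabled
	_ = config                                       // pass to sarama.NewSyncProducer as in section 2
}
```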
2. How do we guarantee message ordering?
Kafka guarantees ordering only within a single partition, so messages that must stay in order just need to land in the same partition: either give the topic a single partition, or use a hash partitioner keyed on the ordering key, as sketched below.
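A sketch using sarama's hash partitioner; the key "order-42" is a made-up ordering key:
```golang
package main

import "github.com/Shopify/sarama"

func main() {
	config := sarama.NewConfig()
	// Same key -> same hash -> same partition, so messages that share a
	// key keep their relative order.
	config.Producer.Partitioner = sarama.NewHashPartitioner
	config.Producer.Return.Successes = true

	msg := &sarama.ProducerMessage{
		Topic: "nginx_log",
		Key:   sarama.StringEncoder("order-42"), // hypothetical ordering key
		Value: sarama.StringEncoder("step 1 of order-42"),
	}
	_ = msg // send via a SyncProducer as in section 2
}
```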
3. How do we avoid duplicate messages?
Duplicates are always possible (retries, rebalances), so the real goal is to make consuming a duplicate harmless, i.e. make consumption idempotent, as in the sketch below.
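One common shape for idempotent consumption is to track a unique message or business ID and skip anything already seen. The in-memory set below is only a sketch; a real system would persist it (e.g. via a database unique constraint):
```golang
package main

import "fmt"

// dedup is a hypothetical idempotent handler: each message carries a
// unique ID, and IDs that were already handled are skipped on redelivery.
type dedup struct {
	seen map[string]bool
}

func (d *dedup) processOnce(msgID, payload string) {
	if d.seen[msgID] {
		fmt.Println("duplicate, skipping:", msgID)
		return
	}
	d.seen[msgID] = true
	fmt.Println("processing:", payload)
}

func main() {
	d := &dedup{seen: map[string]bool{}}
	d.processOnce("msg-1", "hello") // processed
	d.processOnce("msg-1", "hello") // duplicate: skipped
}
```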
4. How do we do unicast and multicast?
Via consumer groups. A topic can have zero or more consumer groups attached. Within one group, consumption is mutually exclusive: each message goes to exactly one consumer in the group (unicast); subscribing several groups to the same topic delivers every message to each group (multicast). Note that the number of consumers in a group should not exceed the number of partitions, otherwise the surplus consumers are assigned no partitions and sit idle. A consumer-group sketch follows.
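A minimal sarama consumer-group sketch (the group ID "group-a" is illustrative): running a second copy with a different group ID gives multicast, while a second copy with the same group ID splits the partitions between the two consumers (unicast):
```golang
package main

import (
	"context"
	"fmt"

	"github.com/Shopify/sarama"
)

// handler implements sarama.ConsumerGroupHandler.
type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		fmt.Printf("partition:%d offset:%d value:%s\n", msg.Partition, msg.Offset, msg.Value)
		sess.MarkMessage(msg, "") // mark as consumed only after processing
	}
	return nil
}

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V0_10_2_0 // consumer groups need Kafka >= 0.10.2

	group, err := sarama.NewConsumerGroup([]string{"47.92.71.173:9092"}, "group-a", config)
	if err != nil {
		fmt.Println("consumer group err, ", err)
		return
	}
	defer group.Close()

	ctx := context.Background()
	for {
		// Consume blocks until the session ends (e.g. on rebalance), then loops.
		if err := group.Consume(ctx, []string{"nginx_log"}, handler{}); err != nil {
			fmt.Println("consume err, ", err)
			return
		}
	}
}
```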