Deploying a ZooKeeper/Kafka Cluster on Kubernetes

Part 1: Deploying the ZooKeeper Cluster

The problem:
After building my own zk Docker image, I tested running three instances directly with docker, and the zk cluster formed and elected a leader normally. Once deployed through k8s, however, leader election never succeeded no matter what I tried, and no amount of googling helped. I dropped the self-built image and switched to the official Docker image, but the problem remained. It was finally solved cleanly with a headless service; this post records the process.

First, here is the manifest.yaml, consolidated into a single file and based on the official zookeeper Docker image:

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ykszktest-n1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ykszktest-n1
  template:
    metadata:
      labels:
        app: ykszktest-n1
    spec:
      hostname: ykszktest-n1
      volumes:
      - name: ykszktest-data
        hostPath:
          path: /data/ykszktest-cluster/ykszktest-data-n1
      - name: ykszktest-logs
        hostPath:
          path: /data/ykszktest-cluster/ykszktest-logs-n1
      dnsPolicy: ClusterFirst
      containers:
      - name: ykszktest-n1
        image: zookeeper:3.4.10
        imagePullPolicy: Always
        volumeMounts:
        - name: ykszktest-data
          readOnly: false
          mountPath: "/data/ykszktest-data"
        - name: ykszktest-logs
          readOnly: false
          mountPath: "/data/ykszktest-logs"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        #command: ['tail', '-f', '/etc/hosts']
        env:
        - name: ZOO_MY_ID
          value: "1"
        - name: ZOO_SERVERS
          value: server.1=ykszktest-n1:2888:3888 server.2=ykszktest-n2:2888:3888 server.3=ykszktest-n3:2888:3888
        - name: ZOO_DATA_DIR
          value: '/data/ykszktest-data'
        - name: ZOO_DATA_LOG_DIR
          value: '/data/ykszktest-logs'

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ykszktest-n2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ykszktest-n2
  template:
    metadata:
      labels:
        app: ykszktest-n2
    spec:
      hostname: ykszktest-n2
      volumes:
      - name: ykszktest-data
        hostPath:
          path: /data/ykszktest-cluster/ykszktest-data-n2
      - name: ykszktest-logs
        hostPath:
          path: /data/ykszktest-cluster/ykszktest-logs-n2
      dnsPolicy: ClusterFirst
      containers:
      - name: ykszktest-n2
        image: zookeeper:3.4.10
        imagePullPolicy: Always
        volumeMounts:
        - name: ykszktest-data
          readOnly: false
          mountPath: "/data/ykszktest-data"
        - name: ykszktest-logs
          readOnly: false
          mountPath: "/data/ykszktest-logs"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        #command: ['tail', '-f', '/etc/hosts']
        env:
        - name: ZOO_MY_ID
          value: "2"
        - name: ZOO_SERVERS
          value: server.1=ykszktest-n1:2888:3888 server.2=ykszktest-n2:2888:3888 server.3=ykszktest-n3:2888:3888
        - name: ZOO_DATA_DIR
          value: '/data/ykszktest-data'
        - name: ZOO_DATA_LOG_DIR
          value: '/data/ykszktest-logs'

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ykszktest-n3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ykszktest-n3
  template:
    metadata:
      labels:
        app: ykszktest-n3
    spec:
      hostname: ykszktest-n3
      volumes:
      - name: ykszktest-data
        hostPath:
          path: /data/ykszktest-cluster/ykszktest-data-n3
      - name: ykszktest-logs
        hostPath:
          path: /data/ykszktest-cluster/ykszktest-logs-n3
      dnsPolicy: ClusterFirst
      containers:
      - name: ykszktest-n3
        image: zookeeper:3.4.10
        imagePullPolicy: Always
        volumeMounts:
        - name: ykszktest-data
          readOnly: false
          mountPath: "/data/ykszktest-data"
        - name: ykszktest-logs
          readOnly: false
          mountPath: "/data/ykszktest-logs"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        #command: ['tail', '-f', '/etc/hosts']
        env:
        - name: ZOO_MY_ID
          value: "3"
        - name: ZOO_SERVERS
          value: server.1=ykszktest-n1:2888:3888 server.2=ykszktest-n2:2888:3888 server.3=ykszktest-n3:2888:3888
        - name: ZOO_DATA_DIR
          value: '/data/ykszktest-data'
        - name: ZOO_DATA_LOG_DIR
          value: '/data/ykszktest-logs'

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ykszktest-n1
  name: ykszktest-n1
  namespace: default
spec:
  ports:
  - port: 2181
    protocol: TCP
    targetPort: 2181
    name: client
  - port: 2888
    protocol: TCP
    targetPort: 2888
    name: leader
  - port: 3888
    protocol: TCP
    targetPort: 3888
    name: leader-election
  selector:
    app: ykszktest-n1
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ykszktest-n2
  name: ykszktest-n2
  namespace: default
spec:
  ports:
  - port: 2181
    protocol: TCP
    targetPort: 2181
    name: client
  - port: 2888
    protocol: TCP
    targetPort: 2888
    name: leader
  - port: 3888
    protocol: TCP
    targetPort: 3888
    name: leader-election
  selector:
    app: ykszktest-n2
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ykszktest-n3
  name: ykszktest-n3
  namespace: default
spec:
  ports:
  - port: 2181
    protocol: TCP
    targetPort: 2181
    name: client
  - port: 2888
    protocol: TCP
    targetPort: 2888
    name: leader
  - port: 3888
    protocol: TCP
    targetPort: 3888
    name: leader-election
  selector:
    app: ykszktest-n3
  sessionAffinity: None
  type: ClusterIP

Error 1:

Socket connection established to localhost/127.0.0.1:2181, initiating session
ClientCnxn$SendThread@1158] - Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket conn

Cause: the zk peers could not reach each other. On inspection, the svc had no endpoints associated with it, so traffic never reached the zk peers.
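
A quick way to confirm this is to compare each Service with its Endpoints object; a minimal kubectl sketch (the names match the manifests above):

# List the zk services and the endpoints behind them; an empty ENDPOINTS column
# means the selector matches no running pod, so traffic to the svc goes nowhere.
kubectl get svc ykszktest-n1 ykszktest-n2 ykszktest-n3
kubectl get endpoints ykszktest-n1 ykszktest-n2 ykszktest-n3
kubectl describe svc ykszktest-n1 | grep -i endpoints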

Error 2:

2018-08-16 11:46:38,158 [myid:2] - WARN  [SendWorker:1:QuorumCnxManager$SendWorker@732] - Exception when using channel: for id 1 my id = 2 error = java.net.SocketException: Broken pipe
2018-08-16 11:46:38,159 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2018-08-16 11:46:38,166 [myid:2] - WARN [SendWorker:1:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2018-08-16 11:47:38,157 [myid:2] - INFO [QuorumPeer[myid=2]/0.0.0.0:2181:FastLeaderElection@852] - Notification time out: 60000
2018-08-16 11:47:38,260 [myid:2] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@810] - Connection broken for id 1, my id = 2, error =
java.io.EOFException

The usual troubleshooting steps that googling turns up (a quick way to check the first two is sketched below):
1. Check that the number in the myid file matches the local server.{ID} entry.
2. Check that the data directory is writable and that the pid file is generated normally.
3. Change the IP of the local server.{ID} entry to 0.0.0.0.
4. Edit /etc/hosts so the localhost entry points to 0.0.0.0.
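
For the record, the first two checks can be done quickly from outside the pods; a sketch using the data dir from the manifests above (the <zk-pod> placeholder is hypothetical, substitute your actual pod names):

# 1. The myid file must contain the same number as this node's server.{ID} entry.
kubectl exec -it <zk-pod> -- cat /data/ykszktest-data/myid
# 2. The data dir must exist and be writable by the zookeeper process.
kubectl exec -it <zk-pod> -- ls -ld /data/ykszktest-data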

None of these worked.

Testing showed that when the same containers are run directly with docker, with the same environment variables, and the zk services are started one by one, the election completes quickly and the cluster forms. So why does the k8s deployment throw net.SocketException? If it were a network problem, that would be odd, because from inside a pod the zk peers' client port, election port and leader port are all reachable through the svc ClusterIPs. Could it be that zk's election traffic cannot go through the kube-proxy-translated svc IPs and must use the pods' own IPs?
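
The connectivity check described above can be reproduced with something like the following sketch, which relies only on bash's /dev/tcp pseudo-device; run it from a shell inside one of the zk pods (timeout is assumed to be available in the image, drop it if not):

# Probe the client, leader and election ports of every peer through the svc names.
for host in ykszktest-n1 ykszktest-n2 ykszktest-n3; do
  for port in 2181 2888 3888; do
    timeout 1 bash -c "echo > /dev/tcp/$host/$port" \
      && echo "$host:$port open" || echo "$host:$port closed"
  done
done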

With that hunch, I manually changed the server entries in zoo.cfg from the svc names to the current pod IPs, leaving everything else unchanged:

### Before
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper-3.4.9/data
clientPort=2181
server.1=ykszktest-n1:2888:3888
server.2=ykszktest-n2:2888:3888
server.3=ykszktest-n3:2888:3888

### After
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper-3.4.9/data
clientPort=2181
server.1=10.100.2.65:2888:3888
server.2=10.100.0.71:2888:3888
server.3=10.100.0.70:2888:3888
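
For reference, the pod IPs used above (10.100.2.65 / 10.100.0.71 / 10.100.0.70) come from the wide pod listing:

# The IP column of -o wide is the pod IP that zoo.cfg now points at.
kubectl get pods -o wide | grep ykszktest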

After restarting the zk services, the election indeed completed and the cluster formed successfully.

~/mytest/zk# kubectl exec -it ykszktest-n2-566c5cd4db-zhvvr bash

bash-4.3# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

Some thoughts:
The whole point of a svc is to shield consumers from the instability caused by changing pod IPs: on the control plane it selects pods by label, and on the data plane it forwards traffic to the endpoint pods via ipvs (newer versions) or iptables (older versions), decoupling service dependencies from pod IPs. If the zk cluster can only be formed with raw pod IPs, that reintroduces a lot of uncertainty and instability. Is there a way to keep using the svc layer while delivering traffic directly to the pods without translation?

The answer is yes. After another round of googling I found a special mode of Service: the headless service.
Configuring it is simple: set the svc's ClusterIP to None:

Service.spec.clusterIP: None

With this configuration, kube-proxy no longer translates the traffic, and the cluster DNS A record for the service no longer points to a svc IP but directly to the backing pod IPs; the endpoints controller still creates Endpoints records in the API. Of course, since traffic is no longer proxied, the svc also loses its load-balancing function. In this scenario that does not matter: zk works as a cluster and has its own internal load-balancing, so it does not need the svc layer for LB.
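
The difference is easy to observe from inside any pod: with a headless service the DNS A record resolves straight to the pod IP. A minimal check (assuming nslookup is available in the image; the <any-pod> placeholder is hypothetical):

# Should answer with the pod IP of ykszktest-n1 (e.g. 10.100.2.65), not a svc ClusterIP.
kubectl exec -it <any-pod> -- nslookup ykszktest-n1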

The three svc definitions after the change:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ykszktest-n1
  name: ykszktest-n1
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 2181
    protocol: TCP
    targetPort: 2181
    name: client
  - port: 2888
    protocol: TCP
    targetPort: 2888
    name: leader
  - port: 3888
    protocol: TCP
    targetPort: 3888
    name: leader-election
  selector:
    app: ykszktest-n1
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ykszktest-n2
  name: ykszktest-n2
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 2181
    protocol: TCP
    targetPort: 2181
    name: client
  - port: 2888
    protocol: TCP
    targetPort: 2888
    name: leader
  - port: 3888
    protocol: TCP
    targetPort: 3888
    name: leader-election
  selector:
    app: ykszktest-n2
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ykszktest-n3
  name: ykszktest-n3
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 2181
    protocol: TCP
    targetPort: 2181
    name: client
  - port: 2888
    protocol: TCP
    targetPort: 2888
    name: leader
  - port: 3888
    protocol: TCP
    targetPort: 3888
    name: leader-election
  selector:
    app: ykszktest-n3
  sessionAffinity: None
  type: ClusterIP

Finally, redeploy the yaml file and check the result inside the pods; the deployment succeeds:

bash-4.3# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader
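
To confirm the whole ensemble rather than a single node, the same status check can be run against all three deployments; a sketch (it assumes zkServer.sh is on the PATH in the official image, otherwise use bin/zkServer.sh as in the transcript above):

# Exactly one node should report Mode: leader, the other two Mode: follower.
for d in ykszktest-n1 ykszktest-n2 ykszktest-n3; do
  pod=$(kubectl get pods -l app=$d -o jsonpath='{.items[0].metadata.name}')
  echo "== $d ($pod)"
  kubectl exec "$pod" -- zkServer.sh status
done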

Part 2: Deploying the Kafka Cluster

I used the image with the most stars, wurstmeister/kafka:1.0.1:

root@yksv001238:~/test/kafka# docker search kafka
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
wurstmeister/kafka Multi-Broker Apache Kafka Image 654 [OK]

The environment variables that need to be provided are listed below; adjust the values for each node:

env:
# Broker ID, required, must be unique per node
- name: KAFKA_BROKER_ID
  value: "1"
# Required: the zk cluster
- name: KAFKA_ZOOKEEPER_CONNECT
  value: ykszktest-n1:2181,ykszktest-n2:2181,ykszktest-n3:2181/kafka
# Required: kafka advertised port
- name: KAFKA_ADVERTISED_PORT
  value: "9092"
# Optional
- name: KAFKA_ADVERTISED_HOST_NAME
  value: ykskafkatest-n1
# Optional
- name: KAFKA_HEAP_OPTS
  value: "-Xmx4G -Xms4G"
# JMX related, optional
- name: KAFKA_JMX_OPTS
  value: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=1099"
# JMX related, optional
- name: JMX_PORT
  value: "1099"

Data and logs are mounted to the host via hostPath. The complete manifest yaml is as follows:

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ykskafkatest-n1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ykskafkatest-n1
  template:
    metadata:
      labels:
        app: ykskafkatest-n1
    spec:
      hostname: ykskafkatest-n1
      volumes:
      - name: ykskafkatest-data
        hostPath:
          path: /data/ykskafkatest-cluster/ykskafkatest-data-n1
      - name: ykskafkatest-logs
        hostPath:
          path: /data/ykskafkatest-cluster/ykskafkatest-logs-n1
      dnsPolicy: ClusterFirst
      containers:
      - name: ykskafkatest-n1
        image: wurstmeister/kafka:1.0.1
        imagePullPolicy: Always
        volumeMounts:
        - name: ykskafkatest-data
          readOnly: false
          mountPath: "/kafka"
        - name: ykskafkatest-logs
          readOnly: false
          mountPath: "/opt/kafka/logs"
        ports:
        - containerPort: 9092
        - containerPort: 1099
        env:
        # Broker ID, required, must be unique per node
        - name: KAFKA_BROKER_ID
          value: "1"
        # Required: the zk cluster
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: ykszktest-n1:2181,ykszktest-n2:2181,ykszktest-n3:2181/kafka
        # Required: kafka advertised port
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        # Optional
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: ykskafkatest-n1
        # Optional
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx4G -Xms4G"
        # JMX related, optional
        - name: KAFKA_JMX_OPTS
          value: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=1099"
        # JMX related, optional
        - name: JMX_PORT
          value: "1099"

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ykskafkatest-n2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ykskafkatest-n2
  template:
    metadata:
      labels:
        app: ykskafkatest-n2
    spec:
      hostname: ykskafkatest-n2
      volumes:
      - name: ykskafkatest-data
        hostPath:
          path: /data/ykskafkatest-cluster/ykskafkatest-data-n2
      - name: ykskafkatest-logs
        hostPath:
          path: /data/ykskafkatest-cluster/ykskafkatest-logs-n2
      dnsPolicy: ClusterFirst
      containers:
      - name: ykskafkatest-n2
        image: wurstmeister/kafka:1.0.1
        imagePullPolicy: Always
        volumeMounts:
        - name: ykskafkatest-data
          readOnly: false
          mountPath: "/kafka"
        - name: ykskafkatest-logs
          readOnly: false
          mountPath: "/opt/kafka/logs"
        ports:
        - containerPort: 9092
        - containerPort: 1099
        env:
        - name: KAFKA_BROKER_ID
          value: "2"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: ykszktest-n1:2181,ykszktest-n2:2181,ykszktest-n3:2181/kafka
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: ykskafkatest-n2
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx4G -Xms4G"
        - name: KAFKA_JMX_OPTS
          value: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=1099"
        - name: JMX_PORT
          value: "1099"

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ykskafkatest-n3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ykskafkatest-n3
  template:
    metadata:
      labels:
        app: ykskafkatest-n3
    spec:
      hostname: ykskafkatest-n3
      volumes:
      - name: ykskafkatest-data
        hostPath:
          path: /data/ykskafkatest-cluster/ykskafkatest-data-n3
      - name: ykskafkatest-logs
        hostPath:
          path: /data/ykskafkatest-cluster/ykskafkatest-logs-n3
      dnsPolicy: ClusterFirst
      containers:
      - name: ykskafkatest-n3
        image: wurstmeister/kafka:1.0.1
        imagePullPolicy: Always
        volumeMounts:
        - name: ykskafkatest-data
          readOnly: false
          mountPath: "/kafka"
        - name: ykskafkatest-logs
          readOnly: false
          mountPath: "/opt/kafka/logs"
        ports:
        - containerPort: 9092
        - containerPort: 1099
        env:
        - name: KAFKA_BROKER_ID
          value: "3"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: ykszktest-n1:2181,ykszktest-n2:2181,ykszktest-n3:2181/kafka
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: ykskafkatest-n3
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx4G -Xms4G"
        - name: KAFKA_JMX_OPTS
          value: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=1099"
        - name: JMX_PORT
          value: "1099"

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ykskafkatest-n1
  name: ykskafkatest-n1
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 9092
    protocol: TCP
    targetPort: 9092
    name: kafka
  - port: 1099
    protocol: TCP
    targetPort: 1099
    name: jmx
  selector:
    app: ykskafkatest-n1
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ykskafkatest-n2
  name: ykskafkatest-n2
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 9092
    protocol: TCP
    targetPort: 9092
    name: kafka
  - port: 1099
    protocol: TCP
    targetPort: 1099
    name: jmx
  selector:
    app: ykskafkatest-n2
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ykskafkatest-n3
  name: ykskafkatest-n3
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 9092
    protocol: TCP
    targetPort: 9092
    name: kafka
  - port: 1099
    protocol: TCP
    targetPort: 1099
    name: jmx
  selector:
    app: ykskafkatest-n3
  sessionAffinity: None
  type: ClusterIP

After the deployment completes, check the pod status:

root@yksv001238:~/test/kafka# kubectl get pods -o wide | grep kafka
ykskafkatest-n1-5b78b89fb-srnft 1/1 Running 0 2h 10.100.1.45 yksv001239
ykskafkatest-n2-5f57ccb9c4-9ghsd 1/1 Running 0 2h 10.100.0.93 yksv001238
ykskafkatest-n3-66ccfcbd96-dg2ch 1/1 Running 0 6m 10.100.1.49 yksv001239
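
Before testing with producers and consumers, broker registration under the /kafka chroot (set via KAFKA_ZOOKEEPER_CONNECT) can be checked with kafka's own zookeeper shell; a sketch to run inside any kafka container:

# Unset JMX_PORT first, see the JMX note at the end of this post.
bash-4.4# unset JMX_PORT
bash-4.4# zookeeper-shell.sh ykszktest-n1:2181/kafka ls /brokers/ids
# Expected to end with: [1, 2, 3]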

Enter two of the kafka containers and test with the producer and consumer scripts; if messages can be produced and consumed normally, the kafka cluster is deployed successfully:

# n1
bash-4.4# kafka-topics.sh --create \
> --topic test2 \
> --zookeeper ykszktest-n1:2181,ykszktest-n2:2181,ykszktest-n3:2181/kafka \
> --partitions 3 \
> --replication-factor 2
Created topic "test2".

# Start a producer on n1 and type some data; it is received on the consumer side
bash-4.4# kafka-console-producer.sh --topic test2 --broker-list localhost:9092
>hello world?
>hello world!

# Start a consumer on n2; it receives the data
bash-4.4# kafka-console-consumer.sh --topic test2 --bootstrap-server localhost:9092
>hello world?
>hello world!

Note:
Running kafka-console-consumer.sh or kafka-console-producer.sh directly inside the container reports an error:

Error: JMX connector server communication error: service:jmx:rmi://9dcb21ce1644:1099

This is because both scripts invoke /opt/kafka_2.12-1.0.1/bin/kafka-run-class.sh, which reads the JMX_PORT environment variable and starts a JVM with JMX enabled on that port; since the broker JVM already running in the container has that port bound, the new process conflicts with it. Running unset JMX_PORT first works around the problem.
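
A minimal example of the workaround inside a broker container:

# Clear the inherited JMX_PORT for the current shell only, then run the CLI tool.
bash-4.4# unset JMX_PORT
bash-4.4# kafka-console-consumer.sh --topic test2 --bootstrap-server localhost:9092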

For details, see:
https://github.com/wurstmeister/kafka-docker/wiki#why-do-kafka-tools-fail-when-jmx-is-enabled

Problem: Tools such as kafka-topics.sh and kafka-console-producer.sh fail when JMX is enabled. This is caused because of the JMX_PORT environment variable. The Kafka helper script /opt/kafka/bin/kafka-run-class.sh will try to invoke the required command in a new JVM with JMX bound to the specified port. As the broker JVM that is already running in the container has this port bound, the process fails and exits with error.

Solution: Although we’d recommend not running operation tools inside of your running brokers, this may sometimes be desirable when performing local development and testing.
