Kafka Burrow stopped after running for a while - apache-kafka

I am trying to monitor consumer lag in Kafka with Burrow. I can get results from the HTTP endpoint, but only for a short while: after about one minute I no longer get any response from Burrow, and port 8000 is closed.
My ZooKeeper instance is installed on the same host as the Kafka broker. Here are my configuration and error logs.
burrow.cfg
[general]
logdir=log
logconfig=config/logging.cfg
pidfile=burrow.pid
client-id=burrow-lagchecker
group-blacklist=^(console-consumer-|python-kafka-consumer-).*$
[zookeeper]
hostname=kafka01
hostname=kafka02
hostname=kafka03
port=2181
timeout=6
lock-path=/burrow/notifier
[kafka "TestEnvironment"]
broker=kafka01
broker=kafka02
broker=kafka03
broker-port=6667
zookeeper=kafka01
zookeeper=kafka02
zookeeper=kafka03
zookeeper-port=2181
zookeeper-path=/kafka-cluster
offsets-topic=__consumer_offsets
[tickers]
broker-offsets=60
[lagcheck]
intervals=10
expire-group=604800
[httpserver]
server=on
port=8000
[smtp]
server=mailserver.example.com
port=25
from=burrow-noreply@example.com
template=config/default-email.tmpl
[email "bofh@example.com"]
group=local,critical-consumer-group
group=local,other-consumer-group
interval=60
[httpnotifier]
url=http://notification.server.example.com:9000/v1/alert
interval=60
extra=app=burrow
extra=tier=STG
template-post=config/default-http-post.tmpl
template-delete=config/default-http-delete.tmpl
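Once Burrow is up, the [httpserver] endpoint above can be polled for consumer status. A minimal sketch (localhost:8000 matches the config; the /v2/kafka paths follow Burrow's HTTP API of that era, and the group name is a placeholder):

```python
import json
import urllib.request

def burrow_get(path, base="http://localhost:8000"):
    """Fetch a Burrow HTTP endpoint and decode the JSON response."""
    with urllib.request.urlopen(base + path) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Hypothetical usage, assuming the cluster name from burrow.cfg:
# print(burrow_get("/v2/kafka"))                           # list clusters
# print(burrow_get("/v2/kafka/TestEnvironment/consumer"))  # list groups
# print(burrow_get("/v2/kafka/TestEnvironment/consumer/my-group/lag"))
```

This is also a convenient way to confirm whether port 8000 has gone away: the call will raise URLError once the process has panicked.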
burrow.log
2015-09-16 06:02:28 [INFO] Starting Zookeeper client
2015-09-16 06:02:28 [INFO] Starting Offsets Storage module
2015-09-16 06:02:28 [INFO] Starting HTTP server
2015-09-16 06:02:28 [INFO] Starting Zookeeper client for cluster TestEnvironment
2015-09-16 06:02:28 [INFO] Starting Kafka client for cluster TestEnvironment
2015-09-16 06:02:28 [INFO] Starting consumers for 1 partitions of __consumer_offsets in cluster TestEnvironment
2015-09-16 06:02:28 [INFO] Configuring Email notifier
2015-09-16 06:02:28 [INFO] Configuring HTTP notifier
2015-09-16 06:02:28 [INFO] Acquired Zookeeper notifier lock
2015-09-16 06:02:28 [INFO] Starting Email notifier
2015-09-16 06:02:28 [INFO] Starting HTTP notifier
burrow.out
Started Burrow at September 16, 2015 at 6:02am (UTC)
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x18 pc=0x4172e2]
goroutine 183 [running]:
main.(*OffsetStorage).evaluateGroup(0xc8201d95c0, 0xc8200e00c0, 0x5, 0xc8200e00c6, 0x17, 0xc8204545a0)
/home/dwirawan/work/src/github.com/linkedin/burrow/offsets_store.go:337 +0x182
created by main.NewOffsetStorage.func1
/home/dwirawan/work/src/github.com/linkedin/burrow/offsets_store.go:188 +0x43f
goroutine 1 [chan receive]:
main.burrowMain(0x0)
/home/dwirawan/work/src/github.com/linkedin/burrow/main.go:194 +0x1c2b
main.main()
/home/dwirawan/work/src/github.com/linkedin/burrow/main.go:200 +0x33
goroutine 17 [syscall, 1 minutes, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1696 +0x1
goroutine 5 [semacquire, 1 minutes]:
sync.runtime_Syncsemacquire(0xc820019150)
/usr/local/go/src/runtime/sema.go:237 +0x201
sync.(*Cond).Wait(0xc820019140)
/usr/local/go/src/sync/cond.go:62 +0x9b
github.com/cihub/seelog.(*asyncLoopLogger).processItem(0xc82001c600, 0x0)
/home/dwirawan/work/src/github.com/cihub/seelog/behavior_asynclooplogger.go:50 +0xc7
github.com/cihub/seelog.(*asyncLoopLogger).processQueue(0xc82001c600)
/home/dwirawan/work/src/github.com/cihub/seelog/behavior_asynclooplogger.go:63 +0x2a
created by github.com/cihub/seelog.newAsyncLoopLogger
/home/dwirawan/work/src/github.com/cihub/seelog/behavior_asynclooplogger.go:40 +0x91
goroutine 6 [semacquire, 1 minutes]:
sync.runtime_Syncsemacquire(0xc8200192d0)
/usr/local/go/src/runtime/sema.go:237 +0x201
sync.(*Cond).Wait(0xc8200192c0)
/usr/local/go/src/sync/cond.go:62 +0x9b
github.com/cihub/seelog.(*asyncLoopLogger).processItem(0xc82001c720, 0x0)
/home/dwirawan/work/src/github.com/cihub/seelog/behavior_asynclooplogger.go:50 +0xc7
github.com/cihub/seelog.(*asyncLoopLogger).processQueue(0xc82001c720)
/home/dwirawan/work/src/github.com/cihub/seelog/behavior_asynclooplogger.go:63 +0x2a
created by github.com/cihub/seelog.newAsyncLoopLogger
/home/dwirawan/work/src/github.com/cihub/seelog/behavior_asynclooplogger.go:40 +0x91
goroutine 7 [syscall, 1 minutes]:
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x18
created by os/signal.init.1
/usr/local/go/src/os/signal/signal_unix.go:28 +0x37
goroutine 8 [semacquire]:
sync.runtime_Syncsemacquire(0xc820316710)
/usr/local/go/src/runtime/sema.go:237 +0x201
sync.(*Cond).Wait(0xc820316700)
/usr/local/go/src/sync/cond.go:62 +0x9b
github.com/cihub/seelog.(*asyncLoopLogger).processItem(0xc8200dd800, 0x0)
/home/dwirawan/work/src/github.com/cihub/seelog/behavior_asynclooplogger.go:50 +0xc7
github.com/cihub/seelog.(*asyncLoopLogger).processQueue(0xc8200dd800)
/home/dwirawan/work/src/github.com/cihub/seelog/behavior_asynclooplogger.go:63 +0x2a
created by github.com/cihub/seelog.newAsyncLoopLogger
/home/dwirawan/work/src/github.com/cihub/seelog/behavior_asynclooplogger.go:40 +0x91
goroutine 9 [semacquire, 1 minutes]:
sync.runtime_Semacquire(0xc82079201c)
/usr/local/go/src/runtime/sema.go:43 +0x26
sync.(*WaitGroup).Wait(0xc820792010)
/usr/local/go/src/sync/waitgroup.go:126 +0xb4
github.com/samuel/go-zookeeper/zk.(*Conn).loop(0xc820069e10)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:227 +0x671
github.com/samuel/go-zookeeper/zk.ConnectWithDialer.func1(0xc820069e10)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:145 +0x21
created by github.com/samuel/go-zookeeper/zk.ConnectWithDialer
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:149 +0x452
goroutine 10 [select]:
main.NewOffsetStorage.func1(0xc8201d95c0)
/home/dwirawan/work/src/github.com/linkedin/burrow/offsets_store.go:168 +0x4a8
created by main.NewOffsetStorage
/home/dwirawan/work/src/github.com/linkedin/burrow/offsets_store.go:199 +0x4b7
goroutine 11 [IO wait]:
net.runtime_pollWait(0x7f17dc7d8fb0, 0x72, 0xc820010190)
/usr/local/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0xc8203b6060, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8203b6060, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc8203b6000, 0x0, 0x7f17dc7d90a8, 0xc8200e1c80)
/usr/local/go/src/net/fd_unix.go:408 +0x27c
net.(*TCPListener).AcceptTCP(0xc8203d8000, 0x46e890, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net/http.tcpKeepAliveListener.Accept(0xc8203d8000, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:2135 +0x41
net/http.(*Server).Serve(0xc82038a000, 0x7f17dc7d9070, 0xc8203d8000, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:1887 +0xb3
net/http.(*Server).ListenAndServe(0xc82038a000, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:1877 +0x136
net/http.ListenAndServe(0xc820338750, 0x5, 0x7f17db9a02e8, 0xc8201d97a0, 0x0, 0x0)
/usr/local/go/src/net/http/server.go:1967 +0x8f
created by main.NewHttpServer
/home/dwirawan/work/src/github.com/linkedin/burrow/http_server.go:49 +0x4f7
goroutine 12 [semacquire, 1 minutes]:
sync.runtime_Semacquire(0xc8204460cc)
/usr/local/go/src/runtime/sema.go:43 +0x26
sync.(*WaitGroup).Wait(0xc8204460c0)
/usr/local/go/src/sync/waitgroup.go:126 +0xb4
github.com/samuel/go-zookeeper/zk.(*Conn).loop(0xc82034a000)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:227 +0x671
github.com/samuel/go-zookeeper/zk.ConnectWithDialer.func1(0xc82034a000)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:145 +0x21
created by github.com/samuel/go-zookeeper/zk.ConnectWithDialer
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:149 +0x452
goroutine 35 [runnable]:
github.com/Shopify/sarama.decode(0xc82035e2a0, 0x8, 0x8, 0x7f17db9a4428, 0xc8200ca3d0, 0x0, 0x0)
/home/dwirawan/work/src/github.com/Shopify/sarama/encoder_decoder.go:51 +0x69
github.com/Shopify/sarama.(*Broker).responseReceiver(0xc820318930)
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:354 +0x3e0
github.com/Shopify/sarama.(*Broker).(github.com/Shopify/sarama.responseReceiver)-fm()
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:93 +0x20
github.com/Shopify/sarama.withRecover(0xc8204220c0)
/home/dwirawan/work/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*Broker).Open.func1
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:93 +0x59b
goroutine 16 [select, 1 minutes]:
github.com/Shopify/sarama.(*client).backgroundMetadataUpdater(0xc8200b1600)
/home/dwirawan/work/src/github.com/Shopify/sarama/client.go:553 +0x322
github.com/Shopify/sarama.(*client).(github.com/Shopify/sarama.backgroundMetadataUpdater)-fm()
/home/dwirawan/work/src/github.com/Shopify/sarama/client.go:142 +0x20
github.com/Shopify/sarama.withRecover(0xc82041d470)
/home/dwirawan/work/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.NewClient
/home/dwirawan/work/src/github.com/Shopify/sarama/client.go:142 +0x754
goroutine 34 [chan receive, 1 minutes]:
github.com/Shopify/sarama.(*Broker).responseReceiver(0xc820318690)
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:340 +0xf6
github.com/Shopify/sarama.(*Broker).(github.com/Shopify/sarama.responseReceiver)-fm()
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:93 +0x20
github.com/Shopify/sarama.withRecover(0xc820422050)
/home/dwirawan/work/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*Broker).Open.func1
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:93 +0x59b
goroutine 50 [chan receive, 1 minutes]:
main.NewKafkaClient.func1(0xc8200166e0)
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:78 +0x8f
created by main.NewKafkaClient
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:81 +0x43a
goroutine 51 [chan receive, 1 minutes]:
main.NewKafkaClient.func2(0xc8200166e0)
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:84 +0x95
created by main.NewKafkaClient
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:87 +0x45c
goroutine 52 [chan receive, 1 minutes]:
main.NewKafkaClient.func3(0xc8200166e0)
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:92 +0x4e
created by main.NewKafkaClient
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:95 +0x48c
goroutine 38 [select]:
github.com/Shopify/sarama.(*brokerConsumer).subscriptionManager(0xc820432550)
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:547 +0x3e7
github.com/Shopify/sarama.(*brokerConsumer).(github.com/Shopify/sarama.subscriptionManager)-fm()
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:520 +0x20
github.com/Shopify/sarama.withRecover(0xc8204222c0)
/home/dwirawan/work/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).newBrokerConsumer
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:520 +0x200
goroutine 21 [select]:
github.com/samuel/go-zookeeper/zk.(*Conn).sendLoop(0xc82034a000, 0x7f17db9a43a0, 0xc8203d8008, 0xc820448300, 0x0, 0x0)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:412 +0xd8b
github.com/samuel/go-zookeeper/zk.(*Conn).loop.func1(0xc82034a000, 0xc820448300, 0xc8204460c0)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:212 +0x48
created by github.com/samuel/go-zookeeper/zk.(*Conn).loop
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:215 +0x609
goroutine 23 [semacquire]:
sync.runtime_Semacquire(0xc8202e821c)
/usr/local/go/src/runtime/sema.go:43 +0x26
sync.(*WaitGroup).Wait(0xc8202e8210)
/usr/local/go/src/sync/waitgroup.go:126 +0xb4
main.(*KafkaClient).getOffsets(0xc8200166e0, 0x0, 0x0)
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:228 +0x7a5
main.NewKafkaClient.func4(0xc8200166e0)
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:102 +0x75
created by main.NewKafkaClient
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:104 +0x508
goroutine 36 [chan receive, 1 minutes]:
github.com/Shopify/sarama.(*partitionConsumer).dispatcher(0xc820778000)
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:295 +0x57
github.com/Shopify/sarama.(*partitionConsumer).(github.com/Shopify/sarama.dispatcher)-fm()
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:151 +0x20
github.com/Shopify/sarama.withRecover(0xc8204222a0)
/home/dwirawan/work/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:151 +0x454
goroutine 37 [chan receive]:
github.com/Shopify/sarama.(*partitionConsumer).responseFeeder(0xc820778000)
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:403 +0x5d
github.com/Shopify/sarama.(*partitionConsumer).(github.com/Shopify/sarama.responseFeeder)-fm()
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:152 +0x20
github.com/Shopify/sarama.withRecover(0xc8204222b0)
/home/dwirawan/work/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).ConsumePartition
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:152 +0x4ab
goroutine 67 [chan receive]:
github.com/Shopify/sarama.(*Broker).responseReceiver(0xc8203188c0)
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:340 +0xf6
github.com/Shopify/sarama.(*Broker).(github.com/Shopify/sarama.responseReceiver)-fm()
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:93 +0x20
github.com/Shopify/sarama.withRecover(0xc8203ac0a0)
/home/dwirawan/work/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*Broker).Open.func1
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:93 +0x59b
goroutine 20 [chan receive]:
github.com/Shopify/sarama.(*Broker).responseReceiver(0xc820318850)
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:340 +0xf6
github.com/Shopify/sarama.(*Broker).(github.com/Shopify/sarama.responseReceiver)-fm()
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:93 +0x20
github.com/Shopify/sarama.withRecover(0xc820446060)
/home/dwirawan/work/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*Broker).Open.func1
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:93 +0x59b
goroutine 22 [IO wait]:
net.runtime_pollWait(0x7f17dc7d8e30, 0x72, 0xc820010190)
/usr/local/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0xc8203b60d0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8203b60d0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8203b6070, 0xc82045e000, 0x4, 0x180000, 0x0, 0x7f17db997050, 0xc820010190)
/usr/local/go/src/net/fd_unix.go:232 +0x23a
net.(*conn).Read(0xc8203d8008, 0xc82045e000, 0x4, 0x180000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
io.ReadAtLeast(0x7f17d8150160, 0xc8203d8008, 0xc82045e000, 0x4, 0x180000, 0x4, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:298 +0xe6
io.ReadFull(0x7f17d8150160, 0xc8203d8008, 0xc82045e000, 0x4, 0x180000, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:316 +0x62
github.com/samuel/go-zookeeper/zk.(*Conn).recvLoop(0xc82034a000, 0x7f17db9a43a0, 0xc8203d8008, 0x0, 0x0)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:476 +0x231
github.com/samuel/go-zookeeper/zk.(*Conn).loop.func2(0xc8203ac030, 0xc82034a000, 0xc820448300, 0xc8204460c0)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:219 +0x46
created by github.com/samuel/go-zookeeper/zk.(*Conn).loop
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:225 +0x663
goroutine 100 [chan receive]:
main.NewKafkaClient.func6(0xc8200166e0, 0x7f17d80d1000, 0xc820778000)
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:130 +0x9f
created by main.NewKafkaClient
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:133 +0xae8
goroutine 99 [chan receive]:
main.NewKafkaClient.func5(0xc8200166e0, 0x7f17d80d1000, 0xc820778000)
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:124 +0x9f
created by main.NewKafkaClient
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:127 +0xaac
goroutine 107 [select]:
github.com/Shopify/sarama.(*Broker).sendAndReceive(0xc820318930, 0x7f17db9a46a0, 0xc8203b8390, 0x7f17db9a46e0, 0xc8203d8270, 0x0, 0x0)
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:286 +0x23f
github.com/Shopify/sarama.(*Broker).GetAvailableOffsets(0xc820318930, 0xc8203b8390, 0xc800000002, 0x0, 0x0)
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:174 +0xc1
main.(*KafkaClient).getOffsets.func1(0xc800000002, 0xc8203b8390)
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:198 +0xa1
created by main.(*KafkaClient).getOffsets
/home/dwirawan/work/src/github.com/linkedin/burrow/kafka_client.go:225 +0x770
goroutine 144 [select, locked to thread]:
runtime.gopark(0x9ea3c8, 0xc8203be728, 0x913530, 0x6, 0x18, 0x2)
/usr/local/go/src/runtime/proc.go:185 +0x163
runtime.selectgoImpl(0xc8203be728, 0x0, 0x18)
/usr/local/go/src/runtime/select.go:392 +0xa64
runtime.selectgo(0xc8203be728)
/usr/local/go/src/runtime/select.go:212 +0x12
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal1_unix.go:227 +0x353
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1696 +0x1
goroutine 145 [chan receive]:
main.(*Emailer).sendEmailNotifications(0xc820416420, 0xc8200e0080, 0x10, 0x910828, 0x7, 0xc8200e01a0, 0x2, 0x2, 0xc820796720)
/home/dwirawan/work/src/github.com/linkedin/burrow/emailer.go:116 +0x45e
created by main.(*Emailer).Start
/home/dwirawan/work/src/github.com/linkedin/burrow/emailer.go:59 +0x1da
goroutine 162 [select]:
main.(*HttpNotifier).Start.func1(0xc820316940)
/home/dwirawan/work/src/github.com/linkedin/burrow/http_notifier.go:197 +0x19b
created by main.(*HttpNotifier).Start
/home/dwirawan/work/src/github.com/linkedin/burrow/http_notifier.go:207 +0x7a
goroutine 146 [select]:
github.com/samuel/go-zookeeper/zk.(*Conn).sendLoop(0xc820069e10, 0x7f17db9a43a0, 0xc8200322b8, 0xc820796000, 0x0, 0x0)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:412 +0xd8b
github.com/samuel/go-zookeeper/zk.(*Conn).loop.func1(0xc820069e10, 0xc820796000, 0xc820792010)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:212 +0x48
created by github.com/samuel/go-zookeeper/zk.(*Conn).loop
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:215 +0x609
goroutine 147 [IO wait]:
net.runtime_pollWait(0x7f17dc7d8ef0, 0x72, 0xc820010190)
/usr/local/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0xc820318840, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc820318840, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8203187e0, 0xc8207a6000, 0x4, 0x180000, 0x0, 0x7f17db997050, 0xc820010190)
/usr/local/go/src/net/fd_unix.go:232 +0x23a
net.(*conn).Read(0xc8200322b8, 0xc8207a6000, 0x4, 0x180000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
io.ReadAtLeast(0x7f17d8150160, 0xc8200322b8, 0xc8207a6000, 0x4, 0x180000, 0x4, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:298 +0xe6
io.ReadFull(0x7f17d8150160, 0xc8200322b8, 0xc8207a6000, 0x4, 0x180000, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:316 +0x62
github.com/samuel/go-zookeeper/zk.(*Conn).recvLoop(0xc820069e10, 0x7f17db9a43a0, 0xc8200322b8, 0x0, 0x0)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:476 +0x231
github.com/samuel/go-zookeeper/zk.(*Conn).loop.func2(0xc82041cb80, 0xc820069e10, 0xc820796000, 0xc820792010)
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:219 +0x46
created by github.com/samuel/go-zookeeper/zk.(*Conn).loop
/home/dwirawan/work/src/github.com/samuel/go-zookeeper/zk/conn.go:225 +0x663
goroutine 39 [select]:
github.com/Shopify/sarama.(*Broker).sendAndReceive(0xc820318930, 0x7f17d8090100, 0xc820338380, 0x7f17d8090140, 0xc820382360, 0x0, 0x0)
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:286 +0x23f
github.com/Shopify/sarama.(*Broker).Fetch(0xc820318930, 0xc820338380, 0xc82003fd94, 0x0, 0x0)
/home/dwirawan/work/src/github.com/Shopify/sarama/broker.go:204 +0xc1
github.com/Shopify/sarama.(*brokerConsumer).fetchNewMessages(0xc820432550, 0x0, 0x0, 0x0)
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:646 +0x34e
github.com/Shopify/sarama.(*brokerConsumer).subscriptionConsumer(0xc820432550)
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:580 +0x144
github.com/Shopify/sarama.(*brokerConsumer).(github.com/Shopify/sarama.subscriptionConsumer)-fm()
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:521 +0x20
github.com/Shopify/sarama.withRecover(0xc8204222d0)
/home/dwirawan/work/src/github.com/Shopify/sarama/utils.go:42 +0x3a
created by github.com/Shopify/sarama.(*consumer).newBrokerConsumer
/home/dwirawan/work/src/github.com/Shopify/sarama/consumer.go:521 +0x253
goroutine 184 [runnable]:
main.(*OffsetStorage).evaluateGroup(0xc8201d95c0, 0xc8200e0160, 0x5, 0xc8200e0166, 0x14, 0xc8204545a0)
/home/dwirawan/work/src/github.com/linkedin/burrow/offsets_store.go:337 +0x182
created by main.NewOffsetStorage.func1
/home/dwirawan/work/src/github.com/linkedin/burrow/offsets_store.go:188 +0x43f
Is there something wrong with my configuration?
Thanks.

This issue is caused by the Kafka cluster itself, so first check whether your cluster is running properly. The NPE occurs because Burrow is unable to start the consumers for the __consumer_offsets topic. That can be due to ACL issues, or because the topic doesn't exist yet (it is only created after the first consumer group starts up).
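To check the second possibility, verify that __consumer_offsets is visible in the cluster metadata before starting Burrow. A minimal sketch using the kafka-python client (an assumption; any Kafka client works, and the broker address is taken from the question's config):

```python
def offsets_topic_present(consumer, topic="__consumer_offsets"):
    """Return True if the offsets topic shows up in the cluster metadata."""
    return topic in consumer.topics()

# Hypothetical usage against the cluster from the question:
# from kafka import KafkaConsumer
# consumer = KafkaConsumer(bootstrap_servers="kafka01:6667")
# print(offsets_topic_present(consumer))
```

If the topic is missing, starting any consumer group against the cluster once is usually enough to get it auto-created.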

Related

Pymodbus Asynchronous Server with RTU framer not working

I'm trying to implement an asynchronous Modbus serial server with the Pymodbus library, using ModbusRtuFramer as the framer. Unfortunately, when I send a command, the bytes appear to be split across reads and the frame is not recognized. I've posted both the code and the log.
# imports for pymodbus 2.x (context and identity are set up elsewhere)
from pymodbus.server.asynchronous import StartSerialServer
from pymodbus.transaction import ModbusRtuFramer

StartSerialServer(context, identity=identity,
                  framer=ModbusRtuFramer,
                  port='/dev/ttyS2',
                  timeout=0.1,
                  baudrate=9600,
                  bytesize=8,
                  parity='N',
                  stopbits=1)
I send the command 03102c0002 (read 2 holding registers starting from address 0x102c). The log says:
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1
DEBUG:pymodbus.framer.rtu_framer:Frame - [b'\x01'] not ready
DEBUG:pymodbus.server.asynchronous:Data Received: 0x3 0x10 0x2c
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x1 0x3 0x10 0x2c
DEBUG:pymodbus.server.asynchronous:Data Received: 0x0 0x2 0x1 0x2
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x0 0x2 0x1 0x2
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1
DEBUG:pymodbus.framer.rtu_framer:Frame - [b'\x01'] not ready
DEBUG:pymodbus.server.asynchronous:Data Received: 0x3 0x10 0x2c
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x1 0x3 0x10 0x2c
DEBUG:pymodbus.server.asynchronous:Data Received: 0x0 0x2 0x1
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x0 0x2 0x1
DEBUG:pymodbus.server.asynchronous:Data Received: 0x2
DEBUG:pymodbus.framer.rtu_framer:Frame - [b'\x02'] not ready
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x2 0x1
DEBUG:pymodbus.server.asynchronous:Data Received: 0x3 0x10 0x2c 0x0
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x3 0x10 0x2c 0x0
DEBUG:pymodbus.server.asynchronous:Data Received: 0x2 0x1 0x2
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x2 0x1 0x2
DEBUG:pymodbus.server.asynchronous:Data Received: 0x1
DEBUG:pymodbus.framer.rtu_framer:Frame - [b'\x01'] not ready
DEBUG:pymodbus.server.asynchronous:Data Received: 0x3 0x10 0x2c
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x1 0x3 0x10 0x2c
DEBUG:pymodbus.server.asynchronous:Data Received: 0x0 0x2 0x1 0x2
DEBUG:pymodbus.framer.rtu_framer:Frame check failed, ignoring!!
DEBUG:pymodbus.framer.rtu_framer:Resetting frame - Current Frame in buffer - 0x0 0x2 0x1 0x2
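What the framer is rejecting becomes clearer if you reconstruct the expected frame: unit 0x01, function 0x03, address 0x102C, count 0x0002, plus a two-byte CRC. Each serial read above delivers only a fragment, so the CRC check over the partial buffer fails and the framer resets. The standard Modbus RTU CRC-16 can be computed independently of pymodbus:

```python
def modbus_crc16(data: bytes) -> bytes:
    """Standard Modbus RTU CRC-16 (init 0xFFFF, poly 0xA001), low byte first."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return bytes([crc & 0xFF, crc >> 8])

# Unit 0x01, function 0x03, start address 0x102C, count 0x0002:
frame = bytes([0x01, 0x03, 0x10, 0x2C, 0x00, 0x02])
print(modbus_crc16(frame).hex())  # -> 0102, the 0x1 0x2 tail visible in the log
```

So the complete 8-byte request is 01 03 10 2C 00 02 01 02, and every fragment the framer sees is CRC-invalid on its own. Since the reads arrive a few bytes at a time, the usual first thing to try is tuning the server timeout (the inter-frame gap) so a read isn't split mid-frame.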

Connection between CLI and Peer/Orderer not working properly (Kubernetes setup)

I'm running a network in a Kubernetes cluster, with a CLI, a Peer and an Orderer of the same organization each running in its own Pod.
Channel creation, chaincode installation, approval and committing all work without problems. However, when it comes to chaincode invocation, the CLI reports that the chaincode might not be installed, while the Peer logs a failed connection to the CLI.
So here's the CLI command (update: with -o org1-orderer:30011):
$ export CORE_PEER_MSPCONFIGPATH=/config/admin/msp
$ peer chaincode invoke -C channel1 -n cc-abac -c '{"Args":["invoke","a","b","10"]}' -o org1-orderer:30011 --clientauth --tls --cafile /config/peer/tls-msp/tlscacerts/ca-cert.pem --keyfile /config/peer/tls-msp/keystore/key.pem --certfile /config/peer/tls-msp/signcerts/cert.pem
CLI Output:
2020-07-07 16:47:20.918 UTC [msp] loadCertificateAt -> WARN 001 Failed loading ClientOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-07 16:47:20.919 UTC [msp] loadCertificateAt -> WARN 002 Failed loading PeerOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-07 16:47:20.919 UTC [msp] loadCertificateAt -> WARN 003 Failed loading AdminOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-07 16:47:20.919 UTC [msp] loadCertificateAt -> WARN 004 Failed loading OrdererOU certificate at [/config/admin/msp]: [could not read file /config/admin/msp: read /config/admin/msp: is a directory]
2020-07-07 16:47:20.928 UTC [grpc] Infof -> DEBU 005 parsed scheme: ""
2020-07-07 16:47:20.928 UTC [grpc] Infof -> DEBU 006 scheme "" not registered, fallback to default scheme
2020-07-07 16:47:20.928 UTC [grpc] Infof -> DEBU 007 ccResolverWrapper: sending update to cc: {[{org1-peer1:30151 <nil> 0 <nil>}] <nil> <nil>}
2020-07-07 16:47:20.928 UTC [grpc] Infof -> DEBU 008 ClientConn switching balancer to "pick_first"
2020-07-07 16:47:20.928 UTC [grpc] Infof -> DEBU 009 Channel switches to new LB policy "pick_first"
2020-07-07 16:47:20.928 UTC [grpc] Infof -> DEBU 00a Subchannel Connectivity change to CONNECTING
2020-07-07 16:47:20.928 UTC [grpc] Infof -> DEBU 00b Subchannel picks a new address "org1-peer1:30151" to connect
2020-07-07 16:47:20.928 UTC [grpc] UpdateSubConnState -> DEBU 00c pickfirstBalancer: HandleSubConnStateChange: 0xc000114450, {CONNECTING <nil>}
2020-07-07 16:47:20.928 UTC [grpc] Infof -> DEBU 00d Channel Connectivity change to CONNECTING
2020-07-07 16:47:20.935 UTC [grpc] Infof -> DEBU 00e Subchannel Connectivity change to READY
2020-07-07 16:47:20.935 UTC [grpc] UpdateSubConnState -> DEBU 00f pickfirstBalancer: HandleSubConnStateChange: 0xc000114450, {READY <nil>}
2020-07-07 16:47:20.935 UTC [grpc] Infof -> DEBU 010 Channel Connectivity change to READY
2020-07-07 16:47:20.948 UTC [grpc] Infof -> DEBU 011 parsed scheme: ""
2020-07-07 16:47:20.948 UTC [grpc] Infof -> DEBU 012 scheme "" not registered, fallback to default scheme
2020-07-07 16:47:20.948 UTC [grpc] Infof -> DEBU 013 ccResolverWrapper: sending update to cc: {[{org1-peer1:30151 <nil> 0 <nil>}] <nil> <nil>}
2020-07-07 16:47:20.948 UTC [grpc] Infof -> DEBU 014 ClientConn switching balancer to "pick_first"
2020-07-07 16:47:20.948 UTC [grpc] Infof -> DEBU 015 Channel switches to new LB policy "pick_first"
2020-07-07 16:47:20.948 UTC [grpc] Infof -> DEBU 016 Subchannel Connectivity change to CONNECTING
2020-07-07 16:47:20.948 UTC [grpc] Infof -> DEBU 017 Subchannel picks a new address "org1-peer1:30151" to connect
2020-07-07 16:47:20.948 UTC [grpc] UpdateSubConnState -> DEBU 018 pickfirstBalancer: HandleSubConnStateChange: 0xc000496070, {CONNECTING <nil>}
2020-07-07 16:47:20.948 UTC [grpc] Infof -> DEBU 019 Channel Connectivity change to CONNECTING
2020-07-07 16:47:20.954 UTC [grpc] Infof -> DEBU 01a Subchannel Connectivity change to READY
2020-07-07 16:47:20.955 UTC [grpc] UpdateSubConnState -> DEBU 01b pickfirstBalancer: HandleSubConnStateChange: 0xc000496070, {READY <nil>}
2020-07-07 16:47:20.955 UTC [grpc] Infof -> DEBU 01c Channel Connectivity change to READY
2020-07-07 16:47:20.987 UTC [chaincodeCmd] InitCmdFactory -> INFO 01d Retrieved channel (channel1) orderer endpoint: org1-orderer:30011
2020-07-07 16:47:20.991 UTC [grpc] WithKeepaliveParams -> DEBU 01e Adjusting keepalive ping interval to minimum period of 10s
2020-07-07 16:47:20.991 UTC [grpc] Infof -> DEBU 01f parsed scheme: ""
2020-07-07 16:47:20.991 UTC [grpc] Infof -> DEBU 020 scheme "" not registered, fallback to default scheme
2020-07-07 16:47:20.991 UTC [grpc] Infof -> DEBU 021 ccResolverWrapper: sending update to cc: {[{org1-orderer:30011 <nil> 0 <nil>}] <nil> <nil>}
2020-07-07 16:47:20.991 UTC [grpc] Infof -> DEBU 022 ClientConn switching balancer to "pick_first"
2020-07-07 16:47:20.991 UTC [grpc] Infof -> DEBU 023 Channel switches to new LB policy "pick_first"
2020-07-07 16:47:20.991 UTC [grpc] Infof -> DEBU 024 Subchannel Connectivity change to CONNECTING
2020-07-07 16:47:20.991 UTC [grpc] Infof -> DEBU 025 Subchannel picks a new address "org1-orderer:30011" to connect
2020-07-07 16:47:20.991 UTC [grpc] UpdateSubConnState -> DEBU 026 pickfirstBalancer: HandleSubConnStateChange: 0xc000205a60, {CONNECTING <nil>}
2020-07-07 16:47:20.991 UTC [grpc] Infof -> DEBU 027 Channel Connectivity change to CONNECTING
2020-07-07 16:47:21.000 UTC [grpc] Infof -> DEBU 028 Subchannel Connectivity change to READY
2020-07-07 16:47:21.000 UTC [grpc] UpdateSubConnState -> DEBU 029 pickfirstBalancer: HandleSubConnStateChange: 0xc000205a60, {READY <nil>}
2020-07-07 16:47:21.000 UTC [grpc] Infof -> DEBU 02a Channel Connectivity change to READY
Error: endorsement failure during invoke. response: status:500 message:"make sure the chaincode cc-abac has been successfully defined on channel channel1 and try again: chaincode definition for 'cc-abac' exists, but chaincode is not installed"
I'm sure it's installed on channel1 (the only channel in existence, except sys-channel):
$ peer lifecycle chaincode queryinstalled
Installed chaincodes on peer:
Package ID: cc-abac:4992a37bf5c7b48f91f5062d9700a58a4129599c53d759e8282fdeffc8836c72, Label: cc-abac
On the Peer's side, I get the following in the log (updated):
2020-07-09 06:45:55.976 UTC [gossip.discovery] periodicalSendAlive -> DEBU 194c Sleeping 5s
2020-07-09 06:45:56.182 UTC [endorser] ProcessProposal -> DEBU 194d request from 10.129.1.229:60184
2020-07-09 06:45:56.182 UTC [endorser] Validate -> DEBU 194e creator is valid channel=channel1 txID=a71312e4 mspID=Org1MSP
2020-07-09 06:45:56.182 UTC [msp.identity] Verify -> DEBU 194f Verify: digest = 00000000 87 29 a0 e5 96 b8 5f 5e 9b e0 fb e5 4d 5b 86 b2 |.)...._^....M[..|
00000010 bd 43 ee 30 59 d6 a9 55 e3 e9 77 7b fd a2 47 8f |.C.0Y..U..w{..G.|
2020-07-09 06:45:56.182 UTC [msp.identity] Verify -> DEBU 1950 Verify: sig = 00000000 30 45 02 21 00 f0 6b 23 9d f6 ec f2 29 be 64 4e |0E.!..k#....).dN|
00000010 75 69 a7 05 7e 05 71 51 64 6c 52 59 83 be ea f9 |ui..~.qQdlRY....|
00000020 08 5e 07 09 f3 02 20 7a f7 b0 6c e0 bb 32 b9 0c |.^.... z..l..2..|
00000030 8c 41 be b8 ea 39 33 91 92 0b 08 9e c6 14 39 e8 |.A...93.......9.|
00000040 46 eb a5 80 7a 7d d1 |F...z}.|
2020-07-09 06:45:56.182 UTC [endorser] Validate -> DEBU 1951 signature is valid channel=channel1 txID=a71312e4 mspID=Org1MSP
2020-07-09 06:45:56.182 UTC [fsblkstorage] retrieveTransactionByID -> DEBU 1952 retrieveTransactionByID() - txId = [a71312e411a6b417a541112e2aeac73adc8d6f7fbbb3c62ffcad2348e0c91fac]
2020-07-09 06:45:56.182 UTC [leveldbhelper] GetIterator -> DEBU 1953 Getting iterator for range [[]byte{0x63, 0x68, 0x61, 0x6e, 0x6e, 0x65, 0x6c, 0x31, 0x0, 0x74, 0x1, 0x40, 0x61, 0x37, 0x31, 0x33, 0x31, 0x32, 0x65, 0x34, 0x31, 0x31, 0x61, 0x36, 0x62, 0x34, 0x31, 0x37, 0x61, 0x35, 0x34, 0x31, 0x31, 0x31, 0x32, 0x65, 0x32, 0x61, 0x65, 0x61, 0x63, 0x37, 0x33, 0x61, 0x64, 0x63, 0x38, 0x64, 0x36, 0x66, 0x37, 0x66, 0x62, 0x62, 0x62, 0x33, 0x63, 0x36, 0x32, 0x66, 0x66, 0x63, 0x61, 0x64, 0x32, 0x33, 0x34, 0x38, 0x65, 0x30, 0x63, 0x39, 0x31, 0x66, 0x61, 0x63}] - [[]byte{0x63, 0x68, 0x61, 0x6e, 0x6e, 0x65, 0x6c, 0x31, 0x0, 0x74, 0x1, 0x40, 0x61, 0x37, 0x31, 0x33, 0x31, 0x32, 0x65, 0x34, 0x31, 0x31, 0x61, 0x36, 0x62, 0x34, 0x31, 0x37, 0x61, 0x35, 0x34, 0x31, 0x31, 0x31, 0x32, 0x65, 0x32, 0x61, 0x65, 0x61, 0x63, 0x37, 0x33, 0x61, 0x64, 0x63, 0x38, 0x64, 0x36, 0x66, 0x37, 0x66, 0x62, 0x62, 0x62, 0x33, 0x63, 0x36, 0x32, 0x66, 0x66, 0x63, 0x61, 0x64, 0x32, 0x33, 0x34, 0x38, 0x65, 0x30, 0x63, 0x39, 0x31, 0x66, 0x61, 0x63, 0xff}]
2020-07-09 06:45:56.182 UTC [aclmgmt] CheckACL -> DEBU 1954 acl policy /Channel/Application/Writers found in config for resource peer/Propose
2020-07-09 06:45:56.182 UTC [aclmgmt] CheckACL -> DEBU 1955 acl check(/Channel/Application/Writers)
2020-07-09 06:45:56.183 UTC [policies] EvaluateSignedData -> DEBU 1956 == Evaluating *policies.ImplicitMetaPolicy Policy /Channel/Application/Writers ==
2020-07-09 06:45:56.183 UTC [policies] EvaluateSignedData -> DEBU 1957 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
2020-07-09 06:45:56.183 UTC [policies] EvaluateSignedData -> DEBU 1958 == Evaluating *cauthdsl.policy Policy /Channel/Application/Org1/Writers ==
2020-07-09 06:45:56.183 UTC [msp.identity] Verify -> DEBU 1959 Verify: digest = 00000000 87 29 a0 e5 96 b8 5f 5e 9b e0 fb e5 4d 5b 86 b2 |.)...._^....M[..|
00000010 bd 43 ee 30 59 d6 a9 55 e3 e9 77 7b fd a2 47 8f |.C.0Y..U..w{..G.|
2020-07-09 06:45:56.183 UTC [msp.identity] Verify -> DEBU 195a Verify: sig = 00000000 30 45 02 21 00 f0 6b 23 9d f6 ec f2 29 be 64 4e |0E.!..k#....).dN|
00000010 75 69 a7 05 7e 05 71 51 64 6c 52 59 83 be ea f9 |ui..~.qQdlRY....|
00000020 08 5e 07 09 f3 02 20 7a f7 b0 6c e0 bb 32 b9 0c |.^.... z..l..2..|
00000030 8c 41 be b8 ea 39 33 91 92 0b 08 9e c6 14 39 e8 |.A...93.......9.|
00000040 46 eb a5 80 7a 7d d1 |F...z}.|
2020-07-09 06:45:56.183 UTC [policies] SignatureSetToValidIdentities -> DEBU 195b signature for identity 0 validated
2020-07-09 06:45:56.183 UTC [cauthdsl] func1 -> DEBU 195c 0xc0006210e0 gate 1594277156183221199 evaluation starts
2020-07-09 06:45:56.183 UTC [cauthdsl] func2 -> DEBU 195d 0xc0006210e0 signed by 0 principal evaluation starts (used [false])
2020-07-09 06:45:56.183 UTC [cauthdsl] func2 -> DEBU 195e 0xc0006210e0 processing identity 0 - &{Org1MSP 0b33fd619da73c0915b76088b0678047f834593ea6a4f22f0772b36f3c6bd68f}
2020-07-09 06:45:56.183 UTC [cauthdsl] func2 -> DEBU 195f 0xc0006210e0 principal evaluation succeeds for identity 0
2020-07-09 06:45:56.183 UTC [cauthdsl] func1 -> DEBU 1960 0xc0006210e0 gate 1594277156183221199 evaluation succeeds
2020-07-09 06:45:56.183 UTC [policies] EvaluateSignedData -> DEBU 1961 Signature set satisfies policy /Channel/Application/Org1/Writers
2020-07-09 06:45:56.183 UTC [policies] EvaluateSignedData -> DEBU 1962 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/Org1/Writers
2020-07-09 06:45:56.183 UTC [policies] EvaluateSignedData -> DEBU 1963 Signature set satisfies policy /Channel/Application/Writers
2020-07-09 06:45:56.183 UTC [policies] EvaluateSignedData -> DEBU 1964 == Done Evaluating *policies.ImplicitMetaPolicy Policy /Channel/Application/Writers
2020-07-09 06:45:56.183 UTC [lockbasedtxmgr] NewTxSimulator -> DEBU 1965 constructing new tx simulator
2020-07-09 06:45:56.183 UTC [lockbasedtxmgr] newLockBasedTxSimulator -> DEBU 1966 constructing new tx simulator txid = [a71312e411a6b417a541112e2aeac73adc8d6f7fbbb3c62ffcad2348e0c91fac]
2020-07-09 06:45:56.183 UTC [stateleveldb] GetState -> DEBU 1967 GetState(). ns=_lifecycle, key=namespaces/fields/cc-abac/Sequence
2020-07-09 06:45:56.183 UTC [lockbasedtxmgr] Done -> DEBU 1968 Done with transaction simulation / query execution [a71312e411a6b417a541112e2aeac73adc8d6f7fbbb3c62ffcad2348e0c91fac]
2020-07-09 06:45:56.183 UTC [comm.grpc.server] 1 -> INFO 1969 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.129.1.229:60184 grpc.peer_subject="CN=org1-peer1,OU=peer,O=Hyperledger,ST=North Carolina,C=US" grpc.code=OK grpc.call_duration=1.225382ms
2020-07-09 06:45:56.186 UTC [grpc] warningf -> DEBU 196a transport: http2Server.HandleStreams failed to read frame: read tcp 10.130.2.65:7051->10.129.1.229:60184: read: connection reset by peer
2020-07-09 06:45:56.186 UTC [grpc] infof -> DEBU 196b transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2020-07-09 06:45:56.186 UTC [grpc] infof -> DEBU 196c transport: loopyWriter.run returning. connection error: desc = "transport is closing"
[update] The message unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.129.1.229:60184 grpc.peer_subject="CN=org1-peer1,OU=peer,O=Hyperledger,ST=North Carolina,C=US" grpc.code=OK grpc.call_duration=1.225382ms indicates that the Peer considers the CLI to be another Peer, doesn't it? If so, it's clear why the connection is failing. Now the question is: why does the Peer think so?
Peer: 10.130.2.65
CLI: 10.129.1.229
Kind regards
Unfortunately, all of the gRPC logs and k8s-related issues appear to be a red herring. The connection is being established correctly; the term 'peer' is simply a little confusing in the gRPC logs, because gRPC always refers to the party on the other end of the connection as a 'peer'. The term is re-used with a different meaning in Fabric.
As the logs indicate, the chaincode has been successfully approved, and defined on the channel.
As the peer CLI output indicates, you have installed a chaincode with package-id cc-abac:4992a37bf5c7b48f91f5062d9700a58a4129599c53d759e8282fdeffc8836c72.
But, on invoke, you are seeing the error that:
chaincode definition for 'cc-abac' exists, but chaincode is not installed
This means that when you did your chaincode approval, you either did not specify a package ID, or you specified an incorrect one.
If you are using a v2.2+ version of Fabric, you should be able to use the peer lifecycle chaincode queryapproved command to see which package ID you have selected.
You can re-run peer lifecycle chaincode approveformyorg with the correct package ID (cc-abac:4992a37bf5c7b48f91f5062d9700a58a4129599c53d759e8282fdeffc8836c72) and this should correct things.
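For example, a sketch of that re-approval: the package-ID extraction below parses the queryinstalled output line quoted in the question (on a live system you would pipe the command's output instead), and the --version and --sequence values are assumptions you must match to your own chaincode definition.

```shell
# Extract the package ID from `peer lifecycle chaincode queryinstalled` output.
# QUERY_OUT holds the line quoted in the question.
QUERY_OUT='Package ID: cc-abac:4992a37bf5c7b48f91f5062d9700a58a4129599c53d759e8282fdeffc8836c72, Label: cc-abac'
PKG_ID=$(printf '%s\n' "$QUERY_OUT" | sed -n 's/^Package ID: \(.*\), Label:.*$/\1/p')
echo "$PKG_ID"

# Re-approve with the correct package ID (commented out: this needs a
# configured peer CLI environment; --version and --sequence are placeholders
# that must match your committed definition).
# peer lifecycle chaincode approveformyorg \
#   --channelID channel1 --name cc-abac --version 1.0 \
#   --package-id "$PKG_ID" --sequence 2
```

After re-approving, re-commit is not needed if the definition is otherwise unchanged; the peer should then find the installed package at invoke time.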

Mongod service crashed after performing mongodump

I ran this mongodump command
mongodump --db=elastic --collection=q_moonx_notifications_2019-08-26 --out=/home/centos/mongo-dump/
on my MongoDB instance to take a dump of a single collection:
Collection name: q_moonx_notifications_2019-08-26
size: 80GB
storageSize: 23.6GB
Soon after running it, the mongod service crashed.
I went through /var/log/messages to find the problem and learned that it was caused by an out-of-memory (OOM) kill.
Can someone help me understand how this happened, and how I can take a dump of a single collection without affecting my running mongo service?
The machine has 32 GB of memory and no swap.
/var/log/messages content
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbdb61e41>] dump_stack+0x19/0x1b
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbdb5c86a>] dump_header+0x90/0x229
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd700bcb>] ? cred_has_capability+0x6b/0x120
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd5ba4e4>] oom_kill_process+0x254/0x3d0
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd700c9c>] ? selinux_capable+0x1c/0x40
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd5bad26>] out_of_memory+0x4b6/0x4f0
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbdb5d36e>] __alloc_pages_slowpath+0x5d6/0x724
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd5c1105>] __alloc_pages_nodemask+0x405/0x420
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd60df68>] alloc_pages_current+0x98/0x110
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd5b6347>] __page_cache_alloc+0x97/0xb0
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd5b8fa8>] filemap_fault+0x298/0x490
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffc0400d0e>] __xfs_filemap_fault+0x7e/0x1d0 [xfs]
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffc0400f0c>] xfs_filemap_fault+0x2c/0x30 [xfs]
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd5e444a>] __do_fault.isra.59+0x8a/0x100
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd5e49fc>] do_read_fault.isra.61+0x4c/0x1b0
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd5e93a4>] handle_pte_fault+0x2f4/0xd10
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd50cbf8>] ? get_futex_key+0x1c8/0x2c0
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbd5ebedd>] handle_mm_fault+0x39d/0x9b0
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbdb6f5e3>] __do_page_fault+0x203/0x500
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbdb6f915>] do_page_fault+0x35/0x90
Oct 11 07:07:33 ip-1.23.345.678 kernel: [<ffffffffbdb6b758>] page_fault+0x28/0x30
Oct 11 07:07:33 ip-1.23.345.678 kernel: Mem-Info:
Oct 11 07:07:33 ip-1.23.345.678 kernel: active_anon:7972560 inactive_anon:30565 isolated_anon:0#012 active_file:2831 inactive_file:4651 isolated_file:0#012 unevictable:0 dirty:0 writeback:5 unstable:0#012 slab_reclaimable:42065 slab_unreclaimable:12016#012 mapped:18928 shmem:55424 pagetables:19349 bounce:0#012 free:49154 free_pcp:558 free_cma:0
Oct 11 07:07:33 ip-1.23.345.678 kernel: Node 0 DMA free:15904kB min:32kB low:40kB high:48kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15988kB managed:15904kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Oct 11 07:07:33 ip-1.23.345.678 kernel: lowmem_reserve[]: 0 3597 31992 31992
Oct 11 07:07:33 ip-1.23.345.678 kernel: Node 0 DMA32 free:121020kB min:7596kB low:9492kB high:11392kB active_anon:3397760kB inactive_anon:11020kB active_file:284kB inactive_file:508kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3915776kB managed:3684320kB mlocked:0kB dirty:0kB writeback:0kB mapped:5892kB shmem:14892kB slab_reclaimable:131668kB slab_unreclaimable:5368kB kernel_stack:1472kB pagetables:7364kB unstable:0kB bounce:0kB free_pcp:1184kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:2829 all_unreclaimable? yes
Oct 11 07:07:33 ip-1.23.345.678 kernel: lowmem_reserve[]: 0 0 28394 28394
Oct 11 07:07:33 ip-1.23.345.678 kernel: Node 0 Normal free:66532kB min:59952kB low:74940kB high:89928kB active_anon:28492480kB inactive_anon:111240kB active_file:11040kB inactive_file:11684kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:29622272kB managed:29079004kB mlocked:0kB dirty:0kB writeback:20kB mapped:69820kB shmem:206804kB slab_reclaimable:36592kB slab_unreclaimable:42696kB kernel_stack:6720kB pagetables:70032kB unstable:0kB bounce:0kB free_pcp:3320kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:8574 all_unreclaimable? no
Oct 11 07:07:33 ip-1.23.345.678 kernel: lowmem_reserve[]: 0 0 0 0
Oct 11 07:07:33 ip-1.23.345.678 kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
Oct 11 07:07:33 ip-1.23.345.678 kernel: Node 0 DMA32: 2724*4kB (UEM) 1319*8kB (UEM) 2734*16kB (UEM) 1065*32kB (UEM) 296*64kB (UEM) 32*128kB (UEM) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 122312kB
Oct 11 07:07:33 ip-1.23.345.678 kernel: Node 0 Normal: 10268*4kB (UEM) 3615*8kB (UEM) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 69992kB
Oct 11 07:07:33 ip-1.23.345.678 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Oct 11 07:07:33 ip-1.23.345.678 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Oct 11 07:07:33 ip-1.23.345.678 kernel: 61902 total pagecache pages
Oct 11 07:07:33 ip-1.23.345.678 kernel: 0 pages in swap cache
Oct 11 07:07:33 ip-1.23.345.678 kernel: Swap cache stats: add 0, delete 0, find 0/0
Oct 11 07:07:33 ip-1.23.345.678 kernel: Free swap = 0kB
Oct 11 07:07:33 ip-1.23.345.678 kernel: Total swap = 0kB
Oct 11 07:07:33 ip-1.23.345.678 kernel: 8388509 pages RAM
Oct 11 07:07:33 ip-1.23.345.678 kernel: 0 pages HighMem/MovableOnly
Oct 11 07:07:33 ip-1.23.345.678 kernel: 193702 pages reserved
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 2415] 0 2415 55590 32605 114 0 0 systemd-journal
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 2456] 0 2456 11953 611 25 0 -1000 systemd-udevd
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 2704] 0 2704 15511 170 29 0 -1000 auditd
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 4346] 32 4346 18412 189 38 0 0 rpcbind
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 4464] 81 4464 16600 204 34 0 -900 dbus-daemon
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 4532] 998 4532 29446 143 29 0 0 chronyd
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 4588] 0 4588 6652 156 19 0 0 systemd-logind
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 4589] 999 4589 153057 1381 63 0 0 polkitd
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 4592] 0 4592 5416 101 14 0 0 irqbalance
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 4596] 0 4596 50404 162 38 0 0 gssproxy
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 4940] 0 4940 26839 508 51 0 0 dhclient
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5035] 0 5035 143455 3309 99 0 0 tuned
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5102] 0 5102 31253 535 58 0 0 nginx
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5103] 995 5103 31375 641 59 0 0 nginx
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5104] 995 5104 31375 641 59 0 0 nginx
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5105] 995 5105 31375 641 59 0 0 nginx
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5106] 995 5106 31375 641 59 0 0 nginx
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5107] 995 5107 31375 641 59 0 0 nginx
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5108] 995 5108 31375 641 59 0 0 nginx
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5109] 995 5109 31375 641 59 0 0 nginx
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5110] 995 5110 31375 641 59 0 0 nginx
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5183] 0 5183 22603 310 42 0 0 master
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5189] 89 5189 22673 286 43 0 0 qmgr
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5231] 997 5231 4365796 4120011 8150 0 0 mongod
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5295] 0 5295 104225 16973 121 0 0 rsyslogd
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5297] 0 5297 28189 267 58 0 -1000 sshd
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5337] 0 5337 31580 194 18 0 0 crond
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5343] 0 5343 27523 50 10 0 0 agetty
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5347] 0 5347 27523 50 13 0 0 agetty
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 5662] 27 5662 691356 97174 303 0 0 mysqld
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 7361] 996 7361 12739660 2366239 5813 0 0 java
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 7462] 996 7462 17192 173 30 0 0 controller
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 7661] 1000 7661 3588101 1239871 2579 0 0 java
Oct 11 07:07:33 ip-1.23.345.678 kernel: [14974] 0 14974 371181 6509 107 0 0 metricbeat
Oct 11 07:07:33 ip-1.23.345.678 kernel: [ 6781] 994 6781 430375 60775 604 0 0 node
Oct 11 07:07:33 ip-1.23.345.678 kernel: [24008] 89 24008 22629 301 47 0 0 pickup
Oct 11 07:07:33 ip-1.23.345.678 kernel: [25163] 0 25163 39154 367 77 0 0 sshd
Oct 11 07:07:33 ip-1.23.345.678 kernel: [25167] 1000 25167 39154 366 74 0 0 sshd
Oct 11 07:07:33 ip-1.23.345.678 kernel: [25168] 1000 25168 28893 149 15 0 0 bash
Oct 11 07:07:33 ip-1.23.345.678 kernel: [28554] 1000 28554 260690 34619 132 0 0 mongodump
Oct 11 07:07:33 ip-1.23.345.678 kernel: Out of memory: Kill process 5231 (mongod) score 503 or sacrifice child
Oct 11 07:07:33 ip-1.23.345.678 kernel: Killed process 5231 (mongod) total-vm:17463184kB, anon-rss:16480044kB, file-rss:0kB, shmem-rss:0kB
Oct 11 07:07:34 ip-1.23.345.678 systemd: mongod.service: main process exited, code=killed, status=9/KILL
Oct 11 07:07:34 ip-1.23.345.678 systemd: Unit mongod.service entered failed state.
Oct 11 07:07:34 ip-1.23.345.678 systemd: mongod.service failed.
Oct 11 07:07:39 ip-1.23.345.678 systemd-logind: New session 1220 of user centos.
You ran into a "problem" called the "OOM-Killer".
A quote from the part of the MongoDB documentation aptly named Production notes:
Swap
Assign swap space for your systems. Allocating swap space can avoid issues with memory contention and can prevent the OOM Killer on Linux systems from killing mongod.
For the WiredTiger storage engine, given sufficient memory pressure, WiredTiger may store data in swap space.
(emphasis by me)
What basically happened is that memory pressure on the system became critical, and the kernel decided to kill a process to free some RAM and ensure the system could keep running. It does so by computing a "badness" score for each task, roughly
badness_for_task = total_vm_for_task / (sqrt(cpu_time_in_seconds) *
sqrt(sqrt(cpu_time_in_minutes)))
and killing the task with the highest score. See https://www.kernel.org/doc/gorman/html/understand/understand016.html for details.
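As a toy illustration of why mongod is the obvious victim, plug the total_vm values from the kernel's OOM table above (mongod: 4365796 pages, one nginx worker: 31375 pages) into that formula. The CPU times (one hour) are made-up and scale both scores equally; only the relative order matters.

```shell
# Toy badness calculation: total_vm taken from the OOM table in the log,
# CPU time assumed to be 1 hour (3600 s / 60 min) for both processes.
BADNESS_MONGOD=$(awk 'BEGIN { printf "%.0f", 4365796 / (sqrt(3600) * sqrt(sqrt(60))) }')
BADNESS_NGINX=$(awk 'BEGIN { printf "%.0f", 31375 / (sqrt(3600) * sqrt(sqrt(60))) }')
echo "mongod badness: $BADNESS_MONGOD, nginx badness: $BADNESS_NGINX"
```

mongod's score comes out roughly two orders of magnitude higher than any other process's, matching the "score 503" line in the log's kill decision.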
Gist: MongoDB presumably consumes several orders of magnitude more RAM than any other process on the server, hence it gets killed by the OOM Killer when the kernel has no option to swap out some data and thereby ensure that basic system tasks can still run.
This behaviour can basically be prevented by allocating swap, which is why it is documented accordingly in the production notes.
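A minimal sketch of allocating a swap file on a Linux host along those lines. The 4G size and the /swapfile path are illustrative choices, not from the question, and the commands require root:

```shell
# Create a 4 GiB swap file (size is an illustrative choice).
sudo fallocate -l 4G /swapfile
# Swap files must only be readable by root.
sudo chmod 600 /swapfile
# Format the file as swap and enable it immediately.
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist the swap file across reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Verify the new swap is active.
swapon --show
```

With even a modest amount of swap available, the kernel can page out cold anonymous memory under pressure instead of invoking the OOM Killer against mongod.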
From the official document -
mongodump reads data from a MongoDB database and creates high fidelity BSON files which the mongorestore tool can use to populate a MongoDB database. mongodump and mongorestore are simple and efficient tools for backing up and restoring small MongoDB deployments, but are not ideal for capturing backups of larger systems.
When connected to a MongoDB instance, mongodump can adversely affect mongod performance. If your data is larger than system memory, the queries will push the working set out of memory, causing page faults.
Given the above, it is important to have swap space configured before running these commands.
As a first step, I would suggest creating swap space; look up the required steps for the specific OS you are using. This is the best I can recommend.
If that doesn't help, increase the machine's memory.

Error CrashLoopBackOff Heapster with Statsd Sink Configuration

I got an error with the latest Heapster version, v1.5.1. I've described it in detail in this GitHub issue: https://github.com/kubernetes/heapster/issues/1969
The error message:
Container Timestamp Message
heapster Mar 2, 2018, 11:03:20 AM /go/src/k8s.io/heapster/metrics/heapster.go:89 +0x458
heapster Mar 2, 2018, 11:03:20 AM main.main()
heapster Mar 2, 2018, 11:03:20 AM /go/src/k8s.io/heapster/metrics/heapster.go:194 +0x8d
heapster Mar 2, 2018, 11:03:20 AM main.createAndInitSinksOrDie(0xc42028f3b0, 0x1, 0x1, 0x0, 0x0, 0x4a817c800, 0x0, 0x0, 0xc420154740, 0x0, ...)
heapster Mar 2, 2018, 11:03:20 AM /go/src/k8s.io/heapster/metrics/sinks/factory.go:90 +0x563
heapster Mar 2, 2018, 11:03:20 AM k8s.io/heapster/metrics/sinks.(*SinkFactory).BuildAll(0xc4205e1cd8, 0xc42028f3b0, 0x1, 0x1, 0x0, 0x0, 0x7fb87afb8400, 0x0, 0x0, 0x0, ...)
heapster Mar 2, 2018, 11:03:20 AM goroutine 1 [running]:
heapster Mar 2, 2018, 11:03:20 AM
heapster Mar 2, 2018, 11:03:20 AM [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x15f77a3]
heapster Mar 2, 2018, 11:03:20 AM panic: runtime error: invalid memory address or nil pointer dereference
heapster Mar 2, 2018, 11:03:20 AM I0302 04:03:20.258025 1 configs.go:62] Using kubelet port 10255
heapster Mar 2, 2018, 11:03:20 AM I0302 04:03:20.258013 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default" and version v1
heapster Mar 2, 2018, 11:03:20 AM I0302 04:03:20.257857 1 heapster.go:79] Heapster version v1.5.1
heapster Mar 2, 2018, 11:03:20 AM I0302 04:03:20.257820 1 heapster.go:78] /heapster --source=kubernetes:https://kubernetes.default --sink="statsd:udp://dd-agent-service.default:8125"
Does anybody know how to solve this? Perhaps someone has already successfully integrated Heapster with the Datadog statsd agent in Kubernetes?
Thanks in advance

Run kube-proxy on fedora server 23 panic: runtime error: invalid memory address or nil pointer dereference

Fedora version:
[root@host3 vagrant]# cat /etc/os-release
NAME=Fedora
VERSION="23 (Twenty Three)"
ID=fedora
VERSION_ID=23
PRETTY_NAME="Fedora 23 (Twenty Three)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:23"
HOME_URL="https://fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=23
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=23
PRIVACY_POLICY_URL=https://fedoraproject.org/wiki/Legal:PrivacyPolicy
kube-proxy version:
[root@host3 vagrant]# kube-proxy --version=true
Kubernetes v1.1.2
Run command and error msg:
[root@host3 vagrant]# kube-proxy --logtostderr=true --v=0 --master=http://host1:8080 --proxy-mode=userspace --cleanup-iptables=true
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0xb1 pc=0x465e26]
goroutine 1 [running]:
k8s.io/kubernetes/cmd/kube-proxy/app.(*ProxyServer).Run(0xc2080d79d0, 0xc208046af0, 0x0, 0x5, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-proxy/app/server.go:309 +0x56
main.main()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-proxy/proxy.go:53 +0x225
goroutine 7 [chan receive]:
github.com/golang/glog.(*loggingT).flushDaemon(0x12169e0)
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/golang/glog/glog.go:879 +0x78
created by github.com/golang/glog.init·1
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/golang/glog/glog.go:410 +0x2a7
goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/usr/src/go/src/runtime/asm_amd64.s:2232 +0x1
goroutine 15 [chan receive]:
github.com/godbus/dbus.(*Conn).outWorker(0xc2080daa20)
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/godbus/dbus/conn.go:367 +0x58
created by github.com/godbus/dbus.(*Conn).Auth
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/godbus/dbus/auth.go:119 +0xea1
goroutine 12 [sleep]:
k8s.io/kubernetes/pkg/util.Until(0xda66f0, 0x12a05f200, 0xc20800a7e0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/util.go:127 +0x98
created by k8s.io/kubernetes/pkg/util.InitLogs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/logs.go:49 +0xab
goroutine 14 [IO wait]:
net.(*pollDesc).Wait(0xc2080d6920, 0x72, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2080d6920, 0x0, 0x0)
/usr/src/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).readMsg(0xc2080d68c0, 0xc2080e28c0, 0x10, 0x10, 0xc20816d220, 0x1000, 0x1000, 0xffffffffffffffff, 0x0, 0x0, ...)
/usr/src/go/src/net/fd_unix.go:296 +0x54e
net.(*UnixConn).ReadMsgUnix(0xc20803a0f0, 0xc2080e28c0, 0x10, 0x10, 0xc20816d220, 0x1000, 0x1000, 0x0, 0xc2080e276c, 0x4, ...)
/usr/src/go/src/net/unixsock_posix.go:147 +0x167
github.com/godbus/dbus.(*oobReader).Read(0xc20816d200, 0xc2080e28c0, 0x10, 0x10, 0xc20816d200, 0x0, 0x0)
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/godbus/dbus/transport_unix.go:21 +0xc5
io.ReadAtLeast(0x7f7eeae0df58, 0xc20816d200, 0xc2080e28c0, 0x10, 0x10, 0x10, 0x0, 0x0, 0x0)
/usr/src/go/src/io/io.go:298 +0xf1
io.ReadFull(0x7f7eeae0df58, 0xc20816d200, 0xc2080e28c0, 0x10, 0x10, 0x0, 0x0, 0x0)
/usr/src/go/src/io/io.go:316 +0x6d
github.com/godbus/dbus.(*unixTransport).ReadMessage(0xc2081250d0, 0xc208112660, 0x0, 0x0)
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/godbus/dbus/transport_unix.go:85 +0x1bf
github.com/godbus/dbus.(*Conn).inWorker(0xc2080daa20)
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/godbus/dbus/conn.go:241 +0x58
created by github.com/godbus/dbus.(*Conn).Auth
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/godbus/dbus/auth.go:118 +0xe84
goroutine 16 [runnable]:
k8s.io/kubernetes/pkg/util/iptables.(*runner).dbusSignalHandler(0xc2080d6850, 0x7f7eeae0e028, 0xc20803a100)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/iptables/iptables.go:525
created by k8s.io/kubernetes/pkg/util/iptables.(*runner).connectToFirewallD
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/iptables/iptables.go:186 +0x7a7
Can anyone help me?
This looks like a bug when using the --cleanup-iptables=true flag in the 1.1.2 release, as I can reproduce the panic when running on a GCE node. I've filed kubernetes#18197 on your behalf, and the bug will be fixed in the upcoming 1.1.3 release.