Recovering Kafka Cluster from a disk full error - apache-kafka

We have a 3-node Kafka cluster. For data storage, each of the 3 nodes has two mounted disks, /data/disk1 and /data/disk2. The log.dirs setting in kafka.properties is:
log.dirs=/data/disk1/kafka-logs,/data/disk2/kafka-logs
It so happened that on one of the nodes, Node1, the disk partition /data/disk2/kafka-logs became 100% full.
The reason this happened is that we were replaying data from Filebeat into a Kafka topic, and a lot of data was pushed in a very short time. I temporarily changed the retention for that topic from 7 days to 1 day, so the topic size is back to normal.
The problem is that on Node1, which has /data/disk2/kafka-logs 100% full, the Kafka process just won't start and emits this error:
Jul 08 12:03:29 broker01 kafka[23949]: [2019-07-08 12:03:29,093] INFO Recovering unflushed segment 0 in log my-topic-0. (kafka.log.Log)
Jul 08 12:03:29 broker01 kafka[23949]: [2019-07-08 12:03:29,094] INFO Completed load of log my-topic-0 with 1 log segments and log end offset 0 in 2 ms (kafka.log.Log)
Jul 08 12:03:29 broker01 kafka[23949]: [2019-07-08 12:03:29,095] ERROR There was an error in one of the threads during logs loading: java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code (kafka.log.LogManager)
Jul 08 12:03:29 broker01 kafka[23949]: [2019-07-08 12:03:29,101] FATAL [Kafka Server 1], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
Jul 08 12:03:29 broker01 kafka[23949]: java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
Jul 08 12:03:29 broker01 kafka[23949]: at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
Jul 08 12:03:29 broker01 kafka[23949]: at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
Jul 08 12:03:29 broker01 kafka[23949]: at org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.loadRecord(FileLogInputStream.java:135)
Jul 08 12:03:29 broker01 kafka[23949]: at org.apache.kafka.common.record.FileLogInputStream$FileChannelLogEntry.record(FileLogInputStream.java:149)
Jul 08 12:03:29 broker01 kafka[23949]: at kafka.log.LogSegment.$anonfun$recover$1(LogSegment.scala:22
The replication factor for most topics is either 2 or 3. So, I'm wondering if I can do the following:
1) Change the replication factor to 2 for all topics (Node 2 and Node 3 are running fine).
2) Delete some data from Node1.
3) Restart Node1.
4) Change the replication factor back to 2 or 3, as it was initially.
Does anyone know of a better way, or have a better suggestion?
Update: Steps 1 and 4 are not needed. Steps 2 and 3 are enough if you have replicas.

Your problem (and, accordingly, its solution) is similar to the one described in this question: kafka 0.9.0.1 fails to start with fatal exception
The easiest and fastest way is to delete part of the data. Once the broker starts, the data is re-replicated with the new retention.
So, I'm wondering if I can do the following...
To answer your question specifically: yes, you can perform the steps you described in sequence, and this will help return the cluster to a consistent state.
To prevent this from happening in the future, you can try using the log.retention.bytes parameter instead of log.retention.hours, although I believe a size-based retention policy is not the best choice, because in my experience you usually need to know for at least how long a topic's data will be kept in the cluster.
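For illustration, here is a minimal sketch of applying such a retention override at the topic level with the Kafka AdminClient (retention.ms / retention.bytes are the per-topic analogues of the broker-level settings mentioned above). It assumes kafka-clients 2.3+ for incrementalAlterConfigs; the broker address, topic name, and values are placeholders, not taken from the question:

import java.util.*;
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.config.ConfigResource;

public class ShrinkTopicRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker01:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic"); // placeholder topic

            // Topic-level overrides: time-based retention of 1 day, plus an
            // optional per-partition size cap (analogous to log.retention.bytes).
            Collection<AlterConfigOp> ops = Arrays.asList(
                new AlterConfigOp(new ConfigEntry("retention.ms", "86400000"), AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("retention.bytes", "10737418240"), AlterConfigOp.OpType.SET)
            );

            admin.incrementalAlterConfigs(Collections.singletonMap(topic, ops)).all().get();
        }
    }
}

With an older client you would achieve the same with the kafka-configs tooling instead; the point is only that the change is a dynamic topic config and does not require a broker restart.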

Related

Master turning into slave after redis sentinel failover

I am trying out Redis master-slave replication using sentinels.
I have 1 master, 2 slaves, and 3 sentinels, all running as separate pods.
My issue is:
1) When I delete the master pod, one of the slaves turns into the master.
2) Ideally, there should now be a new master with only one slave. For some reason, the master IP that I deleted turns into a slave of the newly elected master.
3) Is this desirable behaviour? The sentinel shows 2 slaves for the newly elected master, but in fact only 1 slave pod exists, because the master pod was deleted.
Below are the logs:
:M 29 May 2020 07:32:19.569 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
8:M 29 May 2020 07:32:19.569 # Server initialized
8:M 29 May 2020 07:32:19.569 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
8:M 29 May 2020 07:32:19.569 * Ready to accept connections
8:M 29 May 2020 07:33:22.329 * Replica 172.16.2.12:6379 asks for synchronization
8:M 29 May 2020 07:33:22.329 * Full resync requested by replica 172.16.2.12:6379
8:M 29 May 2020 07:33:22.329 * Starting BGSAVE for SYNC with target: disk
8:M 29 May 2020 07:33:22.330 * Background saving started by pid 12
12:C 29 May 2020 07:33:22.333 * DB saved on disk
12:C 29 May 2020 07:33:22.334 * RDB: 2 MB of memory used by copy-on-write
8:M 29 May 2020 07:33:22.355 * Background saving terminated with success
8:M 29 May 2020 07:33:22.356 * Synchronization with replica 172.16.2.12:6379 succeeded
8:M 29 May 2020 07:33:23.092 * Replica 172.16.4.48:6379 asks for synchronization
8:M 29 May 2020 07:33:23.092 * Full resync requested by replica 172.16.4.48:6379
8:M 29 May 2020 07:33:23.092 * Starting BGSAVE for SYNC with target: disk
8:M 29 May 2020 07:33:23.092 * Background saving started by pid 13
13:C 29 May 2020 07:33:23.097 * DB saved on disk
13:C 29 May 2020 07:33:23.097 * RDB: 2 MB of memory used by copy-on-write
8:M 29 May 2020 07:33:23.158 * Background saving terminated with success
8:M 29 May 2020 07:33:23.158 * Synchronization with replica 172.16.4.48:6379 succeeded
8:M 29 May 2020 07:36:26.866 # Connection with replica 172.16.2.12:6379 lost.
8:M 29 May 2020 07:36:27.871 # Connection with replica 172.16.4.48:6379 lost.
8:S 29 May 2020 07:36:37.926 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
8:S 29 May 2020 07:36:37.927 * REPLICAOF 172.16.2.12:6379 enabled (user request from 'id=21 addr=172.16.3.135:56721 fd=9 name=sentinel-5261eb21-cmd age=10 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=151 qbuf-free=32617 obl=36 oll=0 omem=0 events=r cmd=exec')
8:S 29 May 2020 07:36:37.933 # CONFIG REWRITE executed with success.
8:S 29 May 2020 07:36:38.284 * Connecting to MASTER 172.16.2.12:6379
8:S 29 May 2020 07:36:38.284 * MASTER <-> REPLICA sync started
8:S 29 May 2020 07:36:38.284 * Non blocking connect for SYNC fired the event.
8:S 29 May 2020 07:36:38.285 * Master replied to PING, replication can continue...
8:S 29 May 2020 07:36:38.285 * Trying a partial resynchronization (request 563ca4b5f67f1e24c129729eaa74800b108902a3:52568).
8:S 29 May 2020 07:36:38.321 * Full resync from master: f21b8c35187b109b621605b375ef62e61b301834:52901
8:S 29 May 2020 07:36:38.321 * Discarding previously cached master state.
8:S 29 May 2020 07:36:38.356 * MASTER <-> REPLICA sync: receiving 178 bytes from master
8:S 29 May 2020 07:36:38.356 * MASTER <-> REPLICA sync: Flushing old data
8:S 29 May 2020 07:36:38.356 * MASTER <-> REPLICA sync: Loading DB in memory
8:S 29 May 2020 07:36:38.356 * MASTER <-> REPLICA sync: Finished with success
I am using Redis 5.0. Earlier I was using Redis 4.0, and I did not face this issue.

Kafka: deleting messages from topics with retention "compact"

I am trying to implement a minimal working example of compacted topics in Kafka with Java. I got compaction working well, but I cannot see deletes happening when I write messages with a key and a null value, as described in the Kafka documentation.
Version of library used: kafka-clients-0.10.0.0.jar
Here is a gist of a Java class reproducing the behaviour:
https://gist.github.com/anonymous/f78184eaeec3ee82b15182aec24a432a
Furthermore, following the documentation, I used the following topic-level configuration so that compaction kicks in as quickly as possible:
min.cleanable.dirty.ratio=0.01
cleanup.policy=compact
segment.ms=100
delete.retention.ms=100
On the server.properties side, just to be sure:
log.retention.check.interval.ms=100
log.cleaner.delete.retention.ms=100
log.cleaner.enable=true
log.cleaner.min.cleanable.ratio=0.01
When run, this class shows that compaction works: there is only ever one message per key on the topic. However, I still see the message with the null value, which in my opinion should have been deleted.
I can see the cleaner threads running, producing output like:
[2016-08-11 12:30:21,032] INFO Cleaner 0: Cleaning segment 15 in log compaction-test-0 (last modified Thu Aug 11 12:29:52 CEST 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
Does anyone know why it's "retaining deletes"? Am I missing any relevant configuration option? Am I writing "null" in the correct way?
Any ideas are greatly appreciated. Thanks in advance!
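(For reference, the tombstone writes in question look roughly like the sketch below, using the standard producer API; the topic name and broker address are placeholders and are not taken from the gist.)

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TombstoneExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A record with a key and a null value is a tombstone: after compaction
            // and delete.retention.ms have passed, the key should disappear from the topic.
            producer.send(new ProducerRecord<>("compaction-test", "some-key", null));
            producer.flush();
        }
    }
}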
UPDATE: After investigating helpful comments, I upgraded to 0.10.0.1 and found the following output in the cleaner log:
[2016-08-15 12:44:57,412] INFO Cleaner 0: Cleaning log compaction-test-0 (discarding tombstones prior to Mon Aug 15 12:44:40 CEST 2016)... (kafka.log.LogCleaner)
[2016-08-15 12:44:57,412] INFO Cleaner 0: Cleaning segment 0 in log compaction-test-0 (last modified Mon Aug 15 12:44:41 CEST 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
[2016-08-15 12:44:57,412] INFO Cleaner 0: Cleaning segment 15 in log compaction-test-0 (last modified Mon Aug 15 12:44:41 CEST 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
[2016-08-15 12:44:57,413] INFO Cleaner 0: Cleaning segment 16 in log compaction-test-0 (last modified Mon Aug 15 12:44:56 CEST 2016) into 0, retaining deletes. (kafka.log.LogCleaner)
As "retaining deletes" is set by
val retainDeletes = old.lastModified > deleteHorizonMs
and the last modification date of the segment in question always seems to be slightly later than the delete horizon, deletion never happens in my minimal example.
I am now wondering how to adjust the settings or the test to deal with this...
This problem has been fixed in 0.10.1. See this JIRA: https://issues.apache.org/jira/browse/KAFKA-4015

Kubernetes scheduler: watch of *api.Pod ended with error: unexpected end of JSON input

Yesterday the service worked fine, but today when I checked the service's state I saw:
Mar 11 14:03:16 coreos-1 systemd[1]: scheduler.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 11 14:03:16 coreos-1 systemd[1]: Unit scheduler.service entered failed state.
Mar 11 14:03:16 coreos-1 systemd[1]: scheduler.service failed.
Mar 11 14:03:16 coreos-1 systemd[1]: Starting Kubernetes Scheduler...
Mar 11 14:03:16 coreos-1 systemd[1]: Started Kubernetes Scheduler.
Mar 11 14:08:16 coreos-1 kube-scheduler[4659]: E0311 14:08:16.808349 4659 reflector.go:118] watch of *api.Service ended with error: very short watch
Mar 11 14:08:16 coreos-1 kube-scheduler[4659]: E0311 14:08:16.811434 4659 reflector.go:118] watch of *api.Pod ended with error: unexpected end of JSON input
Mar 11 14:08:16 coreos-1 kube-scheduler[4659]: E0311 14:08:16.847595 4659 reflector.go:118] watch of *api.Pod ended with error: unexpected end of JSON input
It's really confusing, because etcd, flannel, and the apiserver work fine.
The only strange logs are from etcd:
Mar 11 20:22:21 coreos-1 etcd[472]: [etcd] Mar 11 20:22:21.572 INFO | aba44aa0670b4b2e8437c03a0286d779: warning: heartbeat time out peer="6f4934635b6b4291bf29763add9bf4c7" missed=1 backoff="2s"
Mar 11 20:22:48 coreos-1 etcd[472]: [etcd] Mar 11 20:22:48.269 INFO | aba44aa0670b4b2e8437c03a0286d779: warning: heartbeat time out peer="6f4934635b6b4291bf29763add9bf4c7" missed=1 backoff="2s"
Mar 11 20:48:12 coreos-1 etcd[472]: [etcd] Mar 11 20:48:12.070 INFO | aba44aa0670b4b2e8437c03a0286d779: warning: heartbeat time out peer="6f4934635b6b4291bf29763add9bf4c7" missed=1 backoff="2s"
So I'm really stuck and don't know what's wrong. How can I resolve this problem? Or how can I check detailed logs for the scheduler?
journalctl gives me the same logs as the systemd status output.
Please see: https://github.com/GoogleCloudPlatform/kubernetes/issues/5311
It means the apiserver accepted the watch request but then immediately terminated the connection.
If you see it occasionally, it implies a transient error and is not alarming. If you see it repeatedly, it implies that apiserver (or etcd) is sick.
Is something actually not working for you?

Couchbase: 20k items stuck in Tap Queue

We are currently evaluating Couchbase, primarily as a memcached replacement. Our setup looks like this:
php -> localhost moxi -> couchbase bucket (Total bucket size = 10240 MB (2048 MB x 5 nodes with replica count 1))
The servers have 16 GB RAM and are SSD-backed.
We were inserting at about 400 ops/s and had no problems for a few days, until we reached about 13 million items. We then found out that we had forgotten to implement the delete function in our test setup, and a lot of keys had no expiration set.
To start over, we flushed the bucket through the web interface. This is where our problems began.
We started to see temp OOMs and back-offs, and the TAP queue was filled with 20k items. The drain and fill rates were nearly the same (see the attached screenshot).
What also caught our eye was that node 4 had only 220k items, whereas every other node had around 1.39M.
Somehow it looks like replication messed something up, but I'm relatively new to Couchbase. Any hints or suggestions?
The problem was solved for a short time after removing the failing node from the cluster.
Now, with the four remaining nodes in the cluster, the same thing happened again after a few hours with another node. We tried putting the now-failing node into failover state. That fixed the problem again, but after re-adding the node, the same phenomenon happened again on that node.
Other things we noticed:
* Three out of four nodes have thousands of items in their TAP replication queue, but one ("the failing one") has 0.
* Also, three out of four nodes have a back-off rate of around 400, but one ("the failing one") has 0.
* Only the failing one has a massive amount of "Temp OOMs per second"; the other three have 0.
The phenomenon seems to disappear if we lower the load on the servers by disabling the Couchbase writes for one of the two software projects writing to Couchbase.
But if we enable the writes again, after around 10 minutes we see this in memcached.log on the failing node:
Tue Dec 17 12:29:05.010547 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 12:29:05.010576 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=277 rev=522
Tue Dec 17 12:29:08.748103 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 12:29:08.748257 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=321 rev=948
Tue Dec 17 12:40:17.354448 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 12:40:17.354476 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=303 rev=491
This error then occurs around 5 times within four hours:
Tue Dec 17 14:19:32.145071 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
After these four hours, it starts spamming this constantly (perhaps because the load increased heavily; in the evening our page generates much more load than in the morning or at noon), together with the "error from mccouch":
Tue Dec 17 16:42:30.875343 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:42:36.493317 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:25.239876 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 16:43:25.240052 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=296 rev=483
Tue Dec 17 16:43:25.903997 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:31.906178 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:36.913045 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:42.919114 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:48.920354 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:54.924017 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:44:00.928572 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
We have no clue what is happening here, or why this failing node seems to reject every replication and throws this error.
Do you have any idea?
Thanks for all your help and greetings from Cologne,
Andy!
Seeing as you just want to delete all items in the bucket, have you tried simply deleting and re-creating the bucket?
This will be much faster than flush, as flush actually needs to send a delete request for every document in the bucket.
I can't find it in the docs at the moment, but I think flush is not really recommended in the latest versions.
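A minimal sketch of that approach, assuming the 2.x Java SDK's ClusterManager API (the host, credentials, bucket name, and quota below are placeholders, not from the question):

import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.cluster.ClusterManager;
import com.couchbase.client.java.cluster.DefaultBucketSettings;

public class RecreateBucket {
    public static void main(String[] args) {
        CouchbaseCluster cluster = CouchbaseCluster.create("127.0.0.1");                    // placeholder host
        ClusterManager manager = cluster.clusterManager("Administrator", "password");        // placeholder credentials

        // Dropping and re-creating the bucket acts as an effectively instant "flush".
        manager.removeBucket("default");                                                     // placeholder bucket name
        manager.insertBucket(DefaultBucketSettings.builder()
                .name("default")
                .quota(2048)    // RAM quota in MB per node
                .replicas(1)
                .build());

        cluster.disconnect();
    }
}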
You are not saying what your operating system is. If it's Linux, try checking the maximum number of open sockets/files for the user running Couchbase. Check the file /etc/security/limits.conf.
The command to check this on Linux is: ulimit -Hn.
Hope that helps.
Daniel
I think you should try these settings:
http://docs.couchbase.com/couchbase-manual-2.1/#specifying-backoff-for-replication

MongoDB: How to remove an index on a replicaset?

I see that the MongoDB documentation says that an index is removed by calling db.accounts.dropIndex( { "tax-id": 1 } ), but it does not say whether the node needs to be removed from the replica set or not.
I tried taking a secondary node in the replica set offline, restarting it as a standalone node (on a different port), and dropping the index there.
But after bringing the node back into the replica set with the regular process (sudo service mongod start), the mongod process dies, saying the index is corrupted:
Thu Oct 31 19:52:38.098 [repl writer worker 1] Assertion: 15898:error in index possibly corruption consider repairing 382
0xdddd81 0xd9f55b 0xd9fa9c 0x7edb83 0x7fb332 0x7fdc08 0x9d3b50 0x9c796e 0x9deb64 0xac45dd 0xac58df 0xa903fa 0xa924c7 0xa71f6c 0xc273d3 0xc26b18 0xdab721 0xe26609 0x7ff4d05f0c6b 0x7ff4cf9965ed
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdddd81]
/usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0x9b) [0xd9f55b]
/usr/bin/mongod() [0xd9fa9c]
/usr/bin/mongod(_ZN5mongo11checkFailedEj+0x143) [0x7edb83]
/usr/bin/mongod(_ZNK5mongo12BucketBasicsINS_12BtreeData_V1EE11basicInsertENS_7DiskLocERiS3_RKNS_5KeyV1ERKNS_8OrderingE+0x222) [0x7fb332]
/usr/bin/mongod(_ZNK5mongo11BtreeBucketINS_12BtreeData_V1EE10insertHereENS_7DiskLocEiS3_RKNS_5KeyV1ERKNS_8OrderingES3_S3_RNS_12IndexDetailsE+0x68) [0x7fdc08]
/usr/bin/mongod(_ZNK5mongo30IndexInsertionContinuationImplINS_12BtreeData_V1EE22doIndexInsertionWritesEv+0xa0) [0x9d3b50]
/usr/bin/mongod(_ZN5mongo14IndexInterface13IndexInserter19finishAllInsertionsEv+0x1e) [0x9c796e]
/usr/bin/mongod(_ZN5mongo24indexRecordUsingTwoStepsEPKcPNS_16NamespaceDetailsENS_7BSONObjENS_7DiskLocEb+0x754) [0x9deb64]
/usr/bin/mongod(_ZN5mongo11DataFileMgr6insertEPKcPKvibbbPb+0x123d) [0xac45dd]
/usr/bin/mongod(_ZN5mongo11DataFileMgr16insertWithObjModEPKcRNS_7BSONObjEbb+0x4f) [0xac58df]
/usr/bin/mongod(_ZN5mongo14_updateObjectsEbPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEPNS_11RemoveSaverEbRKNS_24QueryPlanSelectionPolicyEb+0x2eda) [0xa903fa]
/usr/bin/mongod(_ZN5mongo27updateObjectsForReplicationEPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEbRKNS_24QueryPlanSelectionPolicyE+0xb7) [0xa924c7]
/usr/bin/mongod(_ZN5mongo21applyOperation_inlockERKNS_7BSONObjEbb+0x65c) [0xa71f6c]
/usr/bin/mongod(_ZN5mongo7replset8SyncTail9syncApplyERKNS_7BSONObjEb+0x713) [0xc273d3]
/usr/bin/mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x48) [0xc26b18]
/usr/bin/mongod(_ZN5mongo10threadpool6Worker4loopEv+0x281) [0xdab721]
/usr/bin/mongod() [0xe26609]
/lib64/libpthread.so.0(+0x7c6b) [0x7ff4d05f0c6b]
/lib64/libc.so.6(clone+0x6d) [0x7ff4cf9965ed]
Thu Oct 31 19:52:38.106 [repl writer worker 1] ERROR: writer worker caught exception: error in index possibly corruption consider repairing 382 on:
xxxxxxxx--deleted content related to the data...xxxxxxxxxxxxx
Thu Oct 31 19:52:38.106 [repl writer worker 1] Fatal Assertion 16360
0xdddd81 0xd9dc13 0xc26bfc 0xdab721 0xe26609 0x7ff4d05f0c6b 0x7ff4cf9965ed
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdddd81]
/usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xa3) [0xd9dc13]
/usr/bin/mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x12c) [0xc26bfc]
/usr/bin/mongod(_ZN5mongo10threadpool6Worker4loopEv+0x281) [0xdab721]
/usr/bin/mongod() [0xe26609]
/lib64/libpthread.so.0(+0x7c6b) [0x7ff4d05f0c6b]
/lib64/libc.so.6(clone+0x6d) [0x7ff4cf9965ed]
Thu Oct 31 19:52:38.108 [repl writer worker 1]
***aborting after fassert() failure
Thu Oct 31 19:52:38.108 Got signal: 6 (Aborted).
Is this due to dropping the index while the secondary was offline? Any suggestions on the proper way to drop the index are highly appreciated.
The proper way to remove an index from a replica set is to drop it on the primary. The idea of a replica set is that every member holds the same copy of the data (with small time lags), so whatever you do on the primary is copied to the secondaries. Once an operation finishes on the primary, it propagates to the secondaries.
If you remove the index on the primary, the index will be removed on the secondaries as well.
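For illustration, a minimal sketch with the MongoDB Java driver (assuming a 3.7+ sync driver; the host names, replica-set name, and database name are placeholders). Connecting through the replica-set URI makes the driver route the command to the current primary, and replication then removes the index on the secondaries:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class DropIndexOnPrimary {
    public static void main(String[] args) {
        // Placeholder hosts and replica-set name; the driver discovers the primary itself.
        try (MongoClient client = MongoClients.create(
                "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0")) {

            MongoCollection<Document> accounts =
                    client.getDatabase("mydb").getCollection("accounts"); // placeholder database name

            // Drop the index on the primary; the drop is replicated to the secondaries.
            accounts.dropIndex(Indexes.ascending("tax-id"));
        }
    }
}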