Unable to send Kafka message through port forwarding - CentOS

I have an issue sending messages to Kafka through port forwarding. We use port forwarding for Kafka like this:
| Hostname | IP | Port | Port Forwarding |
|----------|---------------|------|-----------------------|
| kafka01 | 192.168.0.100 | 9092 | 106.107.118.119:30124 |
| kafka02 | 192.168.0.101 | 9092 | 106.107.118.119:30125 |
| kafka03 | 192.168.0.102 | 9092 | 106.107.118.119:30126 |
To connect to a Kafka broker from my localhost, I have to use a VPN to reach the addresses 192.168.0.X:9092, and sending messages to Kafka from my localhost (my laptop) works successfully. But when I deploy my application to the real servers, which have to use port forwarding to reach the Kafka brokers, no messages are sent to Kafka and no error is displayed.
From the real servers I can still connect to the Kafka brokers (telnet 106.107.118.119 30124 succeeds). I have no idea why. This is the producer config:
INFO | jvm 1 | 2017/07/29 16:34:36 | [2017-07-29 04:34:36] INFO - ProducerConfig values:
INFO | jvm 1 | 2017/07/29 16:34:36 | compression.type = none
INFO | jvm 1 | 2017/07/29 16:34:36 | metric.reporters = []
INFO | jvm 1 | 2017/07/29 16:34:36 | metadata.max.age.ms = 300000
INFO | jvm 1 | 2017/07/29 16:34:36 | metadata.fetch.timeout.ms = 60000
INFO | jvm 1 | 2017/07/29 16:34:36 | acks = 1
INFO | jvm 1 | 2017/07/29 16:34:36 | batch.size = 16384
INFO | jvm 1 | 2017/07/29 16:34:36 | reconnect.backoff.ms = 10
INFO | jvm 1 | 2017/07/29 16:34:36 | bootstrap.servers = [106.107.118.119:30124, 106.107.118.119:30125, 106.107.118.119:30126]
INFO | jvm 1 | 2017/07/29 16:34:36 | receive.buffer.bytes = 32768
INFO | jvm 1 | 2017/07/29 16:34:36 | retry.backoff.ms = 100
INFO | jvm 1 | 2017/07/29 16:34:36 | buffer.memory = 33554432
INFO | jvm 1 | 2017/07/29 16:34:36 | timeout.ms = 30000
INFO | jvm 1 | 2017/07/29 16:34:36 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
INFO | jvm 1 | 2017/07/29 16:34:36 | retries = 0
INFO | jvm 1 | 2017/07/29 16:34:36 | max.request.size = 1048576
INFO | jvm 1 | 2017/07/29 16:34:36 | block.on.buffer.full = true
INFO | jvm 1 | 2017/07/29 16:34:36 | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
INFO | jvm 1 | 2017/07/29 16:34:36 | metrics.sample.window.ms = 30000
INFO | jvm 1 | 2017/07/29 16:34:36 | send.buffer.bytes = 131072
INFO | jvm 1 | 2017/07/29 16:34:36 | max.in.flight.requests.per.connection = 5
INFO | jvm 1 | 2017/07/29 16:34:36 | metrics.num.samples = 2
INFO | jvm 1 | 2017/07/29 16:34:36 | linger.ms = 1
INFO | jvm 1 | 2017/07/29 16:34:36 | client.id =
INFO | jvm 1 | 2017/07/29 16:34:36 | - in logAll() at line 113 of class org.apache.kafka.common.config.AbstractConfig
It does not show any error, but when I check with a simple Kafka consumer application there is no message in Kafka, while there are messages when sending from my computer (bootstrap.servers=[192.168.0.100:9092, 192.168.0.101:9092, 192.168.0.102:9092]). Does anyone know about this issue?
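Two things worth noting here. First, producer.send() is asynchronous: without a callback or a blocking get() on the returned Future, a delivery failure never reaches the log, which would explain the silence. Second, a plausible (unconfirmed) culprit with port forwarding is that the brokers return their internal advertised addresses (192.168.0.x:9092) in the metadata response, so the producer bootstraps fine through 106.107.118.119:3012x and then fails to reach the brokers themselves. A minimal probe that surfaces the hidden error, as a hedged sketch (the topic name test-topic is an assumption):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProbeProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "106.107.118.119:30124");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<>(props);
        try {
            // send() is asynchronous: without a callback or a blocking get(),
            // a delivery failure is silently swallowed.
            producer.send(new ProducerRecord<>("test-topic", "key", "probe"), (metadata, exception) -> {
                if (exception != null) {
                    // Expect e.g. a timeout here if the metadata points at
                    // broker addresses the server cannot reach.
                    exception.printStackTrace();
                } else {
                    System.out.println("Delivered to partition " + metadata.partition()
                            + " at offset " + metadata.offset());
                }
            }).get(); // block so the JVM does not exit before the result arrives
        } finally {
            producer.close();
        }
    }
}

If this prints a timeout naming 192.168.0.x addresses, the brokers' advertised listener configuration is what needs to change, not the client.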

Related

pg_wal folder is very large

I have a 3-node Postgres cluster: etcd + Patroni + PostgreSQL 13.
There is a problem with a constantly growing pg_wal folder; it now contains 5127 files. After searching the internet, I found an article advising me to pay attention to the following database parameters (their values at the time were):
archive_mode off;
wal_level replica;
max_wal_size 1G;
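These can be confirmed at the prompt (pg_settings is the standard catalog view):

postgres=# SELECT name, setting, unit FROM pg_settings WHERE name IN ('archive_mode', 'wal_level', 'max_wal_size', 'wal_keep_segments');

The replication-slot listing below is the other thing worth reading closely: an inactive slot (active = f) pins WAL indefinitely and is the classic cause of unbounded pg_wal growth, though both slots here are active.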
postgres=# SELECT * FROM pg_replication_slots;
-[ RECORD 1 ]-------+------------
slot_name | db2
plugin |
slot_type | physical
datoid |
database |
temporary | f
active | t
active_pid | 2247228
xmin |
catalog_xmin |
restart_lsn | 2D/D0ADC308
confirmed_flush_lsn |
wal_status | reserved
safe_wal_size |
-[ RECORD 2 ]-------+------------
slot_name | db1
plugin |
slot_type | physical
datoid |
database |
temporary | f
active | t
active_pid | 2247227
xmin |
catalog_xmin |
restart_lsn | 2D/D0ADC308
confirmed_flush_lsn |
wal_status | reserved
safe_wal_size |
All other Patroni cluster functionality works (switchover, reinit, replication):
root@srvdb3:~# patronictl -c /etc/patroni/patroni.yml list
+ Cluster: mobile (7173650272103321745) ---------+----+-----------+
| Member | Host       | Role    | State   | TL | Lag in MB |
+--------+------------+---------+---------+----+-----------+
| db1    | 10.01.1.01 | Replica | running | 17 |         0 |
| db2    | 10.01.1.02 | Replica | running | 17 |         0 |
| db3    | 10.01.1.03 | Leader  | running | 17 |           |
+--------+------------+---------+---------+----+-----------+
Patroni configuration (patronictl edit-config):
loop_wait: 10
maximum_lag_on_failover: 1048576
postgresql:
  parameters:
    checkpoint_timeout: 30
    hot_standby: 'on'
    max_connections: '1100'
    max_replication_slots: 5
    max_wal_senders: 5
    shared_buffers: 2048MB
    wal_keep_segments: 5120
    wal_level: replica
  use_pg_rewind: true
  use_slots: true
retry_timeout: 10
ttl: 100
Please help: what could be the matter?
This is what I see in pg_stat_archiver:
postgres=# select * from pg_stat_archiver;
-[ RECORD 1 ]------+------------------------------
archived_count | 0
last_archived_wal |
last_archived_time |
failed_count | 0
last_failed_wal |
last_failed_time |
stats_reset | 2023-01-06 10:21:45.615312+00
If you have wal_keep_segments set to 5120, it is completely normal to have 5127 WAL segments in pg_wal, because PostgreSQL will always retain at least 5120 old WAL segments. If that is too many for you, reduce the parameter. Since you are using replication slots (which already retain whatever WAL the replicas still need), the only disadvantage of reducing it is that you might only be able to run pg_rewind soon after a failover.
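For scale: with the default 16 MB segment size, wal_keep_segments = 5120 pins roughly 80 GB of WAL. A quick way to confirm what pg_wal actually holds (pg_ls_waldir() exists from PostgreSQL 10 on and requires superuser or pg_monitor rights):

postgres=# SELECT count(*) AS wal_files,
                  pg_size_pretty(sum(size)) AS total_size
           FROM pg_ls_waldir()
           WHERE name ~ '^[0-9A-F]{24}$';  -- skip .history and other non-segment files

If the count hovers just above 5120, the retention really is driven by wal_keep_segments rather than by a stuck slot.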

Issue in PostgreSQL HA mode: switchover of the master node

I am new to PostgreSQL DB configuration. I am trying to configure PostgreSQL in HA mode with the help of pgpool and an Elastic IP. The full setup runs on AWS RHEL 8 servers.
pgpool version: 4.1.2
postgres version: 12
I followed the links below during the configuration:
https://www.pgpool.net/docs/pgpool-II-4.1.2/en/html/example-cluster.html#EXAMPLE-CLUSTER-STRUCTURE
https://www.pgpool.net/docs/42/en/html/example-aws.html
https://www.enterprisedb.com/docs/pgpool/latest/03_configuring_connection_pooling/
Currently the postgres and pgpool services are up on all 3 nodes. But if I stop the master postgres service/server, the whole setup goes down and the standby node does not take the place of the master. Here is the status of the pool nodes when the master is down:
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+--------------+------+--------+-----------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
0 | server1 | 5432 | down | 0.333333 | standby | 0 | false | 0 | | | 2022-10-12 12:10:13
1 | server2 | 5432 | up | 0.333333 | standby | 0 | true | 0 | | | 2022-10-13 09:16:07
2 | server3 | 5432 | up | 0.333333 | standby | 0 | false | 0 | | | 2022-10-13 09:16:07
Any help would be appreciated. Thanks in advance.
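For this symptom, one common cause (not confirmed from the question itself): Pgpool-II only promotes a standby if health checks are enabled and failover_command is configured to run a promotion script; if the script is missing or not executable, the backend is merely marked down, as in the status output above, and nothing is promoted. A hedged sketch of the relevant pgpool.conf entries, following the example-cluster document linked above (script path and user are assumptions):

# pgpool.conf - health checks must be on for pgpool to notice the dead primary
health_check_period = 5
health_check_timeout = 30
health_check_user = 'pgpool'        # assumption: adjust to your setup

# Script pgpool runs to promote a standby when a backend fails.
# %d = failed node id, %h = failed host, %m = new main node id, %H = new main host
failover_command = '/etc/pgpool-II/failover.sh %d %h %p %D %m %H %M %P %r %R'

Checking the pgpool log for whether failover.sh was invoked at all (and with what exit code) usually narrows this down quickly.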

OrientDB distributed mode : data not getting distributed across various nodes

I have started OrientDB Enterprise 2.2.7 with two nodes. Here is how my setup looks.
CONFIGURED SERVERS
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|# |Name |Status|Connections|StartedOn |Binary |HTTP |UsedMemory |FreeMemory |MaxMemory|
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
|0 |Batman|ONLINE|3 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|480.98MB (94.49%)|28.02MB (5.51%) |509.00MB |
|1 |Robin |ONLINE|3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |403.50MB (79.35%)|105.00MB (20.65%)|508.50MB |
+----+------+------+-----------+-------------------+---------------+---------------+-----------------+-----------------+---------+
orientdb {db=SocialPosts3}> clusters
Now I have two vertex classes, User and Notes, with an edge type Posted. All vertices and edges have properties. There is also a unique index on each of the two vertex classes.
I started pushing data using the Java API:
while (retry++ != MAX_RETRY) {
    try {
        properties.put(uniqueIndexname, uniqueIndexValue);
        // Look the vertex up by its unique index first (upsert semantics)
        Iterable<Vertex> resultset = graph.getVertices(className,
                new String[] { uniqueIndexname }, new Object[] { uniqueIndexValue });
        if (resultset != null) {
            Iterator<Vertex> it = resultset.iterator();
            vertex = it.hasNext() ? it.next() : null;
        }
        if (vertex == null) {
            // Not found: create it
            vertex = graph.addVertex("class:" + className, properties);
            graph.commit();
            return vertex;
        } else {
            // Found: update its properties in place
            for (String key : properties.keySet()) {
                vertex.setProperty(key, properties.get(key));
            }
        }
        logger.info("Completed upserting vertex " + uniqueIndexValue);
        graph.commit();
        break;
    } catch (ONeedRetryException ex) {
        logger.warn("Retry for exception - " + uniqueIndexValue);
    } catch (Exception e) {
        logger.error("Can not create vertex - " + e.getMessage());
        graph.rollback();
        break;
    }
}
Similarly for the Notes and edges.
I populated around 200K Users and 3.5M Notes. Now I notice that all the data is going to only one node.
Running the "clusters" command shows that all the heavily populated clusters are owned by the same node, and hence almost all data is present on only one node:
|22 |note | 26|Note | | 75| Robin | [Batman] | true |
|23 |note_1 | 27|Note | |1750902| Batman | [Robin] | true |
|24 |note_2 | 28|Note | |1750789| Batman | [Robin] | true |
|25 |note_3 | 29|Note | | 75| Robin | [Batman] | true |
|26 |posted | 34|Posted | | 0| Robin | [Batman] | true |
|27 |posted_1 | 35|Posted | | 1| Robin | [Batman] | true |
|28 |posted_2 | 36|Posted | |1739823| Batman | [Robin] | true |
|29 |posted_3 | 37|Posted | |1749250| Batman | [Robin] | true |
|30 |user | 30|User | | 102059| Batman | [Robin] | true |
|31 |user_1 | 31|User | | 1| Robin | [Batman] | true |
|32 |user_2 | 32|User | | 0| Robin | [Batman] | true |
|33 |user_3 | 33|User | | 102127| Batman | [Robin] | true |
I see the CPU of one node at ~99% while the other is at <1%.
How can I make sure that data is uniformly distributed across all nodes in the cluster?
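Some context on why this happens: in OrientDB's distributed mode each cluster has exactly one owner server, and by default a client's inserts land in clusters owned by the server it is connected to. The counts above (note_1/note_2 owned by Batman holding ~1.75M records each, versus 75 in the Robin-owned clusters) suggest the loader only ever talked to Batman. A hedged sketch of spreading the load by alternating connections between the two servers (addresses come from the setup above; credentials, records, and the upsertVertex helper are placeholders wrapping the upsert loop shown earlier):

import com.tinkerpop.blueprints.impls.orient.OrientGraph;
import com.tinkerpop.blueprints.impls.orient.OrientGraphFactory;

// One factory per server; getTx() hands out transactional graph instances.
OrientGraphFactory batman = new OrientGraphFactory("remote:10.0.0.195/SocialPosts3", "root", "pwd");
OrientGraphFactory robin  = new OrientGraphFactory("remote:10.0.0.37/SocialPosts3", "root", "pwd");

for (int i = 0; i < records.size(); i++) {
    // Alternate servers so new records land in clusters owned by both nodes
    OrientGraph graph = ((i % 2 == 0) ? batman : robin).getTx();
    try {
        upsertVertex(graph, records.get(i)); // placeholder for the upsert loop above
    } finally {
        graph.shutdown(); // returns the instance to the factory's pool
    }
}

This also splits the CPU load that is currently pinned at ~99% on one node.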
Update:
The database is propagated to both nodes. I can log in to Studio on either node and see the database listed. Querying either node gives the same results, so the nodes are in sync.
Server log from one of the nodes (it is almost the same on the other node):
2016-08-18 19:28:49:668 INFO [Robin]<-[Batman] Received new status Batman.SocialPosts3=SYNCHRONIZING [OHazelcastPlugin]
2016-08-18 19:28:49:670 INFO [Robin] Current node started as MASTER for database 'SocialPosts3' [OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=2)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+
| | | | MASTER |
| | | |SYNCHRONIZING|
+--------+-----------+----------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman |
+--------+-----------+----------+-------------+
|* | 1 | 1 | X |
|internal| 1 | 1 | |
+--------+-----------+----------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:671 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:766 INFO [Robin] Adding node 'Robin' in partition: SocialPosts3 db=[*] v=3 [ODistributedDatabaseImpl$1]
2016-08-18 19:28:49:767 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+-----------+----------+-------------+-------------+
| | | | MASTER | MASTER |
| | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+-----------+----------+-------------+-------------+
|CLUSTER |writeQuorum|readQuorum| Batman | Robin |
+--------+-----------+----------+-------------+-------------+
|* | 2 | 1 | X | o |
|internal| 2 | 1 | | |
+--------+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:49:767 INFO [Robin] Saving distributed configuration file for database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3/distributed-config.json [OHazelcastPlugin]
2016-08-18 19:28:49:769 WARNI [Robin]->[[Batman]] Requesting deploy of database 'SocialPosts3' on local server... [OHazelcastPlugin]
2016-08-18 19:28:52:192 INFO [Robin]<-[Batman] Copying remote database 'SocialPosts3' to: /tmp/orientdb/install_SocialPosts3.zip [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin]<-[Batman] Installing database 'SocialPosts3' to: /mnt/ebs/orientdb/orientdb-enterprise-2.2.7/databases/SocialPosts3... [OHazelcastPlugin]
2016-08-18 19:28:52:193 INFO [Robin] - writing chunk #1 offset=0 size=43.38KB [OHazelcastPlugin]
2016-08-18 19:28:52:194 INFO [Robin] Database copied correctly, size=43.38KB [ODistributedAbstractPlugin$3]
2016-08-18 19:28:52:279 WARNI {db=SocialPosts3} Storage 'SocialPosts3' was not closed properly. Will try to recover from write ahead log [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 SEVER {db=SocialPosts3} Restore is not possible because write ahead log is empty. [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:279 INFO {db=SocialPosts3} Storage data recover was completed [OEnterpriseLocalPaginatedStorage]
2016-08-18 19:28:52:294 INFO {db=SocialPosts3} [Robin] Installed database 'SocialPosts3' (LSN=OLogSequenceNumber{segment=0, position=24}) [OHazelcastPlugin]
2016-08-18 19:28:52:304 INFO [Robin] Reassigning cluster ownership for database SocialPosts3 [OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] New distributed configuration for database: SocialPosts3 (version=3)
CLUSTER CONFIGURATION (LEGEND: X = Owner, o = Copy)
+--------+----+-----------+----------+-------------+-------------+
| | | | | MASTER | MASTER |
| | | | |SYNCHRONIZING|SYNCHRONIZING|
+--------+----+-----------+----------+-------------+-------------+
|CLUSTER | id|writeQuorum|readQuorum| Batman | Robin |
+--------+----+-----------+----------+-------------+-------------+
|* | | 2 | 1 | X | o |
|internal| 0| 2 | 1 | | |
+--------+----+-----------+----------+-------------+-------------+
[OHazelcastPlugin]
2016-08-18 19:28:52:305 INFO [Robin] Distributed servers status:
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Name |Status|Databases |Conns|StartedOn |Binary |HTTP |UsedMemory |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+
|Batman|ONLINE|GoodBoys=ONLINE (MASTER) |5 |2016-08-16 15:28:23|10.0.0.195:2424|10.0.0.195:2480|426.47MB/509.00MB (83.79%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
|Robin*|ONLINE|GoodBoys=ONLINE (MASTER) |3 |2016-08-16 15:29:40|10.0.0.37:2424 |10.0.0.37:2480 |353.77MB/507.50MB (69.71%)|
| | |SocialPosts=ONLINE (MASTER) | | | | | |
| | |GratefulDeadConcerts=ONLINE (MASTER)| | | | | |
| | |SocialPosts3=SYNCHRONIZING (MASTER) | | | | | |
| | |SocialPosts2=ONLINE (MASTER) | | | | | |
+------+------+------------------------------------+-----+-------------------+---------------+---------------+--------------------------+

Getting NullPointerException from Play framework

I have been using Play Framework version 2.2.4. While executing a particular action I am getting a NullPointerException. If it came from my code I could fix it, but it comes from the Play library.
Could someone help with this?
The action:
public static Result getOverViewPage(final String errorMsg) {
    String autoBillStatusMsg = "";
    // business logic is here
    Logger.info("Final Auto-pay status message : " + autoBillStatusMsg);
    return ok(rave_customer_index.render());
}
We are able to see the above logger statement, but after that we get the NullPointerException.
Exception stack trace:
INFO | jvm 1 | 2015/09/18 15:15:24 | [info] application - Final Auto-pay status message :
INFO | jvm 1 | 2015/09/18 15:15:24 | [error] play - Cannot invoke the action, eventually got an error: java.lang.NullPointerException
INFO | jvm 1 | 2015/09/18 15:15:24 | [error] application -
INFO | jvm 1 | 2015/09/18 15:15:24 | ! @6nfm0a093 - Internal server error, for (GET) [/] ->
INFO | jvm 1 | 2015/09/18 15:15:24 |
INFO | jvm 1 | 2015/09/18 15:15:24 | play.api.Application$$anon$1: Execution exception[[NullPointerException: null]]
INFO | jvm 1 | 2015/09/18 15:15:24 | at play.api.Application$class.handleError(Application.scala:296) ~[com.typesafe.play.play_2.11-2.3.2.jar:2.3.2]
INFO | jvm 1 | 2015/09/18 15:15:24 | at play.api.DefaultApplication.handleError(Application.scala:402) [com.typesafe.play.play_2.11-2.3.2.jar:2.3.2]
INFO | jvm 1 | 2015/09/18 15:15:24 | at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [com.typesafe.play.play_2.11-2.3.2.jar:2.3.2]
INFO | jvm 1 | 2015/09/18 15:15:24 | at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [com.typesafe.play.play_2.11-2.3.2.jar:2.3.2]
INFO | jvm 1 | 2015/09/18 15:15:24 | at scala.Option.map(Option.scala:145) [org.scala-lang.scala-library-2.11.1.jar:na]
INFO | jvm 1 | 2015/09/18 15:15:24 | Caused by: java.lang.NullPointerException: null
INFO | jvm 1 | 2015/09/18 15:15:24 | at java.net.URLEncoder.encode(URLEncoder.java:205) ~[na:1.7.0_21]
INFO | jvm 1 | 2015/09/18 15:15:24 | at play.api.mvc.CookieBaker$$anonfun$3.apply(Http.scala:427) ~[com.typesafe.play.play_2.11-2.3.2.jar:2.3.2]
INFO | jvm 1 | 2015/09/18 15:15:24 | at play.api.mvc.CookieBaker$$anonfun$3.apply(Http.scala:426) ~[com.typesafe.play.play_2.11-2.3.2.jar:2.3.2]
INFO | jvm 1 | 2015/09/18 15:15:24 | at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245) ~[org.scala-lang.scala-library-2.11.1.jar:na]
INFO | jvm 1 | 2015/09/18 15:15:24 | at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245) ~[org.scala-lang.scala-library-2.11.1.jar:na]
DEBUG | wrapperp | 2015/09/18 15:15:25 | send a packet PING : ping
INFO | jvm 1 | 2015/09/18 15:15:26 | WrapperManager Debug: Received a packet PING : ping
INFO | jvm 1 | 2015/09/18 15:15:26 | WrapperManager Debug: Send a packet PING : ping
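Note what the Caused by section points at: the NPE is raised by java.net.URLEncoder.encode inside play.api.mvc.CookieBaker, i.e. while Play serializes the session/flash cookie after the action has returned, which is why the action's own log line still appears. The usual trigger is a null value placed in the session or flash scope somewhere in the request. A hedged sketch of the pattern (names are illustrative, not taken from the question):

public static Result getOverViewPage(final String errorMsg) {
    String autoBillStatusMsg = lookupAutoPayStatus(); // hypothetical helper that may return null
    // If the value is null, Play throws NullPointerException later in
    // URLEncoder.encode while writing the session cookie - after the
    // action body has already completed.
    session().put("autoBillStatus", autoBillStatusMsg);
    flash("status", autoBillStatusMsg); // same hazard for the flash cookie
    return ok(rave_customer_index.render());
}

Guarding the value before storing it (e.g. autoBillStatusMsg == null ? "" : autoBillStatusMsg) avoids the crash.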

Unable to start HandlerSocket with MariaDB

For some reason, I cannot get HandlerSocket to start listening when I start MariaDB (version 10.0.14). I am using CentOS 6.5.
my.cnf has the following settings:
handlersocket_port = 9998
handlersocket_port_wr = 9999
handlersocket_address = 127.0.0.1
Calling "SHOW GLOBAL VARIABLES LIKE 'handlersocket%'" from the mariaDb prompt shows:
+-------------------------------+-----------+
| Variable_name | Value |
+-------------------------------+-----------+
| handlersocket_accept_balance | 0 |
| handlersocket_address | 127.0.0.1 |
| handlersocket_backlog | 32768 |
| handlersocket_epoll | 1 |
| handlersocket_plain_secret | |
| handlersocket_plain_secret_wr | |
| handlersocket_port | 9998 |
| handlersocket_port_wr | 9999 |
| handlersocket_rcvbuf | 0 |
| handlersocket_readsize | 0 |
| handlersocket_sndbuf | 0 |
| handlersocket_threads | 16 |
| handlersocket_threads_wr | 1 |
| handlersocket_timeout | 300 |
| handlersocket_verbose | 10 |
| handlersocket_wrlock_timeout | 12 |
+-------------------------------+-----------+
I can start MariaDB successfully, but when I check which ports are actively listening, neither 9998 nor 9999 shows up. I've checked the mysqld.log file, but no errors seem to be occurring.
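For reference, a listening-port check of this kind would look as follows (netstat comes from net-tools on CentOS 6; the grep pattern just matches the two configured ports):

netstat -tlnp | grep -E ':(9998|9999)\b'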
Answering my own question here: SELinux needed to be set to permissive mode to get HandlerSocket to start.
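For anyone hitting the same thing, a sketch of the change (standard CentOS 6 tooling; the port-labeling alternative keeps SELinux enforcing instead of relaxing it globally):

# Switch SELinux to permissive at runtime (what solved it here)
getenforce
setenforce 0
# Persist across reboots: set SELINUX=permissive in /etc/selinux/config

# Alternatively, keep enforcing mode and label the HandlerSocket ports so
# mysqld is allowed to bind them (semanage is in policycoreutils-python):
semanage port -a -t mysqld_port_t -p tcp 9998
semanage port -a -t mysqld_port_t -p tcp 9999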