While adding a document using the SolrCloud server I am getting a ZooKeeper connection exception.
How can I resolve this?
Are you sure you're connected to ZooKeeper?
If you run Solr with embedded ZooKeeper, the ZK port is the Solr port + 1000, for example 8983 + 1000 => 9983.
If you are sure that you are using the correct connection details, you can always increase the ZK connection timeout on your CloudSolrServer object with setZkConnectTimeout(int), if you are using the SolrJ client.
This will probably fix the problem, but you should still investigate why you are getting ZK timeouts in the first place.
In newer Solr versions the default timeout was increased from 1800 ms to 3000 ms, so if you are using an older version, raise it accordingly.
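For example, with the SolrJ client something along these lines raises the timeouts before adding a document (the ZK address, collection name, and timeout values below are only placeholders for illustration, not your actual settings):

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class AddDocExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical ZK address; with embedded ZK this is usually <solr-host>:9983
        CloudSolrServer server = new CloudSolrServer("localhost:9983");
        server.setDefaultCollection("collection1"); // assumed collection name
        server.setZkConnectTimeout(30000); // ms allowed to establish the ZK connection
        server.setZkClientTimeout(30000);  // ZK session timeout in ms

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        server.add(doc);
        server.commit();
        server.shutdown();
    }
}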
While creating a new connection in MongoDB Compass, the UI tries to discover the entire replica set topology and connect to the primary/secondary IP.
Is there a way to create a DIRECT connection to a host/IP + port, just like the client APIs allow, and disable the topology discovery step?
From pymongo documentation:
directConnection (optional): if True, forces this client to
connect directly to the specified MongoDB host as a standalone. If false, the client connects to the entire replica set of which the given MongoDB host(s) is a part
If you don't want to connect to a replica set, use a mongodb:// connection string (as opposed to mongodb+srv://), use a host IP / port, and drop the &replicaSet= parameter.
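For example, a direct connection string would look roughly like this (host and port are placeholders); recent driver and Compass versions also accept an explicit directConnection option:

mongodb://10.0.0.5:27017/?directConnection=true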
A bug ticket was opened with MongoDB in Jira:
https://jira.mongodb.org/browse/COMPASS-4534
The fix should be in version 1.42.2
I set up a development environment in a Docker Swarm cluster that consists of 2 nodes, a few networks, and a few microservices. The following shows how it looks in the cluster.
Service               | Network           | Node   | Image Version
nginx reverse proxy   | backend, frontend | node 1 | latest stable-alpine
Service A             | backend, database | node 2 | 8-jre-alpine
Service B             | backend, database | node 2 | 8-jre-alpine
Service C             | backend, database | node 1 | 8-jre-alpine
Database (postgresql) | database          | node 1 | latest alpine
The services are Spring Boot 2.1.7 applications using spring-boot-starter-data-jpa. All of the services above hold a database connection to the PostgreSQL instance. For the database I configured only the following properties in application.properties:
spring.datasource.url
spring.datasource.username
spring.datasource.password
spring.datasource.driver-class-name
spring.jpa.hibernate.ddl-auto=
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
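Concretely, the datasource part looks roughly like this (host, database name, and credentials below are placeholders, not the real values):

# hypothetical values; "database" is the Postgres service name on the swarm overlay network
spring.datasource.url=jdbc:postgresql://database:5432/appdb
spring.datasource.username=app_user
spring.datasource.password=app_password
spring.datasource.driver-class-name=org.postgresql.Driver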
After some time I see that the connection limit in PostgreSQL is exceeded, which no longer allows new connections to be created.
2019-09-21 13:01:07.031 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Cannot acquire connection from data source org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
A similar error is also shown when I try to connect to the database over SSH:
psql: FATAL: sorry, too many clients already
What I have tried so far:
spring.datasource.hikari.leak-detection-threshold=20000
which didn't help.
I found several suggested answers to this problem, such as:
Increase the connection limit in PostgreSQL
No, I don't want to do this. It is only a temporary solution; the connections would fill up again, just a bit later.
Add an idle timeout to the HikariCP configuration
HikariCP already has a default of 10 minutes for this, which doesn't help.
Add a max lifetime to the HikariCP configuration
HikariCP already has a default of 30 minutes for this, which doesn't help.
Reduce the number of idle connections in the HikariCP configuration
HikariCP already has a default of 10 for this, which doesn't help.
Set min idle in the HikariCP configuration
The default is 10 and I am fine with it. (These settings, written out explicitly, are shown right after this list.)
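Written out explicitly in application.properties, those HikariCP settings would be roughly the following (the values are HikariCP's defaults, plus the leak detection threshold I tried, all times in milliseconds):

# HikariCP defaults written out explicitly
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=10
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.max-lifetime=1800000
# leak detection threshold I tried (20 seconds)
spring.datasource.hikari.leak-detection-threshold=20000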
I am expecting around 30 connections from the services, but I find nearly 100. Restarting the services or stopping them does not close the idle connections either. What are your suggestions? Is it a Docker-specific problem? Has anyone experienced the same problem?
Q1. I have a Crate cluster of version 1.0.2 and I am using an older version of the Crate JDBC driver to connect to it from a Java program. I have specified all nodes of the Crate cluster in the JDBC driver URL, separated by commas. When I fire queries from my Java program at Crate, I can see that memory and CPU usage increases on only one Crate node, and this node is the first in the comma-separated list given in the connection URL. After some time, that node runs out of memory. Could someone please explain why this happens? I remember reading documentation of the Crate driver which indicated that the driver load balances queries across all specified client nodes. All my nodes are client enabled.
Q2. I tried the same experiment with Crate 2.1.6 and JDBC driver 2.1.7 and I can see the same behavior. I have verified that all the queries are being fired against data that is spread across multiple nodes. In the latest documentation, I can see a new property was added, viz. loadBalanceHosts: https://crate.io/docs/clients/jdbc/en/latest/connecting.html#jdbc-url-format
Right now I do not have this property set. Was this property present and required in JDBC driver version 2.1.7? Why do developers have to worry about load balancing when the Crate cluster and JDBC driver are supposed to provide it?
FYI, most of my queries have a GROUP BY clause and I have a few billion records to experiment with. Memory configured is 30 GB per node.
This has been fixed in the latest driver: https://github.com/crate/crate-jdbc/blob/master/docs/connecting.txt#L62
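As a rough sketch, with a recent driver a multi-host JDBC URL with load balancing enabled would look something like the following (hostnames are placeholders; check the exact URL format and property name against the linked documentation for your driver version):

import java.sql.Connection;
import java.sql.DriverManager;

public class CrateConnectExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical node addresses; loadBalanceHosts=true asks the driver to
        // pick from the listed hosts instead of always using the first one.
        String url = "jdbc:crate://node1:5432,node2:5432,node3:5432/?loadBalanceHosts=true";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}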
Our MongoDB cluster in production is a sharded cluster with 3 replica sets of 3 servers each and, of course, another 3 config servers.
We also have 14 web servers that connect directly to MongoDB through the mongos processes that are running on each of these web servers (the clients).
The entire cluster receives 5000 inserts per minute.
Sometimes we start getting exceptions from our Java applications when they try to perform operations against MongoDB.
This is the stack trace:
caused by com.mongodb.MongoException: writeback
com.mongodb.CommandResult.getException(CommandResult.java:100)
com.mongodb.CommandResult.throwOnError(CommandResult.java:134)
com.mongodb.DBTCPConnector._checkWriteError(DBTCPConnector.java:142)
com.mongodb.DBTCPConnector.say(DBTCPConnector.java:183)
com.mongodb.DBTCPConnector.say(DBTCPConnector.java:155)
com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:270)
com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:226)
com.mongodb.DBCollection.insert(DBCollection.java:147)
com.mongodb.DBCollection.insert(DBCollection.java:90)
com.mongodb.DBCollection$insert$0.call(Unknown Source)
If I check the mongos process through the REST _status command that it provides, it returns 200 OK. We can work around the problem by restarting the Tomcat instance we are using and restarting the mongos process, but I would like to find a final solution to this problem. It's not a happy solution to have to restart everything in the middle of the night.
When this error happens, maybe 2 or 3 other web servers get the same error at the same time, so I imagine there is a problem in the entire MongoDB cluster, not a problem in a single isolated web server.
Does anyone know why Mongo returns a writeback error, and how to fix it?
I'm using mongoDb 2.2.0.
Thanks in advance.
Fer
I believe you are seeing the writeback error "leak" into the getLastError output and then continue to be reported even when the operation in question has not errored. This was an issue in earlier versions of MongoDB 2.2 and has since been fixed; see:
https://jira.mongodb.org/browse/SERVER-7958
https://jira.mongodb.org/browse/SERVER-7369
https://jira.mongodb.org/browse/SERVER-4532
As of writing this answer I would recommend 2.2.4 (or basically whatever the latest release on the 2.2 branch is) to resolve your problem.
I have used the NoRM driver in production. Over the New Year holidays my project gets high load, so I want to set up a replica set, but there is a problem: NoRM does not support replica sets :(. As far as I understand, sharding is not supported either?
Help me :) Has anyone used mongodb-csharp or the official 10gen driver with a replica set? Are there any problems in production? If I choose another driver I'll have to rewrite my repository layer, and I do not want that effort to be in vain. Are there any known issues?
Sharding should not depend on driver-specific support. When you shard, you connect to the router application, mongos, and this router behaves exactly like mongod.
So you should be able to shard. But you will probably need to change the connection string. The suggested setup is to have one mongos per application server (instead of your current single mongod).
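As a rough example, each application server would then use a connection string pointing at its local mongos rather than at a mongod (the host/port below are just the usual local defaults, adjust to your setup):

mongodb://localhost:27017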