Npgsql connection pool issue - postgresql

We are planning to migrate from EDB PostgreSQL to community PostgreSQL, so we have started using the Npgsql library instead of the EDB library.
We configured the same settings in the Npgsql connection string as in our EDB connection string:
<add name="ConnectionString"
connectionString="Host='XXXXX';port=9999;username='XXXX';password='XXXX';Database='dev_xxxx';Timeout=300;**ConnectionLifetime=2;MinPoolSize=5;MaxPoolSize=25;**CommandTimeout=300;" />
<add name="NPGSQLConnectionString"
connectionString="Host='XXXX';port=9999;username='XXXX';password='XXXX';Database='dev_xxxx';Timeout=300;Connection Idle Lifetime=300;Connection Pruning Interval = 10;**MinPoolSize=5;MaxPoolSize=25;**CommandTimeout=300;**ConnectionLifetime=2;**" />
But when we switch to the Npgsql library, we get the following error:
Npgsql.NpgsqlException (0x80004005): The connection pool has been exhausted, either raise MaxPoolSize (currently 25) or Timeout (currently 300 seconds) ---> System.TimeoutException: The operation has timed out.
I understand that I could increase the max pool size, but the same configuration works with EDB.
I tried increasing MaxPoolSize to 50, but it made no difference, so I suspect a connection leak; however, I have no way to test or monitor it.
Npgsql version: 5.0.1.1
PostgreSQL version: 9.5
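Pool exhaustion like this is most often caused by connections that are opened but never disposed, so they are never returned to the pool. A minimal sketch of the safe pattern, assuming a placeholder connection string and query:
using (var conn = new NpgsqlConnection(connectionString))
{
    conn.Open();
    // always dispose commands and connections; Dispose() returns the
    // physical connection to the pool even if an exception is thrown
    using (var cmd = new NpgsqlCommand("SELECT 1", conn))
    {
        cmd.ExecuteScalar();
    }
} // the connection goes back to the pool here
Npgsql 5 also publishes event counters (for example busy and idle connection counts) that tools such as dotnet-counters can display, which may help confirm whether connections are leaking.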

Related

In WebSphere data source definitions, does JDBC driver property connectionTimeout override data source settings

We are using WebSphere data sources to manage our database connections. A sample data source definition from our server.xml looks like this:
<dataSource id="dev_ate" jndiName="database/dev_ate"
jdbcDriverRef="db2_driver" type="javax.sql.ConnectionPoolDataSource">
<connectionManager maxIdleTime="30m"
connectionTimeout="30s" />
<properties.db2.jcc databaseName="xxx"
serverName="aaa.bbb.ccc" portNumber="yyy"
securityMechanism="7" user="uuu"
password="ppp"
retrieveMessagesFromServerOnGetMessage="true" sslConnection="true"
clientProgramName="abc" driverType="4" encryptionAlgorithm="2"
**connectionTimeout="60s"**
sslTrustStoreLocation="${DB2CERTS}" />
</dataSource>
Does the connectionTimeout attribute in properties.db2.jcc have the same effect as the one in connectionManager? If so, which one is used? If not, what is the difference?
Any response would be appreciated!
Thx
Christian
The two connectionTimeout properties do not override each other; they actually have different meanings. The DB2 JCC driver property connectionTimeout relates to establishing a connection to the database. The connectionTimeout property of connectionManager is a timeout on a connection becoming available from the connection pool.
It should be noted that it is possible to wait for a portion of the connectionManager's connectionTimeout, after which a non-matching connection becomes available in the pool. The connection manager will close the non-matching connection and request a new one from the DB2 JCC driver, after which the full connectionTimeout of the DB2 JCC driver applies, even if that exceeds the time remaining on the connectionManager's connectionTimeout. The two timeouts are independent of each other.
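To make the distinction concrete, here is the data source from the question again with the two timeouts annotated (values taken from the question; the remaining driver properties are omitted):
<dataSource id="dev_ate" jndiName="database/dev_ate"
    jdbcDriverRef="db2_driver" type="javax.sql.ConnectionPoolDataSource">
    <!-- pool timeout: how long a request waits for a connection to become available from the pool -->
    <connectionManager maxIdleTime="30m" connectionTimeout="30s" />
    <!-- driver timeout: how long the JCC driver waits while establishing a new physical connection -->
    <properties.db2.jcc databaseName="xxx" serverName="aaa.bbb.ccc"
        portNumber="yyy" connectionTimeout="60s" />
</dataSource>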

Is there a way to limit Kafka Connect heap space when the Debezium connector is fetching data from SQL Server

I am trying to set up a connector that fetches data from a SQL Server instance for use with Apache Kafka. I've set up all of the Kafka services with a docker-compose file; however, the SQL Server is on another server.
This is the configuration of my debezium connector in ksqldb:
create source connector sql_connector with
('connector.class'='io.debezium.connector.sqlserver.SqlServerConnector',
'database.server.name'='sqlserver',
'database.hostname'= 'xxxxx',
'database.port'='1433',
'database.user'= 'xxxx',
'database.password'= 'xxxxxx',
'database.dbname'='xxxxxxxxx',
'database.history.kafka.bootstrap.servers'='broker:29092',
'database.history.kafka.topic'='dbz_dbhistory.sqlserver.asgard-01');
When I do this, I get a response that the connector was successfully created. However, when I query ksqlDB with 'show connectors', I get the following error message:
io.confluent.ksql.util.KsqlServerException: org.apache.http.conn.HttpHostConnectException: Connect to connect:8083 [connect/172.18.0.6] failed: Connection refused (Connection refused)
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to
connect:8083 [connect/172.18.0.6] failed: Connection refused (Connection
refused)
Caused by: Could not connect to the server.
Caused by: Could not connect to the server.
When I inspect my Kafka Connect logs I can see that it is issuing select statements to the server, but after a while I get the following error and Kafka Connect shuts down:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000b5400000, 118489088, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 118489088 bytes for committing reserved memory.
# An error report file with more information is saved as:
Any ideas on how to fix this, other than just giving my server more RAM?
Your machine has less than ~118MB of free memory:
Native memory allocation (mmap) failed to map 118489088 bytes for committing reserved memory
You will need to increase or free up memory on the machine to get the JVM to start. If it's running, you can change the heap memory settings of the JVM using the following environment variable:
KAFKA_HEAP_OPTS="-Xms256M -Xmx2G"
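Since the Kafka services run under docker-compose, a hypothetical fragment for the Connect service (the service name connect and the image are assumptions based on the error message) would look like:
services:
  connect:
    image: debezium/connect
    environment:
      # cap the JVM heap so it fits in the memory the host actually has
      KAFKA_HEAP_OPTS: "-Xms256M -Xmx2G"
Note that the failure above is a native memory allocation error, not a heap overflow, so lowering -Xmx (or the memory use of other containers) can also help when the host itself is short on memory.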

Entity Framework with DB model cannot connect to server

I am trying to connect to my DB instance using database-first. I created a connection:
<add name="Entities"
connectionString="metadata=res://*/Models.ModelCmarket.csdl|res://*/Models.ModelCmarket.ssdl|res://*/Models.ModelCmarket.msl;provider=System.Data.SqlClient;provider connection string="data source=(localdb)\v12.0;initial catalog=Cevaheer;integrated security=True;trustservercertificate=False;multisubnetfailover=True;MultipleActiveResultSets=True;multipleactiveresultsets=True;App=EntityFramework""
providerName="System.Data.EntityClient" />
And I always get an error -
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 50 - Local Database Runtime error occurred. The specified LocalDB instance does not exist.)
But if I try
<add name="CevhermarketEntities"
connectionString="Data source=(localdb)\v12.0;initial catalog=Cevaheer;integrated security=True;trustservercertificate=False;multisubnetfailover=True;MultipleActiveResultSets=True;user id=dbuser;password=flexsin#123!;multipleactiveresultsets=True;"
providerName="System.Data.SqlClient" />
I can connect, and I can also connect from the VS SQL Server explorer and from SSMS.
To begin, there are four issues that could be causing the common LocalDB/SqlExpress connectivity error "SQL Network Interfaces, error: 50 - Local Database Runtime error occurred". Before anything else, you need to rename the v11.0 or v12.0 in your connection string to (localdb)\mssqllocaldb:
You don't have the services running
You don't have the firewall ports configured
Your install has an issue or is corrupt (the steps below give you a nice clean start)
You did not rename the v11 or v12 to mssqllocaldb (see the example after this list)
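Applied to the connection string from the question, the rename in the last point would look something like this (metadata portion kept as in the question):
<add name="Entities"
connectionString="metadata=res://*/Models.ModelCmarket.csdl|res://*/Models.ModelCmarket.ssdl|res://*/Models.ModelCmarket.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=(localdb)\mssqllocaldb;initial catalog=Cevaheer;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;"
providerName="System.Data.EntityClient" />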
I found that the simplest fix is to do the following.
First verify which instances you have installed; you can do this by checking the registry or by running:
cmd> Sqllocaldb.exe i
cmd> Sqllocaldb.exe s "whicheverVersionYouWantFromListBefore"
If this step fails, you can delete the instance with the d option: cmd> Sqllocaldb.exe d "someDb"
cmd> Sqllocaldb.exe c "createSomeNewDbIfyouWantDb"
cmd> Sqllocaldb.exe start "createSomeNewDbIfyouWantDb"

Create DB2 dataSource in WAS Liberty profile

I am developing for WebSphere 8.5 (for z/OS), but I would like to use Liberty for local development on my Windows machine. I can't get the data source to work.
I created the following entry in the server.xml to define the data source:
<library id="DB2JCC2Lib">
<fileset dir="C:\Program Files\IBM\SQLLIB\java"/><!--includes="db2jcc.jar db2jcc_license_cu.jar db2jcc_license_cisuz.jar"-->
</library>
<dataSource id="xxdb" jndiName="jdbc/xxxx" type="javax.sql.ConnectionPoolDataSource">
<jdbcDriver libraryRef="DB2JCC2Lib" id="db2-driver" javax.sql.ConnectionPoolDataSource="com.ibm.db2.jcc.DB2ConnectionPoolDataSource"/>
<properties.db2.jcc driverType="2" databaseName="xxxx" portNumber="50000" user="xxxx" password="{aes}xxxx"/>
</dataSource>
When my application initializes i get the following error message:
[jcc][4038][12241][3.61.65] T2LUW exception: SQL30081N Communication error. Communication protocol used: "TCP/IP". Communication API used: "SOCKETS". Location where the error was detected: "127.0.0.1". Communication function that detected the error: "connect". Protocol-specific error code(s): "10061", "", "". SQLSTATE=08001
This message comes from the DB2 driver; it is emitted in German on my machine (translated here) and I haven't found a way to change that, but the error codes should be understandable either way.
I have an ODBC System datasource that connects to DB2 v10 maintenance level 015 for z/OS. My local DB2 Connect Installation is v9.7.300.3885.
In my regular WebSphere, my working data source has driver type 2, the database name set to the ODBC name, and port number 50000. The server name is not set (empty). The classpath and implementation class are the same as those I provided in the server.xml.
I have tried everything I could find. Any ideas?
Note: I can't make changes on the DB2 server, and there is no issue connecting to the database with other tools or with the regular WebSphere.
Also, the server name in the WebSphere configuration is empty; only the database name is set. When I tried setting the server name in the server.xml to localhost or to the DB2 server, I got the same result.
Any help is appreciated!
Edit: updated with correct Version Information
Edit 2: As long as it works, I don't care which type (2 or 4) of JDBC driver is used. I just want to point out again that type 2 is currently working on my machine. I tried type 4 and got the following message:
[jcc][t4][2043][11550][3.61.65] Exception java.net.ConnectException: Error opening socket to server xxx/xxx.30.3.34 on port 50,000 with message: Connection refused: connect. ERRORCODE=-4499, SQLSTATE=08001 DSRA0010E: SQL State = 08001, Error Code = -4,499
You will need a type 4 datasource to connect to a remote database server, i.e.,
<dataSource id="xxdb" jndiName="jdbc/xxxx" type="javax.sql.XADataSource">
<properties.db2.jcc driverType="4" serverName="the.db2.host.com" portNumber="50000" user="xxxx" password="xxxx" databaseName="LOC1" currentSQLID="SYSA"/>
<jdbcDriver libraryRef="DB2JCC2Lib"/>
</dataSource>
Type 2 is only for a local z/OS connection to a database resource. Your Windows machine, being remote from z/OS, requires a type 4 connection. Type 4 requires both serverName and portNumber to be specified; these are not applicable to a type 2 connection.

Apache Solr 5.3.1 out of memory

I'm new to Solr, and I've been struggling for a few days to run a full indexing of an entity with about 117.000.000 entries in a PostgreSQL 9.4 DB.
I'm using Solr 5.3.1 on Windows 7 x64 with 16 GB of RAM. I'm not intending to use this machine as a server; it's just prototyping.
I kept getting this error on the x86 JDK when just starting Solr as solr start without any options. Then I tried:
solr start -m 2g, which results in Solr not coming up at all
solr start -m 1g, which makes Solr start, but after indexing about 87.000.000 entries it dies with an out-of-memory error.
It is exactly the same point at which it dies without any options, and on the admin dashboard I can see the JVM heap is full.
Since Solr warns me anyway to use an x64 JDK, I switched and now use 8u65. I started Solr with a 4g heap and ran the full import again. Again it threw the same exception after 87.000.000 entries, but this time the heap isn't even full (42%), and neither is RAM or swap.
Does anyone have an idea what could be the reason for this behaviour?
Here is my data-config
<dataConfig>
<dataSource
type="JdbcDataSource"
driver="org.postgresql.Driver"
url="jdbc:postgresql://localhost:5432/dbname"
user="user"
password="secret"
readOnly="true"
autoCommit="false"
transactionIsolation="TRANSACTION_READ_COMMITTED"
holdability="CLOSE_CURSORS_AT_COMMIT" />
<entity name="hotel"
query="select * from someview;"
deltaImportQuery = "select * someview where solr_id = '${dataimporter.delta.id}'"
deltaQuery="select * from someview where changed > '${dataimporter.last_index_time}';">
<field name="id" column="id"/>
... etc for all 84 columns
In solrconfig.xml I have defined an update request processor chain to generate a unique key while indexing, which seems to work; a sketch is below.
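For reference, a minimal sketch of such a chain using Solr's UUIDUpdateProcessorFactory; the chain name and target field are assumptions, not the exact config used here:
<updateRequestProcessorChain name="uuid">
    <!-- fills the id field with a generated UUID when it is missing -->
    <processor class="solr.UUIDUpdateProcessorFactory">
        <str name="fieldName">id</str>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>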
In schema.xml there are again 84 fields with type, indexed, and other attributes.
Here is the exception I'm getting. The messages are in German: "Speicher aufgebraucht" means "out of memory" and "Fehler bei Anfrage mit Größe 48" means "failed on request of size 48".
getNext() failed for query 'select * from someview;':org.apache.solr.handler.dataimport.DataImportHandlerException: org.postgresql.util.PSQLException: FEHLER: Speicher aufgebraucht
Detail: Fehler bei Anfrage mit Größe 48.
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:62)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:416)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.access$500(JdbcDataSource.java:296)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:331)
at org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:132)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:74)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
Caused by: org.postgresql.util.PSQLException: FEHLER: Speicher aufgebraucht
Detail: Fehler bei Anfrage mit Größe 48.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:2113)
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1964)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:408)
... 12 more
Thank you in advance
As pointed out by MatsLindh, it was a JDBC error. Meanwhile I worked with Hibernate Search and experienced the same error at exactly the same point (near 87.000.000 indexed entities). The trick was to commit more often.
So in this case I tried several things at once and it worked (I don't know which option exactly did the trick); a sketch of the relevant settings follows the list:
1. Set maxDocs for autoCommit in solrconfig.xml to 100.000. I believe the default is to commit something like 15 seconds after documents stop arriving, which during a continuous import never happens, so the heap keeps filling up.
2. Set batchSize for the PostgreSQL JDBC driver to 100 (the default is 500).
3. Changed the evil 'select * from table' to an explicit column list 'select c1, c2, ..., c85 from table'.
4. Updated the JDBC Driver from 9.4.1203 to 9.4.1207
5. Updated Java to 1.8u74
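For points 1 and 2, the settings look something like the following sketch (values from the list above; openSearcher is an addition commonly used for bulk indexing, not part of the original change set):
In solrconfig.xml, inside the updateHandler section:
<autoCommit>
    <maxDocs>100000</maxDocs>
    <!-- hard-commit without opening a new searcher on every commit -->
    <openSearcher>false</openSearcher>
</autoCommit>
In data-config.xml, batchSize goes on the dataSource element:
<dataSource type="JdbcDataSource" driver="org.postgresql.Driver"
    url="jdbc:postgresql://localhost:5432/dbname" user="user" password="secret"
    batchSize="100" readOnly="true" autoCommit="false" />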
I think it worked due to 1 and/or 3; I will do some further testing and update my post.
While I was trying the indexing with Hibernate Search, I could see that the RAM allocated by the PostgreSQL server was freed at each commit, so RAM never became an issue again. That didn't happen here, and the DB server was at 85 GB of RAM in the end, but it kept on working.