I use the latest version of ActiveMQ Artemis with MySQL as the message store. After 8 hours, my MySQL server times out idle client connections. The AbstractJDBCDriver implementation in Artemis does not recognize this and throws an exception.
What can I do? I see no way to configure a DB connection pool with this implementation.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) [mysql-connector-java.jar:8.0.19]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) [mysql-connector-java.jar:8.0.19]
at com.mysql.cj.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:2056) [mysql-connector-java.jar:8.0.19]
at org.apache.activemq.artemis.jdbc.store.drivers.AbstractJDBCDriver.stop(AbstractJDBCDriver.java:108) [artemis-jdbc-store-2.10.0.redhat-00004.jar:2.10.0.redhat-00004]
Same question here.
At this point the best thing to do is restart the broker when this happens. As you note, there is currently no DB connection pool implementation.
In the latest Artemis release (2.16.0) there is support for connection-pooling.
Basic example with MySQL:
Create a new broker instance:
$ARTEMIS_HOME/bin/artemis create hosts/host0 --name host0 --user admin --password admin --require-login
Start MySQL and create a dedicated database and user:
CREATE DATABASE activemq CHARACTER SET utf8mb4;
CREATE USER 'activemq'@'%' IDENTIFIED WITH mysql_native_password BY 'activemq';
GRANT CREATE,SELECT,INSERT,UPDATE,DELETE,INDEX ON activemq.* TO 'activemq'@'%';
FLUSH PRIVILEGES;
Download the MySQL JDBC driver and copy it into the instance's lib directory:
curl -sL https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.22.tar.gz -o driver.tar.gz
tar xf driver.tar.gz && cp mysql-connector-java-8.0.22/mysql-connector-java-8.0.22.jar hosts/host0/lib/
Add JDBC storage with connection-pooling configuration:
<store>
<database-store>
<data-source-properties>
<!-- All configuration options: https://commons.apache.org/proper/commons-dbcp/configuration.html -->
<data-source-property key="driverClassName" value="com.mysql.cj.jdbc.Driver" />
<data-source-property key="url" value="jdbc:mysql://localhost:3306/activemq" />
<data-source-property key="username" value="activemq" />
<data-source-property key="password" value="activemq" />
<data-source-property key="poolPreparedStatements" value="true" />
<!-- Recycle connections before MySQL's wait_timeout (8 hours by default) closes them on the server side -->
<data-source-property key="maxConnLifetimeMillis" value="1800000" />
</data-source-properties>
<bindings-table-name>BINDINGS</bindings-table-name>
<message-table-name>MESSAGES</message-table-name>
<large-message-table-name>LARGE_MESSAGES</large-message-table-name>
<page-store-table-name>PAGE_STORE</page-store-table-name>
<node-manager-store-table-name>NODE_MANAGER_STORE</node-manager-store-table-name>
</database-store>
</store>
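Before settling on a value for maxConnLifetimeMillis, it can help to confirm the server-side idle timeout the pool must stay under; the 8-hour failures described above match MySQL's default wait_timeout of 28800 seconds. A quick check using standard MySQL variables:

```
-- Server-side idle timeouts that the pool's connection lifetime must undercut
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'interactive_timeout';
```

Keeping maxConnLifetimeMillis well below the smaller of these two values ensures the pool retires connections before the server kills them.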
I can use Fluentd to send data to MongoDB running on AWS EC2, but I can't send data to DocumentDB, the managed MongoDB-compatible service.
The following is the td-agent.conf for transferring the JSON files saved in /var/log/test/bulk/ to MongoDB.
<source>
@type tail
path /var/log/test/bulk/*
tag bulk.*
format json
time_key time
time_format '%F %T.%N %z %Z'
pos_file /var/log/test/run/log-json.pos
read_from_head true
refresh_interval 5s
</source>
<match bulk.**>
@type record_reformer
tag test.${tag_parts[-3]}.${tag_parts[-2]}
</match>
<match test.**>
@type copy
<store>
@type forest
subtype mongo_replset
<template>
host hostname1:27017,hostname2:27017,hostname3:27017
replica_set rs0
database ${tag_parts[-2]}
collection ${tag_parts[-1]}
user ********
password ********
replace_dot_in_key_with __dot__
<buffer>
@type file
path /var/log/test/buffer-mongo/${tag_parts[-2..-1]}
chunk_limit_size 8m
queued_chunks_limit_size 64
flush_interval 1s
</buffer>
</template>
</store>
</match>
When transferring to DocumentDB, I changed the host in the conf file above to the cluster endpoint, but the following error occurred.
[warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2021-09-14 10:26:56 +0900 chunk="5cbea78155b58ec0810e9fde94aa2355" error_class=Mongo::Error::NoServerAvailable error="No server is available matching preference: #<Mongo::ServerSelector::Primary:0x70233136498300 tag_sets=[] max_staleness=nil> using server_selection_timeout=30 and local_threshold=0.015"
Since TLS is enabled on DocumentDB, I suspect I need to specify rds-combined-ca-bundle.pem to enable TLS, but I don't know how to do that.
(When I tested writing to DocumentDB in Python using the link, the above error occurred when TLS was disabled.)
Can you please tell me how to write data to DocumentDB with TLS enabled?
Amazon DocumentDB clusters are deployed within an Amazon Virtual Private Cloud (Amazon VPC). They can be accessed directly by Amazon EC2 instances or other AWS services that are deployed in the same Amazon VPC. Is the client application running in the same VPC and security group as the DocumentDB cluster?
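If networking checks out and TLS is the real blocker, the mongo output plugins can pass SSL options through to the underlying Ruby driver. A hedged sketch of the relevant lines inside the <template> block, assuming the ssl/ssl_ca_cert parameters documented for fluent-plugin-mongo are available in your installed version (the endpoint is a placeholder for your cluster endpoint):

```
<template>
  host your-cluster.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com:27017
  replica_set rs0
  # TLS options handed to the Ruby mongo driver
  # (verify parameter names against your fluent-plugin-mongo version)
  ssl true
  ssl_ca_cert /etc/td-agent/rds-combined-ca-bundle.pem
  ssl_verify true
</template>
```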
I'm trying to generate some data for DB2 10.5 LUW using HammerDB v3.1, which is running on a remote Windows host. It is not possible to run HammerDB on the same host as DB2.
According to the HammerDB documentation I need to set up IBM Data Server Driver for ODBC and CLI.
What I did:
Downloaded and set up the driver (v10.5fp10_ntx64_odbc_cli.zip) on the HammerDB host, as described here
Configured the db2dsdriver.cfg file:
<configuration>
<dsncollection>
<dsn alias="TPCC" name="<my database name>" host="<my host name>" port="50000"/>
<!-- Long aliases are supported -->
<dsn alias="longaliasname2" name="name2" host="server2.net1.com" port="55551">
<parameter name="Authentication" value="SERVER_ENCRYPT"/>
</dsn>
</dsncollection>
<databases>
<database name="<my database name>" host="<my host name>" port="50000">
<parameter name="CurrentSchema" value="OWNER1"/>
.......
Add environment variable DB2DSDRIVER_CFG_PATH
set DB2DSDRIVER_CFG_PATH=C:\ProgramData\IBM\DB2\C_IBMDB2_CLIDRIVER_clidriver\cfg
Run HammerDB GUI, try to build a schema and receive
Error in Virtual User 1: [IBM][CLI Driver][DB2/LINUXX8664] SQL0206N "GLOBAL_VAR1" is not valid in the context where it is used. SQLSTATE=42703
This error happens because the db2dsdriver.cfg on your Db2 client node contains excess information for your DSN.
To recover, you can either rename and recreate your db2dsdriver.cfg/db2cli.ini files, or edit the db2dsdriver.cfg file and remove the following stanza where it occurs for your DSN/database (take a backup as a precaution):
<sessionglobalvariables>
<parameter name="global_var1" value="abc"/>
</sessionglobalvariables>
I usually discard the default db2dsdriver.cfg/db2cli.ini and use a script to populate them. This is possible with the command-line tool "db2cli", which has a variety of command-line parameters that let you write the cfg file stanzas for both DSNs and databases. Documentation here.
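As a sketch of that scripted approach, assuming the writecfg and validate subcommands exist at your driver level (alias, database, and host values below are examples, not taken from your setup):

```
:: Run from the clidriver bin directory on the Windows client.
db2cli writecfg add -dsn TPCC -database MYDB -host db2host.example.com -port 50000
:: Inspect the resulting cfg and attempt a test connection for the DSN:
db2cli validate -dsn TPCC
```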
I currently use the CLI method of registering a node and then a database for that node (with CATALOG NODE / CATALOG DATABASE) to configure my DB2 CLI client to access our database server.
With a single registration of a database I can effectively register the default database but then when I connect in my application using SQLDriverConnect and use the "DATABASE=" option I can connect to other databases available on my server.
I would like to switch to the much easier to manage db2dsdriver.cfg configuration file; however, I have been unable to configure it to allow a single DSN to access multiple databases.
Some code to help clarify. My DB2 server instance has two databases defined like:
CREATE DATABASE DB_1 ON /opt/data/DB_1 USING CODESET UTF-8 TERRITORY US COLLATE USING SYSTEM
CREATE DATABASE DB_2 ON /opt/data/DB_2 USING CODESET UTF-8 TERRITORY US COLLATE USING SYSTEM
I register this server with my client CLI using these commands:
CATALOG TCPIP NODE DB_NODE remote example.server.com server 50000
CATALOG DB DB_1 as DB_1 at node DB_NODE
With that setup I can perform the following from my CLI application:
rc = SQLDriverConnect(hdbc, NULL, "DSN=DB_1;UID=dbtest1;PWD=zebco5;DATABASE=DB_1",
SQL_NTS, outStr, 128, &outSize, SQL_DRIVER_NOPROMPT);
or if I want to use the DB_2 database:
rc = SQLDriverConnect(hdbc, NULL, "DSN=DB_1;UID=dbtest1;PWD=zebco5;DATABASE=DB_2",
SQL_NTS, outStr, 128, &outSize, SQL_DRIVER_NOPROMPT);
Note I did not need to change the DSN, merely the "DATABASE" connection option.
Recently I found the db2dsdriver.cfg configuration file which I would rather use. To that end I created this and uncataloged my node and db from the cli:
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<configuration>
<dsncollection>
<dsn alias="DB_1" name="DB_1" host="server.example.com" port="50000"/>
</dsncollection>
<databases>
<database name="DB_1" host="server.example.com" port="50000"/>
<database name="DB_2" host="server.example.com" port="50000"/>
</databases>
</configuration>
I can connect with this:
rc = SQLDriverConnect(hdbc, NULL, "DSN=DB_1;UID=dbtest1;PWD=zebco5;DATABASE=DB_1",
SQL_NTS, outStr, 128, &outSize, SQL_DRIVER_NOPROMPT);
just fine, but now using this to connect to DB_2:
rc = SQLDriverConnect(hdbc, NULL, "DSN=DB_1;UID=dbtest1;PWD=zebco5;DATABASE=DB_2",
SQL_NTS, outStr, 128, &outSize, SQL_DRIVER_NOPROMPT);
results in this error:
SQL0843N The server name does not specify an existing connection. SQLSTATE=08003
I understand the error, but this seems like a regression in functionality from the old node/db registration mechanism.
I'm trying to determine whether the functionality I was using is supported by the configuration file (and I'm doing something wrong), or whether it simply doesn't work that way.
Thank you for your help.
I am trying to index data of about 70 million rows. I am using the Data Import Handler to fetch the data from a PostgreSQL database. When I execute the full-import command, after indexing around 3-5 million rows, I see the error "Connection Lost". When I check the status of the Solr instance on the server, I see that it is dead and the port is not open.
# service solr status
Found 1 Solr nodes:
Solr process 45397 from /var/solr/solr-8983.pid not found.
Then I need to restart the Solr instance every time it stops:
# service solr start
Waiting up to 30 seconds to see Solr running on port 8983 [/]
Started Solr server on port 8983 (pid=47740). Happy searching!
I added the socket timeout parameter to the JDBC driver parameters, and also increased all timeout parameters in solr.xml.
Below is my db-data-config.xml file.
<dataConfig>
<dataSource type="JdbcDataSource" driver="org.postgresql.Driver" url="jdbc:postgresql://ourdbserveripaddress:5432/dbname" user="username" socketTimeout="0" defaultRowFetchSize="10000" />
<document >
<entity name="solr_70m" query="SELECT id, myrecord from solr_70m">
<field column="id" name="id" />
<field column="myrecord" name="myrecord" />
</entity>
</document>
</dataConfig>
After the connection loss, we can search the committed data.
Could you tell me what I am missing to load/index over 50 million records?
Thanks in advance,
I am developing for WebSphere 8.5 (for z/OS), but I would like to use Liberty for local development on my Windows machine. I can't get the data source to work.
I created the following entry in server.xml to define the data source.
<library id="DB2JCC2Lib">
<fileset dir="C:\Program Files\IBM\SQLLIB\java"/><!--includes="db2jcc.jar db2jcc_license_cu.jar db2jcc_license_cisuz.jar"-->
</library>
<dataSource id="xxdb" jndiName="jdbc/xxxx" type="javax.sql.ConnectionPoolDataSource">
<jdbcDriver libraryRef="DB2JCC2Lib" id="db2-driver" javax.sql.ConnectionPoolDataSource="com.ibm.db2.jcc.DB2ConnectionPoolDataSource"/>
<properties.db2.jcc driverType="2" databaseName="xxxx" portNumber="50000" user="xxxx" password="{aes}xxxx"/>
</dataSource>
When my application initializes I get the following error message:
[jcc][4038][12241][3.61.65] T2LUW exception: SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "127.0.0.1". Communication function detecting the error: "connect". Protocol specific error code(s): "10061", "", "". SQLSTATE=08001
This message comes from the DB2 driver; it was originally emitted in German and is translated to English above.
I have an ODBC System datasource that connects to DB2 v10 maintenance level 015 for z/OS. My local DB2 Connect Installation is v9.7.300.3885.
In my regular WebSphere, my working data source has driver type 2, database name set to the ODBC name, and port number 50000. Server name is not set (empty). Classpath and implementation class are the same as I provided in server.xml.
I have tried everything I could find. Any ideas?
Note: I can't make changes on the DB2 server, and there is no issue connecting to the database with other tools or the regular WebSphere.
Also, the server name in the WebSphere configuration is empty; only the database name is set. When I tried setting the serverName in server.xml to localhost or to the DB2 server, I got the same result.
Any help is appreciated!
Edit: updated with correct Version Information
Edit 2: As long as it works I don't care which type (2 or 4) of JDBC driver is used. I just want to point out again that type 2 currently works on my machine. I tried type 4 and got the following message:
[jcc][t4][2043][11550][3.61.65] Exception java.net.ConnectException: Error opening socket to server xxx/xxx.30.3.34 on port 50,000 with message: Connection refused: connect. ERRORCODE=-4499, SQLSTATE=08001 DSRA0010E: SQL State = 08001, Error Code = -4,499
Sorry, my previous post ate my XML. Trying again:
You will need a type 4 data source to connect to a remote database server, i.e.:
<dataSource id="xxdb" jndiName="jdbc/xxxx" type="javax.sql.XADataSource">
<properties.db2.jcc driverType="4" serverName="the.db2.host.com" portNumber="50000" user="xxxx" password="xxxx" databaseName="LOC1" currentSQLID="SYSA"/>
<jdbcDriver libraryRef="DB2JCC2Lib"/>
</dataSource>
Type 2 is only for a local z/OS connection to a database resource. Your Windows machine, being remote from z/OS, requires a type 4 connection. Type 4 requires both serverName and portNumber to be specified; these are not applicable to a type 2 connection.
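Since both errors (10061 with type 2, "Connection refused" with type 4) indicate that the TCP connection itself is failing, it can help to probe raw reachability of the DB2 host/port outside of JCC first. A minimal sketch; HOST and PORT are placeholders for your DB2 server:

```shell
# Probe raw TCP reachability before debugging driver settings; a refusal here
# means nothing is listening on that host/port or a firewall blocks the route.
HOST=127.0.0.1   # placeholder: your DB2 server
PORT=50000
if timeout 3 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  RESULT="TCP connect to $HOST:$PORT succeeded"
else
  RESULT="TCP connect to $HOST:$PORT failed (firewall, wrong host, or DB2 not listening)"
fi
echo "$RESULT"
```

If this probe fails from your Windows machine, no JDBC driver type will connect; fix the network path first.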