I made a Ktor application using Exposed for the database layer, and it works perfectly fine on my desktop. However, when I deploy it on an AWS EC2 instance I get the following error:
Exposed - Transaction attempt #0 failed: No suitable driver found for jdbc:postgresql://com.com:5432/DBName. Statement(s): null
java.sql.SQLException: No suitable driver found for jdbc:postgresql://
    at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:702)
    at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
    at org.jetbrains.exposed.sql.Database$Companion$connect$10.invoke(Database.kt:206)
    at org.jetbrains.exposed.sql.Database$Companion$connect$10.invoke(Database.kt:206)
    at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:127)
    at org.jetbrains.exposed.sql.Database$Companion$doConnect$3.invoke(Database.kt:128)
and so on.
Here's the connection:
Database.connect(DB_URL, driver = "org.postgresql.Driver", user = DB_USER, password = DB_PW)
I've tried it with both of the following driver dependencies, but no luck:
implementation("com.impossibl.pgjdbc-ng:pgjdbc-ng:0.8.9")
implementation("org.postgresql:postgresql:42.3.3")
I found potential solutions for Spring Boot (e.g. setting SPRING_DATASOURCE_DRIVER_CLASS_NAME), but I have no clue how to relate this to Ktor/Exposed, if that's even possible.
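For what it's worth, when this error only shows up in the deployed (fat) JAR, a common cause is that the PostgreSQL driver never gets registered with DriverManager, for example because shading dropped its META-INF/services entry. Here is a minimal Kotlin sketch of two possible workarounds; it assumes the org.postgresql driver and a HikariCP dependency, and is not confirmed as the fix for this particular case:
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
import org.jetbrains.exposed.sql.Database

fun connectToDb(dbUrl: String, dbUser: String, dbPw: String): Database {
    // Workaround A: force the driver class to register itself with DriverManager,
    // in case the service-loader metadata was lost during packaging.
    Class.forName("org.postgresql.Driver")

    // Workaround B: hand Exposed a DataSource instead of a URL string, which
    // bypasses the DriverManager lookup entirely (HikariCP is an assumption here).
    val config = HikariConfig().apply {
        jdbcUrl = dbUrl
        driverClassName = "org.postgresql.Driver"
        username = dbUser
        password = dbPw
        maximumPoolSize = 5
    }
    return Database.connect(HikariDataSource(config))
}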
Never mind, it works now. AWS magic, I guess.
edit:
com.impossibl.postgres.jdbc.PGDriver did not work at all, so I switched to org.postgresql.Driver, which also did nothing at first; the logs showed the same error as before.
After a while, AWS' health check switched to OK and it now seems to work just fine.
I'm getting this error while using Monstache:
Unable to create Elasticsearch client: health check timeout: no Elasticsearch node available
I applied these lines to the Monstache configuration:
elasticsearch-validate-pem-file = false
elasticsearch-healthcheck-timeout-startup = 200
elasticsearch-healthcheck-timeout = 200
However, I still encounter the error mentioned above. When I searched for it, I found that the problem is due to sniffing in the Elasticsearch client, but I don't know where and how exactly I should change that.
I should note that I followed this tutorial for the problem, but I'm still unsure about a lot of it.
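For reference, those options live in Monstache's config.toml next to the connection settings, roughly like this (the mongo-url and elasticsearch-urls values below are placeholders and an assumption about the rest of the file; the last three lines are the ones from the question):
mongo-url = "mongodb://localhost:27017"
elasticsearch-urls = ["http://localhost:9200"]
elasticsearch-validate-pem-file = false
elasticsearch-healthcheck-timeout-startup = 200
elasticsearch-healthcheck-timeout = 200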
The problem was solved when I installed Monstache on the same local server where the ELK stack was installed. Also, the MongoDB database on the remote server was changed to a single-node replica set so that Monstache could connect to it.
Try using:
elastic.SetSniff(false)
I am running into a very strange issue with Spring Boot and Spring Data: after I manually close a connection, the formerly working application seems to "forget" which schema it's using and complains about missing relations.
Here's the code snippet in question:
try (Connection connection = this.dataSource.getConnection()) {
ScriptUtils.executeSqlScript(connection, new ClassPathResource("/script.sql"));
}
This code works fine, but after it executes, the application immediately starts throwing errors like the following:
org.postgresql.util.PSQLException: ERROR: relation "some_table" does not exist
Prior to executing the code above, the application works fine (including referencing the table it later complains about). If I remove the try-with-resources block and do not close the Connection, everything also works fine, except that I've now created a resource leak. I have also tried explicitly setting the default schema (public) in the following ways:
In the JDBC URL with the currentSchema parameter
With the spring.datasource.hikari.schema parameter
With the spring.jpa.properties.hibernate.default_schema property
The last one does alleviate the issue with respect to Hibernate-managed classes, but the issue persists with native queries. I could, of course, make the schema explicit in those queries, but that doesn't seem to address the root issue. Why would closing a connection trigger this behavior?
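For reference, those three settings look roughly like this in application.properties (host, port, and database name are placeholders; the schema here is public):
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb?currentSchema=public
spring.datasource.hikari.schema=public
spring.jpa.properties.hibernate.default_schema=public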
My environment:
Spring Boot 2.5.1
PostgreSQL 12.7
Thanks to several users above who immediately saw what I did not. The script, adapted from an older pg_dump run, was indeed mucking with the search_path:
SELECT pg_catalog.set_config('search_path', '', false);
Because the connection comes from a pool, "closing" it only returns it to the pool, so the emptied search_path stuck to that pooled session and unqualified table names stopped resolving. Removing that line, and some other unnecessary ones, resolved the problem. Big duh on my part.
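If the script had to keep its search_path line for some reason, one alternative (a sketch, not what was actually done here) would be to reset the search_path on the same pooled connection before it goes back to the pool, shown here in Kotlin:
import org.springframework.core.io.ClassPathResource
import org.springframework.jdbc.datasource.init.ScriptUtils
import javax.sql.DataSource

fun runScript(dataSource: DataSource) {
    dataSource.connection.use { connection ->
        ScriptUtils.executeSqlScript(connection, ClassPathResource("/script.sql"))
        // Undo whatever the script did to the session's search_path, because
        // "closing" a pooled connection only returns it to the pool.
        connection.createStatement().use { stmt ->
            stmt.execute("SET search_path TO public")
        }
    }
}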
I have a Play Scala app and an Atlas cluster which I am trying to connect to. According to ReactiveMongo this is possible: I can add my connection string from Atlas to my app via
mongodb.uri
in my application.conf file. I have tried everything based on the instructions from ReactiveMongo and Atlas, but I am still unable to connect to the cluster. Using my mongo shell, however, I am able to connect and have access to my DB, but it simply refuses to connect via my app.
Mongo simply returns the error MongoError['No primary node is available! (Supervisor-13/Connection-14)'] and logs a warning in my console: Some options were ignored because they are not supported (yet): w, retryWrites. I am using Scala 2.12 and ReactiveMongo 0.12.6 with Play 2.6.
My connection string is mongodb+srv://<username>:<password>@my-cluster.abo25.mongodb.net/my-db?retryWrites=true&w=majority
Any info or help would be greatly appreciated.
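For reference, the relevant line in application.conf looks roughly like this (placeholders as in the question):
mongodb.uri = "mongodb+srv://<username>:<password>@my-cluster.abo25.mongodb.net/my-db?retryWrites=true&w=majority"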
Solved my problem. It turns out the +srv string format works seamlessly from ReactiveMongo version 0.17, and I was initially on 0.16. After I upgraded (and also updated my code), I was able to connect to my cluster. I also found out that one of the user credentials I was using was wrong, so that plus the upgrade got me up and running.
Can I get assistance with the error codes coming from Eclipse when I try to deploy an enterprise application on WebSphere? I followed Craig St Jean's guide. I also face another problem with configuration, i.e. WebSphere data sources using PostgreSQL. I am using a 64-bit Windows machine. The error codes are the topic of this question. I hope this question can be seen as relevant, since not many solutions exist for the first issue concerning com.ibm.ws.ffdc.FFDCFilter, and if one doesn't overcome the first, one cannot press on and attempt to solve the second. Thanks.
WebSphere logs:
The test connection operation failed for data source AppDb on server server1 at node Lenovo-PCNode01 with the following exception: java.sql.SQLException: FATAL: password authentication failed for user "listmanagerremote" DSRA0010E: SQL State = 28P01, Error Code = 0. View JVM logs for further details.
I have fixed the issues with deployment in the Eclipse Neon IDE. I think it was a result of installing the IBM WebSphere Application Server Traditional v8.0x Developer Tools for Neon and the IBM JRE.
Eclipse console final message
00000063 CompositionUn A WSVR0191I: Composition unit WebSphere:cuname=ListManager in BLA WebSphere:blaname=ListManager started.
PostgreSQL documents the 28P01 SQLState as an invalid password:
"28P01 INVALID PASSWORD invalid_password"
https://www.postgresql.org/docs/9.0/static/errcodes-appendix.html
Check your data source configuration to ensure that you have specified the correct password, or if using an authentication alias for your data source, confirm that the authentication data configuration contains the correct password, and that you have configured the data source and/or resource reference to use that authentication data.
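One way to narrow this down (a sketch, not an official WebSphere procedure) is to test the same credentials with plain JDBC outside the application server; if this also fails with SQLState 28P01, the password or user itself is wrong rather than the data source wiring. The host, port, and database name below are assumptions:
import java.sql.DriverManager

fun main() {
    // Assumed host/port/database; substitute the values from your data source definition.
    val url = "jdbc:postgresql://localhost:5432/AppDb"
    DriverManager.getConnection(url, "listmanagerremote", "password-under-test").use { conn ->
        // If this line is reached, the credentials are valid.
        println("Connected, autoCommit=${conn.autoCommit}")
    }
}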
I have successfully set up ndbcluster version 7.1.26.
This contains 2 data nodes [NDBD], 2 MySQL [MYSQLD] nodes, and one management [MGMD] node.
Replication works successfully.
My web application is deployed in JBoss 5.0.1 and uses JNDI for connection resources, which are specified in an application-specific ds.xml file as a load-balanced URL, e.g. jdbc:mysql:loadbalance://host1:port1,host2:port2/databaseName.
host1: the first mysqld node; port1: the port it is running on.
host2: the second mysqld node; port2: the port it is running on.
When both [MYSQLD] nodes are up and running, everything works fine: the cluster responds well, replicates data, and data retrieval operations work properly.
But issues arise when either of the [MYSQLD] nodes goes down. Data still gets inserted/updated/replicated, but the application is unable to retrieve data from the cluster, and the web page stays busy, apparently stuck retrieving data. As soon as the node that was down comes back up, the application responds properly again and shows the data retrieved from the cluster.
At JBoss 5.0.1 startup it showed a NullPointerException in LoadBalancingConnectionProxy.invoke(LoadBalancingConnectionProxy.java:439). Tell me if this exception plays any role in the issues explained above.
If anyone has faced issues like these and has a solution, please let me know.
Thanks and regards.
I have resolved the issue; it was a bug in the Connector/J version.
The project I am working on was using both the buggy jar, mysql-connector-java-5.0.8.jar, and the jar version in which the issue is already fixed, i.e. mysql-connector-java-5.1.13-bin.jar.
After all the searching, when I removed mysql-connector-java-5.0.8.jar my issues were resolved.
The problem was simply that the Connector/J driver was being loaded from the buggy jar.
The bug report that refers to this issue is:
http://bugs.mysql.com/bug.php?id=31053
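A quick way to confirm which jar a driver class is actually being loaded from when two Connector/J versions are on the classpath (a sketch; the driver class name assumes Connector/J 5.x):
fun main() {
    val driverClass = Class.forName("com.mysql.jdbc.Driver")
    // Prints the jar (or directory) the class was loaded from,
    // e.g. .../mysql-connector-java-5.0.8.jar
    println(driverClass.protectionDomain.codeSource?.location)
}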
Thanks for your consideration.
Are you using different user IDs and passwords for each of the hosts (host1, host2) specified in the tag (either directly or using a tag)?