How to connect MongoDB Atlas to Spring

I have a web application which is using Spring Boot to handle the backend logic. I'm trying to integrate MongoDB to track some information about the users of this webapp.
I created a database on MongoDB Atlas, and through the Mongo Shell the connection works fine. The problem comes when I try to connect with Spring. Let me show you all the details.
Inside Atlas, I added the IP address 0.0.0.0/0 (includes your current IP address) under Security > Network Access. In theory this should allow me to connect to the database from any IP address.
I then created a collection called "test".
If I click on my cluster and then on the Connect button, it asks me how I want to connect. I choose "Connect your application", and then I have to select the Driver and the Version. I choose respectively "Java" and "3.6 or later" (I'm not sure if it's the correct version; the alternatives are 3.4 and 3.3). And finally it shows me the connection string, which is:
mongodb+srv://admin:<password>@umadit-obxpb.mongodb.net/test?retryWrites=true&w=majority
To connect to Atlas with Spring I'm using this dependency
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-mongodb</artifactId>
</dependency>
Inside the application.properties file I have these two lines to configure mongo.
spring.data.mongodb.host=mongodb+srv://admin:<password>@umadit-obxpb.mongodb.net/test?retryWrites=true&w=majority
spring.data.mongodb.port=27017
Instead of the actual password I put <password>, for obvious reasons.
The only problem is that when I start Spring Boot I continue to receive this error message:
2020-02-25 16:31:25.605 INFO 41162 --- [=majority:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server mongodb+srv://admin:<password>@umadit-obxpb.mongodb.net/test?retrywrites=true&w=majority:27017
com.mongodb.MongoSocketException: mongodb+srv://admin:<password>@umadit-obxpb.mongodb.net/test?retrywrites=true&w=majority: nodename nor servname provided, or not known
at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:188) ~[mongo-java-driver-3.6.4.jar:na]
at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:59) ~[mongo-java-driver-3.6.4.jar:na]
at com.mongodb.connection.SocketStream.open(SocketStream.java:57) ~[mongo-java-driver-3.6.4.jar:na]
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:126) ~[mongo-java-driver-3.6.4.jar:na]
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:114) ~[mongo-java-driver-3.6.4.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.net.UnknownHostException: mongodb+srv://admin:<password>@umadit-obxpb.mongodb.net/test?retrywrites=true&w=majority: nodename nor servname provided, or not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_111]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[na:1.8.0_111]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[na:1.8.0_111]
at java.net.InetAddress.getAllByName0(InetAddress.java:1276) ~[na:1.8.0_111]
at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[na:1.8.0_111]
at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[na:1.8.0_111]
at java.net.InetAddress.getByName(InetAddress.java:1076) ~[na:1.8.0_111]
at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:186) ~[mongo-java-driver-3.6.4.jar:na]
... 5 common frames omitted
I don't know what to do in order to make it work. Am I missing something?
SOLUTION
As @barrypicker suggested, the problem was inside the properties file. Instead of using spring.data.mongodb.host I used spring.data.mongodb.uri. Now it works perfectly.
spring.data.mongodb.uri=mongodb+srv://admin:<password>@umadit-obxpb.mongodb.net/test?retryWrites=true&w=majority
It works even without specifying spring.data.mongodb.port.
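For anyone who wants to sanity-check the connection after switching to spring.data.mongodb.uri, a minimal sketch like the following can list the collections at startup. It assumes the Spring Boot Mongo auto-configuration is active (e.g. via spring-boot-starter-data-mongodb); the class and package names are made up for illustration.
package com.example.demo;
import java.util.Set;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;
@Configuration
public class MongoConnectionCheck {
    // Runs once at startup and prints the collections visible through the configured URI
    @Bean
    public CommandLineRunner verifyMongoConnection(MongoTemplate mongoTemplate) {
        return args -> {
            Set<String> collections = mongoTemplate.getCollectionNames();
            System.out.println("Connected to Atlas. Collections: " + collections);
        };
    }
}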

Well, I think that to connect to Mongo Atlas your application.properties should have spring.data.mongodb.uri instead of spring.data.mongodb.host:
spring.data.mongodb.uri=mongodb://<user>:<passwd>@<host>:<port>/<dbname>
I think this may work.

Another possible issue with Atlas is that your IP is not allowed; make sure you add your IP in the Network Access tab.
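As a rough sketch (placeholders only, not real credentials), the property can take either the classic form or the SRV form that Atlas generates:
# Classic connection string with explicit host and port
spring.data.mongodb.uri=mongodb://<user>:<password>@<host>:<port>/<dbname>
# Atlas SRV connection string (no port; the hosts are resolved via DNS)
spring.data.mongodb.uri=mongodb+srv://<user>:<password>@<cluster>.mongodb.net/<dbname>?retryWrites=true&w=majority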

Use this dependency inside the "pom.xml" file:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
Inside the "application.properties"
use "spring.data.mongodb.uri"
spring.data.mongodb.uri = mongodb+srv://<user>:<password>#<cluster_name>.mogodb.net/<dbname>
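Once the uri is set, a small entity plus repository is enough to exercise the connection. This is only a sketch with made-up names (AppUser, AppUserRepository, the "users" collection):
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;
// Maps to the "users" collection in the database named in the URI (hypothetical example)
@Document(collection = "users")
class AppUser {
    @Id
    private String id;
    private String name;
    // getters/setters omitted for brevity
}
// Spring Data generates the implementation at runtime
interface AppUserRepository extends MongoRepository<AppUser, String> {
}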

Related

Quarkus Datasource using Unix Socket is ignored

I am trying to connect a Quarkus reactive datasource through the Google Cloud SQL proxy.
You can run cloud_sql_proxy so that the connection is available via a Unix socket, in this way:
./cloud_sql_proxy -dir=/tmp/cloudsql -instances=proyectA:europe-west1:cloudsql-sandbox-ephemeral -credential_file=../security/proyectA-credentials.json &
2020/06/30 12:39:27 Listening on /tmp/cloudsql/proyectA:europe-west1:cloudsql-sandbox-ephemeral/.s.PGSQL.5432 for proyectA:europe-west1:cloudsql-sandbox-ephemeral
2020/06/30 12:39:27 Ready for new connections
Then I try to connect with this application.properties:
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=postgres
quarkus.datasource.password=123456
quarkus.datasource.reactive.url=postgresql:///postgres?cloudSqlInstance=/tmp/cloudsql/proyectA:europe-west1:cloudsql-sandbox-ephemeral/.s.PGSQL.5432
quarkus.datasource.jdbc=false
quarkus.datasource.reactive=true
Running the Quarkus app, I get the error:
Connection refused: localhost/127.0.0.1:5432
Caused by: java.net.ConnectException: Connection refused
But the message suggests it is trying to connect using TCP instead of the Unix socket. Is it possible to connect a Quarkus datasource via a Unix socket?
The Java code is very simple:
@Inject
private io.vertx.mutiny.pgclient.PgPool client;
(I can connect to cloud_sql_proxy using TCP without problems, but I need to configure the Unix socket to be able to deploy my Quarkus app to Cloud Run with the cloud_sql_proxy.)
To connect using a UNIX domain socket, the Reactive Pg client needs Netty native support.
First add the native transport jars to your project:
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<classifier>linux-x86_64</classifier>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-kqueue</artifactId>
<classifier>osx-x86_64</classifier>
</dependency>
Then enable native transport in the Quarkus config file:
quarkus.vertx.prefer-native-transport=true
Finally, fix the reactive db url:
quarkus.datasource.reactive.url=postgresql://:5432/postgres?host=/tmp/cloudsql/proyectA:europe-west1:cloudsql-sandbox-ephemeral
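To verify the reactive client really goes through the socket, something like the sketch below can run a trivial query at startup. It uses the Mutiny PgPool API with a made-up class name, and assumes the javax (pre-Jakarta) CDI namespace used by Quarkus 1.x:
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;
import io.quarkus.runtime.StartupEvent;
import io.vertx.mutiny.pgclient.PgPool;
@ApplicationScoped
public class ConnectionCheck {
    @Inject
    PgPool client;
    // Fires once when the application starts and logs the result of a trivial query
    void onStart(@Observes StartupEvent event) {
        client.query("SELECT 1")
              .execute()
              .subscribe().with(
                  rows -> System.out.println("Connected through the cloud_sql_proxy socket, rows: " + rows.size()),
                  failure -> System.err.println("Connection failed: " + failure.getMessage()));
    }
}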

java connection refused error with spring boot data geode remote locator

Per my question Apache Geode Web framework, I've checked through various Spring guides from here and Spring Data Geode samples from here, and written a short Spring Data Geode application, but it cannot connect to the remote GFSH-started Geode locator. The Application class is:
package cm;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication.Locator;
import org.springframework.data.gemfire.config.annotation.EnablePdx;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = {CmRequest.class})
@EnablePdx
public class CmWeb {
public static void main(String[] args) {
SpringApplication.run(CmWeb.class, args);
}
}
and in the resources directory application.properties I've set up the remote locator:
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
When I build and run the application, it discovers the locator (which it returns as the server name):
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : AutoConnectionSource discovered new locators [UAT:10334]
A couple of seconds later it throws the error:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : locator UAT:10334 is not running.
and
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[na:1.8.0_232]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204) ~[na:1.8.0_232]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_232]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[na:1.8.0_232]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_232]
at java.net.Socket.connect(Socket.java:607) ~[na:1.8.0_232]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:958) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:899) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.internal.net.SocketCreator.connect(SocketCreator.java:888) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.getServerVersion(TcpClient.java:290) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:184) ~[geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocatorUsingConnection(AutoConnectionSourceImpl.java:209) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryOneLocator(AutoConnectionSourceImpl.java:199) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.queryLocators(AutoConnectionSourceImpl.java:287) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl$UpdateLocatorListTask.run2(AutoConnectionSourceImpl.java:500) [geode-core-1.9.2.jar:na]
at org.apache.geode.cache.client.internal.PoolImpl$PoolTask.run(PoolImpl.java:1371) [geode-core-1.9.2.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_232]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_232]
at org.apache.geode.internal.ScheduledThreadPoolExecutorWithKeepAlive$DelegatingScheduledFuture.run(ScheduledThreadPoolExecutorWithKeepAlive.java:276) [geode-core-1.9.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_232]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]
After a lot of investigation I thought the problem was that the Spring Data Geode client expects a Spring Boot Geode server, according to Connecting GemFire using Spring Boot and Spring Data GemFire, so I downloaded the ListRegionsOnServerFunction jar and deployed it on the GFSH server (I have not yet restarted the server...), but that causes the same error condition.
Following Spring-Data-Gemfire - Unable to contact a Locator service. Operation either timed out or Locator does not exist, if I try to change the application.properties from
spring.data.gemfire.pool.locators=1.2.3.4[10334]
to
spring.gemfire.locators=1.2.3.4[10334]
or other variations then the app can't find the remote locator and throws:
[Timer-DEFAULT-3] o.a.g.c.c.i.AutoConnectionSourceImpl : locator localhost/127.0.0.1:10334 is not running.
Writing this question I finally found How to connect a remote-locator in Geode, and I also can't PING the GFSH server from the Spring app. However, the server bind address is set up properly for remote locator clients, and various other services and a UI using a locally built Geode Native Client for Geode v1.10 can connect. I suspect PING may be disabled across this (semi-internal) network by default. I also disabled the firewall rules for ports 10334, 1099 and 40404 to allow all traffic, but I still get the same error condition.
It turns out, from the repeated INFO messages in the Spring Boot app after the connection refused error:
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList changing locator list: loc form: LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false] ,loc to: UAT:10334
[Timer-DEFAULT-2] o.a.g.c.c.i.AutoConnectionSourceImpl : updateLocatorInLocatorList locator list from:[UAT:10334, /1.2.3.4:10334] to: [LocatorAddress [socketInetAddress=UAT:10334, hostname=UAT, isIpString=false], LocatorAddress [socketInetAddress=/1.2.3.4:10334, hostname=1.2.3.4, isIpString=true]]
and then running list clients on the server, the connection from the Spring Boot app to the Geode server v 1.10 is in fact established. Arrrgh!
It means the locator logic is working but this doesn't explain why after the first connection there's a java.net.ConnectException: Connection refused: connect error. Any ideas?
One quick note about your Spring Boot application class...
@SpringBootApplication
@ClientCacheApplication(name = "CmWeb", locators = @Locator, subscriptionEnabled = true)
@EnableGemfireRepositories(basePackageClasses = {CmRequest.class})
@EnablePdx
public class CmWeb {
public static void main(String[] args) {
SpringApplication.run(CmWeb.class, args);
}
}
The following statements are true iff you are using Spring Boot for Apache Geode (or Pivotal GemFire), which is highly recommended.
When using SBDG (by declaring the correct org.springframework.geode:spring-geode-starter dependency on your application classpath), you do not need to explicitly declare the @ClientCacheApplication, @EnableGemfireRepositories or @EnablePdx annotations, since SBDG auto-configures a ClientCache instance by default, auto-configures SD Repositories (particularly when all entity classes are in the same package or sub-package as the Spring Boot app), and auto-configures PDX by default as well.
The locators = @Locator attribute just specifies that the "DEFAULT" GemFire/Geode Pool, when configured via the ClientCacheFactory, should connect to the cluster via Locators on localhost, using the default Locator port, 10334. Therefore, this attribute is mostly useless here, and I would recommend the new @EnableClusterAware annotation from SBDG (see here).
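Under that recommendation, and assuming the SBDG starter is on the classpath, the application class could shrink to something like the following sketch (the org.springframework.geode.config.annotation package shown for @EnableClusterAware is my assumption, so verify it against your SBDG version):
package cm;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.geode.config.annotation.EnableClusterAware;
// SBDG auto-configures the ClientCache, Repositories and PDX; only cluster awareness is declared
@SpringBootApplication
@EnableClusterAware
public class CmWeb {
    public static void main(String[] args) {
        SpringApplication.run(CmWeb.class, args);
    }
}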
The other attributes can be configured via Spring Boot application.properties, like so:
spring.application.name=CmWeb
spring.data.gemfire.pool.subscription-enabled=true
TIP: You can configure subscription on individually "named" Pools, even via properties, if you are using more than 1 Pool (of connections) in your application, perhaps to route different payloads based on workflows to different "grouped" servers in your cluster, etc.
You started to configure the "DEFAULT" Pool in application.properties already with...
# Configure the client's connection Pool to the servers in the cluster
spring.data.gemfire.pool.locators=1.2.3.4[10334]
Regarding...
After a lot of investigation I thought it was that the spring data geode client expects a spring boot geode server
No, SDG does not expect the cluster (of servers) to be configured or bootstrapped with Spring at all. Using Gfsh is perfectly valid. For instance, if the ListRegionsOnServerFunction is not available, SDG falls back to other means (provided by GemFire/Geode itself, which Gfsh knows and uses).
All the messages you are seeing in the Spring Boot app logs are coming from Geode itself, i.e. nothing to do with Spring. In a nutshell, and FWIW, SDG/SBDG is a facade around the Apache Geode (Pivotal GemFire) API and Java client driver. SDG/SBDG is at the mercy of this client doing the right thing, which of course, is partially dependent on proper configuration. Still... I am really just thinking out loud now since I suspect you are already well aware of (or have discovered) all of this.
I would also say the Java client and the Native Client are not exactly an apples-to-apples comparison either. Meaning, if you developed a client using purely the Apache Geode (Pivotal GemFire) API without Spring, you'd have the exact same problem.
I have never seen a case where the first connection is established but subsequent connections get a "Connection refused", o.O #argh
Have you tried this same configuration/arrangement with older Geode versions, e.g. 1.9?
Sorry for your troubles. I will think on this more.

Spring Boot MongoDB with shared Domain Object

I have 3 services (3 different projects, e.g. ClientService, AggregationService, DataService) that share the same domain object, and only one of them (DataService) connects to MongoDB and sends the data back to the other 2 services.
All these services are Spring Boot based.
When I kept a separate Java file of the domain object in each respective project, it all worked fine, since the domain object in ClientService and AggregationService didn't have MongoDB annotations, e.g. @Document, @Field.
But when I moved the domain object into a common module, so that I don't have to maintain 3 copies, ClientService and AggregationService started throwing an exception during start-up. These services do start up and return responses correctly, but the exception still appears when they start.
Below is the domain object:
@Document(collection = "transformed_categories")
public class Category extends ResourceSupport {
@Field("id")
private String customId;
private String name;
private String type;
}
Exception:
2017-02-27 15:10:41.098 INFO 9052 --- [ main] c.c.delivery.CleintServiceApplication : Started CleintServiceApplication in 3.47 seconds (JVM running for 3.918)
2017-02-27 15:10:41.895 INFO 9052 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.connection.SocketStream.open(SocketStream.java:63) ~[mongo-java-driver-3.4.1.jar:na]
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115) ~[mongo-java-driver-3.4.1.jar:na]
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:113) ~[mongo-java-driver-3.4.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method) ~[na:1.8.0_05]
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85) ~[na:1.8.0_05]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) ~[na:1.8.0_05]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_05]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_05]
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172) ~[na:1.8.0_05]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_05]
at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_05]
at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:57) ~[mongo-java-driver-3.4.1.jar:na]
at com.mongodb.connection.SocketStream.open(SocketStream.java:58) ~[mongo-java-driver-3.4.1.jar:na]
... 3 common frames omitted
I found the exact place that is causing the issue.
The issue is caused by the Maven dependency that I added to my common-library project, where I intend to keep all my domain objects.
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-mongodb</artifactId>
<version>1.10.0.RELEASE</version>
</dependency>
If I remove this dependency then the exception doesn't occur. What should be the approach for keeping domain objects that are shared across projects?
This is an old post, but it deserves an answer.
First, microservices should not share code. When there are domain objects that need to be sent across services, we should use the DTO pattern: each microservice owns its own DTOs, and that code is not shared. This is a recommended practice; see the sketch below.
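As a sketch of that DTO approach (the names are illustrative), each consuming service defines its own plain copy of just the fields it needs, with no persistence annotations:
// Lives in ClientService (with a similar copy in AggregationService); no MongoDB annotations here
public class CategoryDto {
    private String customId;
    private String name;
    private String type;
    // getters/setters omitted for brevity
}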
Coming to the MongoDB exception, this is normal behavior for a Spring Boot application. When it sees MongoDB on the classpath and no configuration in the properties file, it will look for a MongoDB instance at localhost:27017.
So if we don't intend to connect to MongoDB from a microservice, then the MongoDB driver should not be present on its classpath.
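If the shared module does end up dragging spring-data-mongodb onto the classpath of a service that should not talk to MongoDB, one common workaround (not a substitute for cleaning up the dependency) is to switch off the Mongo auto-configuration in that service. A sketch, using a hypothetical ClientServiceApplication class:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration;
import org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration;
// Disables the MongoDB client and Spring Data Mongo auto-configuration for this service only
@SpringBootApplication(exclude = { MongoAutoConfiguration.class, MongoDataAutoConfiguration.class })
public class ClientServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ClientServiceApplication.class, args);
    }
}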

org.postgresql.util.PSQLException: Protocol error. Session setup failed

I am trying to connect to a PostgreSQL 9.3 server running on my local machine.
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>9.3-1100-jdbc41</version>
</dependency>
I also tried the above jar for the Driver class.
Caused by: org.postgresql.util.PSQLException: Protocol error. Session setup failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:510)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:173)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:136)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:29)
resulted in the above exception.
Any idea how to get around this problem?
The versions are correct. There was a typo in my connection string; the port given was wrong. After specifying the correct port, the connection was created successfully.
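For reference, a minimal JDBC check with an explicit port looks roughly like this (host, port, database and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
public class PgConnectionCheck {
    public static void main(String[] args) throws Exception {
        // 5432 is the PostgreSQL default; a wrong port here can produce errors like the one above
        String url = "jdbc:postgresql://localhost:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}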

How do I connect the NetBeans profiler to a specific remote instance

I have a remote GlassFish server that has a node agent configured. The instance I want to start in profiling mode is controlled by the node agent.
I've installed and calibrated the remote pack and I've modified my domain.xml for the specific instance as follows:
<profiler enabled="true" name="NetBeansProfiler">
<jvm-options>-agentpath:/home/glassfish/glassfish/profiler-server-6.0rc1-linux/lib/deployed/jdk16/linux/libprofilerinterface.so=/home/glassfish/glassfish/profiler-server-6.0rc1-linux/lib,5140</jvm-options>
</profiler>
Now at this point NetBeans tells you to start the domain with the --verbose option, but in my case I'm trying to start an instance, and "asadmin start-instance" doesn't support --verbose. I've checked the server.log but I'm not seeing any error, nor any message that says it's waiting, when I try to start the instance.
However, I think GlassFish is properly configured and my NetBeans setup is the issue. Where I think the issue might be is in specifying the port. If I leave the port off, it just tries to connect forever. If I put the port in, it just closes the dialog and the status shows "Inactive".
UPDATE:
It seems there might be a bug with GF2. After verifying everything and getting the server to the point where it was listening, the following exception is thrown:
Could not load Logmanager "com.sun.enterprise.server.logging.ServerLogManager"
java.lang.ClassNotFoundException: com.sun.enterprise.server.logging.ServerLogManager
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
at java.util.logging.LogManager$1.run(LogManager.java:166)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.logging.LogManager.(LogManager.java:156)
According to this URL, http://java.net/jira/browse/GLASSFISH-3256 it's a known issue and won't be fixed until GF3.
Anyway, my question was about how to connect to a specific instance and I think that was answered.
Do not include the port number in the hostname field. The port number is taken from the global profiler settings.