I'm developing an application with Spring Boot. I already have a RestController and a RabbitMQ component that, depending on the message received, fetches some data from MongoDB and runs some logic.
I set up the database as:
MongoClient mongoClient = MongoClients.create("mongodb://localhost:27017");
MongoDatabase db = mongoClient.getDatabase("databaseName");
MongoCollection<Document> collection = db.getCollection("collectionName");
Since I'm using Spring Boot, I wanted to set this up the Spring Boot way and access it in every Spring component (the RestController and the RabbitMQ component).
I already understand that I have to put the settings in application.properties.
What I don't get is how I access the database afterwards.
Am I supposed to write a @Configuration class?
And how can I run, for example, collection.find(eq("id", userID)).first() everywhere?
Use Spring Data, which for MongoDB means Spring Data MongoDB rather than JPA. You barely have to write any code: declare a repository interface and Spring generates the implementation for you. Just follow this approach:
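As a minimal sketch of that approach with Spring Data MongoDB on a recent Spring Boot 2.x setup (the User class, its fields, and the REST endpoint below are hypothetical placeholders, not taken from the question):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Placeholder mapping for documents in "collectionName".
@Document(collection = "collectionName")
class User {
    @Id
    private String id;
    private String name;
    // getters and setters omitted
}

// Spring Data generates the implementation at startup; the connection details
// come from the spring.data.mongodb.* entries in application.properties,
// so no @Configuration class is needed for the basic case.
interface UserRepository extends MongoRepository<User, String> {
}

@RestController
class UserController {

    private final UserRepository users;

    UserController(UserRepository users) {
        this.users = users;
    }

    // Roughly the equivalent of collection.find(eq("id", userID)).first()
    @GetMapping("/users/{id}")
    public User getUser(@PathVariable String id) {
        return users.findById(id).orElse(null);
    }
}

The same UserRepository can be constructor-injected into the RabbitMQ listener component in exactly the same way.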
Is it possible to have a microservice application in JHipster with two microservices: one with a PostgreSQL backend and one with a Cassandra backend? If so, could I have pagination enabled for the PostgreSQL entities and disabled for the Cassandra entities in their respective microservices? I would disable pagination for the Cassandra microservice since I get the error "Pagination isn't allowed when the application used Cassandra". However, is there a way around this error; i.e., could my PostgreSQL microservice still use pagination even though my Cassandra microservice does not?
My best,
Amar
I had to move the paginate option into the application {...} object.
application {
  config {
    baseName geonamesservice,
    packageName com.saathratri.geonames,
    applicationType microservice,
    authenticationType oauth2,
    databaseType sql,
    prodDatabaseType postgresql,
    serverPort 8081,
    serviceDiscoveryType eureka
  }
  entities GnGeoname, GnAdmin1CodeAscii, GnAdmin2Code, GnAlternateName, GnContinentCode, GnCountryInfo, GnHierarchy, GnFeatureCode, GnIsoLanguageCode, GnPostalCode, GnTimeZone
  paginate GnGeoname, GnAdmin1CodeAscii, GnAdmin2Code, GnAlternateName, GnContinentCode, GnCountryInfo, GnHierarchy, GnFeatureCode, GnIsoLanguageCode, GnPostalCode, GnTimeZone with pagination
}
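To address the other half of the question: the Cassandra microservice gets its own application block that simply omits the paginate option, while the PostgreSQL block above keeps it. A rough sketch in the same JDL style, where the baseName, packageName, port and entity name are made-up placeholders:

// hypothetical second microservice; names are placeholders, and there is no
// paginate line because pagination isn't supported with Cassandra
application {
  config {
    baseName reservationservice,
    packageName com.saathratri.reservations,
    applicationType microservice,
    authenticationType oauth2,
    databaseType cassandra,
    serverPort 8082,
    serviceDiscoveryType eureka
  }
  entities Reservation
}

With paginate scoped to each application block like this, the PostgreSQL service should still paginate its entities while the Cassandra service should no longer trigger the "Pagination isn't allowed when the application used Cassandra" error.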
I am creating a Spring Boot app with MongoDB and scratching my head a bit over how to set up the production database configuration.
With a SQL-based database, I'd be used to setting up a data source bean like this:
@Bean
public DataSource getDataSource() {
    DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
    dataSourceBuilder.driverClassName("org.h2.Driver");
    dataSourceBuilder.url("jdbc:h2:file:C:/temp/test");
    dataSourceBuilder.username("sa");
    dataSourceBuilder.password("");
    return dataSourceBuilder.build();
}
However:
1. It doesn't seem to be needed: my local app connects to a spun-up instance of MongoDB without any explicit configuration.
2. It doesn't seem to be standard with Mongo, according to [this post][1].
I figured I'd give it a go to see if it would automagically configure itself in production, but I'm getting a DataAccessResourceFailureException. For context: I'm deploying to Heroku and using the mLab MongoDB add-on.
I have no problem getting the URL, and I can certainly put it in an environment variable, but I'm just not sure what I need to add to my app to configure it.
Set the values in your application.properties file like below:
spring.data.mongodb.database = ${SPRING_DATA_MONGODB_DATABASE}
spring.data.mongodb.host = ${SPRING_DATA_MONGODB_HOST}
spring.data.mongodb.port = ${SPRING_DATA_MONGODB_PORT}
You can use the @Value annotation to access a property in whichever Spring bean you're using:
@Value("${userBucket.path}")
private String userBucketPath;
The Externalized Configuration section of the Spring Boot docs explains all the details you might need.
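With those properties set, Spring Boot's Mongo auto-configuration builds the client and exposes a MongoTemplate bean, so you can inject it into any component instead of constructing a MongoClient yourself. A minimal sketch, where the "users" collection and the "id" field are assumptions for illustration:

import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Service;

@Service
public class UserLookupService {

    private final MongoTemplate mongoTemplate;

    public UserLookupService(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    // Returns the first document in the "users" collection whose "id" field matches.
    public Document findUser(String userId) {
        Query query = new Query(Criteria.where("id").is(userId));
        return mongoTemplate.findOne(query, Document.class, "users");
    }
}

Since you already have the connection URL in an environment variable, a single spring.data.mongodb.uri=${MONGODB_URI} entry (the exact variable name depends on your add-on) can also replace the separate host/port/database properties.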
Context:
A client sends calls to a server to execute a job. For each job, I create a new MongoClient (with Morphia):
MongoClient mongoClient = new MongoClient("000.00.000.000", 27017);
Morphia morphia = new Morphia();
Datastore ds = morphia.createDatastore(mongoClient, "myDatastore");
//operations on the datastore: save, find, update...
The question: is this good practice, or totally wrong? Should I instead create only one MongoClient / Morphia instance for the whole app, as a global, and let each job use it? (as described here)
The doc for the Mongo Java driver says:
"The Java MongoDB driver is thread safe. If you are using in a web serving environment, for example, you should create a single MongoClient instance, and you can use it in every request. The MongoClient object maintains an internal pool of connections to the database (default pool size of 10). For every request to the DB (find, insert, etc) the Java thread will obtain a connection from the pool, execute the operation, and release the connection. This means the connection (socket) used may be different each time."
So: one MongoClient per app, not one per job.
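A minimal sketch of that, reusing the Morphia calls from the question; the holder class is hypothetical and the imports assume the org.mongodb.morphia artifact, so adjust them to your Morphia version:

import com.mongodb.MongoClient;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.Morphia;

public final class MongoHolder {

    // Built once for the whole application; MongoClient is thread safe and
    // maintains its own connection pool, so every job can share it.
    private static final MongoClient CLIENT = new MongoClient("000.00.000.000", 27017);
    private static final Datastore DATASTORE = new Morphia().createDatastore(CLIENT, "myDatastore");

    private MongoHolder() {
    }

    public static Datastore datastore() {
        return DATASTORE;
    }
}

Each job then calls MongoHolder.datastore() for its save/find/update operations instead of opening a new client per request.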
I'm currently writing a Grails app using Grails 2.2.2 and MySQL, and have been deploying it to Cloudfoundry.
Until recently I've just used a single MySQL datasource for my domain, which Cloudfoundry detects and automagically creates and binds a MySQL service instance to.
I now have a requirement to store potentially large files somewhere, so I figured I'd take a look at MongoDB's GridFS. Cloudfoundry supports MongoDB, so I'd assumed Cloudfoundry would do some more magic when I deployed my app and would provide me with a MongoDB datasource as well.
Unfortunately I'm not prompted to create/bind a MongoDB service when I deploy my app, and I think this may be down to the way I'm connecting to Mongo.
I'm not using the MongoDB plugin, as this conflicts with another plugin I'm using, and in any case I don't need to persist any of my domain to Mongo - just some large files - so I'm using the Mongo java driver directly (similar to this - http://jameswilliams.be/blog/entry/171).
I'm unsure how Cloudfoundry detects that your application requires a particular datasource, but I'd assumed it would figure this out somehow from DataSource.groovy.
Mine looks like this...
environments {
    development {
        dataSource {
            driverClassName = "com.mysql.jdbc.Driver"
            dbCreate = "create-drop"
            ...
        }
        dataSourceMongo {
            host = "localhost"
            port = 27017
            dbName = "my_mongo_database_name"
            ...
        }
    }
}
Is there something I'm missing? Or do I need to manually bind the MongoDB service somehow?
Using an answer instead of comments for better formatting. :)
I guess you have already followed the steps to create the MongoDB service in Cloudfoundry as mentioned here; otherwise, that has to be done first. Also, it will be a lot easier if you use GMongo, the Groovy wrapper around the MongoDB Java driver. Refer to the GitHub source and this Mongo blog for more details.
I want to use lift-mongodb-record in my Play Scala project.
To use it, I need to configure lift-mongodb like this:
import com.mongodb.Mongo
import net.liftweb.mongodb.{MongoIdentifier, MongoDB}
object MainDb extends MongoIdentifier {
val jndiName = "main"
}
MongoDB.defineDb(MainDb, new Mongo, "test")
Where can I put the MongoDB initialisation to make this work?
It doesn't actually matter where you install MongoDB. You just need to know the host where you installed it and the port it is running on. I suppose you are running your app and MongoDB on the same local computer; in that case the host is localhost, and by default Mongo accepts connections on port 27017.
So now you have all the needed information, and you need to provide it to Lift like this:
MongoDB.defineDb(
MainDb,
new Mongo(new ServerAddress("localhost", 27017)),
"test")
It's also not strictly necessary to define a new DB identifier (MainDb in your case); you can always use DefaultMongoIdentifier unless you are accessing several DB instances.
On this page you can find more information about MongoDB configuration:
http://www.assembla.com/wiki/show/liftweb/Mongo_Configuration
I recommend using Casbah with Play and Scala: http://jaredrosoff.com/2011/05/getting-started-with-play-framework-scala-and-casbah/
Regards,
Serdar