Graylog2 requires both Elasticsearch and MongoDB, while Logstash uses only Elasticsearch for persisting and searching the logs. What does MongoDB provide in Graylog2?
Graylog2 uses MongoDB for the web interface entities: streams, alerts, users, settings, cached stream counts, etc. Pretty much everything you see and edit in the web interface, except for the logs themselves.
We definitely want to use separate databases, since the front-end team finds MongoDB Atlas robust to work with and the AWS cloud architects find DynamoDB easy to work with.
Our architecture:
Web application uses MongoDB to insert, update and retrieve data.
MongoDB is synced in real time with DynamoDB.
Background AWS services use DynamoDB for inserting, updating and retrieving data.
Changes in either DynamoDB or MongoDB are replicated to the other.
Tried so far:
We currently have a sync in place that uses DynamoDB Streams and a MongoDB Atlas trigger to listen for changes on each database and forward them to the other. We use Lambdas for this (roughly along the lines of the sketch after this list), but our replication logic is not robust yet.
AWS Database Migration Service with ongoing replication has been suggested, but we haven't been able to get it to work for our use case. Perhaps this is one option.
Third-party services like https://www.cdata.com/sync/
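To illustrate, here is a minimal sketch of what the DynamoDB-to-MongoDB half of our Lambda amounts to; the database/collection names, the id key, and the "origin" loop-guard field are simplified placeholders, not our production code:

import { DynamoDBStreamEvent } from 'aws-lambda';
import { unmarshall } from '@aws-sdk/util-dynamodb';
import { MongoClient } from 'mongodb';

const client = new MongoClient(process.env.MONGO_URI!); // reused across warm invocations

export const handler = async (event: DynamoDBStreamEvent) => {
  await client.connect(); // no-op when already connected
  const coll = client.db('app').collection('items');
  for (const record of event.Records) {
    const keys = unmarshall(record.dynamodb!.Keys as any);
    if (record.eventName === 'REMOVE') {
      await coll.deleteOne({ _id: keys.id });
      continue;
    }
    // Assumes the stream is configured with NEW_IMAGE (or NEW_AND_OLD_IMAGES).
    const item = unmarshall(record.dynamodb!.NewImage as any);
    if (item.origin === 'mongo') continue; // skip changes the Atlas trigger already replicated
    await coll.replaceOne(
      { _id: keys.id },
      { ...item, origin: 'dynamo' }, // tag so the Atlas trigger can ignore this write
      { upsert: true }
    );
  }
};

The Atlas trigger mirrors this in the other direction with the tags swapped; the hard part is making the loop-guarding and conflict handling robust, which is exactly where we are stuck.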
Ideal Fit
Ideally we would like an AWS-based solution, or failing that, a reliable third-party service.
Greatly appreciate any resources or thoughts on this! :)
For some time now I have been working with my research team using different FIWARE components. After setting up ORION, which uses MongoDB to store context data, I thought of using Grafana to show, in real time, the data sent to ORION and stored in Mongo (for example, data from a sensor). I was wondering what the best method would be to connect Grafana to ORION, considering that the free version does not offer a MongoDB connector. Currently, through a plugin in Grafana, I can call the API exposed by the context broker to retrieve the data and then build the tables and the different charts. I wanted to know if this is good practice or whether there are better ways. Thanks!
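For reference, what the plugin call boils down to is an NGSI v2 query against the broker, roughly like the following sketch (the broker URL, service headers, and entity type are placeholders):

async function fetchTemperatures() {
  const res = await fetch('http://orion:1026/v2/entities?type=Sensor&attrs=temperature', {
    headers: { 'Fiware-Service': 'myservice', 'Fiware-ServicePath': '/' },
  });
  const entities: Array<{ id: string; temperature: { value: number } }> = await res.json();
  // Orion returns only the current value per entity, so each poll yields one point per sensor.
  return entities.map((e) => ({ id: e.id, value: e.temperature.value }));
}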
I would like to store the Spring Cloud Sleuth messages that come into my Zipkin server (à la @EnableZipkinStreamServer) in a NoSQL store.
I know that the original Zipkin implementation used Cassandra, which would work for me, but I am curious about, say, MongoDB or Couchbase.
I looked in the documentation (http://cloud.spring.io/spring-cloud-sleuth/spring-cloud-sleuth.html#_zipkin_consumer) and saw that you can use spring-boot-starter-jdbc for this, and it will write into the MySQL DB instance that you define.
I don't see an API/SPI for putting this into anything else. Is there one, and what is it? I can implement the NoSQL portion myself.
I've recently come across a need to store a larger number of files in my application, and because the PaaS platform used to host the application provides Mongo, I would like to use it.
However, because I'm quite inexperienced with Mongo, I have almost no idea of the current state of Mongo-related plugins and tools for Grails. What should I use? Since I want to keep domain classes in the SQL database and use Mongo only to store related files (in this case mostly a bunch of PDFs and text documents tied to a domain instance), the MongoDB ORM plugin [1] seems too "heavy". Unfortunately, the MongoDB ORM plugin is probably the only Mongo plugin for Grails in active development at the moment.
In short, what would be the best plugin/library toolset for this purpose? The closest match to my needs I've found is the grails-mongo-file plugin [2], which is probably a bit outdated, with no further development. So far it seems that I will have to use Mongo's Java driver (or the GMongo wrapper) and write a storage service and taglib myself (which is not necessarily a bad thing).
[1] http://grails.org/plugin/mongodb
[2] https://github.com/quirklabs/grails-mongo-file
There is also the mongodb-gridfs plugin: http://grails.org/plugin/mongodb-gridfs
One thing to consider is that GridFS effectively makes two calls to Mongo: one to retrieve the file information and one to retrieve the file itself. So it might not be a good fit if your files are under 16 megabytes (the BSON document size limit), since files that small could be stored directly in a single document.
Here is a post on how to do this manually if you want to bypass plugins - http://jameswilliams.be/blog/entry/171
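To give an idea of the manual route, here is a rough GridFS sketch; it uses the Node driver's GridFSBucket for brevity (the Java driver you would call from Grails exposes an analogous API), and the database, bucket, and file names are placeholders:

import { createReadStream, createWriteStream } from 'fs';
import { MongoClient, GridFSBucket } from 'mongodb';

async function storeAndFetch() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bucket = new GridFSBucket(client.db('files'), { bucketName: 'documents' });
  // Upload: the file is streamed into chunk documents plus one metadata document.
  createReadStream('report.pdf')
    .pipe(bucket.openUploadStream('report.pdf'))
    .on('finish', () => {
      // Download: one lookup for the file document, then reads of its chunks -
      // the two calls mentioned above.
      bucket.openDownloadStreamByName('report.pdf').pipe(createWriteStream('copy.pdf'));
    });
}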
I'd like to have an admin app and a client app for my project, and ideally I'd like them to share a MongoDB collection. How can I accomplish this?
I tried creating collections with the same name in two different apps, but found that Meteor will keep the data separate. Any idea what I can do? Thanks.
export MONGO_URL=mongodb://localhost:3002/meteor
Then run the Meteor app; this changes the default database Meteor uses, so sharing databases or collections won't be a problem!
For administrative reasons, I would use a separate MongoDB server managed by myself rather than Meteor's internal MongoDB.
A reasonable question, and probably worth more discussion than this answer can give:
The MongoDB connection is handled by the Meteor application process itself, and this is - as far as I have read and understood it - part of Meteor's philosophy, which might be summarized as: one data source serves the single application it belongs to, but many clients subscribe to that application.
With this in mind, combining the "admin" and "client" roles in one application (i.e. your Meteor app) is probably the preferred way.
From a server administration view, however, Meteor handles connections such that there is always a default local data source residing in your project directory (.meteor/local/db; try meteor mongo --url to obtain the Mongo connection string while the Meteor application process is running). Nevertheless, one may specify an optional data source string for deployment purposes, as described in these deployment instructions.
So you would need to choose a somewhat creepy "local development deployment" setup for your intended configuration to work. Or you could go and hack the sources and... no, forget it. You probably want your application and clients to take advantage of e.g. realtime UI updates (publish), and that is why the Meteor application is tied to an "application data source" and vice versa for now. When connecting from another app, events that trigger changes in the model would not be transported across those applications. The MongoDB instance itself, of course, isn't aware of that.
I'm sure the core team won't expose the data source connection in a configuration section, for considered reasons, unless they extend their architecture with some kind of module concept that provides a common service layer of core Model/Collection abstractions across Meteor instances - at least one supporting awareness of publish/subscribe events.
Try this DDP test I hacked together as a way to bridge two apps (servers A and B).
Both servers can manipulate data, but data is only stored in one collection on server A.
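In essence, the bridge on server B looks something like this (the URL and collection/publication names are placeholders):

import { DDP } from 'meteor/ddp-client';
import { Mongo } from 'meteor/mongo';

// Connect to server A's DDP endpoint and bind a collection to that connection.
const remote = DDP.connect('http://localhost:3000'); // server A
export const Items = new Mongo.Collection('items', { connection: remote });

remote.subscribe('items'); // assumes server A publishes 'items'
// Inserts and updates made through Items on server B travel over DDP
// and are stored only in server A's database.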
See this link as well