How to define log4j.properties for MongoDB

I have been using Wowza Streaming Engine for content streaming and have been storing the logs coming from Wowza in MySQL, using the log4j MySQL definitions. To set up MySQL I followed the instructions on the official Wowza web site. The link is below:
https://www.wowza.com/forums/content.php?130-How-to-log-to-a-mySQL-database
However, MySQL became slower day by day (sometimes even crashing) as the Wowza streaming logs accumulated in the database (millions of rows), so I decided to move the log storage to MongoDB. To that end, I wrote the log4j MongoDB statements below, expecting them to work just as they did with MySQL.
log4j.appender.MongoDB=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.MongoDB= com.mongodb.jdbc.MongoDriver
log4j.appender.MongoDB.hostname=localhost
log4j.appender.MongoDB.port= 27017
log4j.appender.MongoDB.Driver=org.mongodb.mongodb-driver
log4j.appender.MongoDB=org.log4mongo.MongoDbAppender
log4j.appender.MongoDB.databaseName=primarydb
log4j.appender.MongoDB.collectionName=wowza_log
log4j.appender.MongoDB.layout=org.log4mongo.MongoDbPatternLayout
log4j.appender.MongoDB=primarydb.wowza_log.insert({server_ip= {server_ip}, date= {date}, time= {time}, ...}
Moreover, the required MongoDB installation and service setup have also been completed correctly.
Finally, I set up Robomongo so that I could observe the collection ('wowza_log') that the Wowza streaming logs should create. However, after starting a sample mp3 stream with Wowza, the connection appears to be established, but no collection named wowza_log is created and nothing shows up in MongoDB as far as I can see from Robomongo. I am stuck at this point and would appreciate help from anyone who can get me past this problem.
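For comparison, a minimal log4mongo-java appender definition normally looks like the sketch below. It is only a sketch: it assumes a local MongoDB on the default port and reuses the database and collection names from the question, the appender class is set exactly once (the JDBCAppender and driver lines are not needed for log4mongo), and the appender still has to be referenced by whichever logger Wowza actually writes through (shown here on the root logger):
log4j.appender.MongoDB=org.log4mongo.MongoDbAppender
log4j.appender.MongoDB.hostname=localhost
log4j.appender.MongoDB.port=27017
log4j.appender.MongoDB.databaseName=primarydb
log4j.appender.MongoDB.collectionName=wowza_log
log4j.rootLogger=INFO, MongoDB
If the appender is defined but never added to the root logger or to the Wowza logger categories, no collection will ever be created, which matches the symptom described above.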

Related

Running Camunda with Spring Boot & MongoDB

Has anyone been able to get Camunda to run with Spring Boot and MongoDB?
I tried several approaches and always ran into a brick wall.
What I tried:
1. JPA / hibernate-ogm
I was able to initiate a connection to Mongo after creating my own CamundaDatasourceConfiguration and ProcessEngineConfigurationImpl.
It failed when Camunda tried to get the table metadata, and I couldn't switch that behaviour off.
2. JDBC driver for Mongo by Progress
I set up the JDBC URL and the driver class from Progress.
Camunda then gets stuck during startup and never reaches the point where Jetty is fully started, i.e. the "Jetty started on port XYZ" message in the log.
3. Camunda with Postgres plus a Mongo FDW
An FDW (foreign data wrapper) is a mechanism that lets Postgres interface with an external data source. This way an application can work with Postgres over JDBC while the FDW takes care of reading and writing the data to the external source, be it a file, MongoDB, etc. (a rough sketch of this wiring follows the question).
After realizing 1 and 2 don't work, I started working on 3.
Has anyone succeeded in doing this and can share how?
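For context on option 3, the wiring between Postgres and MongoDB through a foreign data wrapper typically follows the pattern below. This is only a sketch based on the publicly documented mongo_fdw extension; the address, credentials and the example foreign table are placeholders and do not represent Camunda's actual schema:
-- Load the wrapper and describe where MongoDB lives (placeholder host/port/credentials)
CREATE EXTENSION mongo_fdw;
CREATE SERVER mongo_server FOREIGN DATA WRAPPER mongo_fdw
    OPTIONS (address '127.0.0.1', port '27017');
CREATE USER MAPPING FOR CURRENT_USER SERVER mongo_server
    OPTIONS (username 'mongo_user', password 'mongo_pass');
-- Expose one MongoDB collection as a Postgres table (hypothetical table/collection)
CREATE FOREIGN TABLE example_tasks (_id name, task_name text)
    SERVER mongo_server
    OPTIONS (database 'camunda', collection 'tasks');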
So I ran into the same problem and decided to share my findings with you.
Currently it is not possible to run the Camunda engine with a NoSQL database.
In this Camunda forum post, one of the people at Camunda also says it is not possible to run the engine completely without a database.
The official Camunda docs also contain a list of all supported environments; currently only SQL databases are listed:
https://docs.camunda.org/manual/7.10/introduction/supported-environments/
But in some earlier blog posts they mentioned that they want to build some proof-of-concept examples using NoSQL databases, so we can expect these databases to be supported in the future, just not at the moment.
(Note: the Flowable engine is doing the same kind of proof of concept; they mentioned that they want to be able to use NoSQL databases by the end of next year.)

Not able to migrate the data from Parse to local machine

As some of you may be aware, the Parse service is shutting down in about a year, so I am following the migration process as per their tutorials. However, I am not able to migrate the data from Parse to a local database (i.e. MongoDB).
I've started the MongoDB instance locally on port 27017 and also created an admin user as part of the migration, based on these tutorials: Reference-1 & Reference-2.
But when I try to migrate the data from the Parse developer console, I get "No Reachable Servers" or a network error, and I don't understand why. I suspect the connection string that I use for this, but I am not sure; please see the following image.
I am new to MongoDB so I don't know much about this; any help would be greatly appreciated.
Since the migration tool runs at parse.com, the tool needs to be able to access your MongoDB instance over the Internet.
Since you're using a local IP (192.168.1.101), parse.com cannot connect to your IP and the transfer will time out.
Either you need to make your MongoDB reachable from the Internet, or you can - as they do in their guide - use an existing MongoDB service.
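For illustration (the private address comes from the question; credentials and database name are placeholders), the difference is between a connection string such as
mongodb://admin:yourpassword@192.168.1.101:27017/yourdb
which parse.com cannot reach, and one that points at a publicly resolvable host or a hosted MongoDB service, for example
mongodb://admin:yourpassword@mongo.example.com:27017/yourdb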

Use Cygnus to store historical data from Orion ContextBroker in a local Hadoop database

We are currently working on a project where we use Orion Context Broker to store information from different sensors and Wirecloud to show it on a web page.
We want to store historical data from these sensors in order to show it in a graph. I have looked around the FIWARE documentation, and the recommendation is to store the data in a Cosmos instance on FI-LAB, through Cygnus.
The thing is that we would like to store that historical data on a local Hadoop-based server we have in our company, not in Cosmos, because we are running this project on a local network without Internet access, and also so that the information is kept on our own server.
Is it possible to configure Cygnus to redirect the output data to my file system? If so, which files must be configured in order to achieve this?
Thank you
The answer is yes. Cygnus is meant to persist context data in any HDFS-based filesystem (such as the one used by Cosmos), so nothing special has to be done when configuring Cygnus.
If you download the latest version (0.7.0 at the moment of writing this), you will need to configure:
A cygnus_instance_default.conf file, created from cygnus_instance.conf.template. This is the instance configuration. From 0.7.1 it is possible to have multiple instance configurations that run in parallel, and they all have to be called cygnus_instance_<whatever>.conf.
An agent.conf file, created from agent.conf.template. This is the Flume-specific configuration that you will find in the README.md; a sketch of the relevant HDFS sink part is shown below.
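As an illustration only, pointing the HDFS sink at your local Hadoop instead of Cosmos comes down to setting the sink's host, port and credentials to your own HttpFS/WebHDFS endpoint in agent.conf. The property names below follow the 0.7.x agent.conf.template as published at the time; verify them against the template shipped with your Cygnus release, and treat the host, port, username and password as placeholders:
cygnusagent.sinks = hdfs-sink
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
# Local Hadoop HttpFS (14000) or WebHDFS (50070) endpoint, instead of the Cosmos one
cygnusagent.sinks.hdfs-sink.cosmos_host = local-hadoop.example.com
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
cygnusagent.sinks.hdfs-sink.cosmos_default_username = hdfsuser
cygnusagent.sinks.hdfs-sink.cosmos_default_password = hdfspassword
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs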

Client cache and sync with a remote MongoDB

I'm trying to sync data between a real MongoDB database on a remote server and local storage on the client, and I've come across this lib called Minimongo.
Here is what we're aiming for:
We're trying to sync a small portion of, e.g., a document, so that the user has something to work with in the client without pinging Mongo at every moment...
Then, if the user closes the browser and logs back in, we want to have a cached part of the document and be able to connect to Mongo to see whether the document has changed or not, and if it has, restore it from the remote MongoDB instance.
This question is also relevant and similar to what we're trying to achieve.
So how can we implement this workflow using Minimongo, or is there another lib/tool like it that can help with this process?

Cannot connect to mongodb replica set

I'm using the DataNucleus MongoDB Maven plugin and AccessPlatform to connect my Java app to MongoDB using JPA.
I've followed the instructions on http://docs.mongodb.org/manual/tutorial/deploy-replica-set/
on an Ubuntu VM, and added db1.mongo, db2.mongo and db3.mongo to the hosts file on both the guest VM and the host (Mac OS X).
I have a simple Java app connecting to the servers (as described in http://www.datanucleus.org/products/accessplatform_3_0/mongodb/support.html).
When I connect the app to the primary (connection URL: mongodb:db1.mongo:27017/ops?replicaSet=rs0) everything works just fine, but when I add the other two mongod instances to the connection URL, so that it becomes mongodb:db1.mongo:27017/ops?replicaSet=rs0,db2.mongo:27018,db3.mongo:27019, I get the exception:
com.mongodb.MongoException: can't find a master
at com.mongodb.DBTCPConnector.checkMaster(DBTCPConnector.java:503)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:236)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:216)
...
I've searched for this error, but the hits I have found concern the use of localhost/127.0.0.1. I tried to rule that out by running MongoDB on a separate VM, and thus on a non-local IP, as well as adding the names to the hosts file.
The primary goal of trying MongoDB is availability, so replication and the ability to fail over are extremely important. Transactions and consistency between nodes in case of failure are not a problem, nor are we concerned about losing an update or two once in a while, so MongoDB looks like a good alternative to use with JPA (I'm utterly fed up with MySQL :-)
Thanks in advance for your help!
I used multiple MongoDB servers when I originally wrote that support, and it worked back then. I haven't got time now, but you can look at the DataNucleus code that parses your datastore connection URL and converts it into MongoDB Java API calls. It should split the servers apart and then call "new Mongo(serverAddrs);". If it is passing them in correctly (debugger?), then the problem is possibly Mongo-specific, as opposed to something DataNucleus does for you.
Also make sure you're using v3.1.2 (or later) of datanucleus-mongodb
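If you want to check, outside DataNucleus, whether the driver itself can discover the replica set, a minimal sketch using the same Java driver calls mentioned above might look like this (host names and ports taken from the question; it assumes the 2.x driver that produced the DBTCPConnector stack trace):
import java.util.Arrays;
import java.util.List;
import com.mongodb.Mongo;
import com.mongodb.ServerAddress;

public class ReplicaSetCheck {
    public static void main(String[] args) throws Exception {
        // The three replica set members from the question.
        List<ServerAddress> serverAddrs = Arrays.asList(
                new ServerAddress("db1.mongo", 27017),
                new ServerAddress("db2.mongo", 27018),
                new ServerAddress("db3.mongo", 27019));

        // Mirrors the "new Mongo(serverAddrs)" call that DataNucleus is expected to make.
        Mongo mongo = new Mongo(serverAddrs);

        // Prints the replica set status the driver has discovered (null if none was found).
        System.out.println(mongo.getReplicaSetStatus());
        mongo.close();
    }
}
If this also fails with "can't find a master", the problem is on the MongoDB/driver side (for example, replica set members advertising host names the client cannot resolve) rather than in what DataNucleus generates.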
I think you've misformatted your MongoDB URI. Instead of this:
mongodb:db1.mongo:27017/ops?replicaSet=rs0,db2.mongo:27018,db3.mongo:27019
Do this:
mongodb:db1.mongo:27017,db2.mongo:27018,db3.mongo:27019/ops?replicaSet=rs0