Installing and setting up Logstash - MongoDB

I need to use Logstash to parse data from custom log files (generated by our application). I have a Tomcat server and MongoDB. After going through the documentation online, I'm still unclear as to how to use the different input sources. There is a community-maintained MongoDB plugin, but I'm unclear as to how to use it.
How should I set up Logstash, and where should I start, in order to parse logs from files?
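For orientation, a typical starting point is a single pipeline configuration with a file input, a grok filter for the custom format, and an output; the sketch below is only illustrative, and the log path, grok pattern and MongoDB connection details are placeholder assumptions (the mongodb output shown is the community-maintained logstash-output-mongodb plugin, which must be installed separately).
# Illustrative Logstash pipeline: tail a custom application log,
# parse each line with grok, and write the parsed events to MongoDB.
input {
  file {
    path => "/opt/tomcat/logs/myapp.log"   # placeholder path to the custom log file
    start_position => "beginning"
  }
}
filter {
  grok {
    # placeholder pattern; replace with one that matches your log format
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  mongodb {                                # community plugin: logstash-output-mongodb
    uri        => "mongodb://localhost:27017"
    database   => "logs"
    collection => "applogs"
  }
}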

Related

Making postgresql logs in JSON format

I am using PostgreSQL 9.5 on Ubuntu 16.04.
Is there any way in PostgreSQL to store its logs in JSON format?
I need to send the logs to Elasticsearch, which is why I need PostgreSQL to log in JSON format.
I followed this tutorial, but did not quite understand what changes it was asking me to make in the conf file, or where.
PostgreSQL listens to its community, and your voice has been heard!
The PostgreSQL 15 beta was released on May 15th, 2022. PostgreSQL 15 supports the jsonlog logging format; version 15 itself is due to be released in the third quarter of 2022.
You have to make the change below in the postgresql.conf file:
log_destination = 'jsonlog'
The log output will be written to a file, making it the third type of destination of this kind, after stderr and csvlog.
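Like csvlog, the jsonlog destination is handled by the logging collector, so the relevant postgresql.conf settings look roughly like the sketch below; the directory and filename values are just the defaults and can be adjusted.
# postgresql.conf - illustrative logging section for jsonlog (PostgreSQL 15+)
logging_collector = on            # required for file-based destinations such as jsonlog
log_destination = 'jsonlog'
log_directory = 'log'             # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'   # jsonlog writes the file with a .json suffix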
You can send these generated JSON logs to Elasticsearch or any other application for further log aggregation.
Check here for more info
Update: PostgreSQL v15 is out now. You can explore it here.
PostgreSQL itself doesn't support any formats other than plain text and CSV. When you need another format, you have to obtain (or write yourself) a special extension that hooks into the logging API to format and push PostgreSQL logs. One such extension was developed by Michael Paquier and is described in the link mentioned above. Here is the link to the source code: https://github.com/michaelpq/pg_plugins/tree/master/jsonlog . You have to compile this extension like any other PostgreSQL extension (the code is in C), and then you can use it.
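As a rough sketch of how the module is typically built and enabled (assuming the pg_config of your server is on the PATH; the clone location is just an example):
# build and install the jsonlog module from the pg_plugins tree
git clone https://github.com/michaelpq/pg_plugins.git
cd pg_plugins/jsonlog
make && make install              # builds against the server found via pg_config
# then load it at server start by adding to postgresql.conf:
#   shared_preload_libraries = 'jsonlog'
# and restart PostgreSQL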
As I understand it, your problem statement is that you want to push PostgreSQL logs to Elasticsearch.
For this I would recommend using Filebeat, where you can simply enable the PostgreSQL module and set the log path. Filebeat then starts reading the log files and pushes them to Elasticsearch.
You can visualize your data in Kibana with a ready-made dashboard. It is simple plug and play.
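As a sketch, the steps look roughly like this; the log path is just an example for PostgreSQL 9.5 on Ubuntu, and the Elasticsearch host is a placeholder.
# enable the PostgreSQL module
filebeat modules enable postgresql

# modules.d/postgresql.yml - point the module at the PostgreSQL log files
- module: postgresql
  log:
    enabled: true
    var.paths: ["/var/lib/postgresql/9.5/main/pg_log/*.log"]

# filebeat.yml - ship the events to Elasticsearch
output.elasticsearch:
  hosts: ["localhost:9200"]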

how to define log4j.properties for MongoDB

I have been using Wowza Streaming Engine for content streaming and have been storing the logs coming from Wowza in MySQL, with the help of the log4j MySQL definitions. To set up MySQL, I followed the instructions on the official Wowza web site. The link is below:
https://www.wowza.com/forums/content.php?130-How-to-log-to-a-mySQL-database
However, because MySQL became slower day by day (sometimes even crashing) as the Wowza streaming logs kept coming in and accumulating in the DB (millions of rows), I decided to move the log DB to MongoDB. Accordingly, I used the log4j MongoDB statements below, intending them to work just as they did with MySQL:
log4j.appender.MongoDB=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.MongoDB= com.mongodb.jdbc.MongoDriver
log4j.appender.MongoDB.hostname=localhost
log4j.appender.MongoDB.port= 27017
log4j.appender.MongoDB.Driver=org.mongodb.mongodb-driver
log4j.appender.MongoDB=org.log4mongo.MongoDbAppender
log4j.appender.MongoDB.databaseName=primarydb
log4j.appender.MongoDB.collectionName=wowza_log
log4j.appender.MongoDB.layout=org.log4mongo.MongoDbPatternLayout
log4j.appender.MongoDB=primarydb.wowza_log.insert({server_ip= {server_ip}, date= {date}, time= {time}, ...}
Moreover, the necessary MongoDB setup and service setup steps have also been completed correctly.
Finally, I set up RoboMongo so I could observe the collection ('wowza_log') that should be created by the Wowza streaming logs. However, after starting a sample mp3 stream with Wowza, the connection seems to be established, but no collection named wowza_log is created and nothing happens in MongoDB as far as I can see from RoboMongo. I am stuck at this point and wonder whether someone can help me get rid of this problem.
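For reference, a log4mongo-based appender is normally configured on its own, without the JDBCAppender and driver lines; a minimal sketch, assuming log4mongo-java and the MongoDB Java driver are on Wowza's classpath and reusing the database and collection names from above, would be:
# log4j.properties - minimal MongoDbAppender sketch (attach "MongoDB" to whatever logger Wowza already defines)
log4j.rootLogger=INFO, MongoDB
log4j.appender.MongoDB=org.log4mongo.MongoDbAppender
log4j.appender.MongoDB.hostname=localhost
log4j.appender.MongoDB.port=27017
log4j.appender.MongoDB.databaseName=primarydb
log4j.appender.MongoDB.collectionName=wowza_log
Custom document fields such as server_ip, date and time are what MongoDbPatternLayout and its conversion pattern are for; an insert statement cannot be expressed in log4j.properties.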

Use Cygnus to store historical data from Orion ContextBroker in a local Hadoop database

We are currently working on a project where we use Orion Context Broker to store information from different sensors and Wirecloud to show it on a web page.
We want to store historical data from these sensors in order to show it in a graph. I have looked around the FIWARE documentation, and it recommends storing the data in a Cosmos instance on FI-LAB, through Cygnus.
The thing is that we would like to store that historical data on a local Hadoop-based server we have in our company, not in Cosmos, because we are running this project on a local network without internet access, and also so that the information is stored on our own server.
Is it possible to configure Cygnus to redirect the output data to my file system? If so, which files must be configured to achieve this?
Thank you
The answer is yes. Cygnus is meant to persist context data in any HDFS-based filesystem (such as the one used by Cosmos), so nothing special has to be done when configuring Cygnus.
If you download the latest version (0.7.0 at the moment of writing this), you will need to configure:
A cygnus_instance_default.conf file, created from cygnus_instance.conf.template. This is the instance configuration. From 0.7.1 it is possible to have multiple instance configurations that run in parallel, and they all have to be called cygnus_instance_<whatever>.conf.
An agent.conf file, created from agent.conf.template. This is the Flume-specific configuration, which is described in the README.md.
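For illustration only, the HDFS sink part of agent.conf ends up looking something like the fragment below; the property names follow the 0.7.x template, so take the authoritative names from the agent.conf.template shipped with your release rather than from here, and the host, port and credentials are placeholders for your local Hadoop cluster instead of Cosmos.
# agent.conf fragment - HDFS sink pointed at a local Hadoop cluster (illustrative values)
cygnusagent.sinks = hdfs-sink
cygnusagent.channels = hdfs-channel
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
cygnusagent.sinks.hdfs-sink.cosmos_host = namenode.mycompany.local
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser
cygnusagent.sinks.hdfs-sink.cosmos_default_password = mypassword
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs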

Talend, MongoDB connection

I am facing a problem with a MongoDB connection.
I have successfully imported the tMongo components into my Talend Open Studio 5.1.1 and, after copying the mongo 1.3.jar file to the lib/java folder, my MongoDB jobs run successfully. The problem is that even if I provide a fake server path (IP) and a fake port for MongoDB, my job runs without an error and gives me 1 row with no data, and the same happens with the right IP and port.
How do I resolve this?
I think the connection is not working. As you may know, MongoDB only checks whether the connection actually works when you perform a query on it.
(It doesn't check for a successful connection when you just connect to it.)
I would instead suggest adding the MongoDB components provided in Talend for Big Data by following the steps below.
The components provided for MongoDB are:
tMongoDBInput, tMongoDBOutput, tMongoDBConnection, etc.
Or you can download the components from http://www.talendforge.org/exchange/ and search for Mongo, instead of using Talend for Big Data. But I would suggest using Talend for Big Data.
The components come in zipped format; unzip them. In Talend for Big Data you will find the components in the Components folder.
Copy these unzipped components to the installation path of TOS:
C:\Talend\TOS_DI-Win32-r84309-V5.1.1\plugins\org.talend.designer.components.localprovider_5.1.1.r84309\components
Copy the mongo-1.3.jar file from the component folder into C:\Talend\TOS_DI-Win32-r84309-V5.1.1\lib\java
On many systems you might not be able to see this folder; in that case, use administrator privileges.
Optional for a few systems: add the corresponding entries inside index.xml.
Save index.xml.
Restart TOS
Then you will be able to use them as normal components.
Cheers!
The reason the job runs without any error could be the connection/metadata you have used for the Mongo connector. It should not be possible for the job to run without any error after giving a fake path.
I guess you might have configured (re-modified) the repository connection but are using built-in metadata for the component.

Enterprise Library Database Trace Listener?

I'm using EntLib v4 for Logging and currently I'm saving the events to the default text file listener.
I would like to use an MS SQL database as my event sink, and I saw that the database trace listener is already provided, but I don't know how to create the logging database and stored procedures.
After googling around I saw that in v3 the database creation scripts were shipped with EntLib, but I can't find them in v4.
I just checked and it's in the installation for the source. On my machine it's in C:\EntLib4Src\Blocks\Logging\Src\DatabaseTraceListener\Scripts.
You can use the createloggingdb.cmd file or parse loggingdatabase.sql yourself for the relevant commands.
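Once the script has created the logging database, the text file listener in the configuration is replaced by the database trace listener; a rough sketch of that listener entry is shown below, where the database instance name and formatter name are placeholders, and the entry is best generated with the Enterprise Library configuration tool rather than written by hand.
<!-- sketch of a database trace listener entry in the <listeners> section of the loggingConfiguration -->
<add name="Database Trace Listener"
     databaseInstanceName="LoggingDatabase"
     writeLogStoredProcName="WriteLog"
     addCategoryStoredProcName="AddCategory"
     formatter="Text Formatter"
     listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Database.Configuration.FormattedDatabaseTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging.Database"
     type="Microsoft.Practices.EnterpriseLibrary.Logging.Database.FormattedDatabaseTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging.Database" />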