Feed data to Graylog2 from MySQL tables - plugins

I am looking for a way to get data from a few specific MySQL tables into Graylog2. I have done something similar in ELK using the Logstash JDBC input plugin, as described here:
https://www.elastic.co/blog/logstash-jdbc-input-plugin
Is there a similar or better way to do it with Graylog2?

There is an output plugin for Logstash that generates messages in GELF format, if you want to use Logstash to forward events to Graylog2.
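If you would rather not run Logstash at all, a small polling script can do the same job. Below is a minimal Python sketch (an assumption for illustration, not an official Graylog plugin) that reads rows from a MySQL table and ships them to a Graylog GELF UDP input; it assumes the pymysql and graypy libraries, a GELF UDP input on port 12201, and a hypothetical events table with id and message columns.

```python
import logging

import pymysql   # MySQL client library (assumed available)
import graypy    # GELF handler for the standard logging module (assumed available)

# A normal logging handler that sends GELF messages over UDP to Graylog.
logger = logging.getLogger("mysql-to-graylog")
logger.setLevel(logging.INFO)
logger.addHandler(graypy.GELFUDPHandler("graylog.example.com", 12201))

# Hypothetical table and columns -- replace with your own schema and query.
conn = pymysql.connect(host="mysql.example.com", user="reader",
                       password="secret", database="app")
last_seen_id = 0  # persist this between runs so only new rows are fetched

with conn.cursor(pymysql.cursors.DictCursor) as cur:
    cur.execute("SELECT id, message FROM events WHERE id > %s ORDER BY id",
                (last_seen_id,))
    for row in cur.fetchall():
        # Fields passed via `extra` show up as additional GELF fields in Graylog.
        logger.info(row["message"], extra={"row_id": row["id"]})
        last_seen_id = row["id"]

conn.close()
```

Run it from cron or a loop and persist last_seen_id (a file or a state table) so each run only picks up new rows, which is roughly what the Logstash JDBC input's tracking column does.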

Related

How can I ingest text files that were created for Splunk into Kafka?

I'm evaluating Apache Kafka to ingest existing text files, and after reading articles, connector documentation, etc., I still don't know whether there is an easy way to ingest the data or whether it would require transformation or custom programming.
The background:
We have a legacy Java application (website/e-commerce). In the past, there was a Splunk server that handled several analytics.
The Splunk server is gone, but we still generate the log files that were used to ingest the data into Splunk.
The data was ingested into Splunk using Splunk forwarders; the forwarders read log files with the following format:
date="long date/time format" type="[:digit:]" data1="value 1" data2="value 2" ...
Each event is a single line. The key "type" defines the event type and the remaining key=value pairs vary with the event type.
Question:
What are my options for using these same files to send data to Apache Kafka?
Edit (2021-06-10): Found out this log format is called Logfmt (https://brandur.org/logfmt).
The events are single lines of plain text, so all you need is a StringSerializer; no transforms are needed.
If you're looking to replace the Splunk forwarder, then Filebeat or Fluentd/Fluent Bit are commonly used options for shipping data to Kafka and/or Elasticsearch rather than Splunk.
If you want to pre-parse/filter the data and write JSON or another format to Kafka, Fluentd or Logstash can handle that (see the sketch below for both variants).
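To make that concrete, here is a hedged Python sketch using the kafka-python client. The broker address, topic names, file path, and the simple key=value parser are assumptions for illustration only; in production you would more likely let Filebeat or Fluentd tail the files for you.

```python
import json
import shlex

from kafka import KafkaProducer  # kafka-python client (assumed available)

# Hypothetical broker address -- adjust to your cluster.
producer = KafkaProducer(bootstrap_servers="localhost:9092")

def parse_logfmt(line):
    """Turn 'date="..." type="1" data1="value 1"' into a dict (naive parser)."""
    return dict(token.split("=", 1) for token in shlex.split(line))

# Hypothetical log file path.
with open("/var/log/app/events.log") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        # Option 1: ship the raw line as-is (plain string value, no transform).
        producer.send("legacy-events-raw", line.encode("utf-8"))
        # Option 2: pre-parse the key=value pairs and ship JSON instead.
        producer.send("legacy-events-json",
                      json.dumps(parse_logfmt(line)).encode("utf-8"))

producer.flush()
```

This is the one-shot version; a real replacement for the forwarder would also need to tail the files and remember its offsets, which is exactly what Filebeat or Fluentd give you out of the box.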

HBase change data capture

I have a use case where I want to capture data changes (inserts/updates) in HBase tables that are populated via Kafka.
I have tried the following approach, but it doesn't seem to work: HBase change data capture.
Is there any other way I can achieve this?

Is it possible to use Grafana to write query results from SQL DBs (Postgres/MySQL) into InfluxDB?

I would like to query several different DBs using Grafana, and in order to keep a metrics history I would like to store the results in InfluxDB.
I know that I can write my own little process that runs the queries and sends the results to InfluxDB, but I wonder whether it's possible with Grafana alone?
You won't be able to use Grafana to do that. Grafana isn't really an appropriate tool for transforming/writing data. But either way, its query engine generally works with a single data source/database at a time, rather than multiple, which is what you'd need here.
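The "own little process" mentioned in the question can be quite small. Here is a rough Python sketch, assuming the psycopg2 and influxdb (1.x) client libraries plus a hypothetical orders table and measurement name; run it from cron or any scheduler and point a Grafana InfluxDB data source at the resulting measurement.

```python
from datetime import datetime, timezone

import psycopg2                      # Postgres client (assumed available)
from influxdb import InfluxDBClient  # InfluxDB 1.x client (assumed available)

# Hypothetical connection details and query -- adjust to your databases.
pg = psycopg2.connect(host="postgres.example.com", dbname="shop",
                      user="reader", password="secret")
influx = InfluxDBClient(host="influx.example.com", port=8086,
                        database="metrics")

with pg, pg.cursor() as cur:
    cur.execute("SELECT status, count(*) FROM orders GROUP BY status")
    now = datetime.now(timezone.utc).isoformat()
    points = [
        {
            "measurement": "orders_by_status",  # hypothetical measurement name
            "tags": {"status": status},
            "time": now,
            "fields": {"count": int(count)},
        }
        for status, count in cur.fetchall()
    ]

influx.write_points(points)
pg.close()
```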

How to make Superset display Druid data?

I have been trying to get Superset to display data from Druid, but have not succeeded.
In my Druid console I can clearly see a "wiki-edits" data source, but when I specified the Druid cluster and Druid data source in Superset, it did not pick up any of that data.
Has anyone been able to make this work?
Use the Refresh Druid Metadata option available in the Sources menu of Superset.
If even after that you cannot see the data source, make sure you have entered the correct coordinator host/port and broker host/port in the Druid Cluster source of Superset.
Have you tried "Scan for new datasources" in the Sources menu?

Adding user information to centralized logging with ELK stack

I am using the ELK stack (my first project) to centralize the logs of a server and visualize some real-time statistics with Kibana. The logs are stored in an ES index, and I have another index with user information (IP, name, demographics). I am trying to:
1. Join user information with the server logs, matching on IP. I want to include this information in the Kibana dashboard (e.g. to show the usernames of the connected users in real time).
2. Create new indexes with filtered and processed information (e.g. users that have visited a certain URL more than 3 times).
What is the best design to solve these problems (e.g. include the username at the Logstash stage through a filter, run scheduled jobs, ...)? If the processing task (2) gets more complex, would it be better to use MongoDB instead?
Thank you!
I recently wanted to cross-reference some log data with user data (containing IPs among other fields) and just used Elasticsearch's bulk import API. This meant extracting the data from an RDBMS, converting it to JSON, and outputting a flat file that adhered to the format expected by the bulk API (basically prefixing each document with a row that describes the index and type).
That should work for an initial import; after that, your deltas could be captured using triggers in whatever stores your user data. You might simply write them to a flat file and process it like your other logs. Another option might be the JDBC River.
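To illustrate the bulk format described above, here is a hedged Python sketch that exports user rows into an NDJSON file the bulk API accepts. The table, columns, index name, and the choice of MySQL/pymysql are assumptions for illustration; any RDBMS client would do.

```python
import json

import pymysql  # MySQL client (assumed); swap in whatever driver your RDBMS needs

# Hypothetical table holding the user information from the question.
conn = pymysql.connect(host="db.example.com", user="reader",
                       password="secret", database="app")

with conn.cursor(pymysql.cursors.DictCursor) as cur, \
        open("users_bulk.ndjson", "w") as out:
    cur.execute("SELECT ip, name, age, country FROM users")
    for row in cur.fetchall():
        # Action line: tells the bulk API which index (and, pre-ES 7, type)
        # the following document belongs to; the IP doubles as the doc id.
        out.write(json.dumps({"index": {"_index": "users", "_id": row["ip"]}}) + "\n")
        # Document line: the user record itself.
        out.write(json.dumps(row) + "\n")

conn.close()

# Load it with something like:
#   curl -s -H 'Content-Type: application/x-ndjson' \
#        -XPOST 'http://localhost:9200/_bulk' --data-binary @users_bulk.ndjson
```

With the users indexed and keyed by IP, the join in (1) can then be done either at ingest time (a Logstash filter that looks up the IP) or at query time in Kibana.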
I am also interested to know where the data is stored originally (a DB, pushed straight from a server, ...). In my case, I initially used the ELK stack to pull data back from a DB server with a batch file using BCP (running as a scheduled task), store it in a flat file, monitor the file with Logstash, and manipulate the data inside the Logstash config (grok filter). You may also consider a simple console/web application to manipulate the data before grokking it with Logstash.
If possible, I would attempt to pull your data via a SQL Server SPROC/BCP command and match the returned, complete message within Logstash. You can then store the information in a single index.
I hope this helps; I am by no means an expert, but I will be happy to answer more questions if you can be a little more specific about the details of your current data storage, namely how the data is entering Logstash. RabbitMQ is another valuable tool to look at for your input source.