Server Event Logs: I need unstructured data of server event logs

I need server event log data, but whenever I google "data of server event logs", instead of the data itself, Google shows me approaches to handling those (server) event logs.
What I need is the actual server event log data.
Please let me know sources where I can get such data, other than Windows.
I mean something like:
a. event log data from an Amazon server,
b. event log data from a Google server,
c. event log data from an Apache server,
and so on.

Related

Sync data from REST and Websockets

We are building a chat application similar to Messenger. The required behavior:
The user logs in.
The user should see the last N messages and should be able to load older messages.
New messages should be appended as they arrive.
My solution:
I would like to use WebSockets for this purpose in combination with REST. My idea is that the client application decides, by message ID, which messages it needs. So REST would be used for the initial fetch of messages and for fetching older messages.
New messages would be received via WebSockets.
Possible issue I need to handle:
The application starts subscribing to the WebSocket channel for new messages and sends a request for old messages without an initial message ID.
There is a chance that after the GET request is made a new message arrives and is stored in the DB.
The client application has already started subscribing to the WebSocket channel, so the message is received via WebSockets.
The GET request doesn't know about this message, so the last N messages it fetches will also contain it; the client application ends up with a duplicate record and has to filter these messages.
Can you give me advice on whether there is an elegant way to handle this case? Thank you.
I would approach your task with the following in mind:
The client application should only know the topic it listens to, not the ID of the message from which to start listening.
It is up to the server to decide what to return (and time should always be tracked server-side).
The WebSocket is used as a transport for STOMP (simply to not reinvent the wheel). The WebSocket connection can be opened as soon as the client application loads rather than when it enters the "listen for messages" state, but the topic subscription should be performed only when necessary.
You can always send the GET request and initiate the STOMP subscription simultaneously (almost simultaneously, with a delay of a couple of nanoseconds), and they should always be processed in different promises. But I would order them as follows: first, the STOMP subscription is initiated, and a specific message carrying the timestamp of the start of the subscription is delivered on subscription; second, a REST request is performed to get the previous 10-100 messages for the topic prior to that timestamp (received via STOMP).
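A minimal sketch of that ordering in TypeScript, assuming @stomp/stompjs on the client, a hypothetical "subscribed" frame that the server sends with the subscription timestamp, and a hypothetical REST endpoint /api/rooms/<roomId>/messages; none of these names come from an actual backend, so adjust them to yours:

```typescript
import { Client, IMessage } from '@stomp/stompjs';

interface ChatMessage { id: string; text: string; sentAt: number; }

function openRoom(roomId: string, onMessage: (m: ChatMessage) => void): void {
  const client = new Client({ brokerURL: 'ws://example.com/ws' }); // placeholder URL

  client.onConnect = () => {
    // Subscribe first, so nothing published after this moment is missed.
    client.subscribe(`/topic/chat.${roomId}`, async (frame: IMessage) => {
      const payload = JSON.parse(frame.body);

      if (payload.type === 'subscribed') {
        // Hypothetical "subscription started" frame carrying the server
        // timestamp; history strictly before it is fetched over REST.
        const res = await fetch(
          `/api/rooms/${roomId}/messages?before=${payload.timestamp}&limit=100`
        );
        const history: ChatMessage[] = await res.json();
        history.forEach(onMessage);
      } else {
        // Everything after the subscription moment arrives over STOMP.
        onMessage(payload as ChatMessage);
      }
    });
  };

  client.activate();
}
```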
Getting the last 10 messages (those prior to the subscription moment) could be delivered either via REST or via STOMP: you can always react to a subscription event on the server side and deliver client-specific messages.
Regarding the problem of identical messages arriving over different "data channels", it is easily resolved: your client (hopefully not jQuery, but rather Angular, React, Vue, or anything else) stores all the data in a single collection in a controller, and filtering and checking by message ID so that only unique entries are stored is easy.
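A small de-duplication sketch, assuming every message carries a unique id field (the message shape is an assumption, not tied to a specific backend):

```typescript
interface ChatMessage { id: string; text: string; sentAt: number; }

class MessageStore {
  private byId = new Map<string, ChatMessage>();

  // Returns true only if the message was not seen before, so the UI appends
  // it exactly once no matter whether it came via REST or STOMP.
  add(message: ChatMessage): boolean {
    if (this.byId.has(message.id)) return false;
    this.byId.set(message.id, message);
    return true;
  }

  // Messages sorted by server timestamp, ready for rendering.
  all(): ChatMessage[] {
    return [...this.byId.values()].sort((a, b) => a.sentAt - b.sentAt);
  }
}
```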
BUT if your system will produce hundreds of thousands of messages per second, I guess HTTP-based protocols are not your choice in this case.

How to choose which events are sent to the user on an occasionally connected system using CQRS with event sourcing?

I am building a web app in which users can edit and share notes. Users are connected to notes with roles (owner, read, read-write). This is an occasionally connected system, so I chose to do the syncing using CQRS and event sourcing. Following Greg Young's presentation [36:20 - 38:40], the flow would be as follows:
The client makes changes while offline.
The client connects to the Internet.
The "store and forward" sends the events that occurred while the client was offline.
The client compares the local events with the received events and does a merge, deciding which commands to keep, then updates the local view model (a sketch of this step follows the list).
The client sends the commands created offline to the server.
The server executes the commands and generates events that are stored in the event store.
The "store and forward" holds the events each user is interested in until the user comes back online.
The question is: How does the "store and forward" decide which events should be sent to each user?
Obviously sending all events would compromise the security of other users.
Since your client knows which aggregates it displays, it can simply ask the backend: "hey, are there events for aggregateIds [...] since [timestamp]?".
This is how the reSolve framework keeps the UI reactive: the client subscribes to events for a particular aggregateId and receives them in real time via WebSockets.
So one answer to your question could be "let the user ask for the events (aggregateIds) he is interested in".
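As a rough sketch of that request (the endpoint, payload shape, and field names are assumptions for illustration, not the reSolve API):

```typescript
interface StoredEvent {
  aggregateId: string;
  type: string;
  timestamp: number;
  payload: unknown;
}

// Ask the backend only for events on the aggregates this client displays,
// newer than the client's last sync point.
async function fetchEventsSince(
  aggregateIds: string[],
  since: number
): Promise<StoredEvent[]> {
  const res = await fetch('/api/events/query', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ aggregateIds, since }),
  });
  if (!res.ok) throw new Error(`Event query failed: ${res.status}`);
  return res.json();
}

// Usage: on reconnect, request events for the notes shown in the UI and
// apply them to the local view model (applyToLocalViewModel is hypothetical).
// fetchEventsSince(['note-1', 'note-7'], lastSyncTimestamp)
//   .then((events) => events.forEach(applyToLocalViewModel));
```

The server would still need to enforce authorization on that query, so a user only receives events for aggregates they are allowed to see.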

Email alert on AWS for webservice health

I have deployed a web service on AWS EC2 instances.
I have also implemented a REST call, /getStatus, which returns the status of the modules in my service in JSON format: connection status of the DB, ActiveMQ cache status, etc.
I want a way to create an automatic email trigger that sends a mail when any issue is found in the response of the /getStatus REST call.
I am looking at whether this is possible using CloudWatch, but any other suggestions are welcome.
One solution is to make the endpoint return an HTTP status code indicating that something isn't correct (like a 500) and then set up a Route53 Health Check with e-mail notifications (using SNS).
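A small sketch of that idea, assuming an Express service; checkDb and checkActiveMqCache are hypothetical placeholders for the real module probes:

```typescript
import express from 'express';

// Hypothetical module checks -- replace with the real DB / ActiveMQ probes.
async function checkDb(): Promise<boolean> { return true; }
async function checkActiveMqCache(): Promise<boolean> { return true; }

const app = express();

app.get('/getStatus', async (_req, res) => {
  const status = {
    db: (await checkDb()) ? 'OK' : 'DOWN',
    activeMqCache: (await checkActiveMqCache()) ? 'OK' : 'DOWN',
  };
  const healthy = Object.values(status).every((s) => s === 'OK');

  // Route 53 treats 2xx/3xx as healthy, so return 500 when anything is down.
  res.status(healthy ? 200 : 500).json(status);
});

app.listen(3000);
```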
The basic procedure for configuring email alerts is pretty straightforward. Use this flowchart to get started.
If you want detailed instructions, this guide covers how to set up AWS email alerts on resource status changes and includes a few additional steps to refine the reports so they are a bit more user-friendly and sent directly to a third-party messenger service.
The workflow looks like this:
1. Create a Route 53 Health Check;
2. Route 53 initializes Health Checker nodes in various regions;
3. The Health Checkers ping the specified URL;
4a. Status is OK if a TCP connection is established within 10 seconds and an HTTP status code 2xx or 3xx is returned within 2 seconds;
OR
4b. Status is FAILURE otherwise: the TCP connection fails or times out, the HTTP status code is 4xx or 5xx, or the page is too slow (yes, a slow 200 response can cause a failure);
5. The Health Checker nodes retry the endpoint as configured;
6. A CloudWatch alarm is triggered on a Health Check status change;
7. The alarm is delivered to an AWS SNS topic;
8. AWS SNS notifies the topic subscribers.
Advanced configuration may be applied to enhance the notification contents and delivery method per the guide above; a code sketch of step 1 follows below.
I work for the team that develops Axibase Time Series Database (atsd).
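In case you prefer code over the console for step 1, here is a rough sketch with the AWS SDK for JavaScript v3; the domain, path, and thresholds are placeholders, and the CloudWatch alarm and SNS topic from the later steps still have to be configured separately:

```typescript
import {
  Route53Client,
  CreateHealthCheckCommand,
} from '@aws-sdk/client-route-53';

const route53 = new Route53Client({});

async function createStatusHealthCheck(): Promise<void> {
  const result = await route53.send(
    new CreateHealthCheckCommand({
      // Any unique string; Route 53 uses it to de-duplicate retried requests.
      CallerReference: `getstatus-check-${Date.now()}`,
      HealthCheckConfig: {
        Type: 'HTTPS',
        FullyQualifiedDomainName: 'example.com', // placeholder
        Port: 443,
        ResourcePath: '/getStatus',
        RequestInterval: 30, // seconds between checks
        FailureThreshold: 3, // consecutive failures before "unhealthy"
      },
    })
  );
  console.log('Created health check', result.HealthCheck?.Id);
}

createStatusHealthCheck().catch(console.error);
```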
I would suggest a CloudWatch Events rule that runs on a schedule you decide (e.g. every 5 minutes).
The rule would call a Lambda function, which makes the /getStatus call and decides whether an email needs to be sent; if it does, I would further suggest AWS SES to send a custom-formatted email with the appropriate alerts to the person(s) who are supposed to get them.
Using the above tools would be 'serverless', would cost very little to nothing, and has the benefit of not running on an instance you have to worry about.
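A rough sketch of such a Lambda, assuming a Node.js 18+ runtime (global fetch) and the AWS SDK v3 SES client; the URL, email addresses, and the shape of the /getStatus response are placeholders:

```typescript
import { SESClient, SendEmailCommand } from '@aws-sdk/client-ses';

const ses = new SESClient({});
const STATUS_URL = 'https://example.com/getStatus'; // placeholder
const ALERT_TO = 'ops-team@example.com';            // placeholder
const ALERT_FROM = 'alerts@example.com';            // must be verified in SES

export const handler = async (): Promise<void> => {
  let problems: string[] = [];
  try {
    const res = await fetch(STATUS_URL);
    // Assumed response shape: { "db": "OK", "activeMqCache": "OK", ... }
    const status: Record<string, string> = await res.json();
    problems = Object.entries(status)
      .filter(([, state]) => state !== 'OK')
      .map(([module, state]) => `${module}: ${state}`);
  } catch (err) {
    problems = [`/getStatus unreachable: ${(err as Error).message}`];
  }

  if (problems.length === 0) return; // everything healthy, no email needed

  await ses.send(
    new SendEmailCommand({
      Source: ALERT_FROM,
      Destination: { ToAddresses: [ALERT_TO] },
      Message: {
        Subject: { Data: 'Web service health alert' },
        Body: { Text: { Data: problems.join('\n') } },
      },
    })
  );
};
```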

SNMP Messages for failed logon with a PostgreSQL Log

Currently my application runs and inserts events into a protected PostgreSQL DB. That's cool, and it allows for auditing of user logins and such.
What I would like to do is take failed login events, once they reach a certain threshold, and report them via an SNMP message to another service (like an SNMP server). I just can't seem to wrap my head around how.
I thought of maybe using a POST to a failure page and, inside that PHP script, querying PostgreSQL for events by user and by time, but it seems brutal. Maybe Python? I have options, but I can't think of a good implementation. Help?
I would suggest the following approach:
an ON INSERT trigger on the table where login attempts are logged checks the number of attempts, and then
when the login-attempt threshold is reached, it sends a NOTIFY notification;
finally, an external service LISTENs for the notification and,
upon receiving one, sends the SNMP message broadcast (a sketch of the listener follows below).
For more info see:
NOTIFY and LISTEN
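A minimal sketch of the LISTEN side in TypeScript with the node-postgres (pg) driver, assuming the trigger issues NOTIFY on a channel named failed_logins; the channel name, payload shape, and the sendSnmpTrap placeholder are assumptions:

```typescript
import { Client } from 'pg';

async function main(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // React to notifications raised by the ON INSERT trigger.
  client.on('notification', (msg) => {
    // The payload could carry the user and attempt count, e.g.
    // '{"user":"alice","attempts":5}' -- it depends on how the trigger builds it.
    sendSnmpTrap(msg.payload ?? '');
  });

  await client.query('LISTEN failed_logins');
  console.log('Waiting for failed-login notifications...');
}

// Placeholder: wire this up to the SNMP client of your choice
// (for example the net-snmp npm package) to emit the actual trap.
function sendSnmpTrap(details: string): void {
  console.log(`Would send SNMP trap: ${details}`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```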

Delay in message archiving for Monitoring Service plugin

I am using the Monitoring Service plugin to save one-to-one chats in the DB (MySQL), but I observe a delay of about one minute between the time a message is received and the time it gets stored in the DB.
If the user refreshes the page, the latest messages are not displayed.
How can I avoid this delay?