Bitnami ELK in Azure only showing last event - elastic-stack

I created an ELK VM using the Bitnami template in Azure and can send events, but when I go to the Discover tab, it only shows the last event.

What filters are you using? You might be filtering to show just the last event.
Can you confirm that the events are created even though they are not shown in the Discover tab of Kibana? You can check the logs shown in Logstash by browsing to http://YOUR-SERVER-IP/logstash/
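For illustration, here is a quick way to confirm that outside of Kibana. This is only a sketch, assuming Elasticsearch listens on localhost:9200 on the Bitnami VM and Logstash writes to the default logstash-* indices:

```sh
# Assumes Elasticsearch on localhost:9200 and the default logstash-* indices.
curl -s 'http://localhost:9200/logstash-*/_count?pretty'
# A count greater than 1 means the events are being indexed, and the problem
# is on the Kibana side (e.g. the Discover time range or an active filter).
```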

Related

Grafana dashboard variable from Loki logs

I'm a beginner with Grafana Loki. We have a running instance that works without issue (we can see the logs themselves), and now we want to define some variables and monitor them in the dashboard.
Below is one of our logs, forwarded from promtail -> loki -> grafana and belonging to the job "mqtt_log".
We want to extract the "534654234" and the "1" from the log as two variables and monitor them in the dashboard.
2022-11-02 12:16:23 mqtt_log 2022-11-02 12:16:23,428 - AliyunMqtt - INFO - elevator/534654234/cabin/position/: b'{"Name":"Group.Elevators{EquipmentNumber=534654234}.Cabins:0.Position","Value":"{"Group":"1"}","Representation":"Live","TimeStamp":1667362583365}'
The problem is we don't know how to define the variables. Can anyone share some comments? Thanks.
You can't create dynamic dashboard variables from the parsed logs (only hardcoded ones). Dynamic variables can be created only from existing labels.
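For illustration, a sketch of that distinction; the job label "mqtt_log" and the regexp below are assumptions based on the sample log line above, not your actual setup:

```sh
# A dashboard variable can only be fed from existing labels, e.g. a Loki
# variable query such as:
#   label_values(job)
# Values embedded inside the log line can still be parsed per query with a
# LogQL parser stage; for example, checked from the command line with logcli:
logcli query --limit 10 '{job="mqtt_log"} | regexp `EquipmentNumber=(?P<equipment>\d+)`'
```

The extracted field can then be used in individual panel queries (e.g. for grouping), but not as a dashboard variable.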

Latest container logs not displayed in k8s dashboard

I have a running k8s cluster, and we integrated the k8s dashboard to view the logs; I am able to log in and view the app logs.
One thing to note here is that our application logs have the current date stamp appended to them, for example: application-20221101.log
I tried to sort the logs in the log location using the command below, and it displays the latest logs inside the pod:
tail -f `/bin/ls -1td /log-location/application*.log| /usr/bin/head -n1`
But once I add this to the container startup script, it just displays that day's logs, and after the date changes (i.e. it becomes 20221102), it still displays only the previous day's file,
i.e. application-20221101.log.
I need it to display the latest logs of the current date even after the date changes.
The easiest approach would be to just remove the timestamp from the log files, but that is not possible for our application.
Is there any simple way to configure this, or would some workaround be required to set this up?
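For illustration only, a minimal sketch of one possible startup-script workaround; the path is taken from the question, and it assumes a timeout command is available in the container image:

```sh
#!/bin/sh
# Sketch: instead of resolving the newest file once at startup, re-resolve it
# periodically so the tail switches over when a new dated file appears.
while true; do
  latest=$(ls -1t /log-location/application*.log 2>/dev/null | head -n1)
  if [ -n "$latest" ]; then
    # Follow the current newest file for an hour, then re-check.
    timeout 3600 tail -n +1 -F "$latest"
  else
    sleep 60
  fi
done
```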

Unable to enable Kafka feature on Azure Event Hub

I am trying to enable the Kafka feature on my Event Hub as described in the following link
https://www.codit.eu/blog/getting-familiar-with-azure-event-hubs-for-apache-kafka/?country_sel=uk
However, as you can see from my image of my Event Hub Namespace, I'm not provided with the option to Enable Kafka.
I read a past SO post where the answer was "As of now, “enable Kafka” feature is available on newly created Event Hubs." Also, the feature was only available in certain regions.
However, as you can see from my image, my Event Hub is within a region where Kafka is available, and it's newly created.
Therefore, can someone let me know why I can't see the Kafka feature in order to enable it?
Regarding the image you shared: the Kafka endpoint for the Event Hubs namespace is enabled automatically, so there is no such option during creation.
Note that Event Hubs for Kafka is available only on the Standard and Dedicated tiers. The Basic tier doesn't support Kafka on Event Hubs (see the Note section in this doc). But in the image you provided, I see you are creating a Basic tier Event Hubs namespace. Please try to create a Standard tier Event Hubs namespace instead.
After the Standard tier Event Hubs namespace is created, you can check that Kafka is enabled on it.
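For reference, a sketch of the same steps with the Azure CLI; the namespace name, resource group, and region are placeholders:

```sh
# Kafka requires the Standard (or Dedicated) tier, so create the namespace
# with --sku Standard.
az eventhubs namespace create \
  --name my-eh-namespace \
  --resource-group my-rg \
  --location uksouth \
  --sku Standard

# The namespace resource reports whether Kafka is enabled.
az eventhubs namespace show \
  --name my-eh-namespace \
  --resource-group my-rg \
  --query kafkaEnabled
```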

EFK - Have preconfigured filter by container that will appear in Kibana

I've got the EFK stack installed on kubernetes following this addon: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
What I want to achieve is having all the logs of the same pod together, and maybe some other filters as well. But I don't want to configure the filters in Kibana through the GUI; I'd like them to be preconfigured, so that the containers I care about (the containers I want to monitor) are set up in advance and installed along with Kibana, rather than needing an extra import/export step. Right after the installation, I want to be able to go to "Discover", select the pod name I'm interested in, and see all of that pod's logs.
My understanding is near zero, this being the first time I use this tech, but I believe that editing the fluentd-configmap.yml with the correct parameters should do the trick; however, none of my attempts has altered what I see in Kibana.
Am I looking in the correct place, or is this filter not meant for this use and I'm completely wasting my time? How could I build this filter in any case?
Any help, even if it is only a hint, would be appreciated.
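For illustration, one common approach is to build the searches/filters once, export them as saved objects, and then import them automatically at install time through the Kibana saved objects API instead of the GUI. A sketch, where the Kibana URL and the file name are placeholders:

```sh
curl -X POST 'http://kibana:5601/api/saved_objects/_import?overwrite=true' \
  -H 'kbn-xsrf: true' \
  --form file=@preconfigured-searches.ndjson
```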

Cloud foundry app status or health notification?

Is there a way to get a notification when a Cloud Foundry application fails or is unreachable? I mean registering to some deployed app so that, if the status of the application changes to failed or similar, I receive a notification.
On Pivotal Cloud Foundry, when an app crashes, an event is emitted through the firehose.
The PCF Metrics tile, available from Pivotal, can be deployed to your PCF foundation. PCF Metrics tracks all events for apps running on the foundation, and they are accessible to developers (through Apps Manager). I believe the Metrics tile tracks history for up to two weeks. I am not aware of any alerting capabilities in the PCF Metrics tile (I could be wrong, in which case please correct me) that will prompt you when an app crashes.
Other approaches are to implement event logging tools like Splunk, New Relic, etc. They support alerts, but you will have to build those yourself.
API monitoring tools like AppD, Apigee, and New Relic provide alerting and can notify you when the response time of an app has degraded (as in, your app has crashed). This approach is a little more involved; you may need to add an agent to your buildpack, depending on the tool you choose.
IMHO there is no such built-in feature for Cloud Foundry, but IBM Cloud offers the Availability Monitoring service to monitor apps and send out alerts in case of unavailability or other similar events. The service is part of the DevOps category in the IBM Cloud catalog.
There is also Alert Notification to manage alerts, notify the right groups via all kinds of channels, and track the alert status. For your question, you should start with Availability Monitoring and then work out how those events are handled.
You can use the cf events appname command to get a list of all events for the application; this will print out all recent events, such as application crashes.
If you run cf events appname -v, you will see the JSON REST calls the cf CLI makes to Cloud Foundry.
You can use the Cloud Foundry Java Client to write your own code to interact with Cloud Foundry.
Another thing you can do is stream your application logs to any syslog-compatible log aggregation service, for example Splunk, and then have Splunk monitor for app crash events in the log. You can read how to configure app log streaming in the docs.
This functionality is scheduled to be available with PCF Metrics 1.5 and can be seen with PWS (Pivotal Web Services) in Alpha Mode.
The functionality is available under the Monitors Tab inside of PCF Metrics (1.5).
Webhook notifications (e.g. Slack) can be configured for a number of events (including, as you discussed, crashes).
You can create a user-provided service and add a syslog drain URL, then bind the service to your application. When any event happens, the logs will be sent to the URL you have provided.
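For illustration, a sketch with the cf CLI; the drain URL, service name, and app name are placeholders:

```sh
# Create a user-provided service with a syslog drain URL and bind it to the app.
cf create-user-provided-service my-log-drain -l syslog://logs.example.com:514
cf bind-service my-app my-log-drain
cf restage my-app   # restage so the binding and drain take effect
```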