In an unmanaged cluster, in order to export the Kubernetes audit log we can use the AuditSink object and redirect the logs to any webhook we would like. To do so we need to change the API server configuration.
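For reference, an AuditSink in an unmanaged cluster would look roughly like this (the webhook URL is just a placeholder, and the API server has to be started with the DynamicAuditing feature gate and --audit-dynamic-configuration enabled):

apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: audit-webhook-sink
spec:
  policy:
    level: Metadata          # record request metadata for every event
    stages:
    - ResponseComplete       # emit events only once the response has completed
  webhook:
    throttle:
      qps: 10
      burst: 15
    clientConfig:
      url: "https://my-audit-webhook.example.com/events"   # placeholder webhook endpoint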
In a managed cluster the API server is not accessible. Is there any way to send the data to a webhook as well?
If you can add an example it would be great, since I saw the Pub/Sub option of GCP, for example, and it seems that I can't use my webhook.
Within a managed GKE cluster, the audit logs are sent to Stackdriver Logging. At this time, there is no way to send the logs directly from GKE to a webhook; however, there is a workaround.
You can export the GKE audit logs from Stackdriver Logging to Pub/Sub using a log sink. You will need to define which GKE audit logs you would like to export to Pub/Sub.
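As a sketch (the sink name, project, topic, and log filter below are placeholders you should adapt to the audit logs you want), the sink can be created with:

gcloud logging sinks create gke-audit-sink \
  pubsub.googleapis.com/projects/PROJECT-ID/topics/TOPIC \
  --log-filter='logName:"cloudaudit.googleapis.com" AND resource.type="k8s_cluster"'

Note that the sink's writer identity (a service account printed by the command) must be granted the Pub/Sub Publisher role on the topic before logs will flow.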
Once the logs are exported to Pub/Sub, you can then have Pub/Sub push them to your webhook. Cloud Pub/Sub is highly programmable and you can control the data you exchange. Please take a look at this link for an example about webhooks in Cloud Pub/Sub.
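A minimal sketch of that push subscription (the topic and endpoint URL are placeholders; the endpoint must be a publicly reachable HTTPS URL):

gcloud pubsub subscriptions create gke-audit-push \
  --topic=TOPIC \
  --push-endpoint=https://your-webhook.example.com/audit \
  --ack-deadline=60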
I am a junior developer currently running a service in a Kubernetes environment.
How can I check if a resource inside Kubernetes has been deleted for some reason?
As a simple example, if a deployment is deleted, I want to know which user deleted it.
Could you please tell me which log to look at?
And I would like to know how to collect these logs.
I don't have much experience yet, so I'm asking for help.
Also, if you have a reference or link, please share it. It will be very helpful to me.
Thank you:)
Start by enabling audit logging; there are lots of online resources about doing this. A minimal policy might look like the sketch below.
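As a sketch (the rules here are an assumption; tune them to what you actually need), an audit policy file that records who deleted or changed Deployments could look like:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # record full request/response for destructive operations on Deployments
  - level: RequestResponse
    verbs: ["delete", "patch", "update"]
    resources:
      - group: "apps"
        resources: ["deployments"]
  # record only metadata (who, what, when) for everything else
  - level: Metadata

On a self-managed cluster you point the API server at it with --audit-policy-file and --audit-log-path; on managed offerings the provider exposes the audit log for you (see the EKS notes below).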
If you are on AWS and using EKS I would suggest enabling "Amazon EKS control plane logging". By enabling it you get audit and diagnostic logs streamed to Amazon CloudWatch Logs, where they are more easily accessible and useful for audit and compliance requirements. Control plane logs make it easier for you to secure and run your clusters and make the entire system more auditable (see the CLI sketch after the list below).
As per AWS documentation:
Kubernetes API server component logs (api) – Your cluster's API server is the control plane component that exposes the Kubernetes API. For more information, see kube-apiserver in the Kubernetes documentation.
Audit (audit) – Kubernetes audit logs provide a record of the individual users, administrators, or system components that have affected your cluster. For more information, see Auditing in the Kubernetes documentation.
Authenticator (authenticator) – Authenticator logs are unique to Amazon EKS. These logs represent the control plane component that Amazon EKS uses for Kubernetes Role-Based Access Control (RBAC) authentication using IAM credentials. For more information, see Cluster authentication.
Controller manager (controllerManager) – The controller manager manages the core control loops that are shipped with Kubernetes. For more information, see kube-controller-manager in the Kubernetes documentation.
Scheduler (scheduler) – The scheduler component manages when and where to run pods in your cluster. For more information, see kube-scheduler in the Kubernetes documentation.
Reference: https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
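For example, a sketch of enabling all five log types with the AWS CLI (region and cluster name are placeholders):

aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'

Once enabled, the audit events (including who deleted a Deployment) should show up in the CloudWatch log group /aws/eks/my-cluster/cluster.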
I've just started using Stackdriver and I'm failing to get Google Drive logs (or any G Suite logs for that matter).
From the documentation (https://cloud.google.com/resource-manager/docs/audit-logging) I understand that I need to read the Audit Logs in Stackdriver, but I'm not sure how to connect them to Stackdriver.
It's not possible to gather any logs from Google Drive with Stackdriver, since it's a G Suite product and it's not supported.
Stackdriver works only with Google Cloud Platform, AWS, or standalone machines, and it requires installing the monitoring agent on them. In the case of GCP VMs it's preinstalled by default.
You can read in the documentation which services are supported by Stackdriver.
I think a less granular approach is to use the GSuite Exporter (link below) to export user activity events from the Admin SDK Reports API into Stackdriver. You'll need to grant extra IAM roles for the service account to have visibility, but it is possible.
GSuite-Exporter: https://github.com/GoogleCloudPlatform/professional-services/tree/master/tools/gsuite-exporter
I have created a Pub/Sub topic to which I will publish a message every time a new object is uploaded to the bucket. Now I want to create a subscription that pushes a notification to an endpoint every time a new object is uploaded to that bucket. Following the documentation, I wanted something like this:
gcloud alpha pubsub subscriptions create orderComplete \
--topic projects/PROJECT-ID/topics/TOPIC \
--push-endpoint http://localhost:5000/ENDPOINT/ \
--ack-deadline=60
However, my app is running on Kubernetes and it seems that Pub/Sub cannot reach my endpoint. Any suggestions?
As stated in the documentation:
In general, the push endpoint must be a publicly accessible HTTPS
server, presenting a valid SSL certificate signed by a certificate
authority and routable by DNS.
So you must expose your service via HTTPS using an Ingress, as described here:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
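A sketch of such an Ingress (host, service name, and port are placeholders, and the TLS secret is assumed to hold a valid certificate for the host):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pubsub-push-ingress
spec:
  tls:
    - hosts:
        - push.example.com
      secretName: push-example-tls      # certificate for the host, e.g. from cert-manager
  rules:
    - host: push.example.com
      http:
        paths:
          - path: /ENDPOINT
            pathType: Prefix
            backend:
              service:
                name: order-service     # the Service in front of your pods
                port:
                  number: 5000

The push endpoint you register with Pub/Sub would then be https://push.example.com/ENDPOINT/ instead of localhost.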
In order for Cloud Pub/Sub to push messages to your application, you need to provide a publicly accessible endpoint. In Kubernetes, this most likely means exposing a Service. With this, you should have a non-local (i.e. no “localhost”) URL to reach the pods running your binaries.
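For example, a minimal Service of type LoadBalancer (names, labels, and ports are placeholders) that gives the pods an externally reachable address:

apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  type: LoadBalancer
  selector:
    app: order-app          # must match the labels on your pods
  ports:
    - port: 80
      targetPort: 5000      # the port your container listens on

Keep in mind that Pub/Sub push requires HTTPS with a valid certificate, so TLS termination still has to happen somewhere, for example at an Ingress or inside the application.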
Before creating the Cloud Pub/Sub subscription, you should also verify your domain with the Cloud Console.
Finally, you can set your subscription to push messages by changing its configuration:
gcloud pubsub subscriptions modify-push-config mySubscription \
--push-endpoint="https://publicly-available-domain.com/push-endpoint"
Yeah, so as #jakub-bujny points out you need an SSL endpoint. So one solution, on GKE, is to use Google's managed certificates with an Ingress resource (the link shows you how).
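A rough sketch of that (the domain and resource names are placeholders): create a ManagedCertificate and reference it from the Ingress via an annotation:

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: push-endpoint-cert
spec:
  domains:
    - push.example.com

and on the Ingress:

metadata:
  annotations:
    networking.gke.io/managed-certificates: push-endpoint-cert

Provisioning takes a while, and the domain must already point at the Ingress IP before the certificate becomes active.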
I have a scenario where I need to push logs from applications running on an EKS cluster to separate CloudWatch log streams. I have followed the link below, which pushes all logs to CloudWatch using fluentd. But the issue is, it pushes logs to a single log stream only.
https://github.com/aws-samples/aws-workshop-for-kubernetes
It also pushes all the logs under /var/lib/docker/containers/*.log. How can I filter this down to only application-specific logs?
Collectord now supports AWS CloudWatch Logs (and S3/Athena/Glue). It gives you the flexibility to choose which LogGroup and LogStream you want to forward the data to (if the defaults don't work for you).
Installation instructions for CloudWatch
How you can specify LogGroup and LogStream with annotations
I highly recommend reading Setting up comprehensive centralized logging with AWS Services for Kubernetes
I have a 3-node on-prem cluster. Now I want to collect and analyze reverse proxy logs (and other Service Fabric system logs). I googled and found this article, and it says:
Refer to Collect reverse proxy events to enable collecting events from
these channels in local and Azure Service Fabric clusters.
But that link describes how to enable, configure and collect reverse proxy logs for clusters in Azure. And I don't understand how to do it on-prem.
Please, help!
Service Fabric events are just ETW events; you have the option to use the built-in mechanism to collect and forward these events to a monitoring application like Windows Azure Diagnostics, or you can build your own.
If you decide to follow the approach in the documents, it will work on Azure or on-premises; the only caveat is that on-premises it will still send the logs to Azure, but it will work the same way.
On-premises, another way is to build your own collector using EventFlow: you can configure EventFlow to collect the reverse proxy ETW events and then forward them to ELK or any other monitoring platform.
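As a rough sketch of an eventFlowConfig.json (the ETW provider name and the Elasticsearch URL are assumptions; check which providers your cluster actually emits the reverse proxy events on):

{
  "inputs": [
    {
      "type": "ETW",
      "providers": [
        { "providerName": "Microsoft-ServiceFabric-ReverseProxy" }
      ]
    }
  ],
  "outputs": [
    {
      "type": "ElasticSearch",
      "serviceUri": "http://elk.example.local:9200",
      "indexNamePrefix": "sf-reverseproxy"
    }
  ],
  "schemaVersion": "2016-08-11"
}

EventFlow runs inside your own service (or a dedicated collector process) on each node, so no Azure dependency is needed for this path.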