How to view Google Drive changes in Stackdriver - google-workspace

I've just started using Stackdriver and I'm failing to get Google Drive logs (or any G Suite logs, for that matter).
From the documentation (https://cloud.google.com/resource-manager/docs/audit-logging) I understand that I need to read the Audit Logs in Stackdriver, but I'm not sure how to connect them to it.

It's not possible to gather logs from Google Drive with Stackdriver, since Drive is a G Suite product and isn't supported.
Stackdriver works only with Google Cloud Platform, AWS, or standalone machines, and it requires installing the monitoring agent on them. On GCP VMs the agent is preinstalled by default.
The documentation lists which services Stackdriver supports.

A less granular approach is to use the GSuite Exporter (link below) to export user activity events from the Admin SDK Reports API into Stackdriver. You'll need to grant extra IAM roles for the service account to have visibility, but it is possible.
GSuite-Exporter: https://github.com/GoogleCloudPlatform/professional-services/tree/master/tools/gsuite-exporter
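For illustration only, a run that ships Drive and login activity into Stackdriver might look like the sketch below. The flag names follow the exporter's README at the time of writing and should be double-checked against the repo; the admin user, project ID, and application list are placeholders.

```shell
# Install the exporter, then sync Admin SDK activity reports into Stackdriver.
# All values (admin user, applications, project) are placeholders.
pip install gsuite-exporter

gsuite-exporter \
  --admin-user admin@example.com \
  --api reports_v1 \
  --applications login drive \
  --project-id my-logging-project \
  --exporter stackdriver_exporter.StackdriverExporter
```

The service account running this needs domain-wide delegation for the Reports API scopes as well as the Stackdriver Logging writer role on the target project.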

Related

How can I check if a resource inside Kubernetes has been deleted for some reason?

I am a junior developer currently running a service in a Kubernetes environment.
How can I check if a resource inside Kubernetes has been deleted for some reason?
As a simple example, if a deployment is deleted, I want to know which user deleted it.
Could you please tell me which log to look at?
I would also like to know how to collect these logs.
I don't have much experience yet, so I'm asking for help.
Also, if you have a reference or link, please share it. It will be very helpful to me.
Thank you:)
Start by enabling Kubernetes audit logging; there are plenty of online resources covering how to do this.
If you are on AWS and using EKS, I would suggest enabling Amazon EKS control plane logging. It streams audit and diagnostic logs to Amazon CloudWatch Logs, where they are easily accessible and useful for audit and compliance requirements. Control plane logs make it easier to secure and run your clusters and make the entire system more auditable.
As per AWS documentation:
Kubernetes API server component logs (api) – Your cluster's API server is the control plane component that exposes the Kubernetes API. For more information, see kube-apiserver in the Kubernetes documentation.
Audit (audit) – Kubernetes audit logs provide a record of the individual users, administrators, or system components that have affected your cluster. For more information, see Auditing in the Kubernetes documentation.
Authenticator (authenticator) – Authenticator logs are unique to Amazon EKS. These logs represent the control plane component that Amazon EKS uses for Kubernetes Role-Based Access Control (RBAC) authentication using IAM credentials. For more information, see Cluster authentication.
Controller manager (controllerManager) – The controller manager manages the core control loops that are shipped with Kubernetes. For more information, see kube-controller-manager in the Kubernetes documentation.
Scheduler (scheduler) – The scheduler component manages when and where to run pods in your cluster. For more information, see kube-scheduler in the Kubernetes documentation.
Reference: https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
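As a sketch (the cluster name and region below are placeholders), the audit and API server log types can be enabled on an existing cluster with the AWS CLI, after which the entries land in the cluster's CloudWatch log group:

```shell
# Enable API server and audit logs on an existing EKS cluster.
# Cluster name and region are placeholders.
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit"],"enabled":true}]}'

# Once enabled, the entries appear in the CloudWatch log group:
#   /aws/eks/my-cluster/cluster
```

From there you can search the audit stream in CloudWatch Logs Insights for delete events and the user that issued them.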

export K8S logs in managed cluster

In an unmanaged cluster, we can export the Kubernetes audit log using the AuditSink object and redirect the logs to any webhook we would like. To do so, we have to change the API server configuration.
In a managed cluster the API server is not accessible. Is there any way to send the data to a webhook as well?
If you can add an example, that would be great: I saw the Pub/Sub option on GCP, for example, and it seems that I can't use my own webhook with it.
Within a managed GKE cluster, the audit logs are sent to Stackdriver Logging. At this time, there is no way to send the logs directly from GKE to a webhook; however, there is a workaround.
You can export the GKE audit logs from Stackdriver Logging to Pub/Sub using a log sink. You will need to define which GKE audit logs you would like to export to Pub/Sub.
Once the logs are exported to Pub/Sub, you will then be able to push them from Pub/Sub using your webhook. Cloud Pub/Sub is highly programmable and you can control the data you exchange. Please take a look at this link for an example about webhooks in Cloud Pub/Sub.
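A minimal sketch of that pipeline with the gcloud CLI, assuming a project named my-project and a hypothetical webhook endpoint (after creating the sink, remember to grant its writer identity, which the create command prints, the Pub/Sub Publisher role on the topic):

```shell
# 1. Create a topic to receive the exported log entries.
gcloud pubsub topics create gke-audit-logs

# 2. Create a log sink that exports GKE audit log entries to the topic.
gcloud logging sinks create gke-audit-sink \
  pubsub.googleapis.com/projects/my-project/topics/gke-audit-logs \
  --log-filter='protoPayload.@type="type.googleapis.com/google.cloud.audit.AuditLog" AND resource.type="k8s_cluster"'

# 3. Push the messages to your webhook via a push subscription.
gcloud pubsub subscriptions create gke-audit-push \
  --topic=gke-audit-logs \
  --push-endpoint=https://example.com/gke-audit-webhook
```

Each Pub/Sub push delivers the log entry as a JSON payload in an HTTPS POST, so the webhook just needs to accept POSTs and acknowledge with a 2xx status.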

How to Send On Premises Kubernetes Logs to Stackdriver

Objective: get some logging/monitoring into Google's
Stackdriver from a Kubernetes HA cluster
that is on premises, version 1.11.2.
I have been able to send logs to Elasticsearch using Fluentd Daemonset for
Kubernetes, but the
project is not supporting Stackdriver
(issue).
That said, there is a docker image created for Stackdriver
(source),
but it does not have the daemonset. Looking at other daemonsets in this
repository, there are similarities between the different fluent.conf files,
with the exception of the Stackdriver fluent.conf file, which does not define
any environment variables.
As noted in the GitHub
issue
mentioned above there is a plugin located in the Kubernetes GitHub
here,
but it is legacy.
The docs can be found
here.
It states:
"Warning: The Stackdriver logging daemon has known issues on
platforms other than Google Kubernetes Engine. Proceed at your own risk."
Installing in this manner fails, without indication of why.
Some other notes. There is Stackdriver Kubernetes
Monitoring that clearly
states:
"Easy to get started on any cloud or on-prem"
on the front page, but
doesn't seem to explain how. This Stack Overflow
question
has someone looking to add the monitoring to his AWS cluster. It seems that it is not yet supported.
Furthermore, on the actual Google
Stackdriver it is also stated that
"Works with multiple clouds and on-premises infrastructure".
Of note, I am new to Fluentd and the Google Cloud Platform, but am pretty
familiar with administering an on-premise Kubernetes cluster.
Has anyone been able to get monitoring or logging to work on GCP from another platform? If so, what method was used?
Consider reviewing this documentation for using the BindPlane managed fluentd service from Google partner Blue Medora. It is available in Alpha to all Stackdriver users. It parses/forwards Kubernetes logs to Stackdriver, with additional payload markup.
Disclaimer: I am employed by Blue Medora.
Check out the new Stackdriver BindPlane integration, which provides on-premises log capabilities.
It is fully supported by Google and is free (other than typical Stackdriver consumption fees).
https://cloud.google.com/solutions/logging-on-premises-resources-with-stackdriver-and-blue-medora

Getting logs of ssh access to pods in kubernetes

I'm running a Kubernetes cluster on Google Cloud, and I'm trying to figure out a way to see all SSH access to the pods, whether it was done using the Google cluster management tools or via kubectl.
I want to be able to see which user account made the access, and ideally what commands they ran. I have Stackdriver logging running on all instances, which I think may already be recording these actions, but looking at the giant wall of logs, I can't figure out how to tell which of them were generated by someone SSHing in.
Is there some kind of standard labeling schema in stackdriver to denote ssh access?
So it turns out that Google Cloud has auditing enabled by default for Kubernetes, which logs many things, including access to the pods through kubectl. I was able to update my Stackdriver log filter like so:
protoPayload.@type="type.googleapis.com/google.cloud.audit.AuditLog"
And get the logs I was interested in.
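Assuming the gcloud CLI is available, a query along these lines can pull the same entries from the command line and show which principal performed each action. The method-name match on "exec" is an illustrative guess for catching `kubectl exec` sessions; adjust the filter to the operations you care about.

```shell
# Read recent Kubernetes audit-log entries and show who did what.
# The project ID and the "exec" method-name match are placeholders.
gcloud logging read \
  'protoPayload.@type="type.googleapis.com/google.cloud.audit.AuditLog"
   AND resource.type="k8s_cluster"
   AND protoPayload.methodName:"exec"' \
  --project=my-project \
  --limit=20 \
  --format='table(timestamp, protoPayload.authenticationInfo.principalEmail, protoPayload.methodName)'
```

Note that this covers API-level access such as `kubectl exec`; a direct SSH session to a node is recorded separately (e.g. in the Compute Engine audit logs and the instance's auth logs), not in the Kubernetes audit trail.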

How to go about logging in GKE without using Stackdriver

We are unable to grab logs from the containers running in our GKE cluster if Stackdriver is disabled on GCP. I understand that it is proxying stderr/stdout, but it seems rather heavy-handed to block these outputs when Stackdriver is disabled.
How does one get an EFK (Elasticsearch, Fluentd, Kibana) stack going on GKE without being billed for Stackdriver, i.e. disabling it entirely? Or is it so much a part of GKE that this is not doable?
From the article linked on a similar question regarding GCP:
"Kubernetes doesn’t specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: Stackdriver Logging for use with Google Cloud Platform, and Elasticsearch. You can find more information and instructions in the dedicated documents. Both use fluentd with custom configuration as an agent on the node." (https://kubernetes.io/docs/concepts/cluster-administration/logging/#exposing-logs-directly-from-the-application)
Perhaps our understanding of Stackdriver billing is wrong?
But we don't want to be billed for Stackdriver, as the 150 MB of logs outside of the GCP metrics is not going to be enough, and we have some expertise in setting up EFK for logging that we'd like to use.
You can disable Stackdriver logging/monitoring on your Kubernetes cluster by editing the cluster and setting "Stackdriver Logging" and "Stackdriver Monitoring" to disabled.
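The same can be done from the command line with the legacy service flags (cluster name and zone are placeholders; newer GKE versions replace these flags with --logging/--monitoring):

```shell
# Turn off the Stackdriver integrations for an existing GKE cluster.
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --logging-service=none \
  --monitoring-service=none
```

With both set to none, container stdout/stderr is still written to the node's local log files, so a self-managed fluentd DaemonSet can pick it up and ship it to Elasticsearch instead.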
I would still suggest sticking with GCP over AWS, as you get the whole Kubernetes-as-a-service experience. Amazon's solution is still a little way off, and they are planning to charge for the service in addition to the EC2 node prices (last I heard).