Kubernetes command logging on Google Cloud Platform for PCI Compliance

Using Kubernetes' kubectl, I can execute arbitrary commands on any pod, such as kubectl exec pod-id-here -c container-id -- malicious_command --steal=creditcards
Should that ever happen, I would need to be able to pull up a log saying who executed the command and what command they executed. This includes the case where they simply run /bin/bash and then steal data through the tty.
How would I see which authenticated user executed the command as well as the command they executed?

Audit logging is not currently offered, but the Kubernetes community is working to make it available in the 1.4 release, which should arrive around the end of September.
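For reference, when audit logging did land in later Kubernetes releases, it is driven by a policy file passed to the API server via --audit-policy-file (together with a log destination such as --audit-log-path). A minimal sketch, assuming you control the API server flags (fully managed GKE masters do not expose them), might look like this:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record who called exec/attach, on which pod, and when; the exec command
  # arguments are part of the request URI captured at this level.
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach"]
  # Everything else is ignored in this minimal example.
  - level: None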

There are third-party solutions that can solve the auditing issue, and if you're looking for PCI compliance, as the title implies, there are solutions that help solve the broader problem, not just auditing.
Here is a link to such a solution by Twistlock. https://info.twistlock.com/guide-to-pci-compliance-for-containers
Disclaimer: I work for Twistlock.

Related

How to wait for full cloud-init initialization before the VM is marked as running

I am currently configuring a virtual machine to work as an agent within Azure (with Ubuntu as the image), where the additional configuration runs through a cloud-init file.
Among other things, I have the 'fix' below within bootcmd and multiple steps within runcmd.
However, the machine already shows the state 'running' within the Azure portal while it is still in the cloud configuration phase (cloud_config_modules). As a result, pipelines see the machine as ready for use while not everything is installed/configured yet, and they break.
I tried a couple of things that did not have the desired effect, after which I stumbled on the following article/bug:
The proposed solution worked; however, I switched to a RHEL image and it stopped working.
I noticed this image is not using walinuxagent as the solution states but waagent, so I tried replacing that as in the example below, without any success.
bootcmd:
- mkdir -p /etc/systemd/system/waagent.service.d
- echo "[Unit]\nAfter=cloud-final.service" > /etc/systemd/system/waagent.service.d/override.conf
- sed "s/After=multi-user.target//g" /lib/systemd/system/cloud-final.service > /etc/systemd/system/cloud-final.service
- systemctl daemon-reload
After this, I also tried moving the runcmd steps into bootcmd. This resulted in a boot that took ages and eventually froze.
Since I am not that familiar with RHEL and Linux overall, I wanted to ask whether anyone has suggestions I could additionally try.
(Perhaps some other configuration to make waagent wait on cloud-final.service?)
However, the machine already shows the state 'running' while it is still in the cloud configuration phase (cloud_config_modules).
Could you please be more specific? Where did you read the machine state?
The reason I ask is that cloud-init status will report status: running until cloud-init is done running, at which point it will report status: done
What is the purpose of waiting until cloud-init is done? I'm not sure exactly what you are expecting to happen, but here are a couple of things that might help.
If you want to execute a script "at the end" of cloud-init initialization, you could put the script directly in runcmd, and if you want to wait for cloud-init in an external script you could do cloud-init status --wait, which will print a visual indicator and eventually return once cloud-init is complete.
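For example, an external gate script could be as small as this sketch (what you run after the wait is up to you):

#!/bin/sh
# Sketch: block until cloud-init has finished all of its stages, then show
# the final result before continuing with anything that needs the VM fully
# configured.
cloud-init status --wait
cloud-init status --long
# ...continue with steps that require the fully provisioned machine...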
On reasonably recent Azure Linux VM images, cloud-init rather than WALinuxAgent acts as the VM provisioner. The VM is marked provisioned by the cloud-init Azure datasource module very early during cloud-init processing (source), before any of the cloud-init modules that are configurable with user data run. WALinuxAgent is only responsible for provisioning Azure VM extensions. It does not appear to be possible to delay sending the 'VM ready' signal to Azure without modifying the VM image and patching the source code of the cloud-init Azure datasource.

Cloud-init on Google Compute Engine (GCE) with CentOS 7/8 doesn't run properly on first boot, but works fine after any other reboot

We have a CentOS 8 (tried 7 as well) image and I am adding some config to act as a router.
The issue is that, for some reason, the first time the instance is created, cloud-init doesn't read the network config we pass using the user-data metadata:
#cloud-config
network:
  version: 1
  etc...
We configure eth1 to use DHCP and get cloud-init to manage it, as well as add a route.
It works perfectly every time after the initial boot (and after a stop > start again).
To me it feels like cloud-init is not aware of the config, but when I go into the machine and run cloud-init query userdata I can see the data, and even then, if I run cloud-init clean && cloud-init init, it doesn't do anything. The same commands work fine if the machine was rebooted.
Try running cloud-init analyze show both times (at instance creation and after a subsequent reboot) and check for any differences.
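A rough way to make that comparison (a sketch; the output paths are arbitrary) is to capture the report on the first boot, again after a stop/start, and diff the two:

# Sketch: compare cloud-init's per-stage report between the first boot and
# a later boot; the file names are placeholders.
cloud-init analyze show > /var/tmp/analyze-first-boot.txt
# ...stop and start the instance, then after the second boot...
cloud-init analyze show > /var/tmp/analyze-second-boot.txt
diff /var/tmp/analyze-first-boot.txt /var/tmp/analyze-second-boot.txt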
Sadly, cloud providers somewhat abuse the abilities of cloud-init, though not entirely to their discredit. cloud-init allows for customization of vendor/user-provided configuration (who overrides what), changing the order of boot stages, etc.
This is done mostly because different cloud providers need network/provisioning/storage at different times. For example, AWS attaches storage after network (EBS only), Azure provides the VM only after storage is attached and it's natively provided as NTFS (they really do format the drive if you need anything else), etc.
These shenanigans, while understandable (datacenter infrastructure defines user availability), make cloud-init's documentation merely a suggestion for the user to investigate.
From my experience, Azure is the closest to original implementation. Possibly they haven't learned yet how to utilize the potential in their favor.
My general suggestion for any instance customization (it almost always works) is to write a script with write_files and execute it with bootcmd/runcmd, because runcmd in particular runs at the final stage and provides the best override opportunity; see the sketch below. Edit hosts, change firewall rules - most of this will not require a reboot.
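A sketch of that pattern for this router use case (the interface name, route, and script path are placeholders to adapt):

#cloud-config
# Sketch only: stage the routing script with write_files, then run it from
# runcmd during the final stage; eth1 and the route are placeholders.
write_files:
  - path: /usr/local/sbin/setup-router.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      dhclient eth1
      ip route add 10.0.0.0/8 via 10.128.0.1 dev eth1
runcmd:
  - [ sh, /usr/local/sbin/setup-router.sh ]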

How to run MirrorMaker 2.0 in production?

From the documentation, I see MirrorMaker 2.0 run like this on the command line:
./bin/connect-mirror-maker.sh mm2.properties
In my case, I can go to an EC2 instance and enter this command.
But what is the correct practice if this needs to run for many days? It is possible that the EC2 instance gets terminated, etc.
Therefore, I am trying to find out what the best practices are if you need to run MirrorMaker 2.0 for a long period and want to make sure that it stays up and running without any manual intervention.
You have many options; these include:
Adding it as a service to systemd, so you can specify that it should be started automatically and restarted on failure (see the unit file sketch after this list). systemd is very common now, but if you're not running systemd, there are many other process managers: https://superuser.com/a/687188/80826.
Running in a Docker container, where you can specify a restart policy.
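For the systemd option mentioned above, a unit file sketch (the unit name, paths, and service user are assumptions; adjust them to your Kafka installation) could look like this:

[Unit]
Description=Kafka MirrorMaker 2.0
Wants=network-online.target
After=network-online.target

[Service]
User=kafka
ExecStart=/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/mm2.properties
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Save it as, say, /etc/systemd/system/mirrormaker2.service, then run systemctl daemon-reload and systemctl enable --now mirrormaker2 so it starts at boot and restarts on failure.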

Why shouldn't you run Kubernetes pods for longer than an hour from Composer?

The Cloud Composer documentation explicitly states that:
Due to an issue with the Kubernetes Python client library, your Kubernetes pods should be designed to take no more than an hour to run.
However, it doesn't provide any more context than that, and I can't find a definitively relevant issue on the Kubernetes Python client project.
To test it, I ran a pod for two hours and saw no problems. What issue creates this restriction, and how does it manifest?
I'm not deeply familiar with either the Cloud Composer or Kubernetes Python client library ecosystems, but sorting the GitHub issue tracker by most comments shows this open item near the top of the list: https://github.com/kubernetes-client/python/issues/492
It sounds like there is a token expiration issue:
#yliaog this is an issue for us, as we are running kubernetes pods as batch processes and tracking the state of the pods with a static client. Once the client object is initialized, it does no refresh, and therefore any job that takes longer than 60 minutes will fail. Looking through python-base, it seems like we could make a wrapper class that generates a new client (or refreshes the config) every n minutes, or checks status prior to every call (as #mvle suggested). The best fix would be in swagger-codegen, but a temporary solution would probably be very useful for a lot of people.
- #flylo, https://github.com/kubernetes-client/python/issues/492#issuecomment-376581140
https://issues.apache.org/jira/browse/AIRFLOW-3253 is the reason (and hopefully, my fix will be merged soon). As the others suggested, this affects anyone using the Kubernetes Python client with GCP auth. If you are authenticating with a Kubernetes service account, you should see no problem.
If you are authenticating via a GCP service account with gcloud (e.g. using the GKEPodOperator), you will generally see this problem with jobs that take longer than an hour because the auth token expires after an hour.
There are more insights here too.
Currently, long-running jobs on GKE always eventually fail with a 404 error (https://bitbucket.org/snakemake/snakemake/issues/932/long-running-jobs-on-kubernetes-fail). We believe that the problem is in the Kubernetes client, as we determined that although _refresh_gcp_token is being called when the token is expired, the next API call still fails with a 404 error.
You can see here that Snakemake uses the kubernetes python client.

Is it possible to destroy a dotcloud service from within itself?

I have a dotcloud service which I want to configure such that it shuts down and destroys itself after it has completed its task.
This should mean I am not charged for more server time than I actually require.
The obvious way to do this would be from the dotcloud CLI, but it is not installed on dotcloud instances. Also, the dotcloud user does not have the privilege to run the shutdown command.
Is there a simple way to do this, or would I need to deploy a custom service which installs the dotcloud CLI and from that can then destroy itself?
There is no official way for a service to destroy itself, but you could, as you've described, install the CLI and your credentials (I'd suggest using your API key in an environment variable set via dotcloud env set) and then script the dotcloud destroy -A <app name> <servicename> call.
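A sketch of such a self-destruct step (the app and service names are placeholders, and the CLI may prompt for confirmation, so check its options or pre-answer the prompt):

#!/bin/sh
# Sketch: run as the very last step of the worker's task. Assumes the
# dotCloud CLI is installed in the service and the API key has been made
# available through an environment variable set with `dotcloud env set`.
APP_NAME=myapp          # placeholder
SERVICE_NAME=worker     # placeholder
dotcloud destroy -A "$APP_NAME" "$SERVICE_NAME"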
A more efficient approach would be to have a permanent worker and keep feeding it jobs to do. The dotCloud platform is best suited to applications with relatively consistent RAM needs, since we don't offer autoscaling.