How to fetch already rotated logs in Kubernetes?

I tried to fetch already rotated logs on the node using the --since-time parameter.
Can anybody suggest a command or mechanism to fetch already rotated logs within the Kubernetes architecture?

You can't. Kubernetes does not store logs for you; it just provides an API to access what's on disk. For long-term storage look at things like Loki, Elasticsearch, Splunk, SumoLogic, and so on.
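That said, the rotated files themselves usually still sit on the node, so you can read them there directly. A rough sketch, assuming a containerd-based node where the kubelet keeps pod logs under the default /var/log/pods path; the node, namespace and pod names below are placeholders, and you need SSH (or a node debug shell) to reach the files:

NODE=worker-1
NS=default
POD=my-app-6d4cf56db6-abcde

# List current and rotated log files for every container in the pod.
ssh "$NODE" "ls -l /var/log/pods/${NS}_${POD}_*/*/"
# 0.log                      <- what "kubectl logs" serves
# 0.log.20240101-000000.gz   <- rotated files, only readable on the node

# Read the rotated (gzipped) files directly.
ssh "$NODE" "zcat /var/log/pods/${NS}_${POD}_*/*/0.log.*.gz" | less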

Related

How do we create our own scalable storage buckets with Kubernetes?

Instead of using Google Cloud or AWS storage buckets, how do we create our own scalable storage bucket?
For example, what are the options if a photo is hit 1 billion times a day? Assume the photo is user-generated and not app/image-generated.
If I have asked this in the wrong place, please redirect me.
As an alternative to GKE or AWS object storage, you could consider using something like MinIO.
It's easy to set up and can run in Kubernetes. All you need is a PersistentVolumeClaim to write your data to, although you could use emptyDir volumes (ephemeral storage) just to evaluate the solution.
A less obvious alternative would be something like Ceph. It's more complicated to set up, but it goes beyond object storage: if you also need block storage for your Kubernetes cluster, Ceph can provide that (RADOS Block Devices) while still offering object storage (RADOS Gateway).
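To make the MinIO suggestion concrete, here is a minimal single-replica sketch backed by a PersistentVolumeClaim; the image tag, credentials and storage size are placeholders, and for anything serious you would rather use the MinIO Operator or a distributed setup:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels: {app: minio}
  template:
    metadata:
      labels: {app: minio}
    spec:
      containers:
      - name: minio
        image: minio/minio:latest
        args: ["server", "/data", "--console-address", ":9001"]
        env:
        - {name: MINIO_ROOT_USER, value: "change-me"}
        - {name: MINIO_ROOT_PASSWORD, value: "change-me-too"}
        ports:
        - containerPort: 9000    # S3 API
        - containerPort: 9001    # web console
        volumeMounts:
        - {name: data, mountPath: /data}
      volumes:
      - name: data
        persistentVolumeClaim: {claimName: minio-data}
EOF

Expose port 9000 (S3 API) and 9001 (console) with a Service or Ingress as needed; swapping the PVC for an emptyDir gives you the ephemeral evaluation setup mentioned above.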

Updating AWS Elasticsearch cluster settings

By default in Elasticsearch, the maximum number of open scroll contexts is 500, but I need to increase this number. There is no problem updating "search.max_open_scroll_context" on my local machine, but AWS Elasticsearch does not allow the change.
When I try to update it following the answer in this thread, configure-search-max-open-scroll-context, the response is {"Message":"Your request: '/_cluster/settings' payload is not allowed."}. I can perform this operation on my local Elasticsearch, but AWS Elasticsearch doesn't seem to allow it. Does anyone have an answer for this on AWS Elasticsearch, or has anyone faced something similar?
This is restricted on the customer end in AWS ES.
You need to reach out to the AWS Support team for this. Just let them know the value of "search.max_open_scroll_context" that you are looking for and they will update it from the backend.
Here is the link to the AWS-supported operations on Elasticsearch.
AWS does not currently support updating "search.max_open_scroll_context" yourself. You can definitely contact AWS Support to increase the scroll context count. Alternatively, you can use the search_after API instead of scroll.
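For the search_after route, the flow looks roughly like this; the domain endpoint, index and field names are made up, and the sort has to define a total order (here a timestamp plus a unique id field as tiebreaker):

# First page: sort on timestamp plus a unique field as tiebreaker.
curl -s "https://my-domain.es.amazonaws.com/my-index/_search" \
  -H 'Content-Type: application/json' -d '{
    "size": 1000,
    "sort": [{"timestamp": "asc"}, {"id.keyword": "asc"}],
    "query": {"match_all": {}}
  }'

# Next page: copy the "sort" values of the last hit (e.g. [1700000000000, "doc-999"])
# into "search_after" and repeat until no hits come back.
curl -s "https://my-domain.es.amazonaws.com/my-index/_search" \
  -H 'Content-Type: application/json' -d '{
    "size": 1000,
    "sort": [{"timestamp": "asc"}, {"id.keyword": "asc"}],
    "search_after": [1700000000000, "doc-999"],
    "query": {"match_all": {}}
  }'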

Way to configure notifications/alerts for a Kubernetes pod which is reaching 90% memory and which is not exposed to the internet (backend microservice)

I am currently working on an alerting/notification solution for microservices deployed on Kubernetes as frontend and backend services. On multiple occasions backend services have been unable to restart or have reached 90% of their allocated pod limit after exhausting memory. To identify such pods we want an alerting mechanism that looks at failures and saturation levels. We have Prometheus and Grafana as monitoring services but have not been able to configure alerts; my knowledge here is quite limited, so any suggestions and references that describe in detail how to achieve this would be helpful.
I did search the internet for this, but almost everything points to node-level or cluster-level monitoring only. :(
The query used to check the memory usage is:
sum (container_memory_working_set_bytes{image!="",name=~"^k8s_.*",namespace=~"^$namespace$",pod_name=~"^$deployment-[a-z0-9]+-[a-z0-9]+"}) by (pod_name)
I saw this recently on Google; it might be helpful to you: https://groups.google.com/u/1/g/prometheus-users/c/1n_z3cmDEXE?pli=1
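As a concrete starting point, the usual pattern is to compare each pod's working-set memory against its memory limit and alert above 90%. A hedged sketch, assuming kube-state-metrics is scraped and Prometheus is managed by the Prometheus Operator (otherwise put the same rule into your Prometheus rule files); metric and label names vary between versions, so verify them in your Prometheus first, and Alertmanager still needs a receiver (email, Slack, ...) configured for the actual notification:

cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-memory-alerts
  namespace: monitoring     # wherever your Prometheus Operator picks up rules; it may also require matching labels
spec:
  groups:
  - name: pod-memory
    rules:
    - alert: PodMemoryNearLimit
      expr: |
        sum(container_memory_working_set_bytes{container!=""}) by (namespace, pod)
          /
        sum(kube_pod_container_resource_limits{resource="memory"}) by (namespace, pod)
          > 0.9
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is using more than 90% of its memory limit"
EOF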

Kubernetes API custom image metadata

I am trying to use the Kubernetes API to read metadata, via annotations, from container images. The metadata applies to every instance of the respective image and is needed in order to run any resulting container properly. Following this SO question, it is not possible to read Docker image labels from the Kubernetes API directly.
My next thought was to use custom annotations added to the image manifest, although this seems like a pretty hacky solution for such a "simple" task. In any case, if I add the annotations to the manifest using Docker, I see no way to read them from the Kubernetes API.
I think I am on completely the wrong track here. This seems to be a rather simple task which other people have likely implemented already, yet I cannot find any further information about it. Is it really that hard to read image metadata via Kubernetes before deploying a container of that image?
Thanks in advance for any help!
Edit:
The reason I am asking is that I want to grant the containers of specific images access to specific serial USB devices (e.g. an FTDI232) on diverse host systems. Since I have no idea which path (e.g. /dev/ttyUSB0) will be assigned to the USB device, I wrote a program that monitors USB devices and, when an appropriate device is plugged in (or is already plugged in), creates the container and passes it the corresponding path. From inside the container I want to access the serial device via a static, non-changing path (e.g. /dev/FTDI232).
Yes. The K8s API is limited when it comes to this; I believe the abstractions for container image metadata sit at a lower level and were probably left out for a reason. You can always look at the CRI spec to see what's supported (note that the doc is out of date, so you might have to look at the code).
If the end goal is to use Kubernetes to run your workloads, the more feasible route here is probably to write a script that reads the image manifest outside Kubernetes, creates the manifest files you need to run your workloads (based on that metadata), and then applies them to your cluster.
If you are using a common container image registry, you could also write something that pulls images from that registry just to pick up the metadata and metadata changes.
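A rough sketch of that script-based approach, assuming the static in-container path is stored in an image label (skopeo and jq read it from the registry without pulling the image); the image name, label key and manifest below are hypothetical, and the host path comes from your USB-monitoring program:

IMAGE=registry.example.com/acme/ftdi-reader:1.2.3
ALIAS=$(skopeo inspect "docker://$IMAGE" | jq -r '.Labels["com.example.serial-alias"]')   # e.g. /dev/FTDI232
HOSTDEV=/dev/ttyUSB0    # whatever your device-monitoring program discovered

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ftdi-reader
spec:
  containers:
  - name: reader
    image: $IMAGE
    securityContext: {privileged: true}   # simplest way to open the device node; a device plugin would be cleaner
    volumeMounts:
    - {name: serial, mountPath: $ALIAS}
  volumes:
  - name: serial
    hostPath:
      path: $HOSTDEV      # discovered on the host
      type: CharDevice
EOF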

Where do I submit events on failure of my custom operator?

I'm working on a MySQL users operator and I'm somewhat stuck on what the proper way to report any issues is.
The plan is to watch the MysqlUser CRD and create Secrets and MySQL users in the specified DB. Obviously, either of those can go wrong, at which point I need to report an error.
Some k8s objects track events in status.conditions. There's also the Event object, but so far I've only seen that used by the kubelet / controller manager.
If, say, I have a problem creating a MySQL user because my operator cannot talk to MySQL, but the CRD is otherwise valid, should that go into an Event or into the CRD's status?
CRDs do not have a status part yet (as of 1.7). Notifying via Events is perfectly fine; that's the reason for having them in the first place.
This sounds similar to events reported from the volume plugin (kubelet) where, for example, the kubelet is unable to mount a volume from an NFS server because the server address is invalid and it therefore cannot talk to it.
Tracking this in status.conditions is less useful in this scenario, since users typically have no control over how the kubelet (or the operator, in your case) interacts with the underlying resources. In general, status.conditions only signals the status of the object, not why it is in that condition.
This is just my understanding of how to make the choice; I don't know if there are any rules around it.
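For illustration, this is roughly the shape of the Event such an operator would attach to its custom resource; in real code you would use your client library's event recorder rather than creating the object by hand, and the resource names, namespace and reason below are made up:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Event
metadata:
  name: mysqluser-alice.mysql-unreachable     # event names are normally generated
  namespace: default
type: Warning
reason: MysqlUnreachable
message: "could not create user: dial tcp mysql:3306: connect: connection refused"
involvedObject:                               # ties the event to your custom resource
  apiVersion: example.com/v1
  kind: MysqlUser
  name: alice
  namespace: default
source:
  component: mysql-user-operator
EOF

# The event then shows up alongside the resource it refers to:
kubectl describe mysqluser alice              # hypothetical CRD/resource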