We have a problem: we have cAdvisor installed as a DaemonSet with a hostPort set up. We request metrics at, for example, worker5:31194/metrics, and the request takes a very long time, about 40 seconds. As I understand it, the problem is that cAdvisor emits a large number of empty labels.
A sample metric looks like this:
container_cpu_cfs_periods_total{container_label_annotation_cni_projectcalico_org_containerID="",container_label_annotation_cni_projectcalico_org_podIP="",container_label_annotation_cni_projectcalico_org_podIPs="",container_label_annotation_io_kubernetes_container_hash="",container_label_annotation_io_kubernetes_container_ports="",container_label_annotation_io_kubernetes_container_preStopHandler="",container_label_annotation_io_kubernetes_container_restartCount="",container_label_annotation_io_kubernetes_container_terminationMessagePath="",container_label_annotation_io_kubernetes_container_terminationMessagePolicy="",container_label_annotation_io_kubernetes_pod_terminationGracePeriod="",container_label_annotation_kubernetes_io_config_seen="",container_label_annotation_kubernetes_io_config_source="",container_label_app="",container_label_app_kubernetes_io_component="",container_label_app_kubernetes_io_instance="",container_label_app_kubernetes_io_name="",container_label_app_kubernetes_io_version="",container_label_architecture="",container_label_build_date="",container_label_build_id="",container_label_com_redhat_build_host="",container_label_com_redhat_component="",container_label_com_redhat_license_terms="",container_label_control_plane="",container_label_controller_revision_hash="",container_label_description="",container_label_distribution_scope="",container_label_git_commit="",container_label_io_k8s_description="",container_label_io_k8s_display_name="",container_label_io_kubernetes_container_logpath="",container_label_io_kubernetes_container_name="",container_label_io_kubernetes_docker_type="",container_label_io_kubernetes_pod_name="",container_label_io_kubernetes_pod_namespace="",container_label_io_kubernetes_pod_uid="",container_label_io_kubernetes_sandbox_id="",container_label_io_openshift_expose_services="",container_label_io_openshift_tags="",container_label_io_rancher_rke_container_name="",container_label_k8s_app="",container_label_license="",container_label_maintainer="",container_label_name="",container_label_org_label_schema_build_date="",container_label_org_label_schema_license="",container_label_org_label_schema_name="",container_label_org_label_schema_schema_version="",container_label_org_label_schema_url="",container_label_org_label_schema_vcs_ref="",container_label_org_label_schema_vcs_url="",container_label_org_label_schema_vendor="",container_label_org_label_schema_version="",container_label_org_opencontainers_image_created="",container_label_org_opencontainers_image_description="",container_label_org_opencontainers_image_documentation="",container_label_org_opencontainers_image_licenses="",container_label_org_opencontainers_image_revision="",container_label_org_opencontainers_image_source="",container_label_org_opencontainers_image_title="",container_label_org_opencontainers_image_url="",container_label_org_opencontainers_image_vendor="",container_label_org_opencontainers_image_version="",container_label_pod_template_generation="",container_label_pod_template_hash="",container_label_release="",container_label_summary="",container_label_url="",container_label_vcs_ref="",container_label_vcs_type="",container_label_vendor="",container_label_version="",id="/kubepods/burstable/pod080e6da8-7f00-403d-a8de-3f93db373776",image="",name=""} 3.572708e+06
Is there any way to drop the empty labels, or to remove these labels altogether?
I found two parameters. The first one solved it for me, but the second may be useful to someone else; there is little information about them, so I decided to post the answer.
-store_container_labels
convert container labels and environment variables into labels on prometheus
metrics for each container. If flag set to false, then only metrics exported are
container name, first alias, and image name (default true)
-whitelisted_container_labels string
comma separated list of container labels to be converted to labels on prometheus
metrics for each container. store_container_labels must be set to false for this to
take effect.
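For completeness, here is a minimal sketch of how both flags could be passed to the cAdvisor binary; in a DaemonSet you would put these under the container's args. The whitelisted label names below are only illustrative examples, not a recommendation:

# Keep only a few whitelisted container labels instead of all of them.
# store_container_labels must be false for the whitelist to take effect.
cadvisor \
  -store_container_labels=false \
  -whitelisted_container_labels=io.kubernetes.container.name,io.kubernetes.pod.name,io.kubernetes.pod.namespace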
I am monitoring an instance and changed its target IP. Now when I graph it in Grafana, there are two lines (in different colors), with the tail of the first line at the head of the second line.
My goal is to remove the first line and show only the updated, second line.
My first attempt was to adjust the time range in Grafana, which works, but it affects all the instances that were not changed.
My second attempt was to remove the time series in Prometheus, but the admin API was not enabled, and restarting Prometheus to enable it would cause a hiccup in the monitoring system (which is not good for monitoring).
It is also said here that time series can only be deleted via the API, but that was written in 2018. I was wondering whether it is now possible to remove time series without the API.
No, the only way to remove time series is through the API.
Yes, restarting would cause a hiccup, but let's be practical: the downtime is really very small.
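For reference, a sketch of the workflow, assuming Prometheus runs on localhost:9090 and the old target was 10.0.0.5:9100 (both placeholders):

# 1. Restart Prometheus once with the admin API enabled:
#      prometheus --web.enable-admin-api ...
# 2. Delete the stale series belonging to the old target:
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]={instance="10.0.0.5:9100"}'
# 3. Optionally reclaim the disk space right away:
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/clean_tombstones'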
In our project we have multiple CronJobs that use very large images and are configured to run fairly often.
Whenever the garbage collection threshold is reached, the images associated with those CronJobs are removed, because they are not currently in use. Pulling those images from the repository every time they are needed introduces problems due to their size.
My question is: can I make it so that images associated with CronJobs are omitted during garbage collection? Is there a way to add an exception?
So far the only thing I came up with was creating another deployment that uses the same image 24/7, modified so that its execution never finishes normally, so that the image is in use whenever garbage collection is triggered (see the sketch below).
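A minimal sketch of that workaround, with a reasonably recent kubectl; the image name is a placeholder, and note that a Deployment only pins the image on the nodes its pods land on (a DaemonSet would be needed to pin it on every node):

# Hold the large image open with a no-op command so image GC sees it as in use.
# Assumes the image contains a sleep binary that accepts "infinity".
kubectl create deployment big-image-pinner \
  --image=registry.example.com/big-cron-image:latest \
  -- sleep infinity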
I don't know of a way to specify a list of image-name exceptions to the image garbage collection policy, but maybe you can work around it by overriding the default value (2 minutes) of
Minimum age for an unused image before it is garbage collected.
through the following kubelet flag:
--minimum-image-ttl-duration=12h (by default it is set to 2m, i.e. two minutes)
The other user-controlled flags are documented here.
I found the flag above in the kubelet source code on GitHub.
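As a hedged aside, kubelet command-line flags like this one have been deprecated in favor of the KubeletConfiguration file, where the equivalent field is imageMinimumGCAge. A sketch, assuming the common kubeadm config path (it may differ on your distribution):

# Set the minimum unused-image age in the kubelet config file, then restart.
# Edit the existing imageMinimumGCAge key if one is already present instead
# of appending a duplicate.
echo 'imageMinimumGCAge: 12h' >> /var/lib/kubelet/config.yaml
systemctl restart kubelet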
I'm testing with a Kubernetes cluster, and it has been marvelous to work with. But I have the following scenario:
I need to pass each pod one or more custom values meant just for that pod.
Let's say I have deployment 1 and I define some env vars on that deployment. The env vars go to every pod, and that's good, but what I need is to send custom values to one specific pod (like "to the third pod that I may create, send this").
This is what I have now:
Then, what I need is something like this:
Is there any artifact/feature I could use? It does not have to be an env var; it could be a ConfigMap value or anything else. Thanks in advance.
Pods in a Deployment are homogeneous. If you want to set up a set of pods that are distinct from one another, you might want to use a StatefulSet, which gives each pod a stable ordinal index that you can use within the pod to select the relevant config params (see the sketch below).
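A minimal sketch of that idea as an entrypoint script, assuming a file with one value per line mounted from a ConfigMap; every name here is illustrative:

#!/bin/sh
# In a StatefulSet, pods are named <statefulset-name>-0, <statefulset-name>-1,
# and each pod's hostname matches its pod name, so the ordinal is the suffix.
POD_NAME="$(hostname)"
ORDINAL="${POD_NAME##*-}"
# Pick the (ordinal+1)-th line of the mounted file as this pod's value.
MY_VALUE="$(sed -n "$((ORDINAL + 1))p" /etc/per-pod/values.txt)"
export MY_VALUE
exec /app/server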
The real question here is how you know what you want to put in a particular pod in the first place. You could probably achieve something like this by writing a custom initializer for your pods. You could also have an init container prefetch the information from a central coordinator (see the sketch below). To propose a solution, you need to figure this out in a "not a snowflake" way.
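A sketch of the init-container variant, assuming POD_NAME is injected via the downward API and that a coordinator service with this endpoint exists (both the service name and the query parameter are hypothetical):

# Init container: fetch this pod's config from a central coordinator, keyed by
# pod name, and drop it on a shared emptyDir volume for the main container.
wget -qO /shared/config.json "http://config-coordinator/config?pod=${POD_NAME}"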
I'm feeling a little stupid, and my searches for answers haven't yielded anyone else having this problem.
Imagine that I have NodeHQ, Node1 and Node2. I have created triggers to synchronize TableA between the 3 like so:
Node1 <---> NodeHQ <---> Node2
Node1 and Node2 have different subsets of data from each other. NodeHQ has administrative information from both nodes (subsets of both). Each of the 3 nodes is in a different NODE_GROUP.
Right now, with the triggers and routers I have set up, inserting/updating/deleting a record at NodeHQ propagates to Node1 and Node2. However, if I make a change at Node1 or Node2, it only reaches NodeHQ; it never passes through to the other node.
So far I've tried:
Setting SYNC_ON_INCOMING_BATCH to 1 for the triggers involved, no change
Creating separate SYM_TRIGGER for each NODE_GROUP, no change
Using transforms to alter the record innocuously, no change
Deleting and then Inserting all of the rules, no change
Using symadmin sync-triggers -f to force trigger recreation, no change
I've read the user guide up and down on this, and it is relatively unspecific about this scenario. http://www.symmetricds.org/doc/3.6/user-guide/html/advanced.html#bi-direction-sync
Right now, all of the nodes have SYNC_ENABLED=1. All of the SYM_TRIGGERs are set for SYNC_ON_INCOMING_BATCH=1. My SYM_ROUTERs are all set to SYNC=1, and are using ROUTER_TYPE='default'. I'll be honest, I've tried a lot of other small things, but nothing seems to make it pass on data to the next NODE_GROUP. I'm running out of ideas.
Their own documentation indicates that SYNC_ON_INCOMING_BATCH makes a trigger pass data on to other nodes at each place it arrives. So far, though, my changes to that setting have yielded nothing. What's left to try? What do you think I should do?
I am using Firebird 2.52 and SQL Dialect 1.
Running version 3.7.19 of SymmetricDS in debug mode, I discovered the triggers weren't actually being regenerated properly in most circumstances where I changed the SYM tables, even though, whenever I changed the rules, the logs indicated it was remaking the related triggers.
The solution: running symadmin sync-triggers -f on every engine. This forces every single trigger to be regenerated, and it seems to have fixed the problem. I'll definitely track this down further to help the developers nip it in the bud.
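For reference, a sketch of running that against each engine; the engine IDs below are illustrative, and the command is run from the SymmetricDS bin directory:

# Force regeneration of every trigger on each engine in the cluster.
for engine in hq-000 node1-001 node2-002; do
  ./symadmin sync-triggers -f --engine "$engine"
done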