Kubernetes Helm Concern [closed] - kubernetes

You have a collection of pods that all run the same application, but with slightly different configuration. Applying the configuration at runtime would also be desirable. What's the best way to achieve this?
a: Create a separate container image for each version of the application, each with a different configuration
b: Use a single container image, but create ConfigMap objects for the different configurations and apply them to the different pods
c: Create PersistentVolumes that contain the config files and mount them to the different pods

In my opinion, the best solution for this would be "b": use ConfigMaps.
Apply changes to the ConfigMap and redeploy the image/pod; I use this approach very often at work.
Edit from #AndD: [...] ConfigMaps can contain small files and as such can be mounted as read-only volumes instead of being exposed as environment variables, so they can also be used instead of option c in case files are required.
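A minimal sketch of that volume approach, assuming placeholder names (app-config, application.properties, the image and the mount path are all made up for illustration):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # placeholder name
data:
  application.properties: |
    log.level=DEBUG
    feature.x.enabled=true
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:latest   # placeholder image
    volumeMounts:
    - name: config
      mountPath: /etc/app          # application.properties appears here as a read-only file
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: app-config

Editing the ConfigMap, re-applying it with kubectl apply -f, and then triggering a redeploy (see the command below) picks up the new configuration.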
My favorite command for this
kubectl patch deployment <DEPLOYMENT> -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$(date +%s)\"}}}}}"
This patches the label spec.template.metadata.labels.redeploy to the current Unix timestamp (date +%s); because the pod template changes, the Deployment performs a rolling restart of its pods.
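On reasonably recent clusters the same effect is available without hand-crafting the patch; if I remember correctly, kubectl 1.15 and later ship a dedicated subcommand:

kubectl rollout restart deployment <DEPLOYMENT>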
Option "a" seems to be evenly good, the downside is you have to build the new Image, push it, deploy it. You could build a Jenkins/GitLab/whatever Pipeline to automate this way. But, I think ConfigMaps are fairly easier and are more "manageable". This approach is viable if you have very very many config files, which would be too much work to implement in ConfigMaps.
Option "c" feels a bit too clunky just for config files. Just my two cents. Best Practice is to not store config files in volumes. As you (often) want your Applications to be as "stateless" as possible.

Related

How to add nodes if "kubectl get nodes" shows an empty list? [closed]

I am trying to run some installation instructions for a software development environment built on top of K3S.
I am getting the error "no nodes available to schedule pods", which, when Googled, takes me to the question "no nodes available to schedule pods - Running Kubernetes Locally with No VM".
The answer to that question tells me to run kubectl get nodes.
And when I do that, it shows me, perhaps not surprisingly, that I don't have any nodes running.
Without having to learn how Kubernetes actually works, how can I start some nodes and get past this error?
This is a local environment running on a single VM (just like the linked question).
It depends on how your K8s was installed. Kubernetes is a complex system that requires multiple nodes, all configured correctly, in order to function.
If no nodes are found for scheduling, my first thought would be that you only have a single node, that it is a master node (which runs the control-plane services but not workloads), and that you have not attached any worker nodes. You would need to add another node to the cluster, running as a worker, for it to schedule workloads.
If you want to get up and running without understanding it, there are distributions such as minikube or k3s, which will set it up out of the box and are designed to run on a single machine.
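Since the environment in question is already built on K3s, a single-node bring-up can be as small as the following sketch (the install URL is the official K3s script; the agent line uses made-up server address and token placeholders):

curl -sfL https://get.k3s.io | sh -        # install and start a single-node K3s server
sudo k3s kubectl get nodes                 # the node should now be listed as Ready

# optional, to add a worker node later (placeholder values):
# curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -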

Need to setup a Customized Kubernetes Logging strategy [closed]

So far, in our legacy deployments of web services to VM clusters, we have effectively been using Log4j2-based multi-file logging onto a persistent volume, where the log files are rolled over each day. We need to keep logs for about 3 months before they can be purged.
We are migrating to a Kubernetes infrastructure and have been struggling with which logging strategy to adopt for Kubernetes clusters. We don't much like the strategies that involve spitting out all logging to STDOUT/STDERR and using some centralized tool like Datadog to manage the logs.
Our design requirements for the Kubernetes logging solution are:
Use Log4j2 with multiple file appenders.
We want to maintain the multi-file log appender structure.
We want to preserve the rolling logs in archives for about 3 months.
We need a way to have easy access to the logs for searching, filtering etc.
The kubectl setup for viewing logs may be a bit too cumbersome for our needs.
Ideally we would like to use the Datadog dashboard approach, BUT with multi-file appenders.
The serious limitation of Datadog we run into is the need to have everything pumped to STDOUT.
Starting to use container platforms, or building containers, means that as a first step we must change our mindset. Creating log files inside your containers is not a best practice, for two reasons:
Your containers should be stateless, so they should not save anything inside themselves; when a container is deleted and recreated, those files will have disappeared.
When you send your output using passive logging (STDOUT/STDERR), Kubernetes creates the log files for you, and those files can be picked up by platforms like Fluentd or Logstash, which collect the logs and ship them to a log aggregation tool.
I recommend using passive logging, which is the way recommended by Kubernetes and the standard for cloud-native applications. You may also need to run your app on a cloud service in the future, and such services likewise rely on passive logging to surface application errors (a minimal STDOUT configuration is sketched after the links below).
In the following links you will find some references about why Kubernetes recommends passive logging:
k8s checklist best practices
Twelve Factor Applications Logging
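If you do move to passive logging, a Log4j2 configuration that writes everything to STDOUT can be as small as the following sketch (the pattern and log levels are just example values):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- single console appender; Kubernetes captures SYSTEM_OUT per container -->
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %-5level [%t] %c{1} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>

Retention (the 3-month requirement) and search/filtering would then be handled in the log aggregation tool rather than by rolling files inside the pod.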

Kubernetes Storage [closed]

We have an Azure Kubernetes cluster with 2 nodes and a 30 GB OS disk.
We are deploying multiple applications on the cluster with 2 replicas, i.e. 2 pods, per application. Now we are facing an issue with disk space: the 30 GB OS disk is full, and we need to grow it from 30 GB to 60 GB.
I tried to increase the disk size to 60 GB, but doing so destroys and recreates the whole cluster, so we lose all the deployments and have to deploy all the applications again.
How can I overcome the disk space issue?
There is no way to overcome this really.
Recreate the cluster with something like a 100 GB OS disk (maybe use ephemeral OS disks to cut costs). Alternatively, create a new system node pool, migrate resources to it, and decommission the old one.
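If you go the node-pool route, the OS disk size and type of the new pool are set at creation time. A sketch with the Azure CLI (resource group, cluster and pool names are placeholders; double-check the flags against az aks nodepool add --help):

az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name biggerpool \
  --node-count 2 \
  --node-osdisk-size 100 \
  --node-osdisk-type Ephemeral

# afterwards: kubectl cordon/drain the old nodes, then remove the old pool, e.g.
# az aks nodepool delete --resource-group my-rg --cluster-name my-aks --name nodepool1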

GCP Cloud Run with PostgreSQL - how to do migrations? [closed]

We are starting our first Cloud Run project. In the past we used Postgres in combination with Spring Boot, and we had our migrations run via Flyway (similar to Liquibase) when the app started.
Now with Cloud Run, this approach may hit its limits because of the following (corner) cases:
Multiple incoming requests (HTTP, messages) routed to parallel instances could run the same migration in parallel while bootstrapping the container. That would result in exceptions and in retries of failed messages or HTTP errors.
The Flyway check on bootstrap would additionally slow down the cold-start time every time a container gets started, which could happen a lot if we do not have constant traffic keeping instances "warm".
What would be a good approach with Spring Boot/Flyway and Postgres as a backing database shared across the instances? A similar problem arises, I guess, when you replace Postgres with a NoSQL datastore and want/need to migrate to new structures...
Right now I can think of:
Doing a migration of the Postgres schema as part of the deployment pipeline, before the Cloud Run revision gets replaced, which would introduce new challenges (rollbacks etc.).
Please share your ideas; looking forward to your answers.
Marcel
For migrations that introduce breaking changes, either on commit or on rollback, it's mandatory to have a full stop and, of course, a rollback plan prepared accordingly.
Also note that a commit/push should not immediately trigger the new migrations; these are often not part of the regular CI/CD pipeline that goes to production.
After you deploy a service, you can create a new revision and assign a tag that allows you to access the revision at a specific URL without serving traffic.
A common use case for this is to run and control the first visit to this container. You can then use that tag to gradually migrate traffic to the tagged revision, and to roll back a tagged revision.
To deploy a new revision of an existing service to production:
gcloud beta run deploy myservice --image IMAGE_URL --no-traffic --tag TAG_NAME
The tag allows you to directly test the new revision at a specific URL without serving traffic (and, via that very first request, run the migration). The URL starts with the tag name you provided: for example, if you used the tag name green on the service myservice, you would test the tagged revision at the URL https://green---myservice-abcdef.a.run.app
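Once the migration has run against the tagged revision and has been verified, traffic can be shifted to it. A sketch reusing the example names above (flags worth double-checking against the gcloud run documentation):

# hit the tagged URL once to trigger the startup migration
curl https://green---myservice-abcdef.a.run.app/

# then route production traffic to the tagged revision
gcloud run services update-traffic myservice --to-tags green=100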

iptables debugging no longer working with Debian 10 (and iptables-legacy; after upgrading from Debian 9) [closed]

I used to be able to do iptables debugging on a Debian 9 host with specific rules in the chains PREROUTING and OUTPUT (both in table raw) and the target TRACE, as described here. Messages showed up in /var/log/kern.log when such rules fired.
The host had the following relevant entries in its boot config file. Things apparently worked without either CONFIG_IP_NF_TARGET_LOG or CONFIG_IP6_NF_TARGET_LOG. (I am interested in IPv4 traffic.)
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_IP_NF_RAW=m
CONFIG_IP6_NF_RAW=m
# CONFIG_IP_NF_TARGET_LOG missing
# CONFIG_IP6_NF_TARGET_LOG missing
CONFIG_NETFILTER_XT_TARGET_LOG=m
I have by now upgraded the same host to Debian 10 (Buster). It uses iptables-legacy (not the default iptables-nft), since this is in the context of a Kubernetes cluster.
What I am observing is that the same rules (e.g. iptables -t raw -A PREROUTING -d $service_ip -p tcp -j TRACE; also the same with $pod_ip) are apparently no longer working, in the sense that I do not see any resulting messages in /var/log/kern.log.
What could be the reason, and how can I diagnose this further? Perhaps the TRACE capability requires a different boot config (different modules) on Debian 10, or does iptables-legacy now get in the way somehow?
Now it looks as if this kind of iptables debugging does in fact still work under Debian 10 as it did previously for me under Debian 9.
Apparently I had made the mistake of installing the iptables debugging rules before recreating the targeted Kubernetes services, etc. That way the iptables rules and the Kubernetes resources were out of sync with respect to cluster IPs, node ports, pod IPs, etc., and so the rules never fired for traffic to those services.
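For anyone chasing a similar symptom, two quick checks that help narrow this down (a hedged sketch; iptables-legacy is invoked explicitly because of the nft/legacy split on Buster):

# confirm the TRACE rules are present in the legacy raw table and watch their packet counters
sudo iptables-legacy -t raw -L PREROUTING -n -v --line-numbers

# confirm the relevant modules are loaded
lsmod | egrep 'xt_TRACE|nf_log'

If the packet counters stay at zero, the rule's match (for example a stale $service_ip) no longer corresponds to the recreated service, which was exactly the problem here.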