Kubernetes minikube faster uptime

I am using minikube and building my projects by tearing down the previous project and rebuilding it with
kubectl delete -f myprojectfiles
kubectl apply -f myprojectfiles
The files are a deployment and a service.
When I access my website I get a 503 error while I'm waiting for Kubernetes to bring up the deployment. Is there any way to speed this up? I can see that my application is already built, because the logs show it is ready. However, it keeps returning 503 for what feels like a few minutes before everything in Kubernetes triggers and starts serving me the application.
What are some things I can do to speed up the uptime?

Configure what is called a readinessProbe. It won't speed up your boot time, but it will stop giving you the false sense that the application is up and running: traffic will only be sent to your application pod when the pod is ready to accept connections. Please read about it here.
FWIW, your application might also be waiting on some dependency to come up; add the same kind of health checks to that dependency's pod as well.
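For illustration, a minimal sketch of a readinessProbe on a Deployment; the image name, port and /healthz path are assumptions and should match whatever health endpoint your application actually exposes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproject
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myproject
  template:
    metadata:
      labels:
        app: myproject
    spec:
      containers:
      - name: web
        image: myproject:latest        # assumed image name
        ports:
        - containerPort: 8080
        readinessProbe:                # pod only receives Service traffic once this passes
          httpGet:
            path: /healthz             # assumed health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5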

You should not delete your Kubernetes resources. Use either kubectl apply or kubectl replace to update your project.
If you delete them, the nginx ingress controller won't find any upstream for a short period of time and puts the backend on a blacklist for some seconds.
You should also make sure you use a Deployment, which is able to do a rolling update without any downtime.
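As a rough sketch, assuming the Deployment in myprojectfiles, a RollingUpdate strategy combined with kubectl apply -f myprojectfiles replaces pods in place instead of deleting everything first; the numbers below are just one sensible choice:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0    # never take a ready pod away before its replacement is ready
      maxSurge: 1          # allow one extra pod during the rollout

With maxUnavailable: 0 the Service always has a ready pod behind it while the new one starts, so the ingress controller never sees an empty upstream.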

Related

Kubernetes - keeping the execution logs of a pod

I'm trying to keep the execution logs of containers in Kubernetes.
I added successfulJobsHistoryLimit: 5 and failedJobsHistoryLimit: 5 to my CronJob YAML in order to see the execution history, but when I try to view the logs of those pods I get this error.
I assume it is because the pods have been deleted, since when I go to a running pod I can see the logs.
So is there a way of keeping the logs in this part of Kubernetes, or is there something I have to set up in order to have this functionality?
Sorry if the question has been asked before, but I didn't really find anything and I'm new to Kubernetes.
Thanks for the replies.
Looking at this problem in a bigger picture, it's generally a good idea to have your logs stored via logging agents or pushed directly into an external service, as per the official documentation.
Taking advantage of the Kubernetes logging architecture explained here, you can also try to fetch the logs directly from the log-rotate files on the node hosting the pods. Please note that this option might depend on the specific Kubernetes implementation, as log files might be deleted when pod eviction is triggered.
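As a hedged example of that second option: on most kubelet setups the (rotated) container log files live on the node itself under paths similar to these; the exact layout can vary by distribution, and the files disappear once garbage collection or eviction kicks in:

# on the node that hosted the CronJob pod
ls /var/log/pods/                       # one directory per pod: <namespace>_<pod-name>_<uid>
ls /var/log/containers/                 # symlinks: <pod-name>_<namespace>_<container>-<id>.log
sudo tail -n 100 /var/log/containers/<pod-name>*.log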

Kubernetes rolling deploy: terminate a pod only when there are no containers running

I am trying to deploy updates to pods. However, I want the current pods to terminate only when all the containers inside the pod have terminated and their work is complete.
The new pods can keep waiting to start until all containers in the old pods have completed. We have a mechanism to stop old pods from picking up new tasks, so they should eventually terminate.
It's okay if twice the number of pods exist at some instant. I tried finding a solution for this in the Kubernetes docs but wasn't successful. Pointers on how / whether this is possible would be helpful.
Well, I guess then you may have to create a duplicate Deployment with the new image as required and change the selector in the Service to the new Deployment. That prevents external traffic from entering the pre-existing pods, and new calls go to the new pods. Later you can check the load with something like
kubectl top pods --containers
and if the load appears static and low, you can delete the old pods' Deployment.
For this approach the Service selectors have to be updated every time, so to keep track of things you can append the git commit hash to the Service selector to keep it unique each time, as in the sketch below.
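A minimal sketch of that idea, assuming a Service named myapp and a version label carrying the commit hash (both names are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: abc123      # flip this to the new commit hash to send traffic to the new Deployment
  ports:
  - port: 80
    targetPort: 8080     # assumed container port

The old Deployment keeps running and draining its in-flight work; once kubectl top pods --containers shows it idle, it can be deleted.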
Rolling back to previous versions from inside the Kubernetes cluster will be difficult, though, so preferably you can trigger the wanted build again.
I hope this makes some sense!

Checking for particular pod status before each initialisation of another pod

Assume a deployment like this:
The deployment contains two types of pods: Config and App
Each App pod needs access to the Config pod in order to start
There is always only one Config pod
Already-launched App pods can work without access to the Config pod's service
The situation I would like to manage:
A node containing some App pods and the Config pod goes down for any reason
On another node, the Config pod starts first
After the Config pod has successfully started, the App pods are launched
What I have already read about:
InitContainers - I couldn't find out whether, if the Config pod were of type Init, it would rerun in the situation above - I think not
StatefulSet - I cannot see how this could help me in that situation
From my perspective I was thinking about a wait loop in the App pods before running the target application, which would wait for the Config pod to come up and, in case of unavailability after a timeout, force them to fail. But I'm not sure that is best practice; I would rather handle this with Kubernetes configuration than with such a script.
You would use either code in your app or an initContainer to block until a Config pod is available. Combine this with a readinessProbe that checks whether the app is up. Doing the block-and-retry loop in your own code is a bit more work, but recommended since you can control the behavior more carefully. This means that App pods can launch whenever, but they won't be marked as ready for traffic until they have initialized.
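A minimal sketch of the initContainer variant, assuming the Config pod sits behind a Service named config on port 8080 (both names are assumptions):

spec:
  initContainers:
  - name: wait-for-config
    image: busybox:1.36
    # keep the pod in Init until the config Service answers
    command: ['sh', '-c', 'until wget -q -O- http://config:8080/; do echo waiting for config; sleep 2; done']
  containers:
  - name: app
    image: myapp:latest          # assumed image
    readinessProbe:              # the pod only receives traffic once the app itself is up
      httpGet:
        path: /healthz           # assumed health endpoint
        port: 8080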

pods keep creating themselves even I deleted all deployments

I am running k8s on AWS, and I updated the deployment of nginx - which normally works fine - but this time the nginx deployment won't show up in kubectl get deployments.
I want to kill all the pods related to nginx, but they keep recreating themselves. I deleted all deployments with kubectl delete --all deployments; the other pods got terminated, but not nginx.
I have no idea how to stop the pods from being recreated.
Any idea where to start?
Check the Deployment, ReplicationController and ReplicaSet and remove them:
kubectl get deploy,rc,rs
In modern Kubernetes there is also an annotation kubernetes.io/created-by on the Pod showing its "owner", as seen here, but I can't lay my hands on the documentation link right now. However, I found a pastebin containing a concrete example of the contents of the annotation.
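For example, on current clusters the same ownership information is exposed in the pod's ownerReferences field, which you can follow to find what keeps recreating the pods:

kubectl get pod <nginx-pod-name> -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}{"\n"}'
# typically prints something like ReplicaSet/<name>; repeat on that ReplicaSet to find the Deployment or other controller that owns it
kubectl get rs <replicaset-name> -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}{"\n"}'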

Google (Stackdriver) Logging fails after Kubernetes rolling-update

When performing a kubectl rolling-update of a replication controller in Kubernetes (Google Container Engine), the Google (Stackdriver) Logging agent doesn't pick up the newly deployed pod. The Log is stuck at the last message produced from the old pod.
Consequently, the logs for the replication controller are out-of-date until we do a manual restart (i.e. kubectl scale and kubectl delete) of the pod and the logs are updated again.
Can anybody else confirm that behaviour? Is there a workaround?
I can try to repro the behavior, but first can you try running kubectl logs <pod-name> on the newly created pod after doing the rolling-update to verify that the new version of your app was producing logs at all?
This sounds more likely to be an application problem than an infrastructure problem, but if you can confirm that it is an infra problem I'd love to get to the bottom of it.