When I do a rolling update, I get exceptions from Sentry saying:
DatabaseError('server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request.',...)
I have two containers running inside each Pod, my app container and a cloudsql-proxy container, which the app container uses to communicate to Cloud SQL.
Is there a way to make sure that my app container goes down first during the 30 seconds of grace period (terminationGracePeriodSeconds)?
In other words, I want to drain the connections and have all the current requests finish before the cloudsql-proxy is taken out.
It would be ideal if I could specify that the app container be taken down first during the 30 seconds of grace period, and then the cloudsql-proxy.
This discussion suggests setting terminationGracePeriodSeconds or a preStop hook in the manifest.
Another idea that could work is running the two containers in different pods to allow granular control over the rolling update. You might also want to consider using Init Containers in your deployment so that the proxy is ready before your app container starts.
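As a rough illustration of the preStop idea (a sketch only, not a verified manifest: the image tag, the 20-second sleep, and the assumption that the proxy image ships a shell are placeholders to adjust), you could delay the proxy's SIGTERM so the app container drains first:

spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: my-app:latest                          # placeholder
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.33  # adjust to your proxy version
    lifecycle:
      preStop:
        exec:
          # The app container gets SIGTERM immediately and can finish its
          # in-flight requests, while the proxy's SIGTERM is delayed by the
          # sleep (kept shorter than terminationGracePeriodSeconds).
          command: ["/bin/sh", "-c", "sleep 20"]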
Apologies for a basic question. I have a simple Kubernetes deployment where I have 3 containers (each in their own pod) deployed to a Kubernetes cluster.
The RESTapi container is dependent upon the OracleDB container starting. However, the OracleDB container takes a while to start up, and by that time the RESTapi container has restarted a number of times due to not being able to connect, and ends up in a Backoff state.
Is there a more elegant solution for this?
I’ve also noticed that when the RESTapi container goes into the Backoff state it stops retrying?
This is a community wiki answer posted for better visibility. Feel free to expand it.
The best approach in this case is to improve your "RESTapi" application so that it is a more reliable and fault-tolerant service that can reconnect to the database on its own.
From the Kubernetes production best practices:
When the app starts, it shouldn't crash because a dependency such as a database isn't ready. Instead, the app should keep retrying to connect to the database until it succeeds. Kubernetes expects that application components can be started in any order.
Otherwise, you can use a solution with Init Containers.
You can look at this question on Stack Overflow, which is just one of many about the practical use of Init Containers for the case described.
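For illustration, a minimal sketch of that approach (the Service name oracledb and port 1521 are assumptions; adjust them to your setup). The init container blocks the Pod's startup until the database port accepts TCP connections, so the RESTapi container never starts before the DB is reachable:

spec:
  initContainers:
  - name: wait-for-oracledb
    image: busybox:1.36
    command:
    - sh
    - -c
    - |
      # loop until a TCP connection to the database succeeds
      until nc -z -w 2 oracledb 1521; do
        echo "waiting for oracledb..."
        sleep 5
      done
  containers:
  - name: restapi
    image: my-restapi:latest   # placeholder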
An elegant way to achieve this is by using a combination of Kubernetes Init Containers paired with k8s-wait-for scripts.
Essentially, what you would do is configure an Init Container for your RESTapi which uses k8s-wait-for. You configure k8s-wait-for to wait for a specific pod to be in a given state; in this case, you would point it at the OracleDB pod and wait for it to be in a Ready state.
The resulting effect will be that the deployment of the RESTapi will be paused until the OracleDB is ready to go. That should alleviate the constant restarts.
https://github.com/groundnuty/k8s-wait-for
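A sketch of what that could look like in the RESTapi Deployment (the image tag and the app=oracledb label are assumptions; note that the Pod's service account also needs RBAC permission to read pod status for k8s-wait-for to work):

spec:
  initContainers:
  - name: wait-for-oracledb
    image: groundnuty/k8s-wait-for:v2.0   # pick the tag you actually use
    args:
    - pod
    - -lapp=oracledb                      # exits only when a matching pod is Ready
  containers:
  - name: restapi
    image: my-restapi:latest              # placeholder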
I have a back-end service that I will control using Kubernetes (with a Helm chart). This back-end service connects to a database (MongoDB, it so happens). There is no point in starting up the back-end service until the database is ready to receive a connection (the back-end will handle the missing database by retrying, but it wastes resources and fills the log file with distracting error messages).
To do this I believe I could add an init container to my back-end, and have that init container wait (or poll) until the database is ready. It seems this is one of the intended uses of init containers:
Because init containers run to completion before any app containers start, init containers offer a mechanism to block or delay app container startup until a set of preconditions are met.
That is, have the init container of my service do the same operations as the readiness probe of the database. That in turn means copying and pasting code from the configuration (Helm chart) of the database to the configuration (or Helm chart) of my back-end. Not ideal. Is there an easier way? Is there a way I can declare to Kubernetes that my service should not be started until the database is known to be ready?
If I have understood you correctly:
From MongoDB's point of view, everything works as expected when using a readinessProbe.
As per documentation:
The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
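For instance, a readinessProbe on the MongoDB container could look roughly like this (a sketch only; older official images ship the mongo shell while newer ones use mongosh, so adjust the command to your image):

readinessProbe:
  exec:
    command:
    - mongo
    - --quiet
    - --eval
    - "db.adminCommand('ping')"   # succeeds only when mongod answers a ping
  initialDelaySeconds: 10
  periodSeconds: 10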
From the back-end's point of view, you can use an init container. The one drawback is that the init container runs only once: your back-end starts after the init container completes and the DB pod is ready at that moment, but if the DB fails later, the back-end service will keep filling your logs with error messages, as before.
So what I can propose is to use the solution described here.
In your back-end deployment you can add an additional readinessProbe to verify whether the primary deployment is ready to serve traffic, and use a sidecar container to handle this process (verifying the connection to the primary DB service and, for example, writing info to a static file at a specific interval). As an example, please take a look at the EKG library with its mongoDBCheck sidecar.
Or simply use an exec command that checks the result of the script running inside your sidecar container:
readinessProbe:
  exec:
    command:
    - find
    - alive.txt
    - -mmin
    - '-1'                  # ready only if alive.txt was modified within the last minute
  initialDelaySeconds: 5
  periodSeconds: 15
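To make the file-based probe above concrete, here is a hypothetical sidecar that keeps alive.txt fresh only while MongoDB is reachable (the Service name mongodb, port 27017 and the /health path are assumptions; the readinessProbe would then check /health/alive.txt through the same shared volume):

containers:
- name: mongo-check
  image: busybox:1.36
  command:
  - sh
  - -c
  - |
    while true; do
      # refresh the marker file only if a TCP connection to Mongo succeeds
      if nc -z -w 2 mongodb 27017; then
        touch /health/alive.txt
      fi
      sleep 15
    done
  volumeMounts:
  - name: health
    mountPath: /health
volumes:
- name: health
  emptyDir: {}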
Hope this helps.
I have an ECS service that serves an SSH process. I am deploying updates to this service through CodeDeploy. I noticed that this service is much slower to deploy than other services with identical images deployed at the same time using CodePipeline. The difference with this service is that it's behind an NLB (the others are no LB or behind an ALB).
The service is set to 1 container, deploying at 200%/100%, so the service brings up 1 new container, ensures it's healthy, then removes the old one. What I see happen is:
New Container started in Initial state
3+ minutes later, New Container becomes Healthy. Old Container enters Draining
2+ minutes later, Old Container finishes Draining and stops
Deploying thus takes 5-7 minutes, mostly waiting for health checks or draining. However, I'm pretty sure SSH starts up very quickly, and I have the following settings on the target group which should make things relatively quick:
TCP health check on the correct port
Healthy/Unhealthy threshold: 2
Interval: 10s
Deregistration Delay: 10s
ECS Docker stop custom timeout: 65s
So the minimum time from SSH being up to the old container being terminated would be:
2*10=20s for TCP health check to turn to Healthy
10s for the deregistration delay before Docker stop
65s for the Docker stop timeout
That adds up to roughly 95 seconds, which is a lot less than the observed 5-7 minutes. Other services take 1-3 minutes and the LB/Target Group timings are not nearly as aggressive there.
Any ideas why my service behind an NLB seems slow to cycle through these lifecycle transitions?
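For reference, the target group settings listed above would look roughly like this in CloudFormation (resource names, the VPC ID and the SSH port are illustrative, not taken from the question):

SshTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Protocol: TCP                    # NLB target group
    Port: 22
    TargetType: ip
    VpcId: vpc-0123456789abcdef0     # placeholder
    HealthCheckProtocol: TCP         # TCP health check on the traffic port
    HealthCheckIntervalSeconds: 10
    HealthyThresholdCount: 2
    UnhealthyThresholdCount: 2
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: "10"
# The 65s Docker stop timeout corresponds to StopTimeout on the ECS
# container definition (or the ECS_CONTAINER_STOP_TIMEOUT agent variable).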
You are not doing anything wrong here; this simply appears to be a (current) limitation of this product.
I recently noticed similar delays in registration/availability time with ECS services behind an NLB and decided to explore. I created a simple Javascript TCP echo server and set it up as an ECS service behind an NLB (ECS service count of 1). Like you, I used a TCP healthcheck with a healthy/unhealthy threshold of 2 and interval/deregistration delay of 10 seconds.
After the initial deploy was successful and the service reachable via the NLB, I wanted to see how long it would take for the service to be restored in the event of a complete failure of the underlying instance. To simulate this, I killed the service via the ECS console. After several iterations of this test, I consistently observed a timeline similar to the following (times are in seconds):
0s: killed service
5s: ECS reports old service draining; Target Group shows service draining; ECS reports new service instance is started
15s: ECS reports new task is registered; Target Group shows new instance with status of 'initial'
135s: TCP healthcheck traffic from the load balancer starts arriving for the service (as measured by tcpdump on the EC2 host running the container)
225s: Target Group finally marks the service as 'healthy'; ECS reports service has reached a steady state
I performed the same tests with a simple express app behind an ALB, and the gap between ECS starting the service and the ALB reporting it healthy was 10-15 seconds. The best result we achieved testing the NLB was 3.5 minutes from service stop to full availability.
I shared these findings with AWS via support case, asking specifically for clarification on why there was a consistent 120 second gap before the NLB started healthchecking the service and why we consistently saw 90-120 seconds between the beginning of healthchecks and service availability. They confirmed that this behavior is known but did not offer a time for resolution or a strategy to decrease latency in service availability.
Unfortunately, this will not do much to help resolve your issue, but at least you can know that you're not doing anything wrong.
I have a web deployment and a MongoDB StatefulSet. The web deployment connects to the MongoDB, but once in a while an error may occur in the MongoDB and it reboots and starts up again. The connection from the web deployment to the MongoDB never gets re-established. Is there a way, if the MongoDB pod restarts, to restart the web pod as well?
Yes, you can use a liveness probe on your application container that probes your Mongo Pod/StatefulSet. You can configure it so that it fails when it cannot open a TCP connection to your Mongo Pod/StatefulSet, i.e. when Mongo crashes (maybe check every second).
Keep in mind that with this approach you will have to always start your Mongo Pod/StatefulSet first.
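A minimal sketch of such a probe on the web container (the Service name mongodb and the numbers are assumptions; tune the threshold so brief Mongo blips don't restart your web pod unnecessarily):

livenessProbe:
  tcpSocket:
    host: mongodb       # Service in front of the Mongo StatefulSet
    port: 27017
  periodSeconds: 1      # "check every second", as suggested above
  failureThreshold: 5   # restart the web container after 5 consecutive failures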
The sidecar function described in the other answer should work too, only it would take a bit more configuration.
Unfortunately, there's no easy way to do this within Kubernetes directly, as Kubernetes has no concept of dependencies between resources.
The best place to handle this is within the web server pod itself.
The ideal solution is to update the application to retry the connection on a failure.
A less ideal solution would be to have a side-car container that just polls the database and causes a failure if the database goes down, which should cause Kubernetes to restart the pod.
How does the auto-recovery option on a container in a scalable group work?
I have enabled it (by using --auto and it says Autorecovery: On in the web UI) but it did not try to restart the container when it crashed this morning.
The container in the group died at 2015-09-29T05:51:27.187Z and was manually restarted over one hour later at 2015-09-29T07:35:33.561Z
Restarting the container "solves" the runtime problem (a bug that is being fixed) until a user tries to do the same thing again in the app, crashing it.
According to the docs:
To start a new container when one of the containers in the group crashes or becomes unavailable, select the Enable autorecovery option. If you do not select this option, a new instance is not started automatically.
Listed in known problems:
Auto-recovery is not immediate
Auto-recovery for container groups might take more than 15 minutes for new systems to come online. Wait for auto-recovery to become available, which can take more than 15 minutes.
For every container in the group, the service will run a curl request against the port that you specified when you created the group.
If a container does not respond for whatever reason, the service assumes the container needs to be replaced. So it will destroy that container and create a new one in its place.
The fine print:
1. The containers need to be running a service that responds to HTTP requests on a particular port.
2. The port that you expose when you create the container group must be the same as the port in #1.
3. The port in #1/#2 must respond to HTTP requests, not HTTPS requests. The route for the group (e.g. https://example.mybluemix.net) is secure, and traffic internally from the route to the containers is also encrypted, so the containers in the group do not need to listen on HTTPS.
The service checks every container in the group once every 2 minutes or so.
Roughly speaking, if the service has to replace every instance in the group more than 3 times within about a 10-minute period, the service will stop tearing down and recovering instances in the group from that point forward. On the Bluemix site, you might see the Autorecovery label switch from On to Off. This prevents a never-ending loop of teardowns and replacements of containers that are either always crashing or consistently non-responsive.
In the IBM Containers service, auto-recovery works by the service doing an http curl against the port that you specify when you launch the container group. If that port does not respond to an http curl, then the service assumes it needs to be recovered and will destroy that container and recreate it.