Service fabric package error during code package activation - azure-service-fabric

I have a ServiceFabric Application running on 5 nodes. I was in the process of restarting the nodes one by one to get rid of a known Memory Leakage problem.
Restarting node 1 went fine.
Ran into problems when restarting node 2.
On node 2 there is a Stateful Service with 1 (large) partition that ended up in Quorum Loss.
Unhealthy deployed service package: ApplicationName='fabric:/MyApplication', ServiceManifestName='ServicePkg', ServicePackageActivationId='', NodeName='node02', AggregatedHealthState='Error'.
I deleted the Service from the node, which went fine, but ServicePkg is still lingering on that node with the following error:
Error event: SourceId='System.Hosting', Property='CodePackageActivation:Code:EntryPoint'. There was an error during CodePackage activation.The service host terminated with exit code:2148734499
For now I managed to get the service started on other nodes by allowing more replicas, but as long as the error on Node 2 exists, upgrades will fail.
Can this be fixed without a complete purge on Node 2?
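Not an answer as such, but before resorting to a full purge it may help to see what Service Fabric still thinks is deployed on node02. A minimal sketch using the cluster's HTTP management endpoint via curl (assuming the default port 19080 is reachable; mycluster.example.com is a placeholder for your cluster address):

# Health events Service Fabric has recorded for node02
curl 'http://mycluster.example.com:19080/Nodes/node02/$/GetHealth?api-version=6.0'
# Code packages of ServicePkg still considered deployed on node02 for fabric:/MyApplication
curl 'http://mycluster.example.com:19080/Nodes/node02/$/GetApplications/MyApplication/$/GetCodePackages?api-version=6.0&ServiceManifestName=ServicePkg'

The health events from the first call usually include the same CodePackageActivation error with a timestamp, which at least tells you whether the stale package is still being retried or is just a lingering health report.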

Related

pixielabs deploy stuck Wait for PEMs/Kelvin

After installing pixielabs with the bash-installer and deploying with px deploy, this deployment got stuck (over 30min) with:
Wait for PEMs/Kelvin
After aborting, I got a new namespace pl with many pods Pending or in Init.
But no working Pixie deployment.
Check if the etcd pod in the pl namespace is in pending state.
The Pixie Command Module is deployed in the K8s cluster to keep data storage isolated, so you'll need a persistent volume available in your cluster.
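A rough way to check that (standard kubectl only; the etcd pod name below comes from the first command's output):

kubectl get pods -n pl                      # is the etcd pod stuck in Pending?
kubectl get pvc -n pl                       # is its PersistentVolumeClaim Bound or Pending?
kubectl get storageclass                    # does the cluster have a StorageClass that can satisfy the claim?
kubectl describe pod <etcd-pod-name> -n pl  # the Events section normally says why scheduling failed

If the PVC is Pending because no StorageClass/persistent volume is available, provisioning one and re-running px deploy should unblock the etcd pod.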

Kubernetes Node NotReady: ContainerGCFailed / ImageGCFailed context deadline exceeded

Worker node is getting into "NotReady" state with an error in the output of kubectl describe node:
ContainerGCFailed rpc error: code = DeadlineExceeded desc = context deadline exceeded
Environment:
Ubuntu 16.04 LTS
Kubernetes version: v1.13.3
Docker version: 18.06.1-ce
There is a closed issue about this on Kubernetes GitHub (k8 git), which was closed on the grounds that it is a Docker issue.
Steps done to troubleshoot the issue:
kubectl describe node - error in question was found (root cause isn't clear).
journalctl -u kubelet - shows this related message:
skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
it is related to this open k8 issue Ready/NotReady with PLEG issues
Checked node health on AWS with CloudWatch - everything seems to be fine.
journalctl -fu docker.service - checked Docker for errors/issues; the output doesn't show any errors related to that.
systemctl restart docker - after restarting docker, the node gets into "Ready" state but in 3-5 minutes becomes "NotReady" again.
It all seems to start when I deploy more pods to the node (close to its resource capacity, though I don't think that is a direct dependency) or stop/start instances (after a restart it is OK, but after some time the node is NotReady again).
Questions:
What is the root cause of the error?
How to monitor that kind of issue and make sure it doesn't happen?
Are there any workarounds to this problem?
What is the root cause of the error?
From what I was able to find, the error seems to happen when there is an issue contacting Docker, either because it is overloaded or because it is unresponsive. This is based on my experience and on what has been mentioned in the GitHub issue you provided.
How to monitor that kind of issue and make sure it doesn't happen?
There seems to be no established mitigation or monitoring for this. The best approach is to make sure your node will not be overloaded with pods. The condition does not always show up as disk or memory pressure on the node, but it is probably a matter of Docker not having enough resources and failing to respond in time. The proposed solution is to set resource limits for your pods to prevent overloading the node.
In the case of managed Kubernetes on GKE (other vendors probably have a similar feature), there is a feature called node auto-repair. It will not prevent node pressure or Docker-related issues, but when it detects an unhealthy node it can drain and redeploy the node(s).
If you already have requests and limits set, the best way to make sure this does not happen is to increase the memory requests for your pods. This means fewer pods per node, and the actual memory used on each node should be lower.
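A minimal sketch of that idea, assuming a Deployment called my-app (a placeholder) whose containers currently have no explicit memory settings:

# Give the containers explicit requests/limits so the scheduler packs fewer pods per node
kubectl set resources deployment my-app --requests=cpu=250m,memory=512Mi --limits=cpu=500m,memory=1Gi
# Check how much of the node's allocatable memory is now requested
kubectl describe node <node-name> | grep -A 8 'Allocated resources'

The exact numbers are workload-specific; the point is simply that larger requests mean fewer pods scheduled onto each node.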
Another way of monitoring/recognizing this is to SSH into the node and check the memory, the processes with ps, the syslog, and the output of docker stats --all.
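For example (standard tooling, nothing Kubernetes-specific; <node-address> is a placeholder):

ssh <node-address>
free -m                                  # overall memory pressure on the node
ps aux --sort=-%mem | head -n 15         # processes using the most memory
tail -f /var/log/syslog                  # watch for kubelet/dockerd complaints as they happen
docker stats --all --no-stream           # one-off per-container CPU/memory snapshot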
I got the same issue. I cordoned the node and evicted the pods, then rebooted the server; the node automatically came back into Ready state.

All Kubernetes Pods go down simultaneously periodically

I've been running a Kubernetes cluster for a while now, but I haven't been able to keep it stable.
My cluster consists of four nodes, two masters and two workers. All nodes run on the same physical server, which in turn runs VMware vSphere 6.5. Each node runs CoreOS stable (1353.7.0), and I'm running Kubernetes/Hyperkube v1.6.4, using Calico for networking. I've followed the steps in this guide.
What happens is that for a few hours/days, the cluster will run without a hitch. Then, all of a sudden (for no discernible reason as far as I can tell) all my pods go to status "Pending" and stay that way. Any hosted services are then no longer reachable.
After a while (usually 5 to 10 minutes), it seems to restore itself, after which it starts recreating all my pods, and trying (but failing) to shut down all my running pods. Some of the newly created pods come up, but will initially have no connection to the internet.
For a couple of weeks now I've had this issue intermittently, and it's been preventing me from using Kubernetes in production. I'd really like to figure out what's been causing this!
Weirdly enough, when I try to diagnose the problem by inspecting the logs, I notice that on both of my worker nodes the journald logs have become corrupted! On the master nodes, the log is still readable, but not very informative.
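For what it's worth, that kind of journal corruption can usually be confirmed and cleared with standard systemd tooling, independent of Kubernetes; a hedged sketch:

sudo journalctl --verify                        # reports which journal files fail verification
sudo mv /var/log/journal/*/*.journal~ /tmp/     # journald renames corrupted files with a trailing ~; move them aside if any exist
sudo systemctl restart systemd-journald         # start writing to fresh journal files

That won't explain the outages by itself, but it should at least give readable worker-node logs for the next occurrence.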
Even when running, kubelet is constantly emitting errors in its logs. On all the nodes, this is what's posted about once a minute:
May 26 09:37:14 kube-master1 kubelet-wrapper[24228]: E0526 09:37:14.012890 24228 cni.go:275] Error deleting network: open /var/lib/cni/flannel/3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233: no such file or directory
May 26 09:37:14 kube-master1 kubelet-wrapper[24228]: E0526 09:37:14.014762 24228 remote_runtime.go:109] StopPodSandbox "3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "logstash-s3498_default" network: open /var/lib/cni/flannel/3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233: no such file or directory
May 26 09:37:14 kube-master1 kubelet-wrapper[24228]: E0526 09:37:14.014818 24228 kuberuntime_gc.go:138] Failed to stop sandbox "3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "logstash-s3498_default" network: open /var/lib/cni/flannel/3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233: no such file or directory
May 26 09:38:07 kube-master1 kubelet-wrapper[24228]: I0526 09:38:07.422341 24228 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9a378211-3597-11e7-a7ec-000c2958a0d7-default-token-0p3gf" (spec.Name: "default-token-0p3gf") pod "9a378211-3597-11e7-a7ec-000c2958a0d7" (UID: "9a378211-3597-11e7-a7ec-000c2958a0d7").
May 26 09:38:14 kube-master1 kubelet-wrapper[24228]: W0526 09:38:14.037553 24228 docker_sandbox.go:263] NetworkPlugin cni failed on the status hook for pod "logstash-s3498_default": Unexpected command output nsenter: cannot open : No such file or directory
May 26 09:38:14 kube-master1 kubelet-wrapper[24228]: with error: exit status 1
I've googled this error and encountered this issue, but that has been closed; people indicate that using v1.6.0 or later should resolve it, but it definitely hasn't in my case...
Can anybody point me in the right direction?!
Thanks!
Seen this as well. The problem seems to go away if you downgrade CoreOS to an older version with Docker 1.12.3.
Docker is a nightmare with regressions in every single version they release :(
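If you want to confirm what each node is actually running before deciding on a downgrade, something like:

cat /etc/os-release                               # CoreOS release on the node
docker version --format '{{.Server.Version}}'     # Docker engine version in use
kubectl get nodes -o wide                         # kubelet version and OS image as Kubernetes sees them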

Updating deployment in GCE leads to node restart

We have an odd issue happening with GCE.
We have 2 clusters, dev and prod, each consisting of 2 nodes.
Production nodes are n1-standard-2; dev nodes are n1-standard-1.
Typically dev cluster is busier with more pods eating more resources.
We deploy updates mostly with deployments (a few projects still recreate RCs to update to the latest versions).
Normally, the process is: build project, build docker image, docker push, create new deployment config and kubectl apply new config.
What's constantly happening on production is that after applying a new config, one or both nodes restart. The cluster does not seem to be starved of memory/CPU, and we could not find anything in the logs that would explain those restarts.
Same procedure on staging never causes nodes to restart.
What can we do to diagnose the issue? Are there any specific events or logs we should be looking at?
Many thanks for any pointers.
UPDATE:
This is still happening, and I found the following in Compute Engine - Operations:
repair-1481931126173-543cefa5b6d48-9b052332-dfbf44a1
Operation type: compute.instances.repair.recreateInstance
Status message : Instance Group Manager 'projects/.../zones/europe-west1-c/instanceGroupManagers/gke-...' initiated recreateInstance on instance 'projects/.../zones/europe-west1-c/instances/...'. Reason: instance's intent is RUNNING but instance's health status is TIMEOUT.
We still can't figure out why this is happening and it's having a negative effect on our production environment every time we deploy our code.
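One place to keep digging is the managed instance group's own record of those repairs, cross-checked against what Kubernetes saw on the node at the same time; a hedged sketch (zone and node name are placeholders):

# Recent auto-repair operations in the zone, to see how often instances are being recreated
gcloud compute operations list --zones europe-west1-c --filter='operationType=compute.instances.repair.recreateInstance'
# Node-side view around the same timestamps
kubectl get events --all-namespaces --sort-by=.lastTimestamp
kubectl describe node <node-name>    # Conditions and recent events for the recreated node

The TIMEOUT reason suggests the instance group's health check stopped getting responses, so correlating the repair timestamps with kubelet/Docker restarts in the node events is probably the quickest way to narrow it down.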

Minions can't rejoin cluster on reboot of AWS instance

The Kubernetes cluster, using v1.3.4, starts a master and 2 minions.
The cluster starts fine and pods can be started and controlled without issue.
As soon as one of the minions is rebooted, or any of the dependent services such as kubelet is restarted, the minion will not rejoin the cluster.
The error from the kubelet service is of the form:
Aug 08 08:21:15 ip-10-16-1-20 kubelet[911]: E0808 08:21:15.955309 911 kubelet.go:2875] Error updating node status, will retry: error getting node "ip-10-16-1-20.us-west-2.compute.internal": nodes "ip-10-16-1-20.us-west-2.compute.internal" not found
The only way that we can see to rectify this issue at the moment is to tear down the whole cluster and rebuild it.
UPDATE:
I had a look at the controller manager log and got the following:
W0815 13:36:39.087991 1 nodecontroller.go:433] Unable to find Node: ip-10-16-1-25.us-west-2.compute.internal, deleting all assigned Pods.
W0815 13:37:39.123811 1 nodecontroller.go:433] Unable to find Node: ip-10-16-1-25.us-west-2.compute.internal, deleting all assigned Pods.
E0815 13:37:39.133045 1 nodecontroller.go:434] pods "kube-proxy-ip-10-16-1-25.us-west-2.compute.internal" not found
This is actually a CoreOS issue, although it is difficult to ascertain what the problem actually is. It is more than likely the low-level OS host resolution code being called from the AWS Go layers, but that is purely a guess. Upgrading the CoreOS AMI to a later version solved many of the issues we were facing.
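If anyone hits the same symptom, one thing worth checking before tearing the cluster down is whether the name the kubelet registers under still matches what AWS and the API server expect; a rough sketch on an affected minion (the metadata URL is the standard EC2 one):

hostname -f                                                        # name the kubelet reports; compare with the "not found" name in the error
curl -s http://169.254.169.254/latest/meta-data/local-hostname     # name AWS assigns to the instance
kubectl get nodes                                                  # names currently registered with the API server
sudo systemctl restart kubelet                                     # if the node object was deleted, the kubelet normally re-registers on restart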