With the following commit message I can add a YouTrack work item of the type Development:
#DEV-1 work Development 15m
How can I add multiple work items from one commit? I tried the following:
#DEV-1 work Development 15m Documentation 15m
#DEV-1 work Development 15m, Documentation 15m
#DEV-1 work Development 15m work Documentation 15m
#DEV-1 work Development 15m, work Documentation 15m
Unfortunately, it is neither possible to execute a single command that adds multiple work items, nor to execute several commands from a single VCS commit message.
I am using Terraform to deploy Helm releases. It took about 4 minutes until I saw a couple of pods show up (this issue is about pods appearing slowly, not starting up slowly).
A couple of things I have checked:
liveness probe -> might not be related, as it only applies after the pod is created; my current issue is pods being created/showing up slowly
request/limit -> might not be related, as this applies after the pod is created; the pod has a Pending status
I need some ideas about why it behaves like this. Thanks.
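Update: for anyone triaging the same symptom, the scheduler records why a pod sits in Pending as events, so a generic first check (plain kubectl, nothing specific to this setup) looks like:
# Why is this pod Pending? Scheduling events show up at the bottom.
kubectl describe pod <pod-name> -n <namespace>
# Cluster events in chronological order; look for FailedScheduling or slow image pulls.
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp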
Is there a way to get the logs for a particular K8s deployment revision within my cluster, given that the ReplicaSet related to that revision is no longer serving pods?
For example, kubectl rollout history deployment/pod1-dep would result in:
1
2 <- failed deploy
3 <- Latest deployment successful
If I want to pick up the logs related to the events in revision 2, would that be possible, or is there a way to achieve such functionality?
This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.
As David Maze rightly suggested in his comment above:
Once a pod is deleted, its logs are gone with it. If you have some sort of external log collector, that will generally keep historical logs for you, but you needed to have set that up before you attempted the update.
So the answer to your particular question is: no, you can't get such logs once those particular pods are deleted.
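One small aside (standard kubectl, not a substitute for real logs): you can still inspect what the failed revision deployed, and if a container merely restarted inside a still-existing pod, its previous output is recoverable:
# Show the pod template that was rolled out as revision 2
kubectl rollout history deployment/pod1-dep --revision=2
# For a container that restarted (pod still exists), fetch its prior logs
kubectl logs <pod-name> --previous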
Is it possible to set the --http2-max-streams-per-connection value for a cluster created by Kops?
I have an interesting situation where one of my nodes falls into a NotReady status whenever I deploy a helm chart to it and I feel like it might be connected to this setting.
The nodes are generally fine and run without any issues, but once I deploy my helm chart, the status of whichever node it gets deployed to changes after a few minutes to NotReady, which I find weird.
I've done a bit of reading and seen a number of similar issues pointing to the setting --http2-max-streams-per-connection, but I'm not sure how to go about setting this.
Any ideas anyone?
You should be able to set this by adding the following to the cluster spec:
spec:
  kubeAPIServer:
    http2MaxStreamsPerConnection: <value>
See https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#KubeAPIServerConfig and https://kops.sigs.k8s.io/cluster_spec/
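If it helps, the usual kOps workflow for applying a spec change looks like this (standard kops commands; substitute your cluster name):
# Open the cluster spec in an editor and add the kubeAPIServer section above
kops edit cluster <cluster-name>
# Apply the change; a rolling update restarts the control plane with the new flag
kops update cluster <cluster-name> --yes
kops rolling-update cluster <cluster-name> --yes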
That being said, I do not believe your NotReady nodes are due to that setting. You may want to join #kops-users on the Kubernetes Slack and ask for help triaging that problem.
I am currently working on a monitoring service that will monitor Kubernetes deployments and their pods. I want to notify users when a deployment is not running the expected number of replicas and also when pods' containers restart unexpectedly. These may not be the right things to monitor, and I would greatly appreciate some feedback on what I should be monitoring.
Anyway, the main question is about the differences between all of the pod statuses; by statuses I mean the STATUS column shown when running kubectl get pods. The statuses in question are:
- ContainerCreating
- ImagePullBackOff
- Pending
- CrashLoopBackOff
- Error
- Running
What causes pod/containers to go into these states?
For the first four Statuses, are these states recoverable without user interaction?
What is the threshold for a CrashLoopBackOff?
Is Running the only status that has a Ready Condition of True?
Any feedback would be greatly appreciated!
Also, would it be bad practice to use kubectl in an automated script for monitoring purposes? For example, every minute log the results of kubectl get pods to Elasticsearch?
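For concreteness, this is roughly the kind of script I mean (a hypothetical sketch; the index name and Elasticsearch URL are placeholders, and it assumes jq and curl are available):
#!/bin/sh
# Dump pod name, phase, and restart count as JSON and index each record into Elasticsearch.
ES_URL=${ES_URL:-http://localhost:9200}
kubectl get pods --all-namespaces -o json \
  | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, phase: .status.phase, restarts: ((.status.containerStatuses // []) | map(.restartCount) | add // 0)}' \
  | while read -r doc; do
      curl -s -XPOST "$ES_URL/pod-status/_doc" -H 'Content-Type: application/json' -d "$doc" > /dev/null
    done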
You can see the pod lifecycle details in the Kubernetes documentation.
The recommended way of monitoring a Kubernetes cluster and its applications is with Prometheus.
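As a rough illustration of the Prometheus approach (this assumes kube-state-metrics is installed, which is what exposes the metrics below), alert rules for your two cases could look like:
groups:
- name: workload-alerts
  rules:
  # Deployment running fewer available replicas than desired
  - alert: DeploymentReplicasMismatch
    expr: kube_deployment_spec_replicas != kube_deployment_status_replicas_available
    for: 5m
  # Any container restarted within the last 15 minutes
  - alert: PodRestarting
    expr: increase(kube_pod_container_status_restarts_total[15m]) > 0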
I will try to explain what is hidden behind these terms:
ContainerCreating
Shown while we wait for the image to be downloaded and the container to be created by Docker or another container runtime.
ImagePullBackOff
Shown when there is a problem downloading the image from a registry, for example wrong credentials for logging in to Docker Hub.
Pending
The pod has been accepted by the cluster, but one or more of its containers has not been created or scheduled yet, for example while it waits for free resources or for an image pull. (A container that started but whose readinessProbe fails shows up as Running but not Ready, rather than Pending.)
CrashLoopBackOff
This status shows when container restarts happen too often. For example, a process tries to read a file that does not exist and crashes; the container is then recreated by the kubelet and the cycle repeats, with an exponentially growing back-off delay between restarts (capped at five minutes).
Error
This is pretty clear: the container terminated with an error, i.e. a non-zero exit code.
Running
All is good: the container is running and its livenessProbe is OK.
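On the Ready question: the STATUS column and the Ready condition are separate fields, so a Running pod can still be unready. You can compare them directly with a jsonpath query (standard kubectl, sketch only):
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'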
Attempted to install: jFrog Artifactory HA
Platform: GCE kubernetes cluster on CoreOS; 1 master, 2 workers
Installation method: Helm chart
Helm steps taken:
Add jFrog repo to local helm: helm repo add jfrog https://charts.jfrog.io
Install license as kubernetes secret in cluster: kubectl create secret generic artifactory-cluster-license --from-file=./art.lic
Install via helm:
helm install --name artifactory-ha jfrog/artifactory-ha \
  --set artifactory.masterKey=,artifactory.license.secret=artifactory-cluster-license,artifactory.license.dataKey=art.lic
Result:
Helm installation went without complaint. Checked services, seemed to be fine, LoadBalancer was pending and came online.
Checked PVs and PVCs, seemed to be fine and bound:
NAME STATUS
artifactory-ha-postgresql Bound
volume-artifactory-ha-artifactory-ha-member-0 Bound
volume-artifactory-ha-artifactory-ha-primary-0 Bound
Checked the pods and only postgres was ready:
NAME READY STATUS RESTARTS AGE
artifactory-ha-artifactory-ha-member-0 0/1 Running 0 3m
artifactory-ha-artifactory-ha-primary-0 0/1 Running 0 3m
artifactory-ha-nginx-697844f76-jt24s 0/1 Init:0/1 0 3m
artifactory-ha-postgresql-676999df46-bchq9 1/1 Running 0 3m
Waited for a few minutes, no change. Waited 2 hours, still at the same state as above. Checked logs of the artifactory-ha-artifactory-ha-primary-0 pod (it's quite long, but I can post if that will help anybody determine the problem), but noted this error:
SEVERE: One or more listeners failed to start. Full details will be found in the appropriate container log file.
I couldn't think of where else to check for logs. Services were running, and the other pods seemed to be waiting on this primary pod.
The log continues with SEVERE: Context [/artifactory] startup failed due to previous errors and then starts spewing Java stack dumps after the "ACCESS" ASCII art, messages like WARNING: The web application [artifactory] appears to have started a thread named [Thread-5] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
I ended up leaving the cluster up over night, and now, about 12 hours later, I'm very surprised to see that the "primary" pod did actually come online:
NAME READY STATUS RESTARTS AGE
artifactory-ha-artifactory-ha-member-0 1/1 Terminating 0 19m
artifactory-ha-artifactory-ha-member-1 0/1 Terminating 0 17m
artifactory-ha-artifactory-ha-primary-0 1/1 Running 0 3h
artifactory-ha-nginx-697844f76-vsmzq 0/1 Running 38 3h
artifactory-ha-postgresql-676999df46-gzbpm 1/1 Running 0 3h
The nginx pod, though, did not. Its init container command (until nc -z -w 2 artifactory-ha 8081 && echo artifactory ok; do) eventually succeeded, but the pod cannot pass its readiness probe: Warning Unhealthy 1m (x428 over 3h) kubelet, spczufvthh-worker-1 Readiness probe failed: Get http://10.2.2.45:80/artifactory/webapp/#/login: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Perhaps I missed a required step in the setup or in the helm installation switches? This is my first attempt at setting up jFrog Artifactory HA, and I noticed most of the instructions seem to be for bare-metal clusters, so perhaps I confused something.
Any help is appreciated!
It turned out we had messed up a couple of things and had a few misunderstandings about how the install process works. Maybe this will be of some help to people in the future.
1) The masterKey value needs to be at least 16 characters long. We had initially tried a key that was too short (a key-generation sketch follows at the end of this answer). We tried installing again and writing the new masterKey to a secret instead, but...
2) The values in the secrets seem to get read once at the initial install attempt; then they are written to the persistent volume, and updating the secret after that seems to have no effect.
3) We also didn't understand the license key format and constraints. You need a license for every node that will run Artifactory, and all the licenses go into a single file, with each license separated by two newlines (see the layout sketch below).
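So the art.lic file fed to kubectl create secret ends up laid out roughly like this, with one license block per Artifactory node and a blank line between blocks (placeholders, not real license text):
<license text for node 1>

<license text for node 2>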
The error logs were pretty unhelpful to us in these errors. We eventually wiped out the install, including the PVs, and finally everything went fine.
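For anyone hitting the same masterKey length issue, one simple way to generate a long-enough key (a sketch; any string of at least 16 characters works):
# 32 hex characters, comfortably over the 16-character minimum
MASTER_KEY=$(openssl rand -hex 16)
helm install --name artifactory-ha jfrog/artifactory-ha \
  --set artifactory.masterKey=$MASTER_KEY,artifactory.license.secret=artifactory-cluster-license,artifactory.license.dataKey=art.lic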