I have installed Sonatype nexus repository manager in my Kubernetes Cluster using the helm chart.
I am using the Kyma installation.
Nexus repository manager got installed properly and I can access the application.
But it seems the login password file is on a PersistentVolumeClaim mounted at /nexus-data in the pod.
Now whenever I am trying to access the pod with kubectl exec command:
kubectl exec -i -t $POD_NAME -n dev -- /bin/sh
I am getting the following error:
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
I understand that this happens because the image does not include a shell.
Is there any other way I can access the password file present in the PVC?
You can try the kubectl cp command, but it probably won't work either, as kubectl cp needs a tar binary inside the container (which such minimal images usually lack, just like a shell).
You can't really access the PV used by a PVC directly in Kubernetes, but there is a simple workaround: create another pod (with a shell) that mounts this PVC and access the data from there. To avoid errors like Volume is already used by pod(s) / node(s), I suggest scheduling this pod on the same node as the Nexus pod.
Check on which node is located your nexus pod: NODE=$(kubectl get pod <your-nexus-pod-name> -o jsonpath='{.spec.nodeName}')
Set a nexus label for the node: kubectl label node $NODE nexus=here (avoid using "yes" or "true" instead of "here"; Kubernetes will read it as a boolean, not as a string)
Get your nexus pvc name mounted on the pod by running kubectl describe pod <your-nexus-pod-name>
Create a simple pod definition referring to the Nexus PVC from the previous step:
apiVersion: v1
kind: Pod
metadata:
  name: access-nexus-data
spec:
  containers:
    - name: access-nexus-data-container
      image: busybox:latest
      command: ["sleep", "999999"]
      volumeMounts:
        - name: nexus-data
          mountPath: /nexus-data
          readOnly: true
  volumes:
    - name: nexus-data
      persistentVolumeClaim:
        claimName: <your-pvc-name>
  nodeSelector:
    nexus: here
Access the pod using kubectl exec -it access-nexus-data -- sh and read the data. You can also use the kubectl cp command mentioned earlier, as shown below.
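For example, a rough sketch of pulling the Nexus admin password out through the helper pod (this assumes the helper pod was created in the same dev namespace as the PVC; for Nexus 3 the generated admin password is usually at /nexus-data/admin.password, but verify the path on your install):
# Copy the password file from the helper pod to the local machine
kubectl cp dev/access-nexus-data:/nexus-data/admin.password ./admin.password
cat ./admin.password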
If you are using a cloud-provided Kubernetes solution, you can also try mounting the PV used by the PVC onto a VM hosted in the cloud.
Source: a similar Stack Overflow topic
【Error Summary】
I am new to RedHat OpenShift.
OpenShift Pod status CrashLoopBackOff.
Pod logs show “id: cannot find name for user ID 1000660000” and “java.io.FileNotFoundException: ..(Permission denied)”.
I tried to solve this problem by changing the UID, but it didn't work.
If the cause is not the UID, it might be access to the PVC.
Is there any way to check and change the PVC?
【Error Reproduction(using OpenShift web console and terminal)】
1. Create an OpenShift cluster and project.
2. Add a container image from an external registry and create a deployment. (Application and component are created at the same time.)
At this point the pod was running.
3. Open the Deployment page and change the Pod number to 0.
4. Remove the existing Container Volume.
5. Add storage and create a PVC.
6. Change the Pod number to 1.
7. The pod is not running and the pod status is CrashLoopBackOff.
8. Create a service account “awag-sa” with the commands below.
oc create sa awag-sa
oc adm policy add-scc-to-user anyuid -z awag-sa
9. Create a patch yaml file “patch-file.yaml” for patching the serviceAccount:
spec:
  template:
    spec:
      serviceAccountName: awag-sa
10. Patch the deployment with the yaml file using the command below:
kubectl patch deployment nexus3-comp --patch "$(cat patch-file.yaml)"
11. Check in the Deployment yaml file (OpenShift web console) that spec.template.spec.serviceAccountName is modified correctly.
But the pod status is still CrashLoopBackOff.
…
spec:
  replicas: 0
  selector:
    matchLabels:
      app: nexus3-comp
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nexus3-comp
        deploymentconfig: nexus3-comp
      annotations:
        openshift.io/generated-by: OpenShiftWebConsole
    spec:
      restartPolicy: Always
      serviceAccountName: awag-sa
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      securityContext: {}
      containers:
        - name: nexus3-comp
…
OpenShift uses "random" UIDs -- relative to your Project / Namespace, there's an annotation telling you which UID range was allocated to your Project. Unless otherwise configured, your containers will run as a UID from that range; you can check it as shown below.
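For example (a rough check; the annotation key is standard, but the value shown here is only illustrative and <your-project> is a placeholder):
# Show the UID range allocated to the project/namespace
oc get namespace <your-project> -o yaml | grep uid-range
# e.g. openshift.io/sa.scc.uid-range: 1000660000/10000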
If your application somehow needs a working getpwnam / resolution from UID to user name, then you want to use nss_wrapper.
Make sure it is installed in your Dockerfile:
apt-get install libnss-wrapper
yum install nss_wrapper
Then, in your entrypoint, load your own passwd / groups:
RUNTIME_USER=${RUNTIME_USER:-nexus}
RUNTIME_GROUP=${RUNTIME_GROUP:-$RUNTIME_USER}
RUNTIME_HOME=${RUNTIME_HOME:-/tmp}
echo Setting up nsswrapper mapping `id -u` to $RUNTIME_GROUP
(
grep -v ^$RUNTIME_USER /etc/passwd
echo "$RUNTIME_USER:x:`id -u`:`id -g`:$RUNTIME_USER:$RUNTIME_HOME:/usr/sbin/nologin"
) >/tmp/java-passwd
(
grep -v ^$RUNTIME_GROUP /etc/group
echo "$RUNTIME_GROUP:x:`id -g`:"
) >/tmp/java-group
export NSS_WRAPPER_PASSWD=/tmp/java-passwd
export NSS_WRAPPER_GROUP=/tmp/java-group
export LD_PRELOAD=/usr/lib/libnss_wrapper.so
# or /usr/lib64/libnss_wrapper.so, on EL x86_64
[rest of your entrypoint.sh -- e.g.: exec "$@"]
edit: actually, nexus doesn't care -- though previous notes would still apply, if a container crashes complaining about some missing UID.
I can't reproduce the message you're getting. As far as I've seen, Nexus would first crash failing to write its logs, then its data. Fixed it both times by adding a volume:
oc create deploy nexus --image=sonatype/nexus3
oc edit deploy/nexus
[...]
volumeMounts:
  - mountPath: /opt/sonatype/nexus3/log
    name: empty
    subPath: log
  - mountPath: /nexus-data
    name: empty
    subPath: data
...
volumes:
  - emptyDir: {}
    name: empty
Now, in your case, /nexus-data should probably be stored in a PVC, rather than some emptyDir. Either way, adding those two volumes fixed it:
# oc logs -f nexus-f7c577ff9-pqmdc
id: cannot find name for user ID 1000230000
2021-07-10 16:36:48,155+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.pax.logging.NexusLogActivator - start
2021-07-10 16:36:49,184+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.features.internal.FeaturesWrapper - Fast FeaturesService starting
2021-07-10 16:36:53,004+0000 INFO [FelixStartLevel] *SYSTEM ROOT - bundle org.apache.felix.scr:2.1.26 (63) Starting with globalExtender setting: false
2021-07-10 16:36:53,038+0000 INFO [FelixStartLevel] *SYSTEM ROOT - bundle org.apache.felix.scr:2.1.26 (63) Version = 2.1.26
...
※ Answered by the questioner
I needed to change the volumeMounts settings of my deployment (the matching volumes entry is sketched after the snippet).
volumeMounts:
  - name: nexus-data-pvc
    mountPath: /nexus-data
    subPath: data
  - name: nexus-data-pvc
    mountPath: /opt/sonatype/nexus3/log
    subPath: log
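For completeness, a sketch of the matching volumes entry that the mounts above refer to (the volume name matches the mounts; the claim name is a placeholder for your actual PVC):
volumes:
  - name: nexus-data-pvc
    persistentVolumeClaim:
      claimName: <your-pvc-name>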
Can anyone point out how to connect to the MongoDB instance with the mongo client, either from the command-line client or from .NET Core programs using connection strings?
We have created a sample cluster in digitalocean with a namespace, let's say mongodatabase.
We installed the mongo statefulset with 3 replicas. We are able to successfully connect with the below command
kubectl --kubeconfig=configfile.yaml -n mongodatabase exec -ti mongo-0 mongo
But when we connect from a different namespace or from default namespace with the pod names in the below format, it doesn't work.
kubectl --kubeconfig=configfile.yaml exec -ti mongo-0.mongo.mongodatabase.cluster.svc.local mongo
where mongo-0.mongo.mongodatabase.cluster.svc.local is in the form pod-0.service_name.namespace.cluster.svc.local (also tried pod-0.statefulset_name.namespace.cluster.svc.local and pod-0.service_name.statefulsetname.namespace.cluster.svc.local), etc.
Can anyone help with the correct DNS name/connection string to use when connecting with the mongo client from the command line, and also from programs like Java/.NET Core?
Also, should we use a Kubernetes Deployment instead of a StatefulSet here?
You need to reference the mongo service by namespaced dns. So if your mongo service is mymongoapp and it is deployed in mymongonamespace, you should be able to access it as mymongoapp.mymongonamespace.
To test, I used the bitnami/mongodb docker client. As follows:
From within mymongonamespace, this command works
$ kubectl config set-context --current --namespace=mymongonamespace
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp
But when I switched to namespace default it didn't work
$ kubectl config set-context --current --namespace=default
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp
Qualifying the host with the namespace then works
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp.mymongonamespace
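For applications (Java/.NET Core), the same namespaced DNS name goes into the connection string. A hedged sketch only; it assumes the default cluster domain cluster.local, the default MongoDB port 27017, and placeholder database/replica-set names (mydb, rs0):
# Via the service, qualified with the namespace
mongodb://mymongoapp.mymongonamespace:27017/mydb
# Or, for a StatefulSet behind a headless service, addressing the pods directly,
# using the names from the question (pod.service.namespace.svc.cluster.local):
mongodb://mongo-0.mongo.mongodatabase.svc.cluster.local:27017,mongo-1.mongo.mongodatabase.svc.cluster.local:27017/mydb?replicaSet=rs0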
This is how you can get inside mongo-0 pod
kubectl --kubeconfig=configfile.yaml exec -ti mongo-0 sh
I think you are looking for this DNS for Services and Pods.
You can have a fully qualified domain name (FQDN) for a Service or for a Pod.
Also please have a look at this kubernetes: Service located in another namespace, as I think it will provide you with answer on how to access it from different namespace.
An example would look like this:
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
    - name: foo # Actually, no port is needed.
      port: 1234
      targetPort: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
    - image: busybox:1.28
      command:
        - sleep
        - "3600"
      name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
    - image: busybox:1.28
      command:
        - sleep
        - "3600"
      name: busybox
If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster’s KubeDNS Server also returns an A record for the Pod’s fully qualified hostname. For example, given a Pod with the hostname set to “busybox-1” and the subdomain set to “default-subdomain”, and a headless Service named “default-subdomain” in the same namespace, the pod will see its own FQDN as “busybox-1.default-subdomain.my-namespace.svc.cluster.local”. DNS serves an A record at that name, pointing to the Pod’s IP. Both pods “busybox1” and “busybox2” can have their distinct A records.
The Endpoints object can specify the hostname for any endpoint addresses, along with its IP.
Note: Because A records are not created for Pod names, hostname is required for the Pod’s A record to be created. A Pod with no hostname but with subdomain will only create the A record for the headless service (default-subdomain.my-namespace.svc.cluster.local), pointing to the Pod’s IP address. Also, Pod needs to become ready in order to have a record unless publishNotReadyAddresses=True is set on the Service.
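To verify those records, a quick hedged check from inside one of the busybox pods defined above (the output format varies with the cluster's DNS implementation):
kubectl exec busybox1 -- nslookup busybox-2.default-subdomain
kubectl exec busybox1 -- nslookup default-subdomain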
Your question about Deployments vs StatefulSets should be a different question. But the answer is that the StatefulSet is used when you want "Stable Persistent Storage" kubernetes.io.
Also from the same page: "stable is synonymous with persistence across Pod (re)scheduling". So basically your mongo instance is backed by a PersistentVolume, and you want the volume reattached after the pod is rescheduled.
I'm using OpenShift and Kubernetes as the cloud platform for my application. For test purposes I need to intercept incoming HTTP requests to my pods. Is it possible to do that with the Kubernetes client library, or can it be configured with yaml?
Simple answer is no, you can't.
One of the ways to overcome this is to exec into your container (kubectl exec -it <pod> bash), install tcpdump and run something like tcpdump -i eth0 -n.
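For instance, a rough sketch assuming a Debian/Ubuntu-based image (the package manager, interface name, and port filter will differ in your setup):
kubectl exec -it <pod> -- bash
# inside the container:
apt-get update && apt-get install -y tcpdump
tcpdump -i eth0 -n port 8080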
A more reasonable way to solve this at the infrastructure level is to use a tracing tool like Jaeger/Zipkin.
You can try something like the below; it should work. First you need to create a Job.
Let's say in a file named tcpdumppod.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: tcpdump-capture-job
  namespace: blue
spec:
  template:
    metadata:
      name: "tcpdumpcapture-pod"
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: "ip-xx-x-x-xxx.ap-south-1.compute.internal"
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: "job-container"
          image: "docker.io/centos/tools"
          command: ["/bin/bash", "-c", "--"]
          args: [ "tcpdump -i any -s0 -vv -n dst host 10.233.6.70 and port 7776 || src 10.233.64.23" ]
      restartPolicy: Never
  backoffLimit: 3
  activeDeadlineSeconds: 460
=> kubectl create -f tcpdumppod.yaml
And check the logs of the pod created by the Job once the container is running.
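For example (job and namespace names taken from the manifest above):
kubectl get pods -n blue -l job-name=tcpdump-capture-job
kubectl logs -f -n blue job/tcpdump-capture-job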
This is what I keep getting:
[root@centos-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-h6nw8 1/1 Running 0 1h
nfs-web-07rxz 0/1 CrashLoopBackOff 8 16m
nfs-web-fdr9h 0/1 CrashLoopBackOff 8 16m
Below is output from describe pods
kubectl describe pods
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
16m 16m 1 {default-scheduler } Normal Scheduled Successfully assigned nfs-web-fdr9h to centos-minion-2
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id 495fcbb06836
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id 495fcbb06836
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id d56f34ae4e8f
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id d56f34ae4e8f
16m 16m 2 {kubelet centos-minion-2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "web" with CrashLoopBackOff: "Back-off 10s restarting failed container=web pod=nfs-web-fdr9h_default(461c937d-d870-11e6-98de-005056040cc2)"
I have two pods: nfs-web-07rxz and nfs-web-fdr9h, but if I do kubectl logs nfs-web-07rxz, or with the -p option, I don't see any logs in either pod.
[root@centos-master ~]# kubectl logs nfs-web-07rxz -p
[root@centos-master ~]# kubectl logs nfs-web-07rxz
This is my replicationController yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0
          ports:
            - name: web
              containerPort: 80
          securityContext:
            privileged: true
My Docker image was made from this simple docker file:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y nfs-common
I am running my Kubernetes cluster on CentOS 1611, kube version:
[root@centos-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
If I run the Docker image with docker run I am able to run it without any issue; it's only through Kubernetes that I get the crash.
Can someone help me out? How can I debug this without seeing any logs?
As @Sukumar commented, you need to have your Dockerfile provide a command to run, or have your ReplicationController specify a command.
The pod is crashing because it starts up and then immediately exits; Kubernetes then restarts it and the cycle continues.
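For instance, since the Dockerfile in the question installs nginx, one hedged sketch is to give the image a foreground process so the container does not exit immediately (the exact command depends on what the image is actually meant to run):
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y nfs-common
# Run nginx in the foreground so the container keeps a long-running main process
CMD ["nginx", "-g", "daemon off;"]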
# Show details of a specific pod
kubectl describe pod <pod name> -n <namespace-name>

# View logs for a specific pod
kubectl logs <pod name> -n <namespace-name>
If you have an application that takes longer to bootstrap, it could be related to the initial values of the readiness/liveness probes. I solved my problem by increasing the value of initialDelaySeconds to 120s, as my Spring Boot application deals with a lot of initialization. The documentation does not mention the default of 0 (https://kubernetes.io/docs/api-reference/v1.9/#probe-v1-core).
service:
  livenessProbe:
    httpGet:
      path: /health/local
      scheme: HTTP
      port: 8888
    initialDelaySeconds: 120
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 10
  readinessProbe:
    httpGet:
      path: /admin/health
      scheme: HTTP
      port: 8642
    initialDelaySeconds: 150
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 10
A very good explanation about those values is given by What is the default value of initialDelaySeconds.
The health or readiness check algorithm works like:
wait for initialDelaySeconds
perform check and wait timeoutSeconds for a timeout
if the number of continued successes is greater than successThreshold return success
if the number of continued failures is greater than failureThreshold return failure otherwise wait periodSeconds and start a new check
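As a rough worked example with the liveness settings above (and ignoring per-check timeouts), a continuously failing probe would trigger the first restart after about initialDelaySeconds + failureThreshold × periodSeconds = 120 + 10 × 5 = 170 seconds.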
In my case, my application now bootstraps well within those limits, so I know I will not get a periodic CrashLoopBackOff from probes that were sitting right on the edge of those thresholds.
I needed to keep a pod running for subsequent kubectl exec calls, and as the comments above pointed out, my pod was getting killed by my k8s cluster because it had completed running all its tasks. I managed to keep my pod running by simply starting it with a command that would not stop on its own, as in:
kubectl run YOUR_POD_NAME -n YOUR_NAMESPACE --image SOME_PUBLIC_IMAGE:latest --command -- tail -f /dev/null
My pod kept crashing and I was unable to find the cause. Luckily, there is a place where Kubernetes saves all the events that occurred before my pod crashed.
To see these events, run the command:
# List Events sorted by timestamp
kubectl get events --sort-by=.metadata.creationTimestamp
Make sure to add a --namespace mynamespace argument to the command if needed.
The events shown in the output of the command showed me why my pod kept crashing.
From this page: the container dies after running everything correctly, but crashes because all the commands have ended. Either you make your services run in the foreground, or you create a keep-alive script. By doing so, Kubernetes will see that your application is running. Note that in the Docker environment this problem is not encountered; it is only Kubernetes that expects a long-running app.
Update (an example):
Here's how to avoid CrashLoopBackOff, when launching a Netshoot container:
kubectl run netshoot --image nicolaka/netshoot -- sleep infinity
In your yaml file, add command and args lines:
...
containers:
  - name: api
    image: localhost:5000/image-name
    command: [ "sleep" ]
    args: [ "infinity" ]
...
Works for me.
I observed the same issue and added the command and args block in my yaml file. I am copying a sample of my yaml file for reference:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ubuntu
  name: ubuntu
  namespace: default
spec:
  containers:
    - image: gcr.io/ow/hellokubernetes/ubuntu
      imagePullPolicy: Never
      name: ubuntu
      resources:
        requests:
          cpu: 100m
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo hello; sleep 10;done"]
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
As mentioned in the above posts, the container exits upon creation.
If you want to test this without using a yaml file, you can pass the sleep command to the kubectl create deployment statement. The double hyphen -- indicates a command, which is the equivalent of command: in a Pod or Deployment yaml file.
The command below creates a deployment for Debian with sleep 1234, so it doesn't exit immediately.
kubectl create deployment deb --image=debian:buster-slim -- "sh" "-c" "while true; do sleep 1234; done"
You can then create a service etc., or, to test the container, you can kubectl exec -it <pod-name> -- sh (or -- bash) into the container you just created.
I solved this problem by increasing the memory resources:
resources:
  limits:
    cpu: 1
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 250Mi
In my case the problem was what Steve S. mentioned:
The pod is crashing because it starts up then immediately exits, thus Kubernetes restarts and the cycle continues.
Namely I had a Java application whose main threw an exception (and something overrode the default uncaught exception handler so that nothing was logged). The solution was to put the body of main into try { ... } catch and print out the exception. Thus I could find out what was wrong and fix it.
(Another cause could be something in the app calling System.exit; you could use a custom SecurityManager with an overridden checkExit to prevent (or log the caller of) exit; see https://stackoverflow.com/a/5401319/204205.)
Whilst troubleshooting the same issue, I found no logs when using kubectl logs <pod_id>.
Therefore I SSHed into the node instance to try to run the container using plain Docker. To my surprise, this failed as well.
When entering the container with:
docker run -it faulty:latest /bin/sh
and poking around I found that it wasn't the latest version.
A faulty version of the docker image was already available on the instance.
When I removed the faulty:latest instance with:
docker rmi faulty:latest
everything started to work.
I had the same issue and finally resolved it. I am not using a docker-compose file.
I just added this line to my Dockerfile and it worked:
ENV CI=true
Reference:
https://github.com/GoogleContainerTools/skaffold/issues/3882
Try rerunning the pod and running
kubectl get pods --watch
to watch the status of the pod as it progresses.
In my case, I would only see the end result, 'CrashLoopBackOff,' but the docker container ran fine locally. So I watched the pods using the above command, and I saw the container briefly progress into an OOMKilled state, which meant to me that it required more memory.
In my case this error was specific to the hello-world docker image. I used the nginx image instead of the hello-world image and the error was resolved.
I solved this problem by removing the space between the quotes and the command value inside the array; this happened because the container exited right after starting, with no executable command present to run inside the container.
['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
I had a similar issue, but it got solved when I corrected my zookeeper.yaml file, which had a mismatch between the service name and the deployment's container names. It got resolved by making them the same.
apiVersion: v1
kind: Service
metadata:
  name: zk1
  namespace: nbd-mlbpoc-lab
  labels:
    app: zk-1
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zk-1
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zk-deployment
  namespace: nbd-mlbpoc-lab
spec:
  template:
    metadata:
      labels:
        app: zk-1
    spec:
      containers:
        - name: zk1
          image: digitalwonderland/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zk1
In my case, the issue was a misconstrued list of command-line arguments. I was doing this in my deployment file:
...
args:
  - "--foo 10"
  - "--bar 100"
Instead of the correct approach:
...
args:
  - "--foo"
  - "10"
  - "--bar"
  - "100"
I finally found the cause when I executed the 'docker run xxx' command and got the error there. It was caused by an incomplete platform.
It seems there can be a lot of reasons why a Pod ends up in a CrashLoopBackOff state.
In my case, one of the containers was terminating continuously due to a missing environment value.
So, the best way to debug is to:
1. Check the Pod description output, i.e. kubectl describe pod abcxxx
2. Check the events generated for the Pod, i.e. kubectl get events | grep abcxxx
3. Check if endpoints have been created for the Pod, i.e. kubectl get ep
4. Check if dependent resources are in place, e.g. CRDs, ConfigMaps, or any other resource that may be required.
kubectl logs -f POD will only produce logs from a running container. Append --previous to the command to get logs from the previous container instance. Used mainly for debugging. Hope this helps.