Accessing Nexus repository manager password in a Kubernetes pod

I have installed Sonatype nexus repository manager in my Kubernetes Cluster using the helm chart.
I am using the Kyma installation.
Nexus repository manager got installed properly and I can access the application.
But it seems the login password file is on a PersistentVolumeClaim mounted at /nexus-data in the pod.
Now whenever I am trying to access the pod with kubectl exec command:
kubectl exec -i -t $POD_NAME -n dev -- /bin/sh
I am getting the following error:
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
I understand that this happens because the image does not include a shell.
Is there any other way I can access the password file on the PVC?

You can try the kubectl cp command, but it probably won't work as there is no shell inside the container.
You can't really access the PV used by a PVC directly in Kubernetes, but there is a simple workaround: create another pod (one with a shell) with this PVC mounted and access the data from there. To avoid errors like Volume is already used by pod(s) / node(s), I suggest scheduling this pod on the same node as the Nexus pod.
Check which node your Nexus pod is running on: NODE=$(kubectl get pod <your-nexus-pod-name> -o jsonpath='{.spec.nodeName}')
Set a nexus label on that node: kubectl label node $NODE nexus=here (avoid using "yes" or "true" instead of "here"; left unquoted in YAML, such values are read as booleans rather than strings)
Get the name of the Nexus PVC mounted in the pod by running kubectl describe pod <your-nexus-pod-name>
Create a simple pod definition referring to the Nexus PVC from the previous step:
apiVersion: v1
kind: Pod
metadata:
  name: access-nexus-data
spec:
  containers:
  - name: access-nexus-data-container
    image: busybox:latest
    command: ["sleep", "999999"]
    volumeMounts:
    - name: nexus-data
      mountPath: /nexus-data
      readOnly: true
  volumes:
  - name: nexus-data
    persistentVolumeClaim:
      claimName: <your-pvc-name>
  nodeSelector:
    nexus: here
Access the pod using kubectl exec -it access-nexus-data -- sh and read the data. You can also use the kubectl cp command mentioned earlier (see the sketch below).
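For example, reading the initial admin password from the helper pod might look like the commands below. This is only a sketch: it assumes the helper pod runs in the same dev namespace as the PVC (add namespace: dev to its metadata or pass -n dev when applying it), and that the file is the usual /nexus-data/admin.password of Nexus 3; check the actual contents of the volume first.
# list the volume contents, then read the initial admin password
kubectl exec -n dev access-nexus-data -- ls /nexus-data
kubectl exec -n dev access-nexus-data -- cat /nexus-data/admin.password
# or copy the file to your machine (kubectl cp needs tar in the image; busybox has it)
kubectl cp dev/access-nexus-data:/nexus-data/admin.password ./admin.password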
If you are using a cloud-provided Kubernetes solution, you can also try mounting the PV used by the PVC onto a VM hosted in the cloud.
Source: a similar Stack Overflow topic

Related

How to access kube-scheduler on a kubernetes cluster?

I'm trying to figure out how to configure the Kubernetes scheduler using a custom config, but I'm having a bit of trouble understanding exactly how the scheduler is accessible.
The scheduler runs as a pod in the kube-system namespace, called kube-scheduler-it-k8s-master. The documentation says that you can configure the scheduler by creating a config file and calling kube-scheduler --config <filename>. However, I am not able to access the scheduler container directly, as running kubectl exec -it kube-scheduler-it-k8s-master -- /bin/bash returns:
OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown
command terminated with exit code 126
I tried modifying /etc/kubernetes/manifests/kube-scheduler to mount my custom config file within the pod and explicitly call kube-scheduler with the --config option set, but it seems that my changes get reverted and the scheduler runs using the default settings.
I feel like I'm misunderstanding something fundamentally about the kubernetes scheduler. Am I supposed to pass in the custom scheduler config from within the scheduler pod itself? Or is this supposed to be done remotely somehow?
Thanks!
Since your underlying problem (the "X" of your XY problem) is "how to modify the scheduler configuration", you can try the following approaches.
Using kubeadm
If you are using kubeadm to bootstrap the cluster, you can use the --config flag of kubeadm init to pass a ClusterConfiguration object that sets extra arguments on the control plane components.
Example config for scheduler:
$ cat sched.conf
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
scheduler:
  extraArgs:
    address: 0.0.0.0
    config: /home/johndoe/schedconfig.yaml
    kubeconfig: /home/johndoe/kubeconfig.yaml
$ kubeadm init --config sched.conf
You could also try kubeadm upgrade apply --config sched.conf <k8s version> to apply the updated config on a live cluster.
Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/
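For completeness, the schedconfig.yaml referenced above would contain a KubeSchedulerConfiguration object. A minimal sketch, assuming the default scheduler kubeconfig path; the apiVersion group has changed across releases (v1alpha1 around v1.16, v1beta1 in later versions), so check which one your cluster supports:
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
leaderElection:
  leaderElect: true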
Updating static pod manifest
You could also edit /etc/kubernetes/manifests/kube-scheduler.yaml and modify the flags to pass the config. Make sure you mount the config file into the pod by updating the volumes and volumeMounts sections.
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --config=/etc/kubernetes/mycustomconfig.conf
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/mycustomconfig.conf
      name: customconfig
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /etc/kubernetes/mycustomconfig.conf
      type: FileOrCreate
    name: customconfig

Kubernetes doesn't reach internal registry

I've deployed a Docker registry inside my Kubernetes cluster:
$ kubectl get service
NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
registry-docker-registry   ClusterIP   10.43.39.81   <none>        443/TCP   162m
I'm able to pull images from my machine (service is exposed via an ingress rule):
$ docker pull registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
...
Status: Downloaded newer image for registry-do...
When I try to test it by deploying my image into the same Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: covid-backend
  namespace: skaffold
spec:
  replicas: 3
  selector:
    matchLabels:
      app: covid-backend
  template:
    metadata:
      labels:
        app: covid-backend
    spec:
      containers:
      - image: registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
        name: covid-backend
        ports:
        - containerPort: 8080
Then, I've tried to deploy it:
$ cat pod.yaml | kubectl apply -f -
However, Kubernetes isn't able to reach the registry:
Extract of kubectl get events:
6s Normal Pulling pod/covid-backend-774bd78db5-89vt9 Pulling image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd"
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Failed to pull image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": rpc error: code = Unknown desc = failed to pull and unpack image "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to resolve reference "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to do request: Head https://registry-docker-registry.registry/v2/skaffold-covid-backend/manifests/sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd: dial tcp: lookup registry-docker-registry.registry: Try again
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Error: ErrImagePull
As you can see, Kubernetes is not able to access the internally deployed registry...
Any ideas?
I would recommend following the docs from k3d, which are here.
More precisely, this one:
Using your own local registry
If you don't want k3d to manage your registry, you can start it with some docker commands, like:
docker volume create local_registry
docker container run -d --name registry.local -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
These commands will start your registry at registry.local:5000. In order to push to this registry, you will need to add the line to /etc/hosts as described in the previous section. Once your registry is up and running, you will need to add it to your registries.yaml configuration file (see the sketch below). Finally, you must connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.local. Then you can check your local registry.
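As a reference, a minimal registries.yaml for the k3s nodes behind k3d might look like the following. This is a sketch: the mirror name and endpoint simply reuse the registry.local:5000 address from above, and the file usually lives at /etc/rancher/k3s/registries.yaml on each node, so adjust both to your setup:
mirrors:
  "registry.local:5000":
    endpoint:
      - "http://registry.local:5000"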
Pushing to your local registry address
The registry will be located, by default, at registry.local:5000 (customizable with the --registry-name and --registry-port parameters). All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon), but in order to be able to push to this registry, this hostname must also be resolvable from your host.
The easiest solution for this is to add an entry in your /etc/hosts file like this:
127.0.0.1 registry.local
Once again, this will only work with k3s >= v0.10.0 (see the section below when using k3s <= v0.9.1)
Local registry volume
The local k3d registry uses a volume for storing the images. This volume will be destroyed when the k3d registry is released. In order to persist this volume and make these images survive the removal of the registry, you can specify a volume with --registry-volume and use the --keep-registry-volume flag when deleting the cluster. This will create a volume with the given name the first time the registry is used, while successive invocations will just mount this existing volume in the k3d registry container.
Docker Hub cache
The local k3d registry can also be used for caching images from the Docker Hub. You can start the registry as a pull-through cache when the cluster is created with --enable-registry-cache. Used in conjunction with --registry-volume/--keep-registry-volume, this can speed up all the downloads from the Hub by keeping a persistent cache of images on your local machine.
Testing your registry
You should test that you can:
push to your registry from your local development machine.
use images from that registry in Deployments in your k3d cluster.
We will verify these two things for a local registry (located at registry.local:5000) running on your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary on your local machine when using an authenticated or secure registry (please refer to Docker's documentation for this).
Firstly, we can download some image (like nginx) and push it to our local registry with:
docker pull nginx:latest
docker tag nginx:latest registry.local:5000/nginx:latest
docker push registry.local:5000/nginx:latest
Then we can deploy a pod referencing this image to your cluster:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-registry
  labels:
    app: nginx-test-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test-registry
  template:
    metadata:
      labels:
        app: nginx-test-registry
    spec:
      containers:
      - name: nginx-test-registry
        image: registry.local:5000/nginx:latest
        ports:
        - containerPort: 80
EOF
Then you should check that the pod is running with kubectl get pods -l "app=nginx-test-registry".
Additionally, there are two GitHub links worth visiting:
K3d not able to resolve DNS
You could try using the answer provided by @rjshrjndrn; it might solve your issue with DNS.
docker images are not pulled from docker repository behind corporate proxy
An open GitHub issue on k3d with the same problem as yours.

Setting a container in Kubernetes with cloudbuild.yaml error: unable to find container named "XXX" error

First time creating a pipeline in Google Cloud Platform.
I have been following their guide, and in the last step I want to deploy the built container into a Kubernetes cluster.
This is my yaml file, which is failing in the last step.
steps:
# This step clones the repository into GCP
- name: gcr.io/cloud-builders/git
  args: ['clone', 'https://<user>:<password>@github.com/PatrickVibild/scrappercontroller']
# This step runs the unit tests on the app
- name: 'docker.io/library/python:3.7'
  id: Test
  entrypoint: /bin/sh
  args:
  - -c
  - 'pip install -r requirements.txt && python -m pytest app/tests/**'
# This step builds a container and leaves it in the Cloud Build repository.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/abiding-robot-255320/scrappercontroller', '.']
# Adds the container to Google Container Registry as an artifact
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/abiding-robot-255320/scrappercontroller']
# Uses the container and replaces the existing one in Kubernetes
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/scrapper-config', 'scrappercontroller=gcr.io/abiding-robot-255320/scrappercontroller']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=n1scrapping'
I have been using the GCP guideline:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/myimage', 'frontend=gcr.io/myproject/myimage']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-east1-b'
  - 'CLOUDSDK_CONTAINER_CLUSTER=node-example-cluster'
But I don't know what I have to replace in the last argument, frontend=gcr.io/myproject/myimage, in my case.
Also, my intention is to replace the container that is already running on Kubernetes, if that helps identify any issues.
Thanks for any help!
I'm going to guess from the title you're seeing a message like this in your CloudBuild logs:
+ kubectl set image deployment/scrapper-config scrappercontroller=gcr.io/abiding-robot-255320/scrappercontroller
error: unable to find container named "scrappercontroller"
I dont know what do I have to replace in the last argument. frontend=gcr.io/myproject/myimage in my case.
The meaning of this argument is <container_name>=<image_ref>.
You're setting this value to scrappercontroller=gcr.io/abiding-robot-255320/scrappercontroller.
That means: "set the image for the 'scrappercontroller' container in my Pods to this image from GCR".
You can learn more about this by running kubectl set image --help:
Update existing container image(s) of resources.
Possible resources include (case insensitive):
pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), replicaset (rs)
Examples:
# Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'.
kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1
# Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'
kubectl set image deployments,rc nginx=nginx:1.9.1 --all
# Update image of all containers of daemonset abc to 'nginx:1.9.1'
kubectl set image daemonset abc *=nginx:1.9.1
# Print result (in yaml format) of updating nginx container image from local file, without hitting the server
kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml
You're working with a Deployment object.
Deployments create Pods using their spec.template.
Pods can have multiple containers, and each one will have a name.
This command will show you your container names for the Pods in your Deployment:
kubectl get --output=wide deploy/scrapper-config
Here's an example of a Deployment that creates Pods with two containers: "myapp" and "cool-sidecar". (See the CONTAINERS column.)
kubectl get --output=wide deploy/myapp
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS           IMAGES        SELECTOR
myapp   1/1     1            0           10m   myapp,cool-sidecar   nginx,nginx   run=myapp
You can use that container name in your final argument:
'my-container-name=gcr.io/abiding-robot-255320/scrappercontroller'
You can also just use a wildcard (*) if your Pods only have a single container each:
'*=gcr.io/abiding-robot-255320/scrappercontroller'
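Putting it together, the final step of your cloudbuild.yaml might look like the sketch below, using the wildcard form; if your Pods have more than one container, swap * for the actual container name shown by kubectl get:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/scrapper-config', '*=gcr.io/abiding-robot-255320/scrappercontroller']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=n1scrapping'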
Hopefully that helps 👍

Connect to Kubernetes mongo db in different namespace

Can anyone point out how to connect to the MongoDB instance with a mongo client, either from the command-line client or from .NET Core programs using connection strings?
We have created a sample cluster in DigitalOcean with a namespace, let's say mongodatabase.
We installed the mongo StatefulSet with 3 replicas. We are able to connect successfully with the command below:
kubectl --kubeconfig=configfile.yaml -n mongodatabase exec -ti mongo-0 mongo
But when we connect from a different namespace or from the default namespace using pod names in the format below, it doesn't work:
kubectl --kubeconfig=configfile.yaml exec -ti mongo-0.mongo.mongodatabase.cluster.svc.local mongo
where mongo-0.mongo.mongodatabase.cluster.svc.local follows the pattern pod-0.service_name.namespace.cluster.svc.local (we also tried pod-0.statefulset_name.namespace.cluster.svc.local and pod-0.service_name.statefulset_name.namespace.cluster.svc.local), etc.
Can anyone help with the correct DNS name/connection string to use when connecting with the mongo client on the command line and also from programs like Java/.NET Core?
Also, should we use a Kubernetes Deployment instead of a StatefulSet here?
You need to reference the mongo service by its namespaced DNS name. So if your mongo service is mymongoapp and it is deployed in mymongonamespace, you should be able to access it as mymongoapp.mymongonamespace.
To test, I used the bitnami/mongodb docker client. As follows:
From within mymongonamespace, this command works
$ kubectl config set-context --current --namespace=mymongonamespace
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp
But when I switched to namespace default it didn't work
$ kubectl config set-context --current --namespace=default
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp
Qualifying the host with the namespace then works
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp.mymongonamespace
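For a driver-based client such as .NET Core or Java, the connection string follows the same DNS rules. A hedged example, assuming a headless service named mongo in the mongodatabase namespace, the default port 27017, and a replica set named rs0 (adjust all of these to your actual setup):
mongodb://mongo-0.mongo.mongodatabase.svc.cluster.local:27017,mongo-1.mongo.mongodatabase.svc.cluster.local:27017,mongo-2.mongo.mongodatabase.svc.cluster.local:27017/?replicaSet=rs0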
This is how you can get inside the mongo-0 pod:
kubectl --kubeconfig=configfile.yaml exec -ti mongo-0 sh
I think you are looking for this: DNS for Services and Pods.
You can have a fully qualified domain name (FQDN) for a Service or for a Pod.
Also, please have a look at this: Kubernetes: Service located in another namespace, as I think it will give you an answer on how to access the service from a different namespace; one common approach from that thread is sketched below.
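That approach is an ExternalName Service that aliases the mongo service into another namespace. A minimal sketch; the service name mongo and the namespaces are assumptions, so adjust them to your cluster:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: default
spec:
  type: ExternalName
  externalName: mongo.mongodatabase.svc.cluster.local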
The DNS example for Pods and headless Services from the documentation looks like this:
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
  - name: foo # Actually, no port is needed.
    port: 1234
    targetPort: 1234
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
  - image: busybox:1.28
    command:
    - sleep
    - "3600"
    name: busybox
If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster’s KubeDNS Server also returns an A record for the Pod’s fully qualified hostname. For example, given a Pod with the hostname set to “busybox-1” and the subdomain set to “default-subdomain”, and a headless Service named “default-subdomain” in the same namespace, the pod will see its own FQDN as “busybox-1.default-subdomain.my-namespace.svc.cluster.local”. DNS serves an A record at that name, pointing to the Pod’s IP. Both pods “busybox1” and “busybox2” can have their distinct A records.
The Endpoints object can specify the hostname for any endpoint addresses, along with its IP.
Note: Because A records are not created for Pod names, hostname is required for the Pod’s A record to be created. A Pod with no hostname but with subdomain will only create the A record for the headless service (default-subdomain.my-namespace.svc.cluster.local), pointing to the Pod’s IP address. Also, Pod needs to become ready in order to have a record unless publishNotReadyAddresses=True is set on the Service.
Your question about Deployments vs. StatefulSets should really be a separate question. But the short answer is that a StatefulSet is used when you want "Stable Persistent Storage" (kubernetes.io).
Also from the same page: "stable is synonymous with persistence across Pod (re)scheduling". So basically your mongo instance is backed by a PersistentVolume, and you want the volume reattached after the pod is rescheduled.

Pod mounts wrong directory on Node when a flexvolume with cifs is configured

The following problem occurs on a Kubernetes cluster with 1 master and 3 nodes and also on a single-machine Kubernetes.
I set up Kubernetes with FlexVolume SMB support (https://github.com/Azure/kubernetes-volume-drivers/tree/master/flexvolume/smb). When I apply a new pod with a FlexVolume, the node mounts the SMB share as expected. But the pod's share points to some Docker directory on the node.
My installation:
latest CentOS 7
latest Kubernetes v1.14.0 (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/)
disabled SELinux and disabled firewall
Docker 1.13.1
jq and cifs-utils
https://raw.githubusercontent.com/Azure/kubernetes-volume-drivers/master/flexvolume/smb/deployment/smb-flexvol-installer/smb installed to /usr/libexec/kubernetes/kubelet-plugins/volume/exec/microsoft.com~smb and executable
Create the Pod with:
smb-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: smb-secret
type: microsoft.com/smb
data:
  username: YVVzZXI=
  password: YVBhc3N3b3Jk
nginx-flex-smb.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-flex-smb
spec:
  containers:
  - name: nginx-flex-smb
    image: nginx
    volumeMounts:
    - name: test
      mountPath: /data
  volumes:
  - name: test
    flexVolume:
      driver: "microsoft.com/smb"
      secretRef:
        name: smb-secret
      options:
        source: "//<host.with.smb.share>/kubetest"
        mountoptions: "vers=3.0,dir_mode=0777,file_mode=0777"
What happens
The mount point on the node is created at /var/lib/kubelet/pods/bef26895-5ac7-11e9-a668-00155db9c92e/volumes/microsoft.com~smb.
mount returns //<host.with.smb.share>/kubetest on /var/lib/kubelet/pods/bef26895-5ac7-11e9-a668-00155db9c92e/volumes/microsoft.com~smb/test type cifs (rw,relatime,vers=3.0,cache=strict,username=aUser,domain=,uid=0,noforceuid,gid=0,noforcegid,addr=172.27.72.43,file_mode=0777,dir_mode=0777,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
read and write work as expected on the host and on the node itself
On the Pod:
mount for /data points to tmpfs on /data type tmpfs (rw,nosuid,nodev,seclabel,size=898680k,nr_inodes=224670,mode=755)
but the content of the directory /data comes from /run/docker/libcontainerd/8039742ae2a573292cd9f4ef7709bf7583efd0a262b9dc434deaf5e1e20b4002/ on the node.
I tried to deploy the Pod with a PersistentVolumeClaim and got the same problem. Searching for this problem turned up no solutions.
Our other pods use GlusterFS and Heketi, which work fine.
Is there maybe a configuration error? Is something missing?
EDIT: Solution
I upgraded Docker to the latest validated version, 18.06, and everything works well now.
I upgraded Docker to the latest validated version, 18.06, and everything works well now.
To install it, follow the instructions in Get Docker CE for CentOS (a rough sketch is below).
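A rough sketch of installing a pinned Docker 18.06 build on CentOS 7, assuming the standard docker-ce yum repository; the exact package version string is an assumption, so list the available versions first:
# add the repo, list available versions, then install a pinned 18.06 build
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
sudo yum install docker-ce-18.06.1.ce-3.el7
sudo systemctl enable --now docker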