Naming gitRepo mount path in Kubernetes

When using a gitRepo volume in Kubernetes, the repo is cloned into the mountPath directory. Take, for example, the following pod specification:
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/docroot
      name: docroot-volume
  volumes:
  - name: docroot-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
The directory appears in the container at /usr/share/docroot/my-git-repository. This means my container needs to know my repository name. I don't want my container knowing anything about the repository name. It should just know there is a "docroot", however initialized. The only place the git repository name should appear is in the pod specification.
Is there any way in Kubernetes to specify the full internal path to a git repo volume mount?

Currently there is no native way to do this, but I filed an issue for you.
Under the hood, Kubernetes is just doing a git clone $source over an emptyDir volume, but since the source is passed as a single argument there is no way to specify the destination name.
Fri, 09 Oct 2015 18:35:01 -0700 Fri, 09 Oct 2015 18:49:52 -0700 90 {kubelet stclair-minion-nwpu} FailedSync Error syncing pod, skipping: failed to exec 'git clone https://github.com/kubernetes/kubernetes.git k8s': Cloning into 'kubernetes.git k8s'...
error: The requested URL returned error: 400 while accessing https://github.com/kubernetes/kubernetes.git k8s/info/refs
fatal: HTTP request failed
: exit status 128
In the meantime, I can think of 2 options to avoid the dependency on the repository name:
Supply the repository name as an environment variable, which you can then use from your container
Modify your container's command to move the repository to the desired location before continuing, as sketched below
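A hedged sketch of what both options could look like, reusing the pod from the question. REPO_NAME is a made-up variable name, and the command shown bypasses the nginx image's normal entrypoint, so treat this as an illustration rather than a drop-in spec:

apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    env:
    - name: REPO_NAME              # option 1: tell the container the clone directory
      value: my-git-repository
    # option 2: relocate the clone before starting the real process;
    # only the pod spec knows the repository name
    command: ["sh", "-c"]
    args:
    - |
      mv /usr/share/docroot/my-git-repository/* /usr/share/docroot/ 2>/dev/null || true
      exec nginx -g 'daemon off;'
    volumeMounts:
    - mountPath: /usr/share/docroot
      name: docroot-volume
  volumes:
  - name: docroot-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"

Either way, the container image itself never needs to know the repository name; it only ever sees /usr/share/docroot.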

Related

Any way to reduce output from Google Cloud Run emulator?

The Google Cloud Run emulator (gcloud beta code dev) watches for file changes and rebuilds on every change.
So, in my terminal, there's a constant churn of building messages as I type, and it's distracting.
I tried (reference: https://cloud.google.com/sdk/gcloud/reference)
--verbosity="none" (no effect)
--quiet just eliminates interactivity.
--no-user-output-enabled crashes the emulator with
Flag --enable-rpc has been deprecated, flags --rpc-port or --rpc-http-port now imply --enable-rpc=true, so please use only those instead
^CException in thread Thread-13:
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
and a whole bunch more that I can copy if it matters.
Is there a way to silence the build logs but still get (1) my own console.logs and (2) errors?
I suspect (your question was the first time I became aware of this [useful] facility) that, because gcloud beta code dev is using minikube locally (in my case), the output is being generated by the minikube (kubelet) process and not by gcloud, so you can't (yet!) control it by adding gcloud flags.
It's a good suggestion and I recommend you file an issue on Google's Issue Tracker.
kubectl (!) picks up a new configuration that points to minikube while it's running, and (!) I'm able to run kubectl logs deployment/${APP} from another terminal to view only my app's logs:
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
${APP} 1/1 1 1 1m
kubectl logs deployment/${APP}
2022/01/06 17:21:58 Entered
2022/01/06 17:21:58 Starting server [:8080]
2022/01/06 17:21:58 Sleeping
2022/01/06 17:26:58 Awake
2022/01/06 17:26:58 Sleeping
~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /path/to/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 06 Jan 2022 09:21:47 PST
        provider: minikube.sigs.k8s.io
        version: v1.24.0
      name: cluster_info
    server: https://192.168.49.2:8443
  name: gcloud-local-dev
contexts:
- context:
    cluster: gcloud-local-dev
    extensions:
    - extension:
        last-update: Thu, 06 Jan 2022 09:21:47 PST
        provider: minikube.sigs.k8s.io
        version: v1.24.0
      name: context_info
    namespace: default
    user: gcloud-local-dev
  name: gcloud-local-dev
current-context: gcloud-local-dev
kind: Config
preferences: {}
users:
- name: gcloud-local-dev
  user:
    client-certificate: /path/to/.minikube/profiles/gcloud-local-dev/client.crt
    client-key: /path/to/.minikube/profiles/gcloud-local-dev/client.key

How to configure microk8s Kubernetes to use private containers on https://hub.docker.com/?

The microk8s document "Working with a private registry" leaves me unsure what to do. The Secure registry portion says Kubernetes does it one way (without indicating whether or not Kubernetes' way applies to microk8s), and microk8s uses containerd inside its implementation.
My YAML file contains a reference to a private container on dockerhub.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
      - image: johngrabner/py_blaw_service:v0.3.10
        name: py-transcribe-service
When I microk8s kubectl apply this file and do a microk8s kubectl describe, I get:
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "johngrabner/py_blaw_service:v0.3.10": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/johngrabner/py_blaw_service:v0.3.10": failed to resolve reference "docker.io/johngrabner/py_blaw_service:v0.3.10": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I have verified that I can pull this image from a console using a docker pull command.
Pods using public containers work fine in microk8s.
The file /var/snap/microk8s/current/args/containerd-template.toml already contains something to make dockerhub work since public containers work. Within this file, I found
# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
[plugins."io.containerd.grpc.v1.cri".registry]
  # 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io", ]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
      endpoint = ["http://localhost:32000"]
The above does not appear related to authentication.
On the internet, I found instructions to create a secret to store credentials, but this does not work either.
microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson
While you have created the secret, you then have to set up your deployment/pod to use that secret in order to download the image. This can be achieved with imagePullSecrets, as described in the microk8s document you mentioned.
Since you already created your secret, you just have to reference it in your deployment:
...
spec:
  containers:
  - image: johngrabner/py_blaw_service:v0.3.10
    name: py-transcribe-service
  imagePullSecrets:
  - name: regcred
...
For more reading check how to Pull an Image from a Private Registry.
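If you prefer to keep the credential configuration declarative, the same secret can also be written as a manifest instead of using kubectl create secret. A sketch, where the data value is just the base64-encoded contents of your ~/.docker/config.json and the name regcred matches the secret referenced above:

apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded ~/.docker/config.json (placeholder below)
  .dockerconfigjson: <base64-encoded contents of ~/.docker/config.json>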

unable to deploy local container image to k8s cluster

I have tried to deploy one of the local container images I created, but I always keep getting the below error:
Failed to pull image "webrole1:dev": rpc error: code = Unknown desc =
Error response from daemon: pull access denied for webrole1,
repository does not exist or may require 'docker login': denied:
requested access to
I have followed the below article to containerize my application and was able to complete this successfully, but when I try to deploy it to a k8s pod I don't succeed.
My pod.yaml looks like below
apiVersion: v1
kind: Pod
metadata:
  name: learnk8s
spec:
  containers:
  - name: webrole1dev
    image: 'webrole1:dev'
    ports:
    - containerPort: 8080
and below are some images from my PowerShell
I am new to Docker and k8s, so thanks for the help in advance; I would appreciate a detailed response.
When you're working locally, you can use an image name like webrole1, but that doesn't tell Docker where the image came from (because it didn't come from anywhere; you built it locally). When you start working with multiple hosts, you need to push things to a Docker registry. For local Kubernetes experiments you can also change your config so you build your image in the same Docker environment Kubernetes is using, though the specifics of that depend on how you set up both Docker and Kubernetes; see the sketch below for the Kubernetes side.
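If you take the second route (building the image inside the same Docker/containerd environment that Kubernetes uses), the image is already present on the node, so you also want to stop Kubernetes from trying to pull it from a registry. A minimal sketch based on the pod.yaml above, assuming the image has already been built locally:

apiVersion: v1
kind: Pod
metadata:
  name: learnk8s
spec:
  containers:
  - name: webrole1dev
    image: webrole1:dev
    imagePullPolicy: Never   # use the node-local image; never contact a registry
    ports:
    - containerPort: 8080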

Kubernetes pod cannot mount iSCSI volume: failed to get any path for iscsi disk

I would like to add an iSCSI volume to a pod as in this example. I have already prepared an iSCSI target on a Debian server and installed open-iscsi on all my worker nodes. I have also confirmed that I can mount the iSCSI target on a worker node with command line tools (i.e. still outside Kubernetes). This works fine. For simplicity, there is no authentication (CHAP) in play yet, and there is already an ext4 file system present on the target.
I would now like for Kubernetes 1.14 to mount the same iSCSI target into a pod with the following manifest:
---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-ro
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsivol
  volumes:
  - name: iscsivol
    iscsi:
      targetPortal: 1.2.3.4 # my target
      iqn: iqn.2019-04.my-domain.com:lun1
      lun: 0
      fsType: ext4
      readOnly: true
According to kubectl describe pod this works in the initial phase (SuccessfulAttachVolume), but then fails (FailedMount). The exact error message reads:
Warning FailedMount ... Unable to mount volumes for pod "iscsipd_default(...)": timeout expired waiting for volumes to attach or mount for pod "default"/"iscsipd". list of unmounted volumes=[iscsivol]. list of unattached volumes=[iscsivol default-token-7bxnn]
Warning FailedMount ... MountVolume.WaitForAttach failed for volume "iscsivol" : failed to get any path for iscsi disk, last err seen:
Could not attach disk: Timeout after 10s
How can I further diagnose and overcome this problem?
UPDATE In this related issue the solution consisted of using a numeric IP address for the target. However, this does not help in my case, since I am already using a targetPortal of the form 1.2.3.4 (I have also tried both with and without port number 3260).
UPDATE Stopping iscsid.service and/or open-iscsi.service (as suggested here) did not make a difference either.
UPDATE The error apparently gets triggered in pkg/volume/iscsi/iscsi_util.go if waitForPathToExist(&devicePath, multipathDeviceTimeout, iscsiTransport) fails. However, what is strange is that when it is triggered the file at devicePath (/dev/disk/by-path/ip-...-iscsi-...-lun-...) does actually exist on the node.
UPDATE I have used this procedure to define a simple iSCSI target for these test purposes:
pvcreate /dev/sdb
vgcreate iscsi /dev/sdb
lvcreate -L 10G -n iscsi_1 iscsi
apt-get install tgt
cat >/etc/tgt/conf.d/iscsi_1.conf <<EOL
<target iqn.2019-04.my-domain.com:lun1>
backing-store /dev/mapper/iscsi-iscsi_1
initiator-address 5.6.7.8 # my cluster node #1
... # my cluster node #2, etc.
</target>
EOL
systemctl restart tgt
tgtadm --mode target --op show
This is probably because of an authentication issue with your iSCSI target.
Even if you don't use CHAP authentication yet, you still have to explicitly disable authentication on the target.
For example, if you use targetcli, you can run below commands to disable it.
$ sudo targetcli
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute authentication=0 # will disable auth
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute generate_node_acls=1 # will force to use tpg1 auth mode by default
If this doesn't help you, please share your iscsi target configuration, or guide that you followed.
This worked for me:
iscsi:
  chapAuthSession: false
References:
https://github.com/kubernetes-retired/external-storage/tree/master/iscsi/targetd
https://docs.mirantis.com/docker-enterprise/v3.1/dockeree-products/mke/deploy-apps-with-kubernetes/persistent-storage/configure-iscsi.html
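For completeness, if CHAP is later enabled on the target instead of disabled, the iscsi volume has to point at a Secret holding the credentials rather than setting chapAuthSession: false. A hedged sketch with placeholder names, reusing the volume from the question:

volumes:
- name: iscsivol
  iscsi:
    targetPortal: 1.2.3.4:3260
    iqn: iqn.2019-04.my-domain.com:lun1
    lun: 0
    fsType: ext4
    readOnly: true
    chapAuthSession: true
    secretRef:
      name: chap-secret   # placeholder Secret containing the CHAP username/password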

Private repository passing through kubernetes yaml file

We have tried to set up a hivemq manifest file. We have the hivemq Docker image in our private repository.
Step 1: I logged into the private repository like
docker login "private repo name"
It was successful.
After that, I tried to create a manifest file for it like below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hivemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: hivemq1
    spec:
      containers:
      - env:
          xxxxx some environment values I have passed
        name: hivemq
        image: privatereponame:portnumber/directoryname/hivemq:
        ports:
        - containerPort: 1883
It creates successfully, but I am getting the issues below. Could anyone please help solve this issue?
hivemq-4236597916-mkxr4 0/1 ImagePullBackOff 0 1h
Logs:
Error from server (BadRequest): container "hivemq16" in pod "hivemq16-1341290525-qtkhb" is waiting to start: InvalidImageName
Sometimes I get this kind of issue:
Error from server (BadRequest): container "hivemq" in pod "hivemq-4236597916-mkxr4" is waiting to start: trying and failing to pull image
In order to use a private Docker registry with Kubernetes, it's not enough to run docker login.
You need to add a Kubernetes docker-registry Secret with your credentials, as described here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/. That article also covers the imagePullSecrets setting you have to add to your YAML deployment file, referencing that secret; see the sketch below.
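Roughly, the relevant part of the deployment then looks like this. A sketch: regcred is whatever name you gave the secret, and sometag is a placeholder; note that an image reference ending in a bare ':' with no tag is not a valid reference, which is consistent with the InvalidImageName error above:

spec:
  template:
    spec:
      imagePullSecrets:
      - name: regcred       # the docker-registry secret created for the private repo
      containers:
      - name: hivemq
        image: privatereponame:portnumber/directoryname/hivemq:sometag
        ports:
        - containerPort: 1883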
I just fixed this on my machine: kubectl v1.9.0 failed to create the secret properly. Upgrading to v1.9.1, deleting the secret, and recreating it resolved the issue for me. https://github.com/kubernetes/kubernetes/issues/57427