Mount point attachment on a remote machine using Ansible

I am trying to add a mount point to a remote machine using Ansible.
The mount point to attach is a folder on another server.
I wrote the following Ansible playbook to do that, but it hangs with no response:
- hosts: all
  remote_user: deepcompute
  become: true
  become_method: sudo
  tasks:
    - name: Adding SSH key
      authorized_key:
        user: deepcompute
        state: present
        manage_dir: yes
        key: "{{ lookup('file', '/home/deepcompute/personal/test_class/data.pub') }}"
    - name: Adding mount point in fstab
      lineinfile:
        path: /etc/fstab
        line: "user@machine1.servers.nferx.com:/home/deepcompute/hpcentraldata/ /example_mount/ fuse.sshfs _netdev,user,idmap=user,transform_symlinks,identityfile=/home/deepcompute/.ssh/id_rsa,allow_other,default_permissions,reconnect,ServerAliveInterval=20,ServerAliveCountMax=5,uid=1000,gid=1000 0 0"
    - name: Mount Directory Example
      file:
        path: /example_mount
        state: directory
      notify:
        - Change Permission
  handlers:
    - name: Change Permission
      file:
        path: /example_mount
        owner: user
        group: user
        state: directory
      notify:
        - Add mount point
    - name: Add mount point
      mount:
        path: /example_mount
        src: user@machine2.servers.nferx.com:/home/deepcompute/hpcentraldata
        fstype: ext4
        state: mounted
        opts: bind
So with the above script, I want to create a mount point on a new server.

Use the mount module:
- mount:
    src: 'user@machine1.servers.nferx.com:/home/deepcompute/hpcentraldata/'
    name: '/example_mount/'
    fstype: 'fuse.sshfs'
    opts: '_netdev,user,idmap=user,transform_symlinks,identityfile=/home/deepcompute/.ssh/id_rsa,allow_other,default_permissions,reconnect,ServerAliveInterval=20,ServerAliveCountMax=5,uid=1000,gid=1000'
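For reference, a fuller task form might look like the sketch below (untested against this setup; `state: mounted` both writes the /etc/fstab entry and mounts the path, which replaces the separate lineinfile task and handler chain from the question; the trimmed option list is illustrative only):

```yaml
- name: Mount hpcentraldata over sshfs
  mount:
    src: 'user@machine1.servers.nferx.com:/home/deepcompute/hpcentraldata/'
    name: '/example_mount'
    fstype: 'fuse.sshfs'
    opts: '_netdev,identityfile=/home/deepcompute/.ssh/id_rsa,allow_other,reconnect'
    state: mounted
```

With `state: mounted` the module typically also creates the mount point directory if it is missing, so the separate file task may be unnecessary.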

Related

File exists for webhook, but kube-api failed with file not exists

I am trying to configure Kubernetes with a webhook. I created the file and put it at /etc/kubernetes/webhook.yaml.
I modified /etc/kubernetes/manifests/kube-apiserver.yaml and added the flag - --authentication-token-webhook-config-file=/etc/kubernetes/webhook.yaml.
When kubelet notices the manifest file was modified and has to restart the apiserver (i.e. destroy the container and create a new one), it fails with no such file or directory:
2021-07-16T17:26:49.218961383-04:00 stderr F I0716 21:26:49.218777 1 server.go:632] external host was not specified, using 172.17.201.214
2021-07-16T17:26:49.219614716-04:00 stderr F I0716 21:26:49.219553 1 server.go:182] Version: v1.20.5
2021-07-16T17:26:49.642268874-04:00 stderr F Error: stat /etc/kubernetes/webhook.yaml: no such file or directory
But when I check, the file exists:
[root@kubemaster01 ~]# ls -al /etc/kubernetes/webhook.yaml
-rw-r--r-- 1 root root 272 Jul 13 16:14 /etc/kubernetes/webhook.yaml
I changed the file permissions to 600, but it still doesn't work.
Do I have to set something to allow the kubelet to access this file?
I had forgotten to mount the host directory into the kube-apiserver container.
If we add a section for the mount, it works.
/etc/kubernetes/manifests/kube-apiserver.yaml:
spec:
  containers:
  - ...
    volumeMounts:
    ...
    - mountPath: /etc/kubernetes
      name: webhook
      readOnly: true
    ...
  ...
  volumes:
  ...
  - hostPath:
      path: /etc/kubernetes
      type: DirectoryOrCreate
    name: webhook
  ...
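An alternative sketch (my assumption, not part of the answer above): mount only the webhook file itself rather than the whole /etc/kubernetes directory, using a hostPath of type File, so the container sees nothing else from that directory:

```yaml
    volumeMounts:
    - mountPath: /etc/kubernetes/webhook.yaml
      name: webhook
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/webhook.yaml
      type: File
    name: webhook
```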

Adding a Kubernetes worker node with Ansible, but it doesn't join

I'm trying to set up a Kubernetes cluster and I'm using Ansible.
These are my inventory and playbooks:
hosts:
[masters]
master ansible_host=157.90.96.140 ansible_user=root
[workers]
worker1 ansible_host=157.90.96.138 ansible_user=root
worker2 ansible_host=157.90.96.139 ansible_user=root
[all:vars]
ansible_user=ubuntu
ansible_python_interpreter=/usr/bin/python3
kubelet_cgroup_driver=cgroupfs
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
initial
- hosts: all
  become: yes
  tasks:
    - name: create the 'ubuntu' user
      user: name=ubuntu append=yes state=present createhome=yes shell=/bin/bash
    - name: allow 'ubuntu' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'ubuntu ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'
    - name: set up authorized keys for the ubuntu user
      authorized_key: user=ubuntu key="{{ item }}"
      with_file:
        - ~/.ssh/id_rsa.pub
kube-dependencies
- hosts: all
  become: yes
  tasks:
    - name: install Docker
      apt:
        name: docker.io
        state: present
        update_cache: true
    - name: install APT Transport HTTPS
      apt:
        name: apt-transport-https
        state: present
    - name: add Kubernetes apt-key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present
    - name: add Kubernetes' APT repository
      apt_repository:
        repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
        state: present
        filename: 'kubernetes'
    - name: install kubernetes-cni
      apt:
        name: kubernetes-cni=0.7.5-00
        state: present
        force: yes
        update_cache: true
    - name: install kubelet
      apt:
        name: kubelet=1.14.0-00
        state: present
        update_cache: true
    - name: install kubeadm
      apt:
        name: kubeadm=1.14.0-00
        state: present
- hosts: master
  become: yes
  tasks:
    - name: install kubectl
      apt:
        name: kubectl=1.14.0-00
        state: present
        force: yes
master
- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt
    - name: create .kube directory
      become: yes
      become_user: ubuntu
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755
    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ubuntu/.kube/config
        remote_src: yes
        owner: ubuntu
    - name: install Pod network
      become: yes
      become_user: ubuntu
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt
worker
- hosts: master
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw
    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"
- hosts: workers
  become: yes
  tasks:
    - name: join cluster
      shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt
The guide I followed for installation was
https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-ubuntu-16-04
Now I have several questions:
1. Are these configs right?
2. In the master playbook I'm using a manual pod network and I didn't change it; is that correct?
3. My main problem is that my workers don't join. What's the problem?

How to add encryption-provider-config option to kube-apiserver?

I am using Kubernetes version 1.15.7.
I am trying to follow https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration to enable the 'encryption-provider-config' option on the kube-apiserver.
I edited the file '/etc/kubernetes/manifests/kube-apiserver.yaml' and provided the option below:
- --encryption-provider-config=/home/rtonukun/secrets.yaml
But after that I get the error below with all kubectl commands (e.g. 'kubectl get no'):
The connection to the server 171.69.225.87:6443 was refused - did you specify the right host or port?
Mainly, how do I do these two steps from the guide?
3. Set the --encryption-provider-config flag on the kube-apiserver to point to the location of the config file.
4. Restart your API server.
I've reproduced your scenario exactly, and below I'll explain how I fixed it.
Reproducing the same scenario
Create the encryption config file at /home/koopakiller/encryption.yaml:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: r48bixfj02BvhhnVktmJJiuxmQZp6c0R60ZQBFE7558=
      - identity: {}
Edit the file /etc/kubernetes/manifests/kube-apiserver.yaml and set the --encryption-provider-config flag:
- --encryption-provider-config=/home/koopakiller/encryption.yaml
Save the file and exit.
When I checked the pod status, I got the same error:
$ kubectl get pods -A
The connection to the server 10.128.0.62:6443 was refused - did you specify the right host or port?
Troubleshooting
Since kubectl no longer works, I looked directly at the running containers with the docker command and saw that the kube-apiserver container had recently been recreated:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
54203ea95e39 k8s.gcr.io/pause:3.1 "/pause" 1 minutes ago Up 1 minutes k8s_POD_kube-apiserver-lab-1_kube-system_015d9709c9881516d6ecf861945f6a10_0
...
Kubernetes stores the logs of created pods under the /var/log/pods directory. I checked the kube-apiserver log file there and found a valuable piece of information:
{"log":"Error: error opening encryption provider configuration file \"/home/koopakiller/encryption.yaml\": open /home/koopakiller/encryption.yaml: no such file or directory\n","stream":"stderr","time":"2020-01-22T13:28:46.772768108Z"}
Explanation
Taking a look at the manifest file kube-apiserver.yaml, you can see the kube-apiserver command; it runs inside a container, so the container needs to have the encryption.yaml file mounted into it.
If you check the volumeMounts in this file, you can see that only the paths below are mounted into the container by default:
/etc/ssl/certs
/etc/ca-certificates
/etc/kubernetes/pki
/usr/local/share/ca-certificates
/usr/share/ca-certificates
...
volumeMounts:
- mountPath: /etc/ssl/certs
  name: ca-certs
  readOnly: true
- mountPath: /etc/ca-certificates
  name: etc-ca-certificates
  readOnly: true
- mountPath: /etc/kubernetes/pki
  name: k8s-certs
  readOnly: true
- mountPath: /usr/local/share/ca-certificates
  name: usr-local-share-ca-certificates
  readOnly: true
- mountPath: /usr/share/ca-certificates
  name: usr-share-ca-certificates
  readOnly: true
...
Based on the facts above, we can conclude that the apiserver failed to start because /home/koopakiller/encryption.yaml is not actually mounted into the container.
How to solve
I can see two ways to solve this issue:
1st - Copy the encryption file to /etc/kubernetes/pki (or any of the paths above) and change the flag in /etc/kubernetes/manifests/kube-apiserver.yaml accordingly:
- --encryption-provider-config=/etc/kubernetes/pki/encryption.yaml
Save the file and wait for the apiserver to restart.
2nd - Create a new volumeMount in the kube-apiserver.yaml manifest to mount a custom directory from the node into the container.
Create a new directory at /etc/kubernetes/secret (the home folder isn't a good place to keep config files =)), copy encryption.yaml into it (e.g. sudo cp /home/koopakiller/encryption.yaml /etc/kubernetes/secret/), and edit /etc/kubernetes/manifests/kube-apiserver.yaml:
...
    - --encryption-provider-config=/etc/kubernetes/secret/encryption.yaml
...
    volumeMounts:
    - mountPath: /etc/kubernetes/secret
      name: secret
      readOnly: true
...
  volumes:
  - hostPath:
      path: /etc/kubernetes/secret
      type: DirectoryOrCreate
    name: secret
...
After you save the file, kubelet will mount the node path /etc/kubernetes/secret to the same path inside the apiserver container; wait for it to start completely, then try to list your nodes again.
Please let me know if that helped!

How to pass user credentials to (user-restricted) mounted volume inside Kubernetes Pod?

I am trying to pass user credentials via Kubernetes secret to a mounted, password protected directory inside a Kubernetes Pod.
The NFS folder /mount/protected has user access restrictions, i.e. only certain users can access this folder.
This is my Pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
      secret:
        secretName: my-secret
  containers:
    - name: my-container
      image: <...>
      command: ["/bin/sh"]
      args: ["-c", "python /my-volume/test.py"]
      volumeMounts:
        - name: my-volume
          mountPath: /my-volume
When applying it, I get the following error:
The Pod "my-pod" is invalid:
* spec.volumes[0].secret: Forbidden: may not specify more than 1 volume type
* spec.containers[0].volumeMounts[0].name: Not found: "my-volume"
I created my-secret according to the following guide:
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret
So basically:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  username: bXktYXBw
  password: PHJlZGFjdGVkPg==
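As a side note, the data values of a Secret are base64-encoded; decoding the two values above shows what they hold (here the password is just the literal redaction placeholder):

```python
import base64

# Values copied from the Secret manifest above.
print(base64.b64decode("bXktYXBw").decode())          # my-app
print(base64.b64decode("PHJlZGFjdGVkPg==").decode())  # <redacted>
```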
But when I mount the folder /mount/protected with:
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
I get a permission denied error (python: can't open file '/my-volume/test.py': [Errno 13] Permission denied) when running a Pod that mounts this volume path.
My question is: how can I tell my Pod to use specific user credentials to gain access to this mounted folder?
You're trying to tell Kubernetes that my-volume should get its content from both a host path and a Secret, and it can only have one of those.
You don't need to manually specify a host path. Kubernetes will figure out someplace appropriate to put the Secret content and it will still be visible on the mountPath you specify within the container. (Specifying hostPath: at all is usually wrong, unless you can guarantee that the path will exist with the content you expect on every node in the cluster.)
So change:
volumes:
  - name: my-volume
    secret:
      secretName: my-secret
    # but no hostPath
I eventually figured out how to pass user credentials to a mounted directory within a Pod by using the CIFS Flexvolume plugin for Kubernetes (https://github.com/fstab/cifs).
With this plugin, every user can pass their own credentials to the Pod.
The user only needs to create a Kubernetes Secret (cifs-secret) storing the username/password and reference this Secret for the mount within the Pod.
The volume is then mounted as follows:
(...)
volumes:
  - name: test
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//server/share"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"
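For reference, the cifs-secret it refers to might look like the sketch below (following the plugin's README as I recall it; the `type` field and the example credentials are assumptions, and the data values must be base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: fstab/cifs
data:
  username: bXl1c2Vy          # base64("myuser") - example value
  password: bXlwYXNzd29yZA==  # base64("mypassword") - example value
```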

Writing from container to host in Kubernetes

I currently have a Job that runs a script. The script needs to create a file on the host filesystem.
To do so I use a hostPath volume:
volumeMounts:
  - mountPath: /var/logs/test
    name: joblogs
volumes:
  - hostPath:
      path: /root/test
      type: DirectoryOrCreate
    name: joblogs
I used
chcon -Rt svirt_sandbox_file_t /root/test
to allow writing to this directory, but even though files are created in /var/logs/test inside the container, they do not appear in /root/test on the host.
EDIT: The pod itself runs on the same node I am talking about.