Following the project from here, I am trying to integrate the Airflow Kubernetes executor using an NFS server as the backing storage for a PV. I have a PV airflow-pv which is linked to the NFS server. The Airflow webserver and scheduler use a PVC airflow-pvc which is bound to airflow-pv. I've placed my DAG files on the NFS server under /var/nfs/airflow/development/<dags/logs>. I can see newly added DAGs in the webserver UI as well. However, when I execute a DAG from the UI, the scheduler fires up a new pod for that task, but the new worker pod fails to run, saying:
Unable to mount volumes for pod "tutorialprintdate-3e1a4443363e4c9f81fd63438cdb9873_development(976b1e64-b46d-11e9-92af-025000000001)": timeout expired waiting for volumes to attach or mount for pod "development"/"tutorialprintdate-3e1a4443363e4c9f81fd63438cdb9873". list of unmounted volumes=[airflow-dags]. list of unattached volumes=[airflow-dags airflow-logs airflow-config default-token-hjwth]
Here are my webserver and scheduler deployment files:
apiVersion: v1
kind: Service
metadata:
name: airflow-webserver-svc
namespace: development
spec:
type: NodePort
ports:
- name: web
protocol: TCP
port: 8080
selector:
app: airflow-webserver-app
namespace: development
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: airflow-webserver-dep
namespace: development
spec:
replicas: 1
selector:
matchLabels:
app: airflow-webserver-app
namespace: development
template:
metadata:
labels:
app: airflow-webserver-app
namespace: development
spec:
restartPolicy: Always
containers:
- name: airflow-webserver-app
image: airflow:externalConfigs
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
args: ["-webserver"]
env:
- name: AIRFLOW_KUBE_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: AIRFLOW__CORE__FERNET_KEY
valueFrom:
secretKeyRef:
name: airflow-secrets
key: AIRFLOW__CORE__FERNET_KEY
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: airflow-secrets
key: MYSQL_PASSWORD
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: airflow-secrets
key: MYSQL_PASSWORD
- name: DB_HOST
value: mysql-svc.development.svc.cluster.local
- name: DB_PORT
value: "3306"
- name: MYSQL_DATABASE
value: airflow
- name: MYSQL_USER
value: airflow
- name: MYSQL_PASSWORD
value: airflow
- name: AIRFLOW__CORE__EXECUTOR
value: "KubernetesExecutor"
volumeMounts:
- name: airflow-config
mountPath: /usr/local/airflow/airflow.cfg
subPath: airflow.cfg
- name: airflow-files
mountPath: /usr/local/airflow/dags
subPath: airflow/development/dags
- name: airflow-files
mountPath: /usr/local/airflow/plugins
subPath: airflow/development/plugins
- name: airflow-files
mountPath: /usr/local/airflow/logs
subPath: airflow/development/logs
- name: airflow-files
mountPath: /usr/local/airflow/temp
subPath: airflow/development/temp
volumes:
- name: airflow-files
persistentVolumeClaim:
claimName: airflow-pvc
- name: airflow-config
configMap:
name: airflow-config
The scheduler YAML file is exactly the same, except that the container args are args: ["-scheduler"]. Here is my airflow.cfg file:
apiVersion: v1
kind: ConfigMap
metadata:
name: "airflow-config"
namespace: development
data:
airflow.cfg: |
[core]
airflow_home = /usr/local/airflow
dags_folder = /usr/local/airflow/dags
base_log_folder = /usr/local/airflow/logs
executor = KubernetesExecutor
plugins_folder = /usr/local/airflow/plugins
load_examples = false
[scheduler]
child_process_log_directory = /usr/local/airflow/logs/scheduler
[webserver]
rbac = false
[kubernetes]
airflow_configmap =
worker_container_repository = airflow
worker_container_tag = externalConfigs
worker_container_image_pull_policy = IfNotPresent
delete_worker_pods = true
dags_volume_claim = airflow-pvc
dags_volume_subpath =
logs_volume_claim = airflow-pvc
logs_volume_subpath =
env_from_configmap_ref = airflow-config
env_from_secret_ref = airflow-secrets
in_cluster = true
namespace = development
[kubernetes_node_selectors]
# the key-value pairs to be given to worker pods.
# the worker pods will be scheduled to the nodes of the specified key-value pairs.
# should be supplied in the format: key = value
[kubernetes_environment_variables]
# the below configs get overwritten by the above [kubernetes] configs
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM = airflow-pvc
AIRFLOW__KUBERNETES__DAGS_VOLUME_SUBPATH = var/nfs/airflow/development/dags
AIRFLOW__KUBERNETES__LOGS_VOLUME_CLAIM = airflow-pvc
AIRFLOW__KUBERNETES__LOGS_VOLUME_SUBPATH = var/nfs/airflow/development/logs
[kubernetes_secrets]
AIRFLOW__CORE__SQL_ALCHEMY_CONN = airflow-secrets=AIRFLOW__CORE__SQL_ALCHEMY_CONN
AIRFLOW_HOME = airflow-secrets=AIRFLOW_HOME
[cli]
api_client = airflow.api.client.json_client
endpoint_url = https://airflow.crunchanalytics.cloud
[api]
auth_backend = airflow.api.auth.backend.default
[admin]
# ui to hide sensitive variable fields when set to true
hide_sensitive_variable_fields = true
After firing a manual task, the scheduler logs tell me that KubernetesExecutorConfig() executed with all values as None. It seems like it didn't pick up the configs? I've tried almost everything I know of, but cannot manage to make it work. Could someone tell me what I am missing?
[2019-08-01 14:44:22,944] {jobs.py:1341} INFO - Sending ('kubernetes_sample', 'run_this_first', datetime.datetime(2019, 8, 1, 13, 45, 51, 874679, tzinfo=<Timezone [UTC]>), 1) to executor with priority 3 and queue default
[2019-08-01 14:44:22,944] {base_executor.py:56} INFO - Adding to queue: airflow run kubernetes_sample run_this_first 2019-08-01T13:45:51.874679+00:00 --local -sd /usr/local/airflow/dags/airflow/development/dags/k8s_dag.py
[2019-08-01 14:44:22,948] {kubernetes_executor.py:629} INFO - Add task ('kubernetes_sample', 'run_this_first', datetime.datetime(2019, 8, 1, 13, 45, 51, 874679, tzinfo=<Timezone [UTC]>), 1) with command airflow run kubernetes_sample run_this_first 2019-08-01T13:45:51.874679+00:00 --local -sd /usr/local/airflow/dags/airflow/development/dags/k8s_dag.py with executor_config {}
[2019-08-01 14:44:22,949] {kubernetes_executor.py:379} INFO - Kubernetes job is (('kubernetes_sample', 'run_this_first', datetime.datetime(2019, 8, 1, 13, 45, 51, 874679, tzinfo=<Timezone [UTC]>), 1), 'airflow run kubernetes_sample run_this_first 2019-08-01T13:45:51.874679+00:00 --local -sd /usr/local/airflow/dags/airflow/development/dags/k8s_dag.py', KubernetesExecutorConfig(image=None, image_pull_policy=None, request_memory=None, request_cpu=None, limit_memory=None, limit_cpu=None, gcp_service_account_key=None, node_selectors=None, affinity=None, annotations={}, volumes=[], volume_mounts=[], tolerations=None))
[2019-08-01 14:44:23,042] {kubernetes_executor.py:292} INFO - Event: kubernetessamplerunthisfirst-7fe05ddb34aa4cb9a5604e420d5b60a3 had an event of type ADDED
[2019-08-01 14:44:23,046] {kubernetes_executor.py:324} INFO - Event: kubernetessamplerunthisfirst-7fe05ddb34aa4cb9a5604e420d5b60a3 Pending
[2019-08-01 14:44:23,049] {kubernetes_executor.py:292} INFO - Event: kubernetessamplerunthisfirst-7fe05ddb34aa4cb9a5604e420d5b60a3 had an event of type MODIFIED
[2019-08-01 14:44:23,049] {kubernetes_executor.py:324} INFO - Event: kubernetessamplerunthisfirst-7fe05ddb34aa4cb9a5604e420d5b60a3 Pending
For reference, here are my PV and PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
name: airflow-pv
labels:
mode: local
environment: development
spec:
persistentVolumeReclaimPolicy: Retain
storageClassName: airflow-pv
capacity:
storage: 4Gi
accessModes:
- ReadWriteMany
nfs:
server: 10.105.225.217
path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: airflow-pvc
namespace: development
spec:
storageClassName: airflow-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
selector:
matchLabels:
mode: local
environment: development
Using Airflow version: 1.10.3
Since there is no answer yet, I'll share my findings so far. In my airflow.cfg, under the [kubernetes] section, we are to pass the following values:
dags_volume_claim = airflow-pvc
dags_volume_subpath = airflow/development/dags
logs_volume_claim = airflow-pvc
logs_volume_subpath = airflow/development/logs
The way the scheduler creates a new pod from the above configs is as follows (only showing the volumes and volumeMounts):
"volumes": [
{
"name": "airflow-dags",
"persistentVolumeClaim": {
"claimName": "airflow-pvc"
}
},
{
"name": "airflow-logs",
"persistentVolumeClaim": {
"claimName": "airflow-pvc"
}
}],
"containers": [
{ ...
"volumeMounts": [
{
"name": "airflow-dags",
"readOnly": true,
"mountPath": "/usr/local/airflow/dags",
"subPath": "airflow/development/dags"
},
{
"name": "airflow-logs",
"mountPath": "/usr/local/airflow/logs",
"subPath": "airflow/development/logs"
}]
...}]
K8s DOESN'T like multiple volumes pointing to the same PVC (airflow-pvc). To fix this, I had to create two PVCs (and PVs), one for dags and one for logs (dags_volume_claim = airflow-dags-pvc and logs_volume_claim = airflow-log-pvc), which works fine; a sketch is below.
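For reference, a minimal sketch of what the split dags PV/PVC pair could look like (the logs pair is identical apart from the names and path). The NFS server and access modes are reused from my original PV above; pointing the new PV straight at the dags subdirectory of the export is an assumption, so adjust the path to your layout:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: airflow-dags-pv
spec:
  persistentVolumeReclaimPolicy: Retain
  storageClassName: airflow-dags-pv
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.105.225.217
    # assumption: the PV is rooted at the dags directory of the same export
    path: "/airflow/development/dags"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: airflow-dags-pvc
  namespace: development
spec:
  storageClassName: airflow-dags-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

With the PV rooted at the dags directory itself, dags_volume_subpath can stay empty; otherwise keep the subpath relative to the new PV root.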
I don't know if this has already been addressed in a newer version of Airflow (I am using 1.10.3). When people use the same PVC, the Airflow scheduler should handle this case by creating a pod with a single volume and two volumeMounts referring to that volume, e.g.
"volumes": [
{
"name": "airflow-dags-logs", <--just an example name
"persistentVolumeClaim": {
"claimName": "airflow-pvc"
}
}
"containers": [
{ ...
"volumeMounts": [
{
"name": "airflow-dags-logs",
"readOnly": true,
"mountPath": "/usr/local/airflow/dags",
"subPath": "airflow/development/dags" <--taken from configs
},
{
"name": "airflow-dags-logs",
"mountPath": "/usr/local/airflow/logs",
"subPath": "airflow/development/logs" <--taken from configs
}]
...}]
I deployed a pod with the above configuration and it works!
Related
I am deploying a GitLab runner in a K8s cluster and I need to pass the data currently in the ConfigMap [environment, token, url] as a secret. Below are the deployment and ConfigMap manifest files.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: gitlab-runner
namespace: gitlab-runner
spec:
replicas: 1
selector:
matchLabels:
name: gitlab-runner
template:
metadata:
labels:
name: gitlab-runner
spec:
hostNetwork: true
serviceAccountName: default
containers:
- args:
- run
image: gitlab/gitlab-runner:latest
imagePullPolicy: Always
name: gitlab-runner
resources:
requests:
cpu: "100m"
limits:
cpu: "100m"
volumeMounts:
- name: config
mountPath: /etc/gitlab-runner/config.toml
readOnly: true
subPath: config.toml
volumes:
- name: config
configMap:
name: gitlab-runner-config
restartPolicy: Always
config.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: gitlab-runner-config
namespace: gitlab-runner
data:
config.toml: |-
concurrent = 4
[[runners]]
name = "Kubernetes Runner"
url = "gitlab url"
token = "secrettoken"
executor = "kubernetes"
environment = ["TEST_VAR=THIS_IS_TEST_VAR_PRINTING", "SECOUND_TEST_VAR=This_is_2nd_test_var"]
[runners.kubernetes]
namespace = "gitlab-runner"
privileged = true
poll_timeout = 600
cpu_request = "1"
service_cpu_request = "200m"
[[runners.kubernetes.volumes.host_path]]
name = "docker"
mount_path = "/var/run/docker.sock"
host_path = "/var/run/docker.sock"
In the above ConfigMap YAML I need to pass environment, token and url as a Kubernetes secret.
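One possible way to do that (a sketch, not the only option): since the sensitive values live inside config.toml, put the whole file into a Secret instead of a ConfigMap and mount it the same way. Assuming the same names as in the manifests above:

apiVersion: v1
kind: Secret
metadata:
  name: gitlab-runner-config
  namespace: gitlab-runner
type: Opaque
stringData:
  config.toml: |-
    # same contents as the ConfigMap above, with url, token and environment
    # now stored in the Secret rather than in the plain-text ConfigMap
    concurrent = 4
    [[runners]]
      name = "Kubernetes Runner"
      url = "gitlab url"
      token = "secrettoken"
      executor = "kubernetes"
      environment = ["TEST_VAR=THIS_IS_TEST_VAR_PRINTING", "SECOUND_TEST_VAR=This_is_2nd_test_var"]

In the Deployment only the volume source would change:

volumes:
  - name: config
    secret:
      secretName: gitlab-runner-config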
I am unable to create a VerneMQ pod in an AWS EKS cluster with a persistent volume claim for authentication and SSL. Below is my YAML file:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: vernemq-storage
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
mountOptions:
- debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: verne-aws-pv
spec:
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: xfs
volumeID: aws://ap-south-1a/vol-xxxxx
capacity:
storage: 1Gi
persistentVolumeReclaimPolicy: Retain
storageClassName: vernemq-storage
volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: mysql
name: verne-aws-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: gp2-retain
volumeMode: Filesystem
volumeName: verne-aws-pv
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: vernemq
spec:
replicas: 1
selector:
matchLabels:
app: vernemq
serviceName: vernemq
template:
metadata:
labels:
app: vernemq
spec:
serviceAccountName: vernemq
terminationGracePeriodSeconds: 200
containers:
- name: vernemq
image: vernemq/vernemq:latest
imagePullPolicy: Always
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- /usr/sbin/vmq-admin cluster leave node=VerneMQ#${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local ; sleep 60 ; /usr/sbin/vmq-admin cluster leave node=VerneMQ#${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local -k; sleep 60;
ports:
- containerPort: 1883
name: mqtt
hostPort: 1883
- containerPort: 8883
- containerPort: 4369
name: epmd
- containerPort: 44053
name: vmq
- containerPort: 8888
name: health
- containerPort: 9100
- containerPort: 9101
- containerPort: 9102
- containerPort: 9103
- containerPort: 9104
- containerPort: 9105
- containerPort: 9106
- containerPort: 9107
- containerPort: 9108
- containerPort: 9109
- containerPort: 8888
resources:
limits:
cpu: "2"
memory: 3Gi
requests:
cpu: "1"
memory: 1Gi
env:
- name: DOCKER_VERNEMQ_ACCEPT_EULA
value: "yes"
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
value: "1"
- name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
value: "vernemq"
- name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
value: "9100"
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
value: "9109"
- name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
value: "on"
- name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
value: "0.0.0.0:1883"
- name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__POOL_timeout
value: "6000"
- name: DOCKER_VERNEMQ_LISTENER__HTTP__DEFAULT
value: "0.0.0.0:8888"
- name: DOCKER_VERNEMQ_LISTENER__MAX_CONNECTIONS
value: "infinity"
- name: DOCKER_VERNEMQ_LISTENER__NR_OF_ACCEPTORS
value: "10000"
- name: DOCKER_VERNEMQ_MAX_INFLIGHT_MESSAGES
value: "0"
- name: DOCKER_VERNEMQ_ALLOW_MULTIPLE_SESSIONS
value: "off"
- name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
value: "/etc/vernemq/vmq.passwd"
- name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
value: "0.0.0.0:8883"
- name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
value: "/etc/ssl/ca.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
value: "/etc/ssl/server.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
value: "/etc/ssl/server.key"
volumeMounts:
- mountPath: /etc/ssl
name: vernemq-certifications
readOnly: true
- mountPath: /etc/vernemq-passwd
name: vernemq-passwd
readOnly: true
volumes:
- name: vernemq-certifications
persistentVolumeClaim:
claimName: verne-aws-pvc
secret:
secretName: vernemq-certifications
- name: vernemq-passwd
persistentVolumeClaim:
claimName: verne-aws-pvc
secret:
secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
name: vernemq
labels:
app: vernemq
spec:
clusterIP: None
selector:
app: vernemq
ports:
- port: 4369
name: empd
- port: 44053
name: vmq
---
apiVersion: v1
kind: Service
metadata:
name: mqtt
labels:
app: mqtt
spec:
type: LoadBalancer
selector:
app: vernemq
ports:
- name: mqtt
port: 1883
targetPort: 1883
- name: health
port: 8888
targetPort: 8888
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: endpoint-reader
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["endpoints", "deployments", "replicasets", "pods", "statefulsets", "persistentvolumeclaims"]
verbs: ["get", "patch", "list", "watch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: endpoint-reader
subjects:
- kind: ServiceAccount
name: vernemq
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: endpoint-reader
I created an AWS EBS volume in the same region and subnet as the node group and added it to the persistent volume.
The pod is not getting created; instead, when we do kubectl describe statefulset vernemq, we get the error below:
Volumes:
vernemq-certifications:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-certifications
Optional: false
vernemq-passwd:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-passwd
Optional: false
Volume Claims: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 2m2s (x5 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: pods "vernemq-0" is forbidden: error looking up service account default/vernemq: serviceaccount "vernemq" not found
Warning FailedCreate 40s (x10 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: Pod "vernemq-0" is invalid: [spec.volumes[0].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.volumes[1].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.containers[0].volumeMounts[0].name: Not found: "vernemq-certifications", spec.containers[0].volumeMounts[1].name: Not found: "vernemq-passwd"]
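The second warning points at the volumes block of the StatefulSet: each entry under volumes: may declare exactly one volume source, but vernemq-certifications and vernemq-passwd each declare both persistentVolumeClaim and secret, which is why the matching volumeMounts are then reported as "Not found". A minimal sketch of a corrected block, assuming the certificates and password file really live in those two Secrets (the EBS-backed PVC would then be a separate, third volume, only if it is still needed):

volumes:
  - name: vernemq-certifications
    secret:
      secretName: vernemq-certifications
  - name: vernemq-passwd
    secret:
      secretName: vernemq-passwd
  # optional, only if the EBS volume is still required for other data:
  - name: vernemq-data
    persistentVolumeClaim:
      claimName: verne-aws-pvc

The first warning usually just means the vernemq ServiceAccount did not yet exist in the StatefulSet's namespace when the controller first tried to create the pod.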
How can I use ConfigMap to write cluster node information to a JSON file?
The command below gives me node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use a ConfigMap to write the above output to a text file?
You can save the output of the command in any file.
Then use the file, or the data inside the file, to create a ConfigMap.
After creating the ConfigMap you can mount it as a file in your deployment/pod.
For example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: appname
name: appname
namespace: development
spec:
selector:
matchLabels:
app: appname
tier: sometier
template:
metadata:
creationTimestamp: null
labels:
app: appname
tier: sometier
spec:
containers:
- env:
- name: NODE_ENV
value: development
- name: PORT
value: "3000"
- name: SOME_VAR
value: xxx
image: someimage
imagePullPolicy: Always
name: appname
volumeMounts:
- name: your-volume-name
mountPath: "your/path/to/store/the/file"
readOnly: true
volumes:
- name: your-volume-name
configMap:
name: your-configmap-name
items:
- key: your-filename-inside-pod
path: your-filename-inside-pod
I added the following configuration to the deployment:
volumeMounts:
- name: your-volume-name
mountPath: "your/path/to/store/the/file"
readOnly: true
volumes:
- name: your-volume-name
configMap:
name: your-configmap-name
items:
- key: your-filename-inside-pod
path: your-filename-inside-pod
To create a ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
Or just create the ConfigMap directly with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
name: your-configmap-name
namespace: your-namespace
data:
your-filename-inside-pod: |
output of command
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
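With the jsonpath query from the question that could be, for example (note the output is a space-separated list of hostnames rather than strict JSON, so post-process it if you need real JSON):

$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > node-info.json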
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
name: example-config
data:
node-info.json: |
{
"array": [
1,
2
],
"boolean": true,
"number": 123,
"object": {
"a": "egg",
"b": "egg1"
},
"string": "Welcome"
}
Then remember to add the following lines below the specification section in your pod configuration file:
env:
- name: NODE_CONFIG_JSON
valueFrom:
configMapKeyRef:
name: example-config
key: node-info.json
You can also use a PodPreset.
A PodPreset is an object that lets you inject information, e.g. environment variables, into pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
name: example
spec:
selector:
matchLabels:
app: your-pod
env:
- name: DB_PORT
value: "6379"
envFrom:
- configMapRef:
name: etcd-env-config
But remember that you also have to add:
env:
- name: NODE_CONFIG_JSON
valueFrom:
configMapKeyRef:
name: example-config
key: node-info.json
section to your pod definition, matching your PodPreset and ConfigMap configuration.
More information can be found here: podpreset, pod-preset-configuration.
I have created a Docker registry as a pod with a service, and login, push and pull are working. But when I try to create a pod that uses an image from this registry, the kubelet can't pull the image from the registry.
My registry pod:
apiVersion: v1
kind: Pod
metadata:
name: registry-docker
labels:
registry: docker
spec:
containers:
- name: registry-docker
image: registry:2
volumeMounts:
- mountPath: /opt/registry/data
name: data
- mountPath: /opt/registry/auth
name: auth
ports:
- containerPort: 5000
env:
- name: REGISTRY_AUTH
value: htpasswd
- name: REGISTRY_AUTH_HTPASSWD_PATH
value: /opt/registry/auth/htpasswd
- name: REGISTRY_AUTH_HTPASSWD_REALM
value: Registry Realm
volumes:
- name: data
hostPath:
path: /opt/registry/data
- name: auth
hostPath:
path: /opt/registry/auth
The pod I would like to create from the registry:
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: 10.96.81.252:5000/nginx:latest
imagePullSecrets:
- name: registrypullsecret
The error I get in my registry logs:
time="2018-08-09T07:17:21Z" level=warning msg="error authorizing
context: basic authentication challenge for realm \"Registry Realm\":
invalid authorization credential" go.version=go1.7.6
http.request.host="10.96.81.252:5000"
http.request.id=655f76a6-ef05-4cdc-a677-d10f70ed557e
http.request.method=GET http.request.remoteaddr="10.40.0.0:59088"
http.request.uri="/v2/" http.request.useragent="docker/18.06.0-ce
go/go1.10.3 git-commit/0ffa825 kernel/4.4.0-130-generic os/linux
arch/amd64 UpstreamClient(Go-http-client/1.1)"
instance.id=ec01566d-5397-4c90-aaac-f56d857d9ae4 version=v2.6.2
10.40.0.0 - - [09/Aug/2018:07:17:21 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.06.0-ce go/go1.10.3 git-commit/0ffa825
kernel/4.4.0-130-generic os/linux arch/amd64
UpstreamClient(Go-http-client/1.1)"
The secret I use, created from cat ~/.docker/config.json | base64:
apiVersion: v1
kind: Secret
metadata:
name: registrypullsecret
data:
.dockerconfigjson: ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZaRzlqYTJWeU1USXoiCgkJfQoJfSwKCSJIdHRwSGVhZGVycyI6IHsKCQkiVXNlci1BZ2VudCI6ICJEb2NrZXItQ2xpZW50LzE4LjA2$
type: kubernetes.io/dockerconfigjson
The modification I have made to my default ServiceAccount:
cat ./sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2018-08-03T09:49:47Z
name: default
namespace: default
# resourceVersion: "51625"
selfLink: /api/v1/namespaces/default/serviceaccounts/default
uid: 8eecb592-9702-11e8-af15-02f6928eb0b4
secrets:
- name: default-token-rfqfp
imagePullSecrets:
- name: registrypullsecret
File ~/.docker/config.json:
{
"auths": {
"localhost:5000": {
"auth": "YWRtaW46ZG9ja2VyMTIz"
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/18.06.0-ce (linux)"
}
The auths data has login credentials for "localhost:5000", but your image is at "10.96.81.252:5000/nginx:latest"; the kubelet matches pull secrets by the registry host in the image reference, so those credentials are never used.
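One way to fix it (a sketch; substitute your real credentials) is to recreate the pull secret against the address actually used in the image reference, for example:

$ kubectl create secret docker-registry registrypullsecret \
    --docker-server=10.96.81.252:5000 \
    --docker-username=<user> \
    --docker-password=<password> \
    --docker-email=<email>

Alternatively, run docker login 10.96.81.252:5000 first, so that ~/.docker/config.json contains an entry for that host, before base64-encoding it into the Secret.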
I am trying to deploy one pod per node. It works fine with kind: DaemonSet and when the cluster is created with kube-up. But we migrated cluster creation to kops, and with kops the master node is part of the cluster.
I noticed the master node is defined with a specific label: kubernetes.io/role=master
and with a taint: scheduler.alpha.kubernetes.io/taints: [{"key":"dedicated","value":"master","effect":"NoSchedule"}]
But that does not stop the DaemonSet from deploying a pod on it.
So I tried to add scheduler.alpha.kubernetes.io/affinity:
- apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: elasticsearch-data
namespace: ess
annotations:
scheduler.alpha.kubernetes.io/affinity: >
{
"nodeAffinity": {
"requiredDuringSchedulingRequiredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/role",
"operator": "NotIn",
"values": ["master"]
}
]
}
]
}
}
}
spec:
selector:
matchLabels:
component: elasticsearch
type: data
provider: fabric8
template:
metadata:
labels:
component: elasticsearch
type: data
provider: fabric8
spec:
serviceAccount: elasticsearch
serviceAccountName: elasticsearch
containers:
- env:
- name: "SERVICE_DNS"
value: "elasticsearch-cluster"
- name: "NODE_MASTER"
value: "false"
image: "essearch/ess-elasticsearch:1.7.6"
name: elasticsearch
imagePullPolicy: Always
ports:
- containerPort: 9300
name: transport
volumeMounts:
- mountPath: "/usr/share/elasticsearch/data"
name: task-pv-storage
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
nodeSelector:
minion: true
But it does not work. Does anyone know why?
The workaround I have for now is to use nodeSelector and add a label to the nodes that are minions only, but I would like to avoid adding a label during cluster creation because it's an extra step, and if I could avoid it, it would be for the best :)
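For completeness, that workaround amounts to labelling the worker nodes and selecting on the label (minion is the label name used in the manifest above; note the label value must be a quoted string in the nodeSelector):

$ kubectl label nodes <worker-node-name> minion=true

and in the pod template spec:

nodeSelector:
  minion: "true"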
EDIT:
I changed it to the following (given the answer) and I think it's right, but it does not help; I still have a pod deployed on the master:
- apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: elasticsearch-data
namespace: ess
spec:
selector:
matchLabels:
component: elasticsearch
type: data
provider: fabric8
template:
metadata:
labels:
component: elasticsearch
type: data
provider: fabric8
annotations:
scheduler.alpha.kubernetes.io/affinity: >
{
"nodeAffinity": {
"requiredDuringSchedulingRequiredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/role",
"operator": "NotIn",
"values": ["master"]
}
]
}
]
}
}
}
spec:
serviceAccount: elasticsearch
serviceAccountName: elasticsearch
containers:
- env:
- name: "SERVICE_DNS"
value: "elasticsearch-cluster"
- name: "NODE_MASTER"
value: "false"
image: "essearch/ess-elasticsearch:1.7.6"
name: elasticsearch
imagePullPolicy: Always
ports:
- containerPort: 9300
name: transport
volumeMounts:
- mountPath: "/usr/share/elasticsearch/data"
name: task-pv-storage
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
Just move the annotation into the pod template: section (under metadata:).
Alternatively, taint the master node (and then you can remove the annotation):
kubectl taint nodes nameofmaster dedicated=master:NoSchedule
I suggest you read up on taints and tolerations.
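A side note on the affinity attempt in the question: as far as I know, only requiredDuringSchedulingIgnoredDuringExecution is implemented (the ...RequiredDuringExecution variant used in the annotation never was), and on Kubernetes 1.6+ the alpha annotation is superseded by the first-class affinity field. A sketch of the equivalent, placed under the DaemonSet's pod template spec:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/role
                operator: NotIn
                values: ["master"]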