I'm trying to migrate from docker-compose to Kubernetes. I had some issues with volumes, so what I did is:
kompose convert --volumes hostPath
Then I had another issue:
no matches for kind "Deployment" in version "extensions/v1beta1"
So I changed apiVersion from extensions/v1beta1 to apps/v1 and added "selector". Now I can't get past this issue:
Error from server (Invalid): error when creating "database-deployment.yaml": Deployment.apps "database" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"io.kompose.service":"database"}: `selector` does not match template `labels`
Error from server (Invalid): error when creating "phpmyadmin-deployment.yaml": Deployment.apps "phpmyadmin" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"io.kompose.service":"phpmyadmin"}: `selector` does not match template `labels`
Error from server (Invalid): error when creating "webserver-deployment.yaml": Deployment.apps "webserver" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"io.kompose.service":"webserver"}: `selector` does not match template `labels`
database-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert --volumes hostPath
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: database
    app: database
  name: database
spec:
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        io.kompose.service: database
  replicas: 1
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: database
    spec:
      containers:
      - env:
        - name: MYSQL_DATABASE
          value: Bazadanerro
        - name: MYSQL_PASSWORD
          value: P#$$w0rd
        - name: MYSQL_ROOT_PASSWORD
          value: P#$$w0rd
        - name: MYSQL_USER
          value: dockerro
        image: mariadb
        name: mysql
        resources: {}
      restartPolicy: Always
status: {}
phpmyadmin-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert --volumes hostPath
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: phpmyadmin
    app: phpmyadmin
  name: phpmyadmin
spec:
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        io.kompose.service: database
  replicas: 1
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: phpmyadmin
    spec:
      containers:
      - env:
        - name: MYSQL_PASSWORD
          value: P#$$w0rd
        - name: MYSQL_ROOT_PASSWORD
          value: P#$$w0rd
        - name: MYSQL_USER
          value: dockerro
        - name: PMA_HOST
          value: database
        - name: PMA_PORT
          value: "3306"
        image: phpmyadmin/phpmyadmin
        name: phpmyadmins
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}
And webserver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert --volumes hostPath
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: webserver
    app: webserverro
  name: webserver
spec:
  selector:
    matchLabels:
      app: webserverro
  template:
    metadata:
      labels:
        io.kompose.service: webserver
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: webserver
    spec:
      containers:
      - image: webserver
        name: webserverro
        ports:
        - containerPort: 80
        resources: {}
        volumeMounts:
        - mountPath: /var/www/html
          name: webserver-hostpath0
      restartPolicy: Always
      volumes:
      - hostPath:
          path: /root/webserverro/root/webserverro
        name: webserver-hostpath0
status: {}
What am I doing wrong?
The error is self-explanatory: "selector" does not match template "labels".
Edit your YAML files and set the same key-value pairs in both selector.matchLabels and metadata.labels.
spec:
  selector:
    matchLabels:        # <---- This
      app: database
  template:
    metadata:
      labels:           # <---- This
        io.kompose.service: database
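For example, for the database Deployment one possible fix is to simply reuse the kompose-generated label on both sides (a sketch, keeping your existing label key):

spec:
  selector:
    matchLabels:
      io.kompose.service: database
  template:
    metadata:
      labels:
        io.kompose.service: database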
Why is the selector field important?
The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app: nginx). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.
Reference
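As an illustration of those "more sophisticated selection rules", a set-based selector would also work (a sketch reusing the same label key), as long as the pod template labels satisfy it:

spec:
  selector:
    matchExpressions:
    - key: io.kompose.service
      operator: In
      values:
      - database
  template:
    metadata:
      labels:
        io.kompose.service: database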
Update:
One possible sample could be:
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: database
Visit recommended labels
Update2:
no matches for kind "Deployment" in version "extensions/v1beta1"
The apiVersion for Deployment object is now apps/v1.
apiVersion: apps/v1 # <-- update here.
kind: Deployment
... ... ...
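You can check which API groups/versions your cluster actually serves Deployments under with, for example:

kubectl api-resources | grep -i deployment
kubectl api-versions | grep apps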
In all of these files you have two copies of the pod spec template: block. These don't get merged; the second one just replaces the first.
spec:
  selector: { ... }
  template:             # This will get ignored
    metadata:
      labels:
        io.kompose.service: webserver
        app: webserverro
  template:             # and completely replaced with this
    metadata:
      annotations:
        kompose.cmd: kompose convert --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      labels:           # without the app: label
        io.kompose.service: webserver
    spec: { ... }
Remove the first template: block and move the full set of labels into the one template: block that remains.
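For the webserver Deployment, for example, the merged spec could end up roughly like this (a sketch based on your file, with annotations trimmed for brevity):

spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserverro
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.service: webserver
        app: webserverro
    spec:
      containers:
      - image: webserver
        name: webserverro
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html
          name: webserver-hostpath0
      restartPolicy: Always
      volumes:
      - hostPath:
          path: /root/webserverro/root/webserverro
        name: webserver-hostpath0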
Related
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLables:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
  spec:
    containers:
    - name: nginx
      image: bginx:1.7.9
      ports:
      - containerPort: 80
The error is:
error validating "app.yaml": error validating data: [ValidationError(Deployment.spec.selector): unknown field "matchLables" in io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector, ValidationError(Deployment.spec): unknown field "spec" in io.k8s.api.apps.v1.DeploymentSpec];
There is a typo in matchLables; it should be matchLabels. Additionally, the pod spec (spec.template.spec) and its contents are indented at the wrong level: they need to be nested under template:, not placed alongside it.
Something like the following example should work:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
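As a quick sanity check before creating objects, you can validate a manifest client-side, for example:

kubectl apply --dry-run=client -f app.yaml

(On older kubectl versions the flag is just --dry-run.) This should surface the same ValidationError messages without touching the cluster.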
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep3
  labels:
    app: ngx
    type: webservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ngx
    template:
      metadata:
        labels:
          app: ngx
      spec:
        containers:
        - name: nginx
          image: nginx:1.8
kubectl apply -f ngx-dep.yaml
error: error validating "ngx-dep.yaml": error validating data: [ValidationError(Deployment.spec.selector): unknown field "template" in io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector, ValidationError(Deployment.spec): missing required field "template" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
Change the indentation: template should be at the same level as replicas, selector, etc.
spec:
  replicas:
  selector:
  template:
Correct yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep3
  labels:
    app: ngx
    type: webservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ngx
  template:
    metadata:
      labels:
        app: ngx
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
kubectl apply -f a.yaml
deployment.apps/ngx-dep3 created
For more information and examples, please refer to the Deployment v1 apps official docs.
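You can also inspect the expected schema directly from the cluster, for example:

kubectl explain deployment.spec
kubectl explain deployment.spec.template

This shows which fields (replicas, selector, template, ...) belong at which level.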
What's the problem?
I can't get my pods that use a volume running. In the Kubernetes Dashboard I got the following error:
running "VolumeBinding" filter plugin for pod "influxdb-6979bff6f9-hpf89": pod has unbound immediate PersistentVolumeClaims
What did I do?
After running kompose convert on my docker-compose.yml file, I tried to start the pods with microk8s kubectl apply -f . (I am using MicroK8s). I had to replace the apiVersion of the NetworkPolicy YAML files with networking.k8s.io/v1 (see here), but apart from this change I didn't change anything.
YAML Files
influxdb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: influxdb
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.network/cloud-net: "true"
        io.kompose.network/default: "true"
        io.kompose.service: influxdb
    spec:
      containers:
      - env:
        - name: INFLUXDB_HTTP_LOG_ENABLED
          value: "false"
        image: influxdb:1.8
        imagePullPolicy: ""
        name: influxdb
        ports:
        - containerPort: 8086
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/influxdb
          name: influx
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: influx
        persistentVolumeClaim:
          claimName: influx
status: {}
influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  ports:
  - name: "8087"
    port: 8087
    targetPort: 8086
  selector:
    io.kompose.service: influxdb
status:
  loadBalancer: {}
influx-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: influx
  name: influx
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
The PersistentVolumeClaim will remain unbound if either the cluster does not have a StorageClass that can dynamically provision a PersistentVolume, or it does not have a manually created PersistentVolume that satisfies the PersistentVolumeClaim.
Here is a guide on how to configure a pod to use a PersistentVolume.
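You can check which of the two cases applies with, for example:

microk8s kubectl get storageclass
microk8s kubectl describe pvc influx

The describe output should include an event explaining why the claim is still Pending.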
To solve the current scenario, you can manually create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Please note that hostPath is used here only as an example; it's not recommended for production use. Consider using external block or file storage from the supported types here.
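Also note that, assuming no default StorageClass is configured in the cluster, the claim will only bind to this PV if it requests the same storage class. A minimal sketch of the adjusted claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influx
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi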
I have created an EC2 instance and installed EKS on it. Then I created a cluster and installed a Docker image on it.
Now I'm trying to deploy this image to a container using the YAML below, and I'm getting an error.
Error in creating Deployment YAML on kubernetes
spec.template.spec.containers[1].image: Required value
spec.template.spec.containers[2].image: Required value
I can see the image in Docker on the EC2 instance.
My YAML is like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: premiumservice
  labels:
    app: premium-service
  namespace:
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: premium-service
  template:
    metadata:
      labels:
        app: premium-service
    spec:
      containers:
      - image: "mp3-image1:latest"
        name: premiumservice
        ports:
        - containerPort: 80
        env:
      - name: type1
        value: "xyz"
      - name: type2
        value: "abc"
The deployment YAML has an indentation problem near the env section and should look like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: premiumservice
  labels:
    app: premium-service
  namespace:
  annotations:
    monitoring: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: premium-service
  template:
    metadata:
      labels:
        app: premium-service
    spec:
      containers:
      - image: mp3-image1:latest
        name: premiumservice
        ports:
        - containerPort: 80
        env:
        - name: type1
          value: "xyz"
        - name: type2
          value: "abc"
This may be totally unrelated, but I had the same issue with a k8s deployment file that had variable substitution in the image but the env variable it was referencing wasn't defined.
...
spec:
  containers:
  - name: indexing-queue
    image: ${K8S_IMAGE} # <--- here
Basically, this error means the cluster can't find or understand the image you've set.
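If you template the image this way, make sure the variable is actually expanded before the manifest reaches the cluster, for example (the image value below is just a placeholder):

export K8S_IMAGE=registry.example.com/indexing-queue:1.0
envsubst < deployment.yaml | kubectl apply -f -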
I am trying to deploy this Kubernetes Deployment; however, whenever I run kubectl apply -f es-deployment.yaml it throws the error: Error: `selector` does not match template `labels`
I have already tried adding selector.matchLabels under the spec section, but it seems that did not work. Below is my YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: elasticsearchconnector
  name: elasticsearchconnector
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: elasticsearchconnector
    spec:
      selector:
        matchLabels:
          app: elasticsearchconnector
      containers:
      - env:
        - [env stuff]
        image: confluentinc/cp-kafka-connect:latest
        name: elasticsearchconnector
        ports:
        - containerPort: 28082
        resources: {}
        volumeMounts:
        - mountPath: /etc/kafka-connect
          name: elasticsearchconnector-hostpath0
        - mountPath: /etc/kafka-elasticsearch
          name: elasticsearchconnector-hostpath1
        - mountPath: /etc/kafka
          name: elasticsearchconnector-hostpath2
      restartPolicy: Always
      volumes:
      - hostPath:
          path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect
        name: elasticsearchconnector-hostpath0
      - hostPath:
          path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch
        name: elasticsearchconnector-hostpath1
      - hostPath:
          path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak
        name: elasticsearchconnector-hostpath2
status: {}
Your labels and selectors are misplaced.
First, you need to specify which pods the deployment will control:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearchconnector
Then you need to label the pod properly:
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: elasticsearchconnector
        app: elasticsearchconnector
    spec:
      containers:
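Once the selector is back at the Deployment level and the pod template carries the app label, you can confirm after applying that the selector actually matches the pods, for example:

kubectl get pods -l app=elasticsearchconnector
kubectl get deployment elasticsearchconnector -o wide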