Scale deployment based on custom metric - kubernetes

I'm trying to scale a deployment based on a custom metric coming from a custom metric server. I deployed my server and when I do
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/kubernetes/test-metric"
I get back this JSON
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/kubernetes/test-metric"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Service",
        "namespace": "default",
        "name": "kubernetes",
        "apiVersion": "/v1"
      },
      "metricName": "test-metric",
      "timestamp": "2019-01-26T02:36:19Z",
      "value": "300m",
      "selector": null
    }
  ]
}
Then I created my hpa.yml using this
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: test-all-deployment
  namespace: default
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-all-deployment
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: kubernetes
        apiVersion: custom.metrics.k8s.io/v1beta1
      metricName: test-metric
      targetValue: 200m
but it doesn't scale and I'm not sure what is wrong. Running kubectl get hpa returns:
NAME                  REFERENCE                        TARGETS          MINPODS   MAXPODS   REPLICAS   AGE
test-all-deployment   Deployment/test-all-deployment   <unknown>/200m   1         10        1          9m
The part I'm not sure about is the target object in the metrics collection of the HPA definition. Looking at the walkthrough at https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/, it has
describedObject:
  apiVersion: extensions/v1beta1
  kind: Ingress
  name: main-route
target:
  kind: Value
  value: 10k
but that gives me a validation error for the v2beta1 API. And looking at the actual object definition at https://github.com/kubernetes/api/blob/master/autoscaling/v2beta1/types.go#L296, it doesn't seem to match. I don't know how to specify that with the v2beta1 API.

It looks like there is a mistake in the documentation: in the same example, two different API versions are used.
autoscaling/v2beta1 notation:
  - type: Pods
    pods:
      metric:
        name: packets-per-second
      targetAverageValue: 1k
autoscaling/v2beta2 notation:
  - type: Resource
    resource:
      name: cpu
      target:
        type: AverageUtilization
        averageUtilization: 50
There is a difference between autoscaling/v2beta1 and autoscaling/v2beta2 APIs:
kubectl get hpa.v2beta1.autoscaling -o yaml --export > hpa2b1-export.yaml
kubectl get hpa.v2beta2.autoscaling -o yaml --export > hpa2b2-export.yaml
diff -y hpa2b1-export.yaml hpa2b2-export.yaml
#hpa.v2beta1.autoscaling                                         hpa.v2beta2.autoscaling
#-----------------------------------------------------------------------------------
apiVersion: v1                                                   apiVersion: v1
items:                                                           items:
- apiVersion: autoscaling/v2beta1                              | - apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler                                    kind: HorizontalPodAutoscaler
  metadata:                                                        metadata:
    creationTimestamp: "2019-03-21T13:17:47Z"                        creationTimestamp: "2019-03-21T13:17:47Z"
    name: php-apache                                                 name: php-apache
    namespace: default                                               namespace: default
    resourceVersion: "8441304"                                       resourceVersion: "8441304"
    selfLink: /apis/autoscaling/v2beta1/namespaces/default/ho  |     selfLink: /apis/autoscaling/v2beta2/namespaces/default/ho
    uid: b8490a0a-4bdb-11e9-9043-42010a9c0003                        uid: b8490a0a-4bdb-11e9-9043-42010a9c0003
  spec:                                                            spec:
    maxReplicas: 10                                                  maxReplicas: 10
    metrics:                                                         metrics:
    - resource:                                                      - resource:
        name: cpu                                                        name: cpu
        targetAverageUtilization: 50                           |         target:
                                                               >           averageUtilization: 50
                                                               >           type: Utilization
      type: Resource                                                   type: Resource
    minReplicas: 1                                                   minReplicas: 1
    scaleTargetRef:                                                  scaleTargetRef:
      apiVersion: extensions/v1beta1                                   apiVersion: extensions/v1beta1
      kind: Deployment                                                 kind: Deployment
      name: php-apache                                                 name: php-apache
  status:                                                          status:
    conditions:                                                      conditions:
    - lastTransitionTime: "2019-03-21T13:18:02Z"                     - lastTransitionTime: "2019-03-21T13:18:02Z"
      message: recommended size matches current size                   message: recommended size matches current size
      reason: ReadyForNewScale                                         reason: ReadyForNewScale
      status: "True"                                                   status: "True"
      type: AbleToScale                                                type: AbleToScale
    - lastTransitionTime: "2019-03-21T13:18:47Z"                     - lastTransitionTime: "2019-03-21T13:18:47Z"
      message: the HPA was able to successfully calculate a r          message: the HPA was able to successfully calculate a r
        resource utilization (percentage of request)                     resource utilization (percentage of request)
      reason: ValidMetricFound                                         reason: ValidMetricFound
      status: "True"                                                   status: "True"
      type: ScalingActive                                              type: ScalingActive
    - lastTransitionTime: "2019-03-21T13:23:13Z"                     - lastTransitionTime: "2019-03-21T13:23:13Z"
      message: the desired replica count is increasing faster          message: the desired replica count is increasing faster
        rate                                                             rate
      reason: TooFewReplicas                                           reason: TooFewReplicas
      status: "True"                                                   status: "True"
      type: ScalingLimited                                             type: ScalingLimited
    currentMetrics:                                                  currentMetrics:
    - resource:                                                      - resource:
        currentAverageUtilization: 0                           |         current:
        currentAverageValue: 1m                                |           averageUtilization: 0
                                                               >           averageValue: 1m
        name: cpu                                                        name: cpu
      type: Resource                                                   type: Resource
    currentReplicas: 1                                               currentReplicas: 1
    desiredReplicas: 1                                               desiredReplicas: 1
kind: List                                                       kind: List
metadata:                                                        metadata:
  resourceVersion: ""                                              resourceVersion: ""
  selfLink: ""                                                     selfLink: ""
Here is what the object definition is supposed to look like:
#hpa.v2beta1.autoscaling                    hpa.v2beta2.autoscaling
#-----------------------------------------------------------------------------------
  type: Object                                type: Object
  object:                                     object:
    metric:                                     metric:
      name: requests-per-second                   name: requests-per-second
    describedObject:                            describedObject:
      apiVersion: extensions/v1beta1              apiVersion: extensions/v1beta1
      kind: Ingress                               kind: Ingress
      name: main-route                            name: main-route
    targetValue: 2k                             target:
                                                  type: Value
                                                  value: 2k
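Putting the two notations together for the manifest from the question, a v2beta2 version would look like the sketch below. Note that describedObject.apiVersion is the API version of the Service itself (v1), not the custom-metrics API group:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: test-all-deployment
  namespace: default
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-all-deployment
  metrics:
  - type: Object
    object:
      metric:
        name: test-metric
      describedObject:
        apiVersion: v1
        kind: Service
        name: kubernetes
      target:
        type: Value
        value: 200m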

Related

Strimzi KafkaConnect & Connector Error, Won't Load

I am not sure where else to turn, as I have pretty much copied every example I have seen and still cannot get it to work. The connector will not install and states an empty password. I have validated each step and cannot get it to work. Here are the steps I have taken.
Container
FROM strimzi/kafka:0.16.1-kafka-2.4.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium
COPY ./debezium-connector-mysql/ /opt/kafka/plugins/debezium/
USER 1001
Next I create the secret to use with mySQL.
cat <<EOF | kubectl apply -n kafka-cloud -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysql-auth
type: Opaque
stringData:
  mysql-auth.properties: |-
    username: root
    password: supersecret
EOF
Validate
% kubectl -n kafka-cloud get secrets | grep mysql-auth
mysql-auth   Opaque   1      14m
Double-check to make sure the user and password are not empty, as the error from the connector states:
% kubectl -n kafka-cloud get secret mysql-auth -o yaml
apiVersion: v1
data:
  mysql-auth.properties: dXNlcm5hbWU6IHJvb3QKcGFzc3dvcmQ6IHN1cGVyc2VjcmV0
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"mysql-auth","namespace":"kafka-cloud"},"stringData":{"mysql-auth.properties":"username: root\npassword: supersecret"},"type":"Opaque"}
  creationTimestamp: "2022-03-02T23:48:55Z"
  name: mysql-auth
  namespace: kafka-cloud
  resourceVersion: "4041"
  uid: 14a7a878-d01f-4899-8dc7-81b515278f32
type: Opaque
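As a further check, the base64 payload decodes back to the expected contents (the dot in the key has to be escaped in the jsonpath expression):
% kubectl -n kafka-cloud get secret mysql-auth -o jsonpath='{.data.mysql-auth\.properties}' | base64 -d
username: root
password: supersecret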
Add Connect Cluster
cat <<EOF | kubectl apply -n kafka-cloud -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    # use-connector-resources configures this KafkaConnect
    # to use KafkaConnector resources to avoid
    # needing to call the Connect REST API directly
    strimzi.io/use-connector-resources: "true"
spec:
  version: 3.1.0
  image: connect-debezium
  replicas: 1
  bootstrapServers: my-kafka-cluster-kafka-bootstrap:9092
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    config.storage.replication.factor: 1
    offset.storage.replication.factor: 1
    status.storage.replication.factor: 1
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  externalConfiguration:
    volumes:
    - name: mysql-auth-config
      secret:
        secretName: mysql-auth
EOF
Add Connector
cat <<EOF | kubectl apply -n kafka-cloud -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: mysql-test-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: 172.17.0.13
    database.port: 3306
    database.user: "${file:/opt/kafka/external-configuration/mysql-auth-config/mysql-auth.properties:username}"
    database.password: "${file:/opt/kafka/external-configuration/mysql-auth-config/mysql-auth.properties:password}"
    database.server.id: 184054
    database.server.name: mysql-pod
    database.whitelist: sample
    database.history.kafka.bootstrap.servers: my-kafka-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: "schema-changes.sample"
    key.converter: "org.apache.kafka.connect.storage.StringConverter"
    value.converter: "org.apache.kafka.connect.storage.StringConverter"
EOF
Error
No matter what I try, I get this error. I have no idea what I am missing. I know it's a simple config, but I cannot figure it out. I'm stuck.
% kubectl -n kafka-cloud describe kafkaconnector mysql-test-connector
Name:         mysql-test-connector
Namespace:    kafka-cloud
Labels:       strimzi.io/cluster=my-connect-cluster
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnector
Metadata:
  Creation Timestamp:  2022-03-02T23:44:20Z
  Generation:          1
  Managed Fields:
    API Version:  kafka.strimzi.io/v1beta2
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:labels:
          .:
          f:strimzi.io/cluster:
      f:spec:
        .:
        f:class:
        f:config:
          .:
          f:database.history.kafka.bootstrap.servers:
          f:database.history.kafka.topic:
          f:database.hostname:
          f:database.password:
          f:database.port:
          f:database.server.id:
          f:database.server.name:
          f:database.user:
          f:database.whitelist:
          f:key.converter:
          f:value.converter:
        f:tasksMax:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-03-02T23:44:20Z
    API Version:  kafka.strimzi.io/v1beta2
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:observedGeneration:
        f:tasksMax:
        f:topics:
    Manager:         okhttp
    Operation:       Update
    Subresource:     status
    Time:            2022-03-02T23:44:20Z
  Resource Version:  3874
  UID:               c70ffe4e-3777-4524-af82-dad3a57ca25e
Spec:
  Class:  io.debezium.connector.mysql.MySqlConnector
  Config:
    database.history.kafka.bootstrap.servers:  my-kafka-cluster-kafka-bootstrap:9092
    database.history.kafka.topic:              schema-changes.sample
    database.hostname:                         172.17.0.13
    database.password:
    database.port:                             3306
    database.server.id:                        184054
    database.server.name:                      mysql-pod
    database.user:
    database.whitelist:                        sample
    key.converter:                             org.apache.kafka.connect.storage.StringConverter
    value.converter:                           org.apache.kafka.connect.storage.StringConverter
  Tasks Max:  1
Status:
  Conditions:
    Last Transition Time:  2022-03-02T23:45:00.097311Z
    Message:               PUT /connectors/mysql-test-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
    Reason:                ConnectRestException
    Status:                True
    Type:                  NotReady
  Observed Generation:     1
  Tasks Max:               1
  Topics:
Events:  <none>
The config param needed for the MySQL connector is:
database.allowPublicKeyRetrieval: true
That resolved the issue.
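For completeness, a sketch of the connector spec with that parameter added (all other config entries as in the original manifest):
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.allowPublicKeyRetrieval: true
    database.hostname: 172.17.0.13
    database.port: 3306
    # ...remaining entries unchanged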

Debugging why Reconcile triggers on Kubernetes Custom Operator

I have a custom operator that listens for changes to a CRD I've defined in a Kubernetes cluster.
Whenever something changes in the defined custom resource, the custom operator reconciles and idempotently creates a secret (owned by the custom resource).
What I expect is for the operator to Reconcile only when something changes in the custom resource or in the secret it owns.
What I observe is that, for some reason, the Reconcile function triggers for every CR on the cluster at strange intervals, without observable changes to the related entities. I've tried focusing on a specific instance of the CR and following the times at which Reconcile was called for it. The intervals of these calls are very strange: the calls seem to alternate between two series - one starts at 10 hours and shrinks by 7 minutes each time, the other starts at 7 minutes and grows by 7 minutes each time.
To demonstrate, Reconcile triggered at these times (give or take a few seconds):
00:00
09:53 (10 hours - 1*7 minute interval)
10:00 (0 hours + 1*7 minute interval)
19:46 (10 hours - 2*7 minute interval)
20:00 (0 hours + 2*7 minute interval)
29:39 (10 hours - 3*7 minute interval)
30:00 (0 hours + 3*7 minute interval)
Whenever the shrinking intervals become less than 7 hours, that series resets back to 10-hour intervals. The same goes for the growing series - as soon as the intervals exceed 3 hours, it resets back to 7 minutes.
My main question is: how can I investigate why Reconcile is being triggered?
I'm attaching the manifests for the CRD, the operator deployment and a sample manifest for a CR:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.4.1
  creationTimestamp: "2021-10-13T11:04:42Z"
  generation: 1
  name: databaseservices.operators.talon.one
  resourceVersion: "245688703"
  uid: 477f8d3e-c19b-43d7-ab59-65198b3c0108
spec:
  conversion:
    strategy: None
  group: operators.talon.one
  names:
    kind: DatabaseService
    listKind: DatabaseServiceList
    plural: databaseservices
    singular: databaseservice
  scope: Namespaced
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        description: DatabaseService is the Schema for the databaseservices API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: DatabaseServiceSpec defines the desired state of DatabaseService
            properties:
              cloud:
                type: string
              databaseName:
                description: Foo is an example field of DatabaseService. Edit databaseservice_types.go
                  to remove/update
                type: string
              serviceName:
                type: string
              servicePlan:
                type: string
            required:
            - cloud
            - databaseName
            - serviceName
            - servicePlan
            type: object
          status:
            description: DatabaseServiceStatus defines the observed state of DatabaseService
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
status:
  acceptedNames:
    kind: DatabaseService
    listKind: DatabaseServiceList
    plural: databaseservices
    singular: databaseservice
  conditions:
  - lastTransitionTime: "2021-10-13T11:04:42Z"
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: "2021-10-13T11:04:42Z"
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
  storedVersions:
  - v1alpha1
---
apiVersion: operators.talon.one/v1alpha1
kind: DatabaseService
metadata:
  creationTimestamp: "2021-10-13T11:14:08Z"
  generation: 1
  labels:
    app: talon
    company: amber
    repo: talon-service
  name: db-service-secret
  namespace: amber
  resourceVersion: "245692590"
  uid: cc369297-6825-4fbf-aa0b-58c24be427b0
spec:
  cloud: google-australia-southeast1
  databaseName: amber
  serviceName: pg-amber
  servicePlan: business-4
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "75"
    secret.reloader.stakater.com/reload: db-credentials
    simpledeployer.talon.one/image: <path_to_image>/production:latest
  creationTimestamp: "2020-06-22T09:20:06Z"
  generation: 77
  labels:
    simpledeployer.talon.one/enabled: "true"
  name: db-operator
  namespace: db-operator
  resourceVersion: "245688814"
  uid: 900424cd-b469-11ea-b661-4201ac100014
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: db-operator
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: db-operator
    spec:
      containers:
      - command:
        - app/db-operator
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: OPERATOR_NAME
          value: db-operator
        - name: AIVEN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: db-credentials
        - name: AIVEN_PROJECT
          valueFrom:
            secretKeyRef:
              key: projectname
              name: db-credentials
        - name: AIVEN_USERNAME
          valueFrom:
            secretKeyRef:
              key: username
              name: db-credentials
        - name: SENTRY_URL
          valueFrom:
            secretKeyRef:
              key: sentry_url
              name: db-credentials
        - name: ROTATION_INTERVAL
          value: monthly
        image: <path_to_image>/production#sha256:<some_sha>
        imagePullPolicy: Always
        name: db-operator
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: db-operator
      serviceAccountName: db-operator
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-06-22T09:20:06Z"
    lastUpdateTime: "2021-09-07T11:56:07Z"
    message: ReplicaSet "db-operator-cb6556b76" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2021-09-12T03:56:19Z"
    lastUpdateTime: "2021-09-12T03:56:19Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 77
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Note:
When Reconcile finishes, I return:
return ctrl.Result{Requeue: false, RequeueAfter: 0}
So that shouldn't be the reason for the repeated triggers.
I will add that I have recently updated the Kubernetes cluster version to v1.20.8-gke.2101.
This would require more info on how your controller is set up - for example, what sync period you have set. This could be due to the default sync period, which reconciles all watched objects at a given interval:
SyncPeriod determines the minimum frequency at which watched resources are
reconciled. A lower period will correct entropy more quickly, but reduce
responsiveness to change if there are many watched resources. Change this
value only if you know what you are doing. Defaults to 10 hours if unset.
There will be a 10 percent jitter between the SyncPeriod of all controllers
so that all controllers will not send list requests simultaneously.
For more information check this: https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.2/pkg/manager/manager.go#L134
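If you want to verify or change it, SyncPeriod is set on the manager options. Below is a minimal sketch of a kubebuilder-style main.go, assuming controller-runtime v0.11.x; only the SyncPeriod wiring matters here:
package main

import (
    "time"

    ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
    // Defaults to 10 hours (plus up to 10% jitter) when left nil; every
    // watched object is requeued once per period even without changes.
    syncPeriod := 10 * time.Hour
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
        SyncPeriod: &syncPeriod,
    })
    if err != nil {
        panic(err)
    }
    // ...register your controllers with the manager here, then:
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        panic(err)
    }
}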
I have the same problem. My Reconcile triggered at these times:
00:00
09:03 (9 hours + 3 min)
18:06 (9 hours + 3 min)
00:09 (9 hours + 3 min)
The sync period is not set, so it should be the default.
Kubernetes version 1.20.11.

Using rabbitmq's queue to do hpa, access to custom.metrics fails

The metric can be accessed successfully through the API (/apis/custom.metrics.k8s.io/v1beta1), and it clearly returns the information:
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/rabbitmq-exporter/rabbitmq_queue_messages_ready?metricLabelSelector=queue%3Dtest-1 | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/rabbitmq-exporter/rabbitmq_queue_messages_ready"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Service",
        "namespace": "default",
        "name": "rabbitmq-exporter",
        "apiVersion": "/v1"
      },
      "metricName": "rabbitmq_queue_messages_ready",
      "timestamp": "2020-02-17T13:50:20Z",
      "value": "14",
      "selector": null
    }
  ]
}
HPA file
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: test-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: test
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Object
    object:
      metric:
        name: "rabbitmq_queue_messages_ready"
        selector:
          matchLabels:
            "queue": "test-1"
      describedObject:
        apiVersion: "custom.metrics.k8s.io/v1beta1"
        kind: Service
        name: rabbitmq-exporter
      target:
        type: Value
        value: 4
Error message
Name:                           test-hpa
Namespace:                      default
Labels:                         <none>
Annotations:                    kubectl.kubernetes.io/last-applied-configuration:
                                  {"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"test-hpa","namespace":"defa...
CreationTimestamp:              Mon, 17 Feb 2020 21:38:08 +0800
Reference:                      Deployment/test
Metrics:                        ( current / target )
  "rabbitmq_queue_messages_ready" on Service/rabbitmq-exporter (target value):  <unknown> / 4
Min replicas:                   1
Max replicas:                   5
Deployment pods:                1 current / 0 desired
Conditions:
  Type           Status  Reason                 Message
  ----           ------  ------                 -------
  AbleToScale    True    SucceededGetScale      the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetObjectMetric  the HPA was unable to compute the replica count: unable to get metric rabbitmq_queue_messages_ready: Service on default rabbitmq-exporter/object metrics are not yet supported
Events:
  Type     Reason                        Age                   From                       Message
  ----     ------                        ----                  ----                       -------
  Warning  FailedComputeMetricsReplicas  97s (x12 over 4m22s)  horizontal-pod-autoscaler  Invalid metrics (1 invalid out of 1), last error was: failed to get object metric value: unable to get metric rabbitmq_queue_messages_ready: Service on default rabbitmq-exporter/object metrics are not yet supported
  Warning  FailedGetObjectMetric         82s (x13 over 4m22s)  horizontal-pod-autoscaler  unable to get metric rabbitmq_queue_messages_ready: Service on default rabbitmq-exporter/object metrics are not yet supported

CustomResource Spec value returning null

Hi, I have created the following CustomResourceDefinition - crd.yaml:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: test.demo.k8s.com
  namespace: testns
spec:
  group: demo.k8s.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: testpod
    singular: testpod
    kind: testpod
The corresponding resource is as below - cr.yaml
kind: testpod
metadata:
  name: testpodcr
  namespace: testns
spec:
  containers:
  - name: testname
    image: test/img:v5.16
    env:
    - name: TESTING_ON
      valueFrom:
        configMapKeyRef:
          name: kubernetes-config
          key: type
    volumeMounts:
    - name: testvol
      mountPath: "/test/vol"
      readOnly: true
When I use a client-go program to fetch the spec value of the CR object 'testpodcr', the value comes back as null:
func (c *TestConfigclient) AddNewPodForCR(obj *TestPodConfig) *v1.Pod {
    log.Println("logging obj \n", obj.Name) // prints the name as testpodcr
    log.Println("Spec value: \n", obj.Spec) // prints null
    dep := &v1.Pod{
        ObjectMeta: meta_v1.ObjectMeta{
            //Labels: labels,
            GenerateName: "test-pod-",
        },
        Spec: obj.Spec,
    }
    return dep
}
Can anyone please help me figure out why the spec value comes back as null?
There is an error with your crd.yaml file. I am getting the following error:
$ kubectl apply -f crd.yaml
The CustomResourceDefinition "test.demo.k8s.com" is invalid: metadata.name: Invalid value: "test.demo.k8s.com": must be spec.names.plural+"."+spec.group
In your configuration, the name test.demo.k8s.com does not match the plural testpod found in spec.names (the name must be spec.names.plural + "." + spec.group).
I modified your crd.yaml and now it works:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testpods.demo.k8s.com
  namespace: testns
spec:
  group: demo.k8s.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: testpods
    singular: testpod
    kind: Testpod
$ kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/testpods.demo.k8s.com created
After that, your cr.yaml also had to be fixed:
apiVersion: "demo.k8s.com/v1"
kind: Testpod
metadata:
name: testpodcr
namespace: testns
spec:
containers:
- name: testname
image: test/img:v5.16
env:
- name: TESTING_ON
valueFrom:
configMapKeyRef:
name: kubernetes-config
key: type
volumeMounts:
- name: testvol
mountPath: "/test/vol"
readOnly: true
After that I created the testns namespace and finally created the Testpod object successfully:
$ kubectl create namespace testns
namespace/testns created
$ kubectl apply -f cr.yaml
testpod.demo.k8s.com/testpodcr created
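One more thing worth checking if the spec still comes back null after the rename: the Go type registered with client-go needs JSON tags on its fields, otherwise decoding silently drops the data. A hypothetical sketch of TestPodConfig (the spec is assumed to mirror a core PodSpec, as the Spec: obj.Spec assignment in your code suggests):
// Hypothetical definition of the custom resource type; the json tags are
// what let client-go populate Spec when decoding the API response.
type TestPodConfig struct {
    meta_v1.TypeMeta   `json:",inline"`
    meta_v1.ObjectMeta `json:"metadata,omitempty"`
    Spec v1.PodSpec `json:"spec,omitempty"`
}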

kubernetes: Unable to change deployment strategy

I have a deployment up and running.
Here is its export:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "2"
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"1"},"creationTimestamp":"2019-04-14T15:32:12Z","generation":1,"name":"frontend","namespace":"default","resourceVersion":"3138","selfLink":"/apis/extensions/v1beta1/namespaces/default/deployments/frontend","uid":"796046e1-5eca-11e9-a16c-0242ac110033"},"spec":{"minReadySeconds":20,"progressDeadlineSeconds":600,"replicas":4,"revisionHistoryLimit":10,"selector":{"matchLabels":{"name":"webapp"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"creationTimestamp":null,"labels":{"name":"webapp"}},"spec":{"containers":[{"image":"kodekloud/webapp-color:v2","imagePullPolicy":"IfNotPresent","name":"simple-webapp","ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}},"status":{"availableReplicas":4,"conditions":[{"lastTransitionTime":"2019-04-14T15:33:00Z","lastUpdateTime":"2019-04-14T15:33:00Z","message":"Deployment has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"},{"lastTransitionTime":"2019-04-14T15:32:12Z","lastUpdateTime":"2019-04-14T15:33:00Z","message":"ReplicaSet \"frontend-7965b86db7\" has successfully progressed.","reason":"NewReplicaSetAvailable","status":"True","type":"Progressing"}],"observedGeneration":1,"readyReplicas":4,"replicas":4,"updatedReplicas":4}}
    creationTimestamp: 2019-04-14T15:32:12Z
    generation: 2
    labels:
      name: webapp
    name: frontend
    namespace: default
    resourceVersion: "3653"
    selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/frontend
    uid: 796046e1-5eca-11e9-a16c-0242ac110033
  spec:
    minReadySeconds: 20
    progressDeadlineSeconds: 600
    replicas: 4
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        name: webapp
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          name: webapp
      spec:
        containers:
        - image: kodekloud/webapp-color:v2
          imagePullPolicy: IfNotPresent
          name: simple-webapp
          ports:
          - containerPort: 8080
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 4
    conditions:
    - lastTransitionTime: 2019-04-14T15:33:00Z
      lastUpdateTime: 2019-04-14T15:33:00Z
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: 2019-04-14T15:32:12Z
      lastUpdateTime: 2019-04-14T15:38:01Z
      message: ReplicaSet "frontend-65998dcfd8" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 2
    readyReplicas: 4
    replicas: 4
    updatedReplicas: 4
kind: List
metadata:
  resourceVersion: ""
I am editing spec.strategy.type from RollingUpdate to Recreate.
However, running kubectl apply -f frontend.yml yields the following error. Why is that?
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"deployment.kubernetes.io/revision":"1","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployme
nt\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":\"2019-04-14T15:32:12Z\",\"generation\":1,\"name\":\"frontend\",\"namespace
\":\"default\",\"resourceVersion\":\"3138\",\"selfLink\":\"/apis/extensions/v1beta1/namespaces/default/deployments/frontend\",\"uid\":\"796046e1-5eca-11e9-a16c-0242ac110033\"},\"
spec\":{\"minReadySeconds\":20,\"progressDeadlineSeconds\":600,\"replicas\":4,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"name\":\"webapp\"}},\"strategy\":{\"rol
lingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"Recreate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"name\":\"webapp\"}},\"s
pec\":{\"containers\":[{\"image\":\"kodekloud/webapp-color:v2\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"simple-webapp\",\"ports\":[{\"containerPort\":8080,\"protocol\":\"
TCP\"}],\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\
"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}},\"status\":{\"availableReplicas\":4,\"conditions\":[{\"lastTransitionTime\":
\"2019-04-14T15:33:00Z\",\"lastUpdateTime\":\"2019-04-14T15:33:00Z\",\"message\":\"Deployment has minimum availability.\",\"reason\":\"MinimumReplicasAvailable\",\"status\":\"Tru
e\",\"type\":\"Available\"},{\"lastTransitionTime\":\"2019-04-14T15:32:12Z\",\"lastUpdateTime\":\"2019-04-14T15:33:00Z\",\"message\":\"ReplicaSet \\\"frontend-7965b86db7\\\" has
successfully progressed.\",\"reason\":\"NewReplicaSetAvailable\",\"status\":\"True\",\"type\":\"Progressing\"}],\"observedGeneration\":1,\"readyReplicas\":4,\"replicas\":4,\"upda
tedReplicas\":4}}\n"},"generation":1,"resourceVersion":"3138"},"spec":{"strategy":{"$retainKeys":["rollingUpdate","type"],"type":"Recreate"}},"status":{"$setElementOrder/conditio
ns":[{"type":"Available"},{"type":"Progressing"}],"conditions":[{"lastUpdateTime":"2019-04-14T15:33:00Z","message":"ReplicaSet \"frontend-7965b86db7\" has successfully progressed
.","type":"Progressing"}],"observedGeneration":1}}
to:
Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
Name: "frontend", Namespace: "default"
Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensi
ons/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":\"2019-04-14T15:32:12Z\",\"generation\":1,
\"name\":\"frontend\",\"namespace\":\"default\",\"resourceVersion\":\"3138\",\"selfLink\":\"/apis/extensions/v1beta1/namespaces/default/deployments/frontend\",\"uid\":\"796046e1-
5eca-11e9-a16c-0242ac110033\"},\"spec\":{\"minReadySeconds\":20,\"progressDeadlineSeconds\":600,\"replicas\":4,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"name\"
:\"webapp\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null
,\"labels\":{\"name\":\"webapp\"}},\"spec\":{\"containers\":[{\"image\":\"kodekloud/webapp-color:v2\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"simple-webapp\",\"ports\":[{
\"containerPort\":8080,\"protocol\":\"TCP\"}],\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"Cluster
First\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}},\"status\":{\"availableReplicas\":4,\"
conditions\":[{\"lastTransitionTime\":\"2019-04-14T15:33:00Z\",\"lastUpdateTime\":\"2019-04-14T15:33:00Z\",\"message\":\"Deployment has minimum availability.\",\"reason\":\"Minim
umReplicasAvailable\",\"status\":\"True\",\"type\":\"Available\"},{\"lastTransitionTime\":\"2019-04-14T15:32:12Z\",\"lastUpdateTime\":\"2019-04-14T15:33:00Z\",\"message\":\"Repli
caSet \\\"frontend-7965b86db7\\\" has successfully progressed.\",\"reason\":\"NewReplicaSetAvailable\",\"status\":\"True\",\"type\":\"Progressing\"}],\"observedGeneration\":1,\"readyReplicas\":4,\"replicas\":4,\"updatedReplicas\":4}}\n" "deployment.kubernetes.io/revision":"2"] "name":"frontend" "namespace":"default" "selfLink":"/apis/extensions/v1beta1/namespaces/default/deployments/frontend" "resourceVersion":"3653" "labels":map["name":"webapp"] "uid":"796046e1-5eca-11e9-a16c-0242ac110033" "generation":'\x02' "creationTimestamp":"2019-04-14T15:32:12Z"] "spec":map["selector":map["matchLabels":map["name":"webapp"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"webapp"]] "spec":map["dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"simple-webapp" "image":"kodekloud/webapp-color:v2" "ports":[map["containerPort":'\u1f90' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e']] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"]] "minReadySeconds":'\x14' "revisionHistoryLimit":'\n' "progressDeadlineSeconds":'\u0258' "replicas":'\x04'] "status":map["observedGeneration":'\x02' "replicas":'\x04' "updatedReplicas":'\x04' "readyReplicas":'\x04' "availableReplicas":'\x04' "conditions":[map["type":"Available" "status":"True" "lastUpdateTime":"2019-04-14T15:33:00Z" "lastTransitionTime":"2019-04-14T15:33:00Z" "reason":"MinimumReplicasAvailable" "message":"Deployment has minimum availability."] map["lastUpdateTime":"2019-04-14T15:38:01Z" "lastTransitionTime":"2019-04-14T15:32:12Z" "reason":"NewReplicaSetAvailable" "message":"ReplicaSet \"frontend-65998dcfd8\" has successfully progressed." "type":"Progressing" "status":"True"]]]]}
for: "frontend.yml": Operation cannot be fulfilled on deployments.extensions "frontend": the object has been modified; please apply your changes to the latest version and try again
This is a known issue with the Kubernetes Terraform provider. It has been present since at least version 0.11.7.
The issue has been fixed in the latest version, as a result of this merge request.
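Independently of the provider bug, the same Conflict appears with plain kubectl whenever the applied file still carries the stale resourceVersion captured by the export. One way to sidestep that (and the rule that rollingUpdate parameters may not be set while the strategy type is Recreate) is to patch the live object instead of re-applying the export; a sketch:
kubectl patch deployment frontend --type json -p '[
  {"op": "remove",  "path": "/spec/strategy/rollingUpdate"},
  {"op": "replace", "path": "/spec/strategy/type", "value": "Recreate"}
]'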