ArgoCD Helm chart - Repository not accessible

I'm trying to add a helm chart (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) to ArgoCD.
When I do this, I get the following error:
Unable to save changes: application spec is invalid: InvalidSpecError: repository not accessible: repository not found
Can you guys help me out please? I think I did everything right but it seems something's wrong...
Here's the Application YAML.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prom-oper
  namespace: argocd
spec:
  project: prom-oper
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: "13.2.1"
    path: prometheus-community/kube-prometheus-stack
    helm:
      # Release name override (defaults to application name)
      releaseName: prom-oper
      version: v3
      values: |
        ... redacted
    directory:
      recurse: false
  destination:
    server: https://kubernetes.default.svc
    namespace: prom-oper
  syncPolicy:
    automated: # automated sync by default retries failed attempts 5 times with following delays between attempts ( 5s, 10s, 20s, 40s, 80s ); retry controlled using `retry` field.
      prune: false # Specifies if resources should be pruned during auto-syncing ( false by default ).
      selfHeal: false # Specifies if partial app sync should be executed when resources are changed only in target Kubernetes cluster and no git change detected ( false by default ).
      allowEmpty: false # Allows deleting all application resources during automatic syncing ( false by default ).
    syncOptions: # Sync options which modifies sync behavior
    - CreateNamespace=true # Namespace Auto-Creation ensures that namespace specified as the application destination exists in the destination cluster.
    # The retry feature is available since v1.7
    retry:
      limit: 5 # number of failed sync attempt retries; unlimited number of attempts if less than 0
      backoff:
        duration: 5s # the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
        factor: 2 # a factor to multiply the base duration after each failed retry
        maxDuration: 3m # the maximum amount of time allowed for the backoff strategy
And here is the configmap where I added the helm repo:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
  namespace: argocd
data:
  admin.enabled: "false"
  repositories: |
    - type: helm
      url: https://prometheus-community.github.io/helm-charts
      name: prometheus-community

You are getting this error because of the way the Application's source is defined: Argo CD treats the repository as a Git repository instead of a Helm repository.
Define the source object with a "chart" property instead of "path", like so:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prom-oper
  namespace: argocd
spec:
  project: prom-oper
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: "13.2.1"
    chart: kube-prometheus-stack
You can see it defined on line 128 in Argo's application-crd.yaml
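Applied to your manifest, the source block would then look something like this (a sketch that carries over the helm settings from your original spec; path is replaced by chart, and the directory block is dropped because it only applies to Git sources):
source:
  repoURL: https://prometheus-community.github.io/helm-charts
  targetRevision: "13.2.1"
  chart: kube-prometheus-stack   # replaces the path field
  helm:
    releaseName: prom-oper
    values: |
      ... redacted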

Related

Argo Workflow + Spark operator + App logs not generated

I'm in the very early stages of exploring Argo with the Spark operator to run Spark samples on the minikube setup on my EC2 instance.
Below are the resource details; I'm not sure why I'm not able to see the Spark app logs.
WORKFLOW.YAML
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: spark-argo-groupby
spec:
  entrypoint: sparkling-operator
  templates:
  - name: spark-groupby
    resource:
      action: create
      manifest: |
        apiVersion: "sparkoperator.k8s.io/v1beta2"
        kind: SparkApplication
        metadata:
          generateName: spark-argo-groupby
        spec:
          type: Scala
          mode: cluster
          image: gcr.io/spark-operator/spark:v3.0.3
          imagePullPolicy: Always
          mainClass: org.apache.spark.examples.GroupByTest
          mainApplicationFile: local:///opt/spark/spark-examples_2.12-3.1.1-hadoop-2.7.jar
          sparkVersion: "3.0.3"
          driver:
            cores: 1
            coreLimit: "1200m"
            memory: "512m"
            labels:
              version: 3.0.0
          executor:
            cores: 1
            instances: 1
            memory: "512m"
            labels:
              version: 3.0.0
  - name: sparkling-operator
    dag:
      tasks:
      - name: SparkGroupBY
        template: spark-groupby
ROLES
# Role for spark-on-k8s-operator to create resources on cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spark-cluster-cr
  labels:
    rbac.authorization.kubeflow.org/aggregate-to-kubeflow-edit: "true"
rules:
- apiGroups:
  - sparkoperator.k8s.io
  resources:
  - sparkapplications
  verbs:
  - '*'
---
# Allow airflow-worker service account access for spark-on-k8s
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argo-spark-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: spark-cluster-cr
subjects:
- kind: ServiceAccount
  name: default
  namespace: argo
ARGO UI
To dig deeper I tried all the steps listed in https://dev.to/crenshaw_dev/how-to-debug-an-argo-workflow-31ng, yet I could not get the app logs.
Basically, when I run these examples I expect the Spark app logs to be printed - in this case the output of the following Scala example:
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/GroupByTest.scala
Interestingly, when I list pods I expect to see driver pods and executor pods, but I always see only one pod, and it has the logs shown in the attached image. Please help me understand why the logs are not generated and how I can get them.
RAW LOGS
$ kubectl logs spark-pi-dag-739246604 -n argo
time="2021-12-10T13:28:09.560Z" level=info msg="Starting Workflow Executor" version="{v3.0.3 2021-05-11T21:14:20Z 02071057c082cf295ab8da68f1b2027ff8762b5a v3.0.3 clean go1.15.7 gc linux/amd64}"
time="2021-12-10T13:28:09.581Z" level=info msg="Creating a docker executor"
time="2021-12-10T13:28:09.581Z" level=info msg="Executor (version: v3.0.3, build_date: 2021-05-11T21:14:20Z) initialized (pod: argo/spark-pi-dag-739246604) with template:\n{\"name\":\"sparkpi\",\"inputs\":{},\"outputs\":{},\"metadata\":{},\"resource\":{\"action\":\"create\",\"manifest\":\"apiVersion: \\\"sparkoperator.k8s.io/v1beta2\\\"\\nkind: SparkApplication\\nmetadata:\\n generateName: spark-pi-dag\\nspec:\\n type: Scala\\n mode: cluster\\n image: gjeevanm/spark:v3.1.1\\n imagePullPolicy: Always\\n mainClass: org.apache.spark.examples.SparkPi\\n mainApplicationFile: local:///opt/spark/spark-examples_2.12-3.1.1-hadoop-2.7.jar\\n sparkVersion: 3.1.1\\n driver:\\n cores: 1\\n coreLimit: \\\"1200m\\\"\\n memory: \\\"512m\\\"\\n labels:\\n version: 3.0.0\\n executor:\\n cores: 1\\n instances: 1\\n memory: \\\"512m\\\"\\n labels:\\n version: 3.0.0\\n\"},\"archiveLocation\":{\"archiveLogs\":true,\"s3\":{\"endpoint\":\"minio:9000\",\"bucket\":\"my-bucket\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"my-minio-cred\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"my-minio-cred\",\"key\":\"secretkey\"},\"key\":\"spark-pi-dag/spark-pi-dag-739246604\"}}}"
time="2021-12-10T13:28:09.581Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
time="2021-12-10T13:28:09.581Z" level=info msg="kubectl create -f /tmp/manifest.yaml -o json"
time="2021-12-10T13:28:10.348Z" level=info msg=argo/SparkApplication.sparkoperator.k8s.io/spark-pi-daghhl6s
time="2021-12-10T13:28:10.348Z" level=info msg="Starting SIGUSR2 signal monitor"
time="2021-12-10T13:28:10.348Z" level=info msg="No output parameters"
As Michael mentioned in his answer, Argo Workflows does not know how other CRDs (such as the SparkApplication you used) work and thus cannot pull the logs from the pods created by that particular CRD.
However, you can add the label workflows.argoproj.io/workflow: {{workflow.name}} to the pods generated by the SparkApplication to let Argo Workflows know about them, and then use argo logs -c <container-name> to pull the logs from those pods.
You can find an example here (it uses a Kubeflow CRD, but in your case you'll want to add the labels to the driver and executor in your SparkApplication resource template): https://github.com/argoproj/argo-workflows/blob/master/examples/k8s-resource-log-selector.yaml
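For example, a sketch of the relevant part of your spark-groupby resource template with the label added to both driver and executor (adapted from your manifest; the {{workflow.name}} substitution comes from the linked example):
- name: spark-groupby
  resource:
    action: create
    manifest: |
      apiVersion: "sparkoperator.k8s.io/v1beta2"
      kind: SparkApplication
      metadata:
        generateName: spark-argo-groupby
      spec:
        # ... type, mode, image, mainClass, etc. as before
        driver:
          labels:
            version: 3.0.0
            workflows.argoproj.io/workflow: "{{workflow.name}}"
        executor:
          labels:
            version: 3.0.0
            workflows.argoproj.io/workflow: "{{workflow.name}}"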
Argo Workflows' resource templates (like your spark-groupby template) are simplistic. The Workflow controller is running kubectl create, and that's where its involvement in the SparkApplication ends.
The logs you're seeing from the Argo Workflow pod describe the kubectl create process. Your resource is written to a temporary yaml file and then applied to the cluster.
time="2021-12-10T13:28:09.581Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
time="2021-12-10T13:28:09.581Z" level=info msg="kubectl create -f /tmp/manifest.yaml -o json"
time="2021-12-10T13:28:10.348Z" level=info msg=argo/SparkApplication.sparkoperator.k8s.io/spark-pi-daghhl6s
Old answer:
To view the logs generated by your SparkApplication, you'll need to follow the Spark docs. I'm not familiar, but I'm guessing the application gets run in a Pod somewhere. If you can find that pod, you should be able to view the logs with kubectl logs.
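For instance, a sketch of finding the driver pod and tailing its logs, assuming the Spark operator's default pod labels and the argo namespace used above:
# Driver (and executor) pods are usually labelled spark-role=driver / spark-role=executor
kubectl get pods -n argo -l spark-role=driver
# Then view the driver's logs (pod name is illustrative)
kubectl logs -n argo <spark-argo-groupby-driver-pod>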
It would be really cool if Argo Workflows could pull Spark logs into its UI. But building a generic solution would probably be prohibitively difficult.
Update:
Check Yuan's answer. There's a way to pull the Spark logs into the Workflows CLI!

stable/prometheus-operator - adding persistent grafana dashboards

I am trying to add a new dashboard to the below helm chart:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
The documentation is not very clear.
I have added a config map to the namespace like the below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-grafana-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  etcd-dashboard.json: |-
    {JSON}
According to the documentation, this should just be picked up and added, but it's not.
https://github.com/helm/charts/tree/master/stable/grafana#configuration
The sidecar option in my values.yaml looks like -
grafana:
  enabled: true
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: password
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: false
    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enable.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []
    ## Path for grafana ingress
    path: /
    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #   - grafana.example.com
  #dashboardsConfigMaps:
  #sidecarProvider: sample-grafana-dashboard
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
I have also tried adding this to the values.yaml:
dashboardsConfigMaps:
  - sample-grafana-dashboard
which doesn't work.
Does anyone have any experience with adding your own dashboards to this helm chart? I really am at my wits' end.
To sum up:
For the sidecar you need only one option set to true - grafana.sidecar.dashboards.enabled.
Install prometheus-operator with the sidecar enabled:
helm install stable/prometheus-operator --name prometheus-operator --set grafana.sidecar.dashboards.enabled=true --namespace monitoring
Add a new dashboard, for example MongoDB_Overview:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Now the tricky part: you have to set a correct label on your configmap. By default grafana.sidecar.dashboards.label is set to grafana_dashboard, so:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
Now you should find your newly added dashboard in Grafana; moreover, every configmap with the label grafana_dashboard will be processed as a dashboard.
The dashboard is persisted and safe, stored in a configmap.
UPDATE:
January 2021:
The Prometheus operator chart was migrated from the stable repo to Prometheus Community Kubernetes Helm Charts, and helm v3 was released, so:
Create namespace:
kubectl create namespace monitoring
Install prometheus-operator from helm chart:
helm install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring
Add the MongoDB dashboard as an example:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Lastly, label the dashboard:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
You have to:
define your dashboard JSON as a configmap (as you have done, but see below for an easier way)
define a provider, to tell Grafana where to load the dashboard from
map the two together
from values.yml:
dashboardsConfigMaps:
  application: application
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: application
      orgId: 1
      folder: "Application Metrics"
      type: file
      disableDeletion: true
      editable: false
      options:
        path: /var/lib/grafana/dashboards/application
Now the application config map should create files in this directory in the pod, and, as has been discussed, the sidecar should load them into an "Application Metrics" folder, seen in the GUI.
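For illustration, the referenced application ConfigMap could look roughly like this (a sketch; the configmap name must match the dashboardsConfigMaps entry, and the file name and JSON are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: application        # matches dashboardsConfigMaps: application: application
  namespace: monitoring    # assumed to be the release namespace
data:
  api01.json: |-
    {JSON}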
That probably answers your issue as written, but as long as your dashboards aren't too big, using kustomize means you can keep the JSON on disk without needing to embed it in another file, like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# May choose to enable this if need to refer to configmaps outside of kustomize
generatorOptions:
  disableNameSuffixHash: true
namespace: monitoring
configMapGenerator:
- name: application
  files:
  - grafana-dashboards/application/api01.json
  - grafana-dashboards/application/api02.json
For completeness' sake, you can also load dashboards from a URL or from the Grafana site, although I don't believe mixing methods in the same folder works.
So:
dashboards:
  kafka:
    kafka01:
      url: https://raw.githubusercontent.com/kudobuilder/operators/master/repository/kafka/docs/latest/resources/grafana-dashboard.json
      folder: "KUDO Kafka"
      datasource: Prometheus
  nginx:
    nginx1:
      gnetId: 9614
      datasource: Prometheus
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: kafka
      orgId: 1
      folder: "KUDO Kafka"
      type: file
      disableDeletion: true
      editable: false
      options:
        path: /var/lib/grafana/dashboards/kafka
    - name: nginx
      orgId: 1
      folder: Nginx
      type: file
      disableDeletion: true
      editable: false
      options:
        path: /var/lib/grafana/dashboards/nginx
This creates two new folders containing a dashboard each, loaded from external sources; or you could point this at your own git repo and de-couple your dashboard commits from your deployment.
If you do not change the settings in the helm chart, the default user/password for Grafana is:
user: admin
password: prom-operator
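If you later override or auto-generate the password, you can read it back from the Grafana secret; a sketch, assuming a release named prometheus-operator in the monitoring namespace:
# The Grafana chart stores the admin password in a <release>-grafana secret
kubectl -n monitoring get secret prometheus-operator-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode; echo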

How to specify a GKE node pool configuration in a YAML file instead of using gcloud container node-pools create?

It seems that the only way to create node pools on Google Kubernetes Engine is with the command gcloud container node-pools create. I would like to have all the configuration in a YAML file instead. What I tried is the following:
apiVersion: v1
kind: NodeConfig
metadata:
  annotations:
    cloud.google.com/gke-nodepool: ares-pool
spec:
  diskSizeGb: 30
  diskType: pd-standard
  imageType: COS
  machineType: n1-standard-1
  metadata:
    disable-legacy-endpoints: 'true'
  oauthScopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
  - https://www.googleapis.com/auth/monitoring
  - https://www.googleapis.com/auth/service.management.readonly
  - https://www.googleapis.com/auth/servicecontrol
  - https://www.googleapis.com/auth/trace.append
  serviceAccount: default
But kubectl apply fails with:
error: unable to recognize "ares-pool.yaml": no matches for kind "NodeConfig" in version "v1"
I am surprised that Google yields almost no relevant results for all my searches. The only documentation that I found was the one on Google Cloud, which is quite incomplete in my opinion.
Node pools are not Kubernetes objects, they are part of the Google Cloud API. Therefore Kubernetes does not know about them, and kubectl apply will not work.
What you actually need is a solution called "infrastructure as code" - code that tells GCP what kind of node pool you want.
If you don't strictly need YAML, you can check out Terraform that handles this use case. See: https://terraform.io/docs/providers/google/r/container_node_pool.html
You can also look into Google Deployment Manager or Ansible (it has GCP module, and uses YAML syntax), they also address your need.
I don't know if it accurately answers your needs, but if you want to do IaC in general with Kubernetes, you can use Crossplane CRDs. If you already have a running cluster, you just have to install their helm chart, and then you can provision a cluster this way:
apiVersion: container.gcp.crossplane.io/v1beta1
kind: GKECluster
metadata:
  name: gke-crossplane-cluster
spec:
  forProvider:
    initialClusterVersion: "1.19"
    network: "projects/development-labs/global/networks/opsnet"
    subnetwork: "projects/development-labs/regions/us-central1/subnetworks/opsnet"
    ipAllocationPolicy:
      useIpAliases: true
    defaultMaxPodsConstraint:
      maxPodsPerNode: 110
And then you can define an associated node pool as follows:
apiVersion: container.gcp.crossplane.io/v1alpha1
kind: NodePool
metadata:
  name: gke-crossplane-np
spec:
  forProvider:
    autoscaling:
      autoprovisioned: false
      enabled: true
      maxNodeCount: 2
      minNodeCount: 1
    clusterRef:
      name: gke-crossplane-cluster
    config:
      diskSizeGb: 100
      # diskType: pd-ssd
      imageType: cos_containerd
      labels:
        test-label: crossplane-created
      machineType: n1-standard-4
      oauthScopes:
      - "https://www.googleapis.com/auth/devstorage.read_only"
      - "https://www.googleapis.com/auth/logging.write"
      - "https://www.googleapis.com/auth/monitoring"
      - "https://www.googleapis.com/auth/servicecontrol"
      - "https://www.googleapis.com/auth/service.management.readonly"
      - "https://www.googleapis.com/auth/trace.append"
    initialNodeCount: 2
    locations:
    - us-central1-a
    management:
      autoRepair: true
      autoUpgrade: true
If you want, you can find a full example of GKE provisioning with Crossplane here.

Kubernetes HPA fails to detect a successfully published custom metric from Stackdriver

I'm trying to scale a Kubernetes Deployment using a HorizontalPodAutoscaler, which listens to a custom metrics through Stackdriver.
I'm having a GKE cluster, with a Stackdriver adapter enabled.
I'm able to publish the custom metric type to Stackdriver, and following is the way it's being displayed in Stackdriver's Metric Explorer.
This is how I have defined my HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: custom.googleapis.com|worker_pod_metrics|baz
      targetValue: 400
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
After successfully creating example-hpa, executing kubectl get hpa example-hpa always shows TARGETS as <unknown> and never detects the value from the custom metric.
NAME          REFERENCE                       TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
example-hpa   Deployment/test-app-group-1-1   <unknown>/400   1         10        1          18m
I'm using a Java client which runs locally to publish my custom metrics.
I have given the appropriate resource labels as mentioned here (hard coded - so that it can run without a problem in local environment). I have followed this document to create the Java client.
private static MonitoredResource prepareMonitoredResourceDescriptor() {
    Map<String, String> resourceLabels = new HashMap<>();
    resourceLabels.put("project_id", "<<<my-project-id>>>");
    resourceLabels.put("pod_id", "<my pod UID>");
    resourceLabels.put("container_name", "");
    resourceLabels.put("zone", "asia-southeast1-b");
    resourceLabels.put("cluster_name", "my-cluster");
    resourceLabels.put("namespace_id", "mynamespace");
    resourceLabels.put("instance_id", "");
    return MonitoredResource.newBuilder()
            .setType("gke_container")
            .putAllLabels(resourceLabels)
            .build();
}
What am I doing wrong in the above-mentioned steps please? Thank you in advance for any answers provided!
EDIT [RESOLVED]:
I think I had some misconfigurations: kubectl describe hpa [NAME] --v=9 showed me a 403 status code, and I was also using type: External instead of type: Pods (thanks MWZ for your answer, pointing out this mistake).
I managed to fix it by creating a new project, a new service account, and a new GKE cluster (basically everything from the beginning again). Then I changed my yaml file as follows, exactly as this document explains.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: test-app-group-1-1
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: test-app-group-1-1
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods          # Earlier this was type: External
    pods:               # Earlier this was external:
      metricName: baz   # metricName: custom.googleapis.com|worker_pod_metrics|baz
      targetAverageValue: 20
I'm now exporting as custom.googleapis.com/baz, and NOT as custom.googleapis.com/worker_pod_metrics/baz. Also, now I'm explicitly specifying the namespace for my HPA in the yaml.
Since you can see your custom metric in the Stackdriver GUI, I'm guessing the metrics are correctly exported. Based on Autoscaling Deployments with Custom Metrics, I believe you wrongly defined the metric to be used by the HPA to scale the deployment.
Please try using this YAML:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: baz
      targetAverageValue: 400
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
Please keep in mind that:
The HPA uses the metrics to compute an average and compare it to the target average value. In the application-to-Stackdriver export example, a Deployment contains Pods that export the metric. The following manifest file describes a HorizontalPodAutoscaler object that scales a Deployment based on the target average value for the metric.
Troubleshooting steps described on the page above can also be useful.
Side note:
Since the above HPA uses the beta API autoscaling/v2beta1, I got an error when running kubectl describe hpa [DEPLOYMENT_NAME]. I ran kubectl describe hpa [DEPLOYMENT_NAME] --v=9 and got the response in JSON.
It is good practice to put some unique labels on your metrics to target them. Right now, based on the metrics labelled in your Java client, only pod_id looks unique, which can't be used due to its stateless nature.
So, I would suggest you try introducing a deployment-wide/metrics-wide unique identifier:
resourceLabels.put("<identifier>", "<could-be-deployment-name>");
After this, you can try modifying your HPA with something similar to following:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: custom.googleapis.com|worker_pod_metrics|baz
      metricSelector:
        matchLabels:
          # define labels to target
          metric.labels.identifier: <deployment-name>
      # scale +1 whenever it crosses multiples of mentioned value
      targetAverageValue: "400"
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
Apart from this, the setup has no issues and should work smoothly.
Helper command to see what metrics are exposed to the HPA:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|worker_pod_metrics|baz" | jq

Kubectl error: the object has been modified; please apply your changes to the latest version and try again

I am getting the below error while trying to apply a patch:
core#dgoutam22-1-coreos-5760 ~ $ kubectl apply -f ads-central-configuration.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"data":{"default":"{\"dedicated_redis_cluster\": {\"nodes\": [{\"host\": \"192.168.1.94\", \"port\": 6379}]}}"},"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"data\":{\"default\":\"{\\\"dedicated_redis_cluster\\\": {\\\"nodes\\\": [{\\\"host\\\": \\\"192.168.1.94\\\", \\\"port\\\": 6379}]}}\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"creationTimestamp\":\"2018-06-27T07:19:13Z\",\"labels\":{\"acp-app\":\"acp-discovery-service\",\"version\":\"1\"},\"name\":\"ads-central-configuration\",\"namespace\":\"acp-system\",\"resourceVersion\":\"1109832\",\"selfLink\":\"/api/v1/namespaces/acp-system/configmaps/ads-central-configuration\",\"uid\":\"64901676-79da-11e8-bd65-fa163eaa7a28\"}}\n"},"creationTimestamp":"2018-06-27T07:19:13Z","resourceVersion":"1109832","uid":"64901676-79da-11e8-bd65-fa163eaa7a28"}}
to:
&{0xc4200bb380 0xc420356230 acp-system ads-central-configuration ads-central-configuration.yaml 0xc42000c970 4434 false}
**for: "ads-central-configuration.yaml": Operation cannot be fulfilled on configmaps "ads-central-configuration": the object has been modified; please apply your changes to the latest version and try again**
core#dgoutam22-1-coreos-5760 ~ $
It seems likely that your yaml configuration was copy-pasted from what was generated, and thus contains fields such as creationTimestamp (and resourceVersion, selfLink, and uid), which don't belong in a declarative configuration file.
Go through your yaml and clean it up. Remove things that are instance-specific. Your final yaml should be simple enough that you can easily understand it.
Remove these lines from the file:
creationTimestamp:
resourceVersion:
selfLink:
uid:
Then try to apply again.
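For example, a cleaned-up version of the ConfigMap from the error above would keep only the declared fields (reconstructed from the patch output, so treat it as a sketch):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ads-central-configuration
  namespace: acp-system
  labels:
    acp-app: acp-discovery-service
    version: "1"
data:
  default: |
    {"dedicated_redis_cluster": {"nodes": [{"host": "192.168.1.94", "port": 6379}]}}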
Make sure you put the latest resourceVersion in your update; you can get it by running:
kubectl get deployment <DEPLOYMENT-NAME> -o yaml | grep resourceVersion
You may have edited the same exported deployment file.
1 - Re-export it with:
kubectl get deployment <DEPLOYMENT-NAME> -o yaml > deployment-file.yaml
2 - Make the needed modifications in "deployment-file.yaml"
3 - Apply the changes with:
kubectl apply -f deployment-file.yaml
OR:
You may want to edit the deployment directly. Use:
kubectl edit deployment <DEPLOYMENT-NAME> -o yaml
Change the default editor if you aren't familiar with the vi editor: export EDITOR=nano
I am able to reproduce the issue in my test environment. Steps to reproduce:
Create a deployment from Kubernetes Engine > Workloads > Deploy
Input your Application Name, Namespace, Labels
Select a cluster or create a new cluster
You are able to view the YAML file here, and here is the sample:
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "nginx-1"
  namespace: "default"
  labels:
    app: "nginx-1"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "nginx-1"
  template:
    metadata:
      labels:
        app: "nginx-1"
    spec:
      containers:
      - name: "nginx"
        image: "nginx:latest"
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "nginx-1-hpa"
  namespace: "default"
  labels:
    app: "nginx-1"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "nginx-1"
    apiVersion: "apps/v1"
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: "Resource"
    resource:
      name: "cpu"
      targetAverageUtilization: 80
After deployment, if you go to Kubernetes Engine > Workloads > nginx-1 (click on it):
a.) You will get the Deployment details (Overview, Details, Revision history, Events, YAML)
b.) Click on YAML and copy the content from the YAML tab
c.) Create a new YAML file, paste the content and save the file
d.) Now if you run the command $ kubectl apply -f newyamlfile.yaml, it will show you the below error:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":\"2019-09-17T21:34:39Z\",\"generation\":1,\"labels\":{\"app\":\"nginx-1\"},\"name\":\"nginx-1\",\"namespace\":\"default\",\"resourceVersion\":\"218884\",\"selfLink\":\"/apis/apps/v1/namespaces/default/deployments/nginx-1\",\"uid\":\"f41c5b6f-d992-11e9-9adc-42010a80023b\"},\"spec\":{\"progressDeadlineSeconds\":600,\"replicas\":3,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"app\":\"nginx-1\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"app\":\"nginx-1\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:latest\",\"imagePullPolicy\":\"Always\",\"name\":\"nginx\",\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}},\"status\":{\"availableReplicas\":3,\"conditions\":[{\"lastTransitionTime\":\"2019-09-17T21:34:47Z\",\"lastUpdateTime\":\"2019-09-17T21:34:47Z\",\"message\":\"Deployment has minimum availability.\",\"reason\":\"MinimumReplicasAvailable\",\"status\":\"True\",\"type\":\"Available\"},{\"lastTransitionTime\":\"2019-09-17T21:34:39Z\",\"lastUpdateTime\":\"2019-09-17T21:34:47Z\",\"message\":\"ReplicaSet \\\"nginx-1-7b4bb7fbf8\\\" has successfully progressed.\",\"reason\":\"NewReplicaSetAvailable\",\"status\":\"True\",\"type\":\"Progressing\"}],\"observedGeneration\":1,\"readyReplicas\":3,\"replicas\":3,\"updatedReplicas\":3}}\n"},"generation":1,"resourceVersion":"218884"},"spec":{"replicas":3},"status":{"availableReplicas":3,"observedGeneration":1,"readyReplicas":3,"replicas":3,"updatedReplicas":3}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-1", Namespace: "default"
Object: &{map["apiVersion":"apps/v1" "metadata":map["name":"nginx-1" "namespace":"default" "selfLink":"/apis/apps/v1/namespaces/default/deployments/nginx-1" "uid":"f41c5b6f-d992-11e9-9adc-42010a80023b" "generation":'\x02' "labels":map["app":"nginx-1"] "annotations":map["deployment.kubernetes.io/revision":"1"] "resourceVersion":"219951" "creationTimestamp":"2019-09-17T21:34:39Z"] "spec":map["replicas":'\x01' "selector":map["matchLabels":map["app":"nginx-1"]] "template":map["metadata":map["labels":map["app":"nginx-1"] "creationTimestamp":<nil>] "spec":map["containers":[map["imagePullPolicy":"Always" "name":"nginx" "image":"nginx:latest" "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":"25%" "maxSurge":"25%"]] "revisionHistoryLimit":'\n' "progressDeadlineSeconds":'\u0258'] "status":map["observedGeneration":'\x02' "replicas":'\x01' "updatedReplicas":'\x01' "readyReplicas":'\x01' "availableReplicas":'\x01' "conditions":[map["message":"Deployment has minimum availability." "type":"Available" "status":"True" "lastUpdateTime":"2019-09-17T21:34:47Z" "lastTransitionTime":"2019-09-17T21:34:47Z" "reason":"MinimumReplicasAvailable"] map["lastTransitionTime":"2019-09-17T21:34:39Z" "reason":"NewReplicaSetAvailable" "message":"ReplicaSet \"nginx-1-7b4bb7fbf8\" has successfully progressed." "type":"Progressing" "status":"True" "lastUpdateTime":"2019-09-17T21:34:47Z"]]] "kind":"Deployment"]}
for: "test.yaml": Operation cannot be fulfilled on deployments.apps "nginx-1": the object has been modified; please apply your changes to the latest version and try again
To solve the problem, you need to find the exact yaml file and then edit it as per your requirements; after that you can run $ kubectl apply -f nginx-1.yaml.
Hope this information finds you well.
This error occurs because the deployment.yaml has an entry for resourceVersion. Remove it, as it's not needed, and you will be able to apply the new configuration.