ArgoCD : x509: certificate has expired or is not yet valid - github

I am using argoCD to deploy my application. I stored the code in GitHub at: https://github.com/vamshipulluri94/argoCD-simple.git
My ArgoCD Application configuration file is as follows:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/vamshipulluri94/argoCD-simple.git
    targetRevision: HEAD
    path: dev
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
    automated:
      selfHeal: true
      prune: true
When I deploy my application, I get the following error:
rpc error: code = Unknown desc = Get "https://github.com/vamshipulluri94/argoCD-simple.git/info/refs?service=git-upload-pack":
x509: certificate has expired or is not yet valid: current time 2022-06-17T09:06:03Z is after 2021-06-12T15:18:58Z
My host's timezone and clock are correct. Is the error caused by a timezone mismatch, or by something else?
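A hedged troubleshooting sketch: the TLS check is performed by the argocd-repo-server, so it is worth confirming the clock inside that container and the certificate chain GitHub actually presents, rather than the host clock (the commands are standard; the deployment name assumes a default Argo CD install):
# Check the clock inside the repo-server container (this is the clock
# the certificate validity check uses, not the host's)
kubectl -n argocd exec deploy/argocd-repo-server -- date -u
# Inspect the validity window of the certificate github.com serves;
# an expired date here usually points at a proxy or TLS-intercepting
# appliance re-signing the connection
openssl s_client -connect github.com:443 -servername github.com </dev/null 2>/dev/null \
  | openssl x509 -noout -dates
If the dates printed by openssl match the expired certificate in the error, the repo-server is likely talking to a TLS-intercepting proxy rather than GitHub itself.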

Related

Argocd image updater giving Could not get tags from registry: denied: Failed to read tags for host 'gcr.io' Error

I am trying to use the ArgoCD Image Updater, but it is giving me the error below:
time="2022-09-13T15:40:02Z" level=debug msg="Using version constraint '^0.1' when looking for a new tag" alias= application=ms-echoserver-imageupdate-test image_name=test-build/argo-imageupdater-test image_tag=0.9 registry=gcr.io
time="2022-09-13T15:40:02Z" level=error msg="Could not get tags from registry: denied: Failed to read tags for host 'gcr.io', repository '/v2/test-build/argo-imageupdater-test/tags/list'" alias= application=ms-echoserver-imageupdate-test image_name=test-build/argo-imageupdater-test image_tag=0.9 registry=gcr.io
time="2022-09-13T15:40:02Z" level=info msg="Processing results: applications=1 images_considered=1 images_skipped=0 images_updated=0 errors=1"
My ArgoCD Image Updater config file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-image-updater-config
  labels:
    app.kubernetes.io/name: argocd-image-updater-config
    app.kubernetes.io/part-of: argocd-image-updater
data:
  log.level: debug
  registries.conf: |
    registries:
    - name: Google Container Registry
      api_url: https://gcr.io
      ping: no
      prefix: gcr.io
      credentials: pullsecret:argocd/gcr-imageupdater
      #credentials: secret:argocd/sundayy#creds
Note: the secret has owner permissions.
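For reference, a hedged sketch of how a pull secret for gcr.io is typically created from a service-account key (sa-key.json is a hypothetical filename; _json_key is the documented username for JSON-key authentication against GCR):
kubectl -n argocd create secret docker-registry gcr-imageupdater \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat sa-key.json)"
A "denied" response from gcr.io usually means the credential itself was rejected, so verifying the secret's contents and the service account's read permission on the registry is a reasonable next step.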

ArgoCD stuck in deleting resource

I’m having an issue where ArgoCD, when deleting resources, gets stuck because it tries to delete the children first and then the parents.
This works well in some cases, but there are cases where it doesn’t, for instance certificates: it deletes the CertificateRequest, but because the Certificate still exists, the CertificateRequest gets recreated.
And it just stays there deleting and recreating :/
Is there a way to specify an order, or to just tell Argo to delete it all at once?
Thanks!
Yup so... here is the whole thing:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: previews
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: previews
  source:
    repoURL: git@github.com:myrepo.git
    targetRevision: HEAD
    path: helm
  destination:
    server: https://kubernetes.default.svc
    namespace: previews
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
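One hedged sketch for the "delete it all at once" part: Argo CD documents a background variant of its deletion finalizer, which deletes the Application using Kubernetes' background propagation policy instead of orchestrating a child-first cascade (assuming an Argo CD version that supports this finalizer):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: previews
  namespace: argocd
  finalizers:
  # background propagation removes the Application right away and lets
  # Kubernetes garbage-collect the managed resources afterwards
  - resources-finalizer.argocd.argoproj.io/background
Whether this breaks the Certificate/CertificateRequest loop depends on how cert-manager handles ownership during deletion, so it is worth trying on a disposable app first.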

Failed calling webhook "mutate.runner.actions.summerwind.dev": x509: certificate signed by unknown authority

Kubernetes: v1.19.9-gke.1900
Helm actions-runner-controller: 0.12.7
I have CRDs created by the GitHub actions-runner-controller:
❯ kubectl api-resources | grep summerwind.dev
horizontalrunnerautoscalers actions.summerwind.dev/v1alpha1 true HorizontalRunnerAutoscaler
runnerdeployments actions.summerwind.dev/v1alpha1 true RunnerDeployment
runnerreplicasets actions.summerwind.dev/v1alpha1 true RunnerReplicaSet
runners actions.summerwind.dev/v1alpha1 true Runner
runnersets actions.summerwind.dev/v1alpha1 true RunnerSet
I also have a sample file with two simplified resources, a Pod and a Runner:
❯ cat test.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: main
    image: busybox
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: runner-1
spec:
  organization: my-org
  env: []
Now, when I apply both of these resources, the Pod is created fine but the Runner fails:
❯ kubectl apply -f test.yml
pod/pod-1 created
Error from server (InternalError): error when creating "test.yml": Internal error occurred: failed calling webhook "mutate.runner.actions.summerwind.dev": Post "https://actions-runner-controller-webhook.tools.svc:443/mutate-actions-summerwind-dev-v1alpha1-runner?timeout=30s": x509: certificate signed by unknown authority
As you can see, this call goes through the MutatingWebhookConfiguration, and the webhook sends a request to the controller, which prints only:
❯ kubectl -n tools logs actions-runner-controller-6cd6fbdd56-qlzrd -c manager
...
http: TLS handshake error from 10.128.0.3:59736: remote error: tls: bad certificate
QUESTION:
What is the next step for troubleshooting?
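A hedged next step: a "certificate signed by unknown authority" error from a webhook usually means the caBundle registered in the webhook configuration does not match the certificate the webhook server actually serves, so comparing the two is a reasonable starting point (the webhook configuration name below is an assumption; substitute the one kubectl reports, while the service name and namespace come from the error message above):
# List the mutating webhook configurations to find the controller's entry
kubectl get mutatingwebhookconfigurations
# Decode the CA bundle the API server uses to verify the webhook
# (replace the name with the one found above)
kubectl get mutatingwebhookconfiguration actions-runner-controller-mutating-webhook-configuration \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d \
  | openssl x509 -noout -subject -dates
# Compare against the certificate the webhook service actually presents
kubectl -n tools port-forward svc/actions-runner-controller-webhook 8443:443 &
openssl s_client -connect localhost:8443 </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates
If the two disagree, regenerating the webhook certificate (or letting cert-manager re-issue it and re-inject the caBundle) is the usual fix.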

ArgoCD Helm chart - Repository not accessible

I'm trying to add a helm chart (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) to ArgoCD.
When I do this, I get the following error:
Unable to save changes: application spec is invalid: InvalidSpecError: repository not accessible: repository not found
Can you guys help me out please? I think I did everything right but it seems something's wrong...
Here's the Application YAML.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prom-oper
  namespace: argocd
spec:
  project: prom-oper
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: "13.2.1"
    path: prometheus-community/kube-prometheus-stack
    helm:
      # Release name override (defaults to application name)
      releaseName: prom-oper
      version: v3
      values: |
        ... redacted
    directory:
      recurse: false
  destination:
    server: https://kubernetes.default.svc
    namespace: prom-oper
  syncPolicy:
    automated: # automated sync by default retries failed attempts 5 times with the following delays between attempts (5s, 10s, 20s, 40s, 80s); retry is controlled using the `retry` field
      prune: false # specifies whether resources should be pruned during auto-syncing (false by default)
      selfHeal: false # specifies whether a partial app sync should be executed when resources change only in the target Kubernetes cluster and no git change is detected (false by default)
      allowEmpty: false # allows deleting all application resources during automatic syncing (false by default)
    syncOptions: # sync options which modify sync behavior
    - CreateNamespace=true # namespace auto-creation ensures that the namespace specified as the application destination exists in the destination cluster
    # The retry feature is available since v1.7
    retry:
      limit: 5 # number of failed sync attempt retries; unlimited attempts if less than 0
      backoff:
        duration: 5s # the amount to back off; default unit is seconds, but it could also be a duration (e.g. "2m", "1h")
        factor: 2 # a factor to multiply the base duration by after each failed retry
        maxDuration: 3m # the maximum amount of time allowed for the backoff strategy
and also the ConfigMap where I added the Helm repo:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
  namespace: argocd
data:
  admin.enabled: "false"
  repositories: |
    - type: helm
      url: https://prometheus-community.github.io/helm-charts
      name: prometheus-community
The reason you are getting this error is that, with the Application defined this way, Argo treats the source as a Git repository instead of a Helm repository.
Define the source object with a "chart" property instead of "path" (and drop the Git-only fields such as the "directory" block), like so:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prom-oper
  namespace: argocd
spec:
  project: prom-oper
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: "13.2.1"
    chart: kube-prometheus-stack
You can see it defined on line 128 in Argo's application-crd.yaml
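As a side check (a sketch using standard Helm commands), you can confirm the chart name and version actually exist in that Helm repository before syncing:
# Register the repo locally and list the published versions of the chart
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm search repo prometheus-community/kube-prometheus-stack --versions | head
If "13.2.1" does not appear in that list, the targetRevision would also need to be adjusted to a published chart version.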

Error: validation failed: unable to recognize "": no matches for kind "FrontendConfig" in version "networking.k8s.io/v1beta1"

I am using a frontendconfig.yaml file to enable HTTP-to-HTTPS redirection, but it gives me a chart validation error. I am facing this issue with GKE Ingress. My GKE master version is "1.17.14-gke.1600". Here is the content of my YAML file:
apiVersion: networking.k8s.io/v1beta1
kind: FrontendConfig
metadata:
  name: "abcd"
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: "301"
I am using annotations in the values.yaml file like this:
ingress:
  enabled: true
  annotations:
    networking.k8s.io/v1beta1.FrontendConfig: "abcd"
As of now, the HTTP-to-HTTPS redirect is in beta and only available for GKE 1.18.10-gke.600 or greater, as per the documentation.
Since you stated that you are using GKE 1.17.14-gke.1600, this won't be available for your cluster.
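For reference, once you are on a supported version, the documented shape differs from the one above in two ways worth noting: FrontendConfig lives in the networking.gke.io API group (not networking.k8s.io), and responseCodeName takes an enum value rather than a numeric string. A sketch based on the GKE documentation:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: abcd
spec:
  redirectToHttps:
    enabled: true
    # MOVED_PERMANENTLY_DEFAULT corresponds to a 301 redirect
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
The Ingress would then reference it with the annotation networking.gke.io/v1beta1.FrontendConfig: "abcd".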