I tested the official example from the Argo CD reference:
https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators-Git/
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/argoproj/argo-cd.git
        revision: HEAD
        directories:
          - path: applicationset/examples/git-generator-directory/cluster-addons/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: "my-project"
      source:
        repoURL: https://github.com/argoproj/argo-cd.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        syncOptions:
          - CreateNamespace=true
I applied the ApplicationSet to my cluster so that it would deploy to the local cluster, but it failed. The error is shown below.
Status:
  Conditions:
    Last Transition Time:  2022-12-23T00:53:42Z
    Message:               Error during fetching repo: `git fetch origin HEAD --tags --force --prune` failed exit status 128: fatal: unable to access 'https://github.com/argoproj/argo-cd.git/': getaddrinfo() thread failed to start
    Reason:                ApplicationGenerationFromParamsError
    Status:                True
    Type:                  ErrorOccurred
    Last Transition Time:  2022-12-23T00:53:42Z
    Message:               Error during fetching repo: `git fetch origin HEAD --tags --force --prune` failed exit status 128: fatal: unable to access 'https://github.com/argoproj/argo-cd.git/': getaddrinfo() thread failed to start
    Reason:                ErrorOccurred
    Status:                False
    Type:                  ParametersGenerated
    Last Transition Time:  2022-12-23T00:53:42Z
    Message:               Error during fetching repo: `git fetch origin HEAD --tags --force --prune` failed exit status 128: fatal: unable to access 'https://github.com/argoproj/argo-cd.git/': getaddrinfo() thread failed to start
    Reason:                ApplicationGenerationFromParamsError
    Status:                False
    Type:                  ResourcesUpToDate
Events:  <none>
Does anyone have any idea about this?
It turned out to be a bug that can be worked around by downgrading Argo CD. My version, v2.5.5, is affected; Argo CD v2.2.11 works well.
https://github.com/argoproj/argo-cd/issues/11818.
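For reference, a minimal sketch of the downgrade, assuming Argo CD was installed from the upstream plain (non-HA) manifests; pinning the install manifest to the older tag looks like this:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.2.11/manifests/install.yaml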
I'm trying to use Skaffold to deploy some services onto my local minikube cluster, but I am running into issues when it comes to pulling the images. I've specified the Dockerfile and assumed Skaffold would check my local registry, build the image if it isn't found, and then pull that built image on pod init.
It appears that Skaffold builds the image successfully, but when the pod starts up it fails with: Failed to pull image "my-app-image": rpc error: code = Unknown desc = Error response from daemon: pull access denied for my-app-image, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
I'm a little confused because I thought this would all be happening within my local registry, so I'm not sure why access is denied when the image builds successfully.
Example:
apiVersion: skaffold/v2
kind: Config
build:
  artifacts:
    - image: my-app-image
      context: './'
      sync:
        manual:
          - src: 'my-app/**/*'
            dest: '/my-app/'
      docker:
        dockerfile: Dockerfile
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:            # required by apps/v1; assumed to match the pod labels below
    matchLabels:
      deploy: example
  template:
    metadata:
      labels:
        deploy: example
    spec:
      containers:
        - name: my-app
          image: my-app-image
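One detail worth checking, though this is an assumption on my part since no deploy section appears in the skaffold.yaml above: Skaffold only rewrites image references in manifests it deploys itself, so if the Deployment is applied outside of Skaffold the pod will try to pull the bare my-app-image tag from a remote registry. A minimal sketch of wiring the manifest into Skaffold, assuming it lives at k8s/deployment.yaml:

deploy:
  kubectl:
    manifests:
      - k8s/deployment.yaml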
I've been attempting to use Tekton to deploy some AWS infrastructure via Terraform but am not having much success.
The pipeline clones a GitHub repo containing TF code; it then attempts to use the terraform-cli task to provision the AWS infrastructure. For initial testing I just want to perform the initial terraform init and provision the AWS VPC.
Expected behaviour
Clone Github Repo
Perform Terraform Init
Create the VPC using targeted TF apply
Actual Result
task terraform-init has failed: failed to create task run pod "my-infra-pipelinerun-terraform-init": Pod "my-infra-pipelinerun-terraform-init-pod" is invalid: spec.initContainers[1].name: Duplicate value: "step-init". Maybe missing or invalid Task default/terraform-cli
pod for taskrun my-infra-pipelinerun-terraform-init not available yet
Tasks Completed: 2 (Failed: 1, Cancelled 0), Skipped: 1
Steps to Reproduce the Problem
Prerequisites: Install the Tekton command-line tool and the git-clone and terraform-cli Tasks.
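The two Tasks can be installed from the Tekton Hub, for example (a sketch, assuming a tkn version with hub support; the Tasks can also be applied straight from the tektoncd/catalog):

tkn hub install task git-clone
tkn hub install task terraform-cli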
Create this pipeline in Minikube:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: my-infra-pipeline
spec:
  description: Pipeline for TF deployment
  params:
    - name: repo-url
      type: string
      description: Git repository URL
    - name: branch-name
      type: string
      description: The git branch
  workspaces:
    - name: tf-config
      description: The workspace where the tf config code will be stored
  tasks:
    - name: clone-repo
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: tf-config
      params:
        - name: url
          value: $(params.repo-url)
        - name: revision
          value: $(params.branch-name)
    - name: terraform-init
      runAfter: ["clone-repo"]
      taskRef:
        name: terraform-cli
      workspaces:
        - name: source
          workspace: tf-config
      params:
        - name: terraform-secret
          value: "tf-auth"
        - name: ARGS
          value:
            - init
    - name: build-vpc
      runAfter: ["terraform-init"]
      taskRef:
        name: terraform-cli
      workspaces:
        - name: source
          workspace: tf-config
      params:
        - name: terraform-secret
          value: "tf-auth"
        - name: ARGS
          value:
            - apply
            - "-target=aws_vpc.vpc -auto-approve"
Run the pipeline by creating a PipelineRun resource in Kubernetes.
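A PipelineRun for the pipeline above might look roughly like this (a sketch, not taken from the original report; the repository URL and branch are placeholders, and the tf-config workspace is backed by a volumeClaimTemplate here):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-infra-pipelinerun
spec:
  pipelineRef:
    name: my-infra-pipeline
  params:
    - name: repo-url
      value: https://github.com/example/terraform-config.git   # placeholder
    - name: branch-name
      value: main                                              # placeholder
  workspaces:
    - name: tf-config
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi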
Review the logs: tkn pipelinerun logs my-tf-pipeline -a
Additional Information
Pipeline version: v0.35.1
There is a known issue regarding "step-init" in some earlier versions - I suggest you upgrade to the latest version (0.36.0) and try again.
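If it helps, upgrading a manifest-based Tekton Pipelines install is roughly the following (a sketch, assuming the standard upstream release manifests):

kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.36.0/release.yaml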
I want to deploy Helm charts, which are stored in a repository in AWS ECR, to a Kubernetes cluster using ArgoCD, but I am getting a 401 Unauthorized error. I have pasted the entire error below:
Unable to create application: application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = Unknown desc = `helm chart pull <aws account id>.dkr.ecr.<region>.amazonaws.com/testrepo:1.1.0` failed exit status 1: Error: unexpected status code [manifests 1.1.0]: 401 Unauthorized
Yes, you can use ECR for storing helm charts (https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html)
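For example, pushing a chart to ECR as an OCI artifact looks roughly like this (a sketch; the account ID, region and chart archive are placeholders, and it assumes a Helm release with general OCI support, i.e. 3.8 or later):

aws ecr get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin <aws account id>.dkr.ecr.us-east-1.amazonaws.com
helm push testrepo-1.1.0.tgz oci://<aws account id>.dkr.ecr.us-east-1.amazonaws.com/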
I have managed to add the repo to ArgoCD, but the token expires so it is not a complete solution.
argocd repo add XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com --type helm --name some-helmreponame --enable-oci --username AWS --password $(aws ecr get-login-password --region us-east-1)
Using the declarative repository definition (see https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#repositories, or just override .argo-cd.configs.repositories in the Helm chart) it is actually quite easy to create a cron-job that updates the ECR credentials:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: argocd-ecr-credentials
spec:
  schedule: '0 */6 * * *' # every 6 hours, since credentials expire every 12 hours
  jobTemplate:
    metadata:
      name: argocd-ecr-credentials
    spec:
      template:
        spec:
          serviceAccountName: argocd-server
          restartPolicy: OnFailure
          containers:
            - name: update-secret
              image: alpine/k8s # Anything that contains kubectl + aws cli
              command:
                - /bin/bash
                - "-c"
                - |
                  PASSWORD=$(aws ecr get-login-password --region [your aws region] | base64 -w 0)
                  kubectl patch secret -n argocd argocd-repo-[name of your repository] --type merge -p "{\"data\": {\"password\": \"$PASSWORD\"}}"
ArgoCD repository secrets are usually named argocd-repo-*, suffixed with the key of the repository entry in the values.yaml.
This will start a pod every 6 hours that performs an ECR login and updates the secret in Kubernetes that contains the repository definition for ArgoCD.
Make sure to use the argocd-server service account (or create your own), since the container will not be able to modify the secret otherwise.
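For context, the repository definition that the CronJob patches is itself just a secret, roughly like this (a sketch; the repository key my-ecr-charts, account ID and region are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: argocd-repo-my-ecr-charts
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: my-ecr-charts
  type: helm
  enableOCI: "true"
  url: <aws account id>.dkr.ecr.<region>.amazonaws.com
  username: AWS
  password: <kept fresh by the CronJob above>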
I'm experimenting with the following (Not yet complete)
Create a secret for an AWS IAM role that allows you to get an ECR login password.
apiVersion: v1
kind: Secret
metadata:
  name: aws-ecr-get-login-password-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  AWS_ACCESS_KEY_ID: <Fill In>
  AWS_SECRET_ACCESS_KEY: <Fill In>
Now create an Argo Workflow that either runs every 12 hours or runs on a PreSync hook (completely untested; I will try to keep this updated, and anyone is welcome to update it for me).
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: aws-ecr-get-login-password-
  annotations:
    argocd.argoproj.io/hook: PreSync
spec:
  entrypoint: update-ecr-login-password
  templates:
    # This is what will run.
    # First the awscli,
    # then the resource creation using the stdout of the previous step.
    - name: update-ecr-login-password
      steps:
        - - name: awscli
            template: awscli
        - - name: argocd-ecr-credentials
            template: argocd-ecr-credentials
            arguments:
              parameters:
                - name: password
                  value: "{{steps.awscli.outputs.result}}"
    # Create a container that has awscli in it
    # and run it to get the password using `aws ecr get-login-password`
    - name: awscli
      script:
        image: amazon/aws-cli:latest
        command: [bash]
        source: |
          aws ecr get-login-password --region us-east-1
        # We need aws secrets that can run `aws ecr get-login-password`
        envFrom:
          - secretRef:
              name: aws-ecr-get-login-password-creds
    # Now we can create the secret that has the password in it
    - name: argocd-ecr-credentials
      inputs:
        parameters:
          - name: password
      resource:
        action: create
        manifest: |
          apiVersion: v1
          kind: Secret
          metadata:
            name: argocd-ecr-credentials
            namespace: argocd
            labels:
              argocd.argoproj.io/secret-type: repository
          stringData:
            url: 133696059149.dkr.ecr.us-east-1.amazonaws.com
            username: AWS
            password: {{inputs.parameters.password}}
My skaffold.yaml
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
    - image: tons/whoami-mn
      jib: {}
  tagPolicy:
    gitCommit: {}
deploy:
  helm:
    releases:
      - name: whoami-mn
        chartPath: ./k8s/helm/whoami-mn
        artifactOverrides:
          image.repository: tons/whoami-mn
The command
skaffold dev --port-forward --namespace whoami-mn
The error
parsing skaffold config: unable to parse config: yaml: unmarshal errors:
line 11: field artifactOverrides not found in type v1.HelmRelease
Skaffold version: v1.13.1
Helm version: v3.3.0
Any idea why I'm getting the above error? Please let me know if I should post other parts of my code.
apiVersion: skaffold/v2beta6 was the key to it.
In the future you can also try the skaffold fix command to find ways to update your schema automatically.
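A sketch of the fix based on the answer above: change the schema version at the top of the skaffold.yaml (everything else can stay as it is):

apiVersion: skaffold/v2beta6
kind: Config

Alternatively, running skaffold fix prints the configuration upgraded to the latest supported schema version so you can review it and replace the file.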
I'm trying to build and push a Docker image with Knative. I have a Maven Java application and a multi-stage Dockerfile that builds and runs the application:
# First (build) stage; the original post omits this line, so a Maven/JDK 8 base image is assumed here
FROM maven:3-jdk-8 AS build
WORKDIR /usr/app
COPY pom.xml ./
COPY src/ ./src/
RUN mvn package

FROM openjdk:8-jdk-alpine
WORKDIR /usr/app
ENV PORT 8080
COPY --from=build /usr/app/target/*.jar ./app.jar
CMD ["java", "-jar", "/usr/app/app.jar"]
I want to build and push the application to the GCR repository, so I have a ServiceAccount and a Build:
apiVersion: v1
data:
  password: ENCODED_PASS
  username: ENCODED_USERNAME
kind: Secret
metadata:
  annotations:
    build.knative.dev/docker-0: https://gcr.io
  name: knative-build-auth
  namespace: default
  resourceVersion: "3001"
  selfLink: /api/v1/namespaces/default/secrets/knative-build-auth
type: kubernetes.io/basic-auth
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: knative-build
secrets:
  - name: knative-build-auth
---
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  serviceAccountName: knative-build
  source:
    git:
      url: https://github.com/pathtorepo.git
      revision: master
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v0.1.0
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=gcr.io/$projectid/my-build
I tried to use the kaniko project for this; however, there are some problems with using it. Version 0.1.0 works with a simple Dockerfile:
FROM ubuntu
CMD ["/bin/sh", "-c", "echo Hiiiiiii"]
But it does not support multi-stage Dockerfiles and fails with an access denied error. Other versions of kaniko do not work either and fail.
In the logs for the multi-stage build with version 0.1.0 I can see the following error:
2019/07/02 14:43:13 No matching credentials found for index.docker.io, falling back on anonymous
time="2019-07-02T14:43:15Z" level=info msg="saving dependencies []"
time="2019-07-02T14:43:15Z" level=error msg="copy failed: no source files specified"
and the status of the build:
conditions:
  - lastTransitionTime: "2019-07-02T14:43:16Z"
    message: 'build step "build-step-build-and-push" exited with code 1 (image: "docker-pullable://gcr.io/kaniko-project/executor#sha256:501056bf52f3a96f151ccbeb028715330d5d5aa6647e7572ce6c6c55f91ab374");
      for logs run: kubectl -n default logs example-build-pod-7d95a9 -c build-step-build-and-push'
    status: "False"
    type: Succeeded
For any version of kaniko higher than 0.1.0, here is the error:
error pushing image: failed to push to destination gcr.io/star-wars-istio/reverse-function:latest: DENIED: Access denied.
Also in the logs there is something like:
ERROR: logging before flag.Parse: E0702 14:54:23.003241 1 metadata.go:142] while reading 'google-dockercfg' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg
I found an issue in their repo which is closed; however, it's still reproducible.
Here is the GitHub issue.
I can confirm that my ServiceAccount is correct, since I'm able to build and push a simple Docker image with this configuration.
I've also tried different images for the build and push, for example the one described here.
Even though I've followed all the steps described there (creating my ServiceAccount following the instructions, which works with a simple Dockerfile), it still fails when I try to build and push my application. So when I apply the following Build:
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: reverse-build
spec:
  serviceAccountName: knative-build
  source:
    git:
      url: https://github.com/lvivJavaClub/spring-cloud-functions.git
      revision: init-knative
    subPath: reverse-function
  steps:
    - name: build-and-push
      image: gcr.io/cloud-builders/mvn
      args: ["compile", "jib:build", "-Dimage=gcr.io/star-wars-istio/reverse-function"]
The build fails and I get the following error in the logs:
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:0.9.3:build (default-cli) on project reverse: Build image failed, perhaps you should set a credential helper name with the configuration '<from><credHelper>' or set credentials for 'gcr.io' in your Maven settings: com.google.api.client.http.HttpResponseException: 401 Unauthorized
[ERROR] {"errors":[{"code":"UNAUTHORIZED","message":"You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication"}]}