terraform-cli : Duplicate value: "step-init". Maybe missing or invalid Task default/terraform-cli error - kubernetes

I've been attempting to use Tekton to deploy some AWS infrastructure via Terraform, but I'm not having much success.
The pipeline clones a GitHub repo containing TF code, then uses the terraform-cli task to provision the AWS infrastructure. For initial testing I just want to perform the TF init and provision the AWS VPC.
Expected behaviour
Clone the GitHub repo
Perform terraform init
Create the VPC using a targeted terraform apply
Actual Result
task terraform-init has failed: failed to create task run pod "my-infra-pipelinerun-terraform-init": Pod "my-infra-pipelinerun-terraform-init-pod" is invalid: spec.initContainers[1].name: Duplicate value: "step-init". Maybe missing or invalid Task default/terraform-cli
pod for taskrun my-infra-pipelinerun-terraform-init not available yet
Tasks Completed: 2 (Failed: 1, Cancelled 0), Skipped: 1
Steps to Reproduce the Problem
Prerequisites: Install the Tekton command-line tool (tkn) plus the git-clone and terraform-cli Tasks from the Tekton Catalog.
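The Tasks can be applied straight from the Catalog, e.g. (a sketch; the exact Catalog version paths may differ):

kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.9/git-clone.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/terraform-cli/0.2/terraform-cli.yaml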
Create this pipeline in Minikube:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: my-infra-pipeline
spec:
  description: Pipeline for TF deployment
  params:
    - name: repo-url
      type: string
      description: Git repository URL
    - name: branch-name
      type: string
      description: The git branch
  workspaces:
    - name: tf-config
      description: The workspace where the tf config code will be stored
  tasks:
    - name: clone-repo
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: tf-config
      params:
        - name: url
          value: $(params.repo-url)
        - name: revision
          value: $(params.branch-name)
    - name: terraform-init
      runAfter: ["clone-repo"]
      taskRef:
        name: terraform-cli
      workspaces:
        - name: source
          workspace: tf-config
      params:
        - name: terraform-secret
          value: "tf-auth"
        - name: ARGS
          value:
            - init
    - name: build-vpc
      runAfter: ["terraform-init"]
      taskRef:
        name: terraform-cli
      workspaces:
        - name: source
          workspace: tf-config
      params:
        - name: terraform-secret
          value: "tf-auth"
        - name: ARGS
          value:
            - apply
            - "-target=aws_vpc.vpc -auto-approve"
Run the pipeline by creating a PipelineRun resource in k8s (a sketch follows below)
Review the logs: tkn pipelinerun logs my-tf-pipeline -a
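A minimal PipelineRun sketch for that step (the parameter values and the PVC-backed workspace binding are placeholders, not taken from the original question):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-infra-pipelinerun
spec:
  pipelineRef:
    name: my-infra-pipeline
  params:
    - name: repo-url
      value: https://github.com/<your-org>/<your-tf-repo>.git
    - name: branch-name
      value: main
  workspaces:
    - name: tf-config
      persistentVolumeClaim:
        claimName: tf-config-pvc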
Additional Information
Pipeline version: v0.35.1

There is a known issue regarding "step-init" in some earlier versions - I suggest you upgrade to the latest version (0.36.0) and try again.
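A minimal upgrade sketch, assuming a default cluster-wide install (re-applying the release manifest upgrades Tekton Pipelines in place):

# check the currently installed version
tkn version
# upgrade to the v0.36.0 release
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.36.0/release.yaml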

Related

How to create dependency between releases in helmfile

I have the following helmfile, and I want nexus, teamcity-server, and hub to depend on the certificates chart:
releases:
  - name: certificates
    createNamespace: true
    chart: ./charts/additional-dep
    namespace: system
    values:
      - ./environments/default/system-values.yaml
      - ./environments/{{ .Environment.Name }}/system-values.yaml
  - name: hub
    chart: ./charts/hub
    namespace: system
    values:
      - ./environments/default/system-values.yaml
  - name: nexus
    chart: ./charts/nexus
    namespace: system
    values:
      - ./environments/default/system-values.yaml
      - ./environments/{{ .Environment.Name }}/system-values.yaml
    dependsOn:
      - certificates
  - name: teamcity-server
    chart: ./charts/teamcity-server
    namespace: system
    values:
      - ./environments/default/system-values.yaml
      - ./environments/{{ .Environment.Name }}/system-values.yaml
    dependsOn:
      - certificates
I have tried to use dependsOn in helmfile.yaml, but it resulted in errors.
Helmfile calls this functionality needs:, so:
releases:
  - name: certificates
    ...
  - name: nexus
    needs:
      - certificates
  ...
This means the certificates chart needs to be successfully installed before Helmfile will move on to nexus or teamcity-server. This is specific to Helmfile: you're still allowed to helm uninstall certificates, and Helm itself won't know about the dependency. It also doesn't establish any sort of runtime dependency between the two charts, so if something happens later that causes certificates to fail, nexus and the other dependents won't be automatically stopped.
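Applied to the helmfile from the question, a sketch (the values: entries are omitted for brevity; only the needs: lines differ from the original):

releases:
  - name: certificates
    createNamespace: true
    chart: ./charts/additional-dep
    namespace: system
  - name: nexus
    chart: ./charts/nexus
    namespace: system
    needs:
      - certificates
  - name: teamcity-server
    chart: ./charts/teamcity-server
    namespace: system
    needs:
      - certificates

Because every release here lives in the system namespace, the bare release name is enough; across namespaces the reference takes the form <namespace>/<release>.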

Templates and Values in different repos via ArgoCD

I'm looking for insights on the following situation:
I have one ArgoCD application pointing to a Git repo (A), where there's a values.yaml;
I would like to use the Helm templates stored in a different repo (B);
Any suggestions/alternatives on how to make this work?
I think helm dependency can help solve your problem.
In the Chart.yaml of repo (A), declare the dependency (the chart from repo B):
# Chart.yaml
dependencies:
  - name: chartB
    version: "0.0.1"
    repository: "https://link_to_chart_B"
Link references:
https://github.com/argoproj/argocd-example-apps/tree/master/helm-dependency
P.S.: You need to add the chart repository to ArgoCD.
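For example via the CLI (a sketch; the URL is the placeholder from the Chart.yaml above):

argocd repo add https://link_to_chart_B --type helm --name chartB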
The way we solved it is by writing a very simple Helm plugin and passing it the URL of the Helm chart location (ChartMuseum in our case) as an environment variable:
server:
  name: server
  config:
    configManagementPlugins: |
      - name: helm-yotpo
        generate:
          command: ["sh", "-c"]
          args: ["helm template --version ${HELM_CHART_VERSION} --repo ${HELM_REPO_URL} --namespace ${NAMESPACE} $HELM_CHART_NAME --name-template=${HELM_RELEASE_NAME} -f $(pwd)/${HELM_VALUES_FILE} "]
You can run the helm command with the --repo flag, and in the ArgoCD Application YAML you call the new plugin:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: application-test
  namespace: infra
spec:
  destination:
    namespace: infra
    server: https://kubernetes.default.svc
  project: infra
  source:
    path: "helm-values-files/telegraf"
    repoURL: https://github.com/YotpoLtd/argocd-example.git
    targetRevision: HEAD
    plugin:
      name: helm-yotpo
      env:
        - name: HELM_RELEASE_NAME
          value: "telegraf-test"
        - name: HELM_CHART_VERSION
          value: "1.8.18"
        - name: NAMESPACE
          value: "infra"
        - name: HELM_REPO_URL
          value: "https://helm.influxdata.com/"
        - name: HELM_CHART_NAME
          value: "telegraf"
        - name: HELM_VALUES_FILE
          value: "telegraf.yaml"
You can read more about it in the following blog post.

Tekton: yq Task gives safelyRenameFile [ERRO] Failed copying from /tmp/temp & [ERRO] open /workspace/source permission denied error

We have a Tekton pipeline and want to replace the image tag in our deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-api-spring-boot
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
    spec:
      containers:
        - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot@sha256:5d8a03755d3c45a3d79d32ab22987ef571a65517d0edbcb8e828a4e6952f9bcd
          name: microservice-api-spring-boot
          ports:
            - containerPort: 8098
      imagePullSecrets:
        - name: gitlab-container-registry
Our Tekton pipeline uses the yq Task from the Tekton Hub to replace .spec.template.spec.containers[0].image with the "$(params.IMAGE):$(params.SOURCE_REVISION)" name, like this:
- name: substitute-config-image-name
  taskRef:
    name: yq
  runAfter:
    - fetch-config-repository
  workspaces:
    - name: source
      workspace: config-workspace
  params:
    - name: files
      value:
        - "./deployment/deployment.yml"
    - name: expression
      value: .spec.template.spec.containers[0].image = \"$(params.IMAGE)\":\"$(params.SOURCE_REVISION)\"
Sadly the yq Task doesn't seem to work: it reports a green Step completed successfully, but shows the following errors:
16:50:43 safelyRenameFile [ERRO] Failed copying from /tmp/temp3555913516 to /workspace/source/deployment/deployment.yml
16:50:43 safelyRenameFile [ERRO] open /workspace/source/deployment/deployment.yml: permission denied
Here's also a screenshot from our Tekton Dashboard:
Any idea on how to solve the error?
The problem seems to be related to the way the Dockerfile of https://github.com/mikefarah/yq now handles file permissions (see for example this fix, among others). Version 0.3 of the Tekton yq Task uses the image https://hub.docker.com/layers/mikefarah/yq/4.16.2/images/sha256-c6ef1bc27dd9cee57fa635d9306ce43ca6805edcdab41b047905f7835c174005, which produces the error.
One workaround is to use version 0.2 of the yq Task, which you can apply via:
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/yq/0.2/yq.yaml
That version uses the older docker.io/mikefarah/yq:4@sha256:34f1d11ad51dc4639fc6d8dd5ade019fe57cf6084bb6a99a2f11ea522906033b image and works without the error.
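If your tkn CLI has the hub subcommand, pinning the Task version from the Hub should achieve the same (a sketch):

tkn hub install task yq --version 0.2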
Alternatively, you can create your own yq-based Task that won't have the problem, like this:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: replace-image-name-with-yq
spec:
  workspaces:
    - name: source
      description: A workspace that contains the file which needs to be dumped.
  params:
    - name: IMAGE_NAME
      description: The image name to substitute
    - name: FILE_PATH
      description: The file path relative to the workspace dir.
    - name: YQ_VERSION
      description: Version of https://github.com/mikefarah/yq
      default: v4.2.0
  steps:
    - name: substitute-with-yq
      image: alpine
      workingDir: $(workspaces.source.path)
      command:
        - /bin/sh
      args:
        - '-c'
        - |
          set -ex
          echo "--- Download yq & add to path"
          wget https://github.com/mikefarah/yq/releases/download/$(params.YQ_VERSION)/yq_linux_amd64 -O /usr/bin/yq &&\
          chmod +x /usr/bin/yq
          echo "--- Run yq expression"
          yq e ".spec.template.spec.containers[0].image = \"$(params.IMAGE_NAME)\"" -i $(params.FILE_PATH)
          echo "--- Show file with replacement"
          cat $(params.FILE_PATH)
      resources: {}
This custom Task simply uses the alpine image as its base and installs yq via the plain-binary wget download. It also uses yq exactly as you would on the command line locally, which makes developing your expression much easier!
As a bonus it outputs the file contents, so you can check the replacement results right in the Tekton pipeline!
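It also means you can dry-run the expression locally before wiring it into the pipeline, e.g. (image name and tag here are hypothetical):

yq e '.spec.template.spec.containers[0].image = "registry.example.com/app:abc1234"' -i deployment/deployment.yml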
You need to apply it with:
kubectl apply -f tekton-ci-config/replace-image-name-with-yq.yml
You should now be able to use it like this:
- name: replace-config-image-name
  taskRef:
    name: replace-image-name-with-yq
  runAfter:
    - dump-contents
  workspaces:
    - name: source
      workspace: config-workspace
  params:
    - name: IMAGE_NAME
      value: "$(params.IMAGE):$(params.SOURCE_REVISION)"
    - name: FILE_PATH
      value: "./deployment/deployment.yml"
Inside the Tekton Dashboard it will look something like this and output the processed file:

Knative image build and push fails with access denied

I'm trying to build and push a Docker image with Knative. I have a Maven Java application and a multi-stage Dockerfile that builds and runs the application:
# NOTE: the original snippet omitted the first stage's FROM line; a Maven base image is assumed here
FROM maven:3-jdk-8 AS build
WORKDIR /usr/app
COPY pom.xml ./
COPY src/ ./src/
RUN mvn package

FROM openjdk:8-jdk-alpine
WORKDIR /usr/app
ENV PORT 8080
COPY --from=build /usr/app/target/*.jar ./app.jar
CMD ["java", "-jar", "/usr/app/app.jar"]
I want to build and push the application to the GCR repository, so I have a ServiceAccount and a Build:
apiVersion: v1
data:
  password: ENCODED_PASS
  username: ENCODED_USERNAME
kind: Secret
metadata:
  annotations:
    build.knative.dev/docker-0: https://gcr.io
  name: knative-build-auth
  namespace: default
  resourceVersion: "3001"
  selfLink: /api/v1/namespaces/default/secrets/knative-build-auth
type: kubernetes.io/basic-auth
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: knative-build
secrets:
  - name: knative-build-auth
---
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  serviceAccountName: knative-build
  source:
    git:
      url: https://github.com/pathtorepo.git
      revision: master
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v0.1.0
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=gcr.io/$projectid/my-build
I tried to use the kaniko project's executor for this; however, there are some problems with it. Version 0.1.0 works with a simple Dockerfile:
FROM ubuntu
CMD ["/bin/sh", "-c", "echo Hiiiiiii"]
but it does not support multi-stage Dockerfiles and fails with the access denied error. Other versions of kaniko do not work either, and fail.
In the logs for version 0.1.0 of the multi-stage build I can see the following error:
2019/07/02 14:43:13 No matching credentials found for index.docker.io, falling back on anonymous
time="2019-07-02T14:43:15Z" level=info msg="saving dependencies []"
time="2019-07-02T14:43:15Z" level=error msg="copy failed: no source files specified"
and the status of the build:
conditions:
  - lastTransitionTime: "2019-07-02T14:43:16Z"
    message: 'build step "build-step-build-and-push" exited with code 1 (image: "docker-pullable://gcr.io/kaniko-project/executor@sha256:501056bf52f3a96f151ccbeb028715330d5d5aa6647e7572ce6c6c55f91ab374");
      for logs run: kubectl -n default logs example-build-pod-7d95a9 -c build-step-build-and-push'
    status: "False"
    type: Succeeded
For any kaniko version higher than 0.1.0, here is the error:
error pushing image: failed to push to destination gcr.io/star-wars-istio/reverse-function:latest: DENIED: Access denied.
The logs also contain something like:
ERROR: logging before flag.Parse: E0702 14:54:23.003241 1 metadata.go:142] while reading 'google-dockercfg' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg
I found an issue in their repo which is closed; however, it's still reproducible.
Here is the GitHub issue.
I can confirm that my ServiceAccount is correct, since I'm able to build and push a simple docker image with this configuration.
I've also tried different images for build and push, for example the one described here.
Even though I've followed all the steps described there (creating my ServiceAccount following the instructions, which works with a simple Dockerfile), it still fails when I try to build and push my application. So when I apply the following Build:
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: reverse-build
spec:
  serviceAccountName: knative-build
  source:
    git:
      url: https://github.com/lvivJavaClub/spring-cloud-functions.git
      revision: init-knative
    subPath: reverse-function
  steps:
    - name: build-and-push
      image: gcr.io/cloud-builders/mvn
      args: ["compile", "jib:build", "-Dimage=gcr.io/star-wars-istio/reverse-function"]
The build fails and I get the following error in the logs:
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:0.9.3:build (default-cli) on project reverse: Build image failed, perhaps you should set a credential helper name with the configuration '<from><credHelper>' or set credentials for 'gcr.io' in your Maven settings: com.google.api.client.http.HttpResponseException: 401 Unauthorized
[ERROR] {"errors":[{"code":"UNAUTHORIZED","message":"You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication"}]}

Not able to trigger jobs one after the other using gcs-resource in Concourse

I have two jobs, build and publish, and I want publish to trigger after build is done. So I am using the external resource gcs-resource (https://github.com/frodenas/gcs-resource).
The following is my pipeline.yml:
---
resource_types:
  - name: gcs-resource
    type: docker-image
    source:
      repository: frodenas/gcs-resource
resources:
  - name: proj-repo
    type: git
    source:
      uri: <my uri>
      branch: develop
      username: <username>
      password: <password>
  - name: proj-gcr
    type: docker-image
    source:
      repository: asia.gcr.io/myproject/proj
      tag: develop
      username: _json_key
      password: <my password>
  - name: proj-build-output
    type: gcs-resource
    source:
      bucket: proj-build-deploy
      json_key: <my key>
      regexp: Dockerfile
jobs:
  - name: build
    serial_groups: [proj-build-deploy]
    plan:
      - get: proj
        resource: proj-repo
      - task: build
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: 10.13.0}
          inputs:
            - name: proj
          run:
            path: sh
            args:
              - -exc
              - |
                <do something>
      - put: proj-build-output
        params:
          file: proj/Dockerfile
          content_type: application/octet-stream
  - name: publish
    serial_groups: [proj-build-deploy]
    plan:
      - get: proj-build-output
        trigger: true
        passed: [build]
      - put: proj-gcr
        params:
          build: proj-build-output
I am using the external resource proj-build-output to trigger the next job. I can run the individual jobs without any problem; however, the publish job doesn't automatically get triggered after completion of the build job.
Am I missing something?
The regexp of the gcs-resource is misconfigured:
...
regexp: Dockerfile
...
whereas regexp, like in the original S3 resource from which it derives, expects:
regexp: the pattern to match filenames against within GCS. The first grouped match is used to extract the version, or if a group is explicitly named version, that group is used.
Its correct usage is shown at https://github.com/frodenas/gcs-resource#example-configuration:
regexp: directory_on_gcs/release-(.*).tgz
This is not specific to the GCS or S3 resource; Concourse needs a "version" to move artifacts from jobs to storage and back. It is one of the fundamental concepts of Concourse. See https://web.archive.org/web/20171205105324/http://concourse.ci:80/versioned-s3-artifacts.html for an example.
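One possible fix along those lines is to upload the file under a version-carrying name that the regexp can capture (a sketch; the builds/ prefix and the version scheme are assumptions, and the put step would need to produce matching file names):

- name: proj-build-output
  type: gcs-resource
  source:
    bucket: proj-build-deploy
    json_key: <my key>
    regexp: builds/Dockerfile-(.*)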
As Marco mentioned, the problem was with versioning.
I solved my issue using these two steps:
Enabled versioning on my GCS bucket: https://cloud.google.com/storage/docs/object-versioning#_Enabling
Replaced regexp with versioned_file, as described in the docs: https://github.com/frodenas/gcs-resource#file-names
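With bucket versioning enabled, the resource definition from the question then becomes (a sketch based on the gcs-resource README):

- name: proj-build-output
  type: gcs-resource
  source:
    bucket: proj-build-deploy
    json_key: <my key>
    versioned_file: Dockerfile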