I'm trying to build and push a Docker image with Knative. I have a Maven Java application and a multi-stage Dockerfile that builds and runs the application:
# Build stage (the first FROM line is assumed here; any Maven + JDK 8 image such as maven:3-jdk-8 works)
FROM maven:3-jdk-8 AS build
WORKDIR /usr/app
COPY pom.xml ./
COPY src/ ./src/
RUN mvn package

FROM openjdk:8-jdk-alpine
WORKDIR /usr/app
ENV PORT 8080
COPY --from=build /usr/app/target/*.jar ./app.jar
CMD ["java", "-jar", "/usr/app/app.jar"]
I want to build the application and push it to the GCR registry, so I have a Secret, a ServiceAccount and a Build:
apiVersion: v1
data:
  password: ENCODED_PASS
  username: ENCODED_USERNAME
kind: Secret
metadata:
  annotations:
    build.knative.dev/docker-0: https://gcr.io
  name: knative-build-auth
  namespace: default
  resourceVersion: "3001"
  selfLink: /api/v1/namespaces/default/secrets/knative-build-auth
type: kubernetes.io/basic-auth
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: knative-build
secrets:
  - name: knative-build-auth
---
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  serviceAccountName: knative-build
  source:
    git:
      url: https://github.com/pathtorepo.git
      revision: master
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v0.1.0
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=gcr.io/$projectid/my-build
I tried to use the kaniko executor for this, but there are some problems with it. Version 0.1.0 works with a simple Dockerfile:
FROM ubuntu
CMD ["/bin/sh", "-c", "echo Hiiiiiii"]
But it does not support multi-stage Dockerfiles and fails with an access denied error. Every other kaniko version I tried fails as well.
In the logs of the multi-stage build with version 0.1.0 I can see the following error:
2019/07/02 14:43:13 No matching credentials found for index.docker.io, falling back on anonymous
time="2019-07-02T14:43:15Z" level=info msg="saving dependencies []"
time="2019-07-02T14:43:15Z" level=error msg="copy failed: no source files specified"
and the status of the build:
conditions:
  - lastTransitionTime: "2019-07-02T14:43:16Z"
    message: 'build step "build-step-build-and-push" exited with code 1 (image: "docker-pullable://gcr.io/kaniko-project/executor@sha256:501056bf52f3a96f151ccbeb028715330d5d5aa6647e7572ce6c6c55f91ab374");
      for logs run: kubectl -n default logs example-build-pod-7d95a9 -c build-step-build-and-push'
    status: "False"
    type: Succeeded
For any kaniko version higher than 0.1.0, the error is:
error pushing image: failed to push to destination gcr.io/star-wars-istio/reverse-function:latest: DENIED: Access denied.
The logs also contain something like:
ERROR: logging before flag.Parse: E0702 14:54:23.003241 1 metadata.go:142] while reading 'google-dockercfg' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg
I found an issue in their repo which is closed, but the problem is still reproducible.
Here is the GitHub issue.
I can confirm that my ServiceAccount is correct, since I'm able to build and push a simple docker image with this configuration.
I've also tried different images for building and pushing, for example the one described here.
Even though I've followed all the steps described there (creating my ServiceAccount following the instructions, which works with a simple Dockerfile), it still fails when I try to build and push my application. So when I apply the following Build:
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: reverse-build
spec:
  serviceAccountName: knative-build
  source:
    git:
      url: https://github.com/lvivJavaClub/spring-cloud-functions.git
      revision: init-knative
    subPath: reverse-function
  steps:
    - name: build-and-push
      image: gcr.io/cloud-builders/mvn
      args: ["compile", "jib:build", "-Dimage=gcr.io/star-wars-istio/reverse-function"]
The build fails and I get the following error in the logs:
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:0.9.3:build (default-cli) on project reverse: Build image failed, perhaps you should set a credential helper name with the configuration '<from><credHelper>' or set credentials for 'gcr.io' in your Maven settings: com.google.api.client.http.HttpResponseException: 401 Unauthorized
[ERROR] {"errors":[{"code":"UNAUTHORIZED","message":"You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication"}]}
Related
I'm trying to use Skaffold to deploy some services onto my local minikube cluster, but I am running into issues when it comes to pulling the images. I've specified the Dockerfile and assumed that Skaffold would check my local registry and, upon not finding the image, build it and then pull that built image on pod init.
However, it appears that Skaffold builds the image successfully, but when the pod starts up it fails with: Failed to pull image "my-app-image": rpc error: code = Unknown desc = Error response from daemon: pull access denied for my-app-image, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
I'm a little confused because I thought this would all be happening within my local registry, so I'm not sure why access is denied when the image builds successfully.
Example:
apiVersion: skaffold/v2
kind: Config
build:
  artifacts:
    - image: my-app-image
      context: './'
      sync:
        manual:
          - src: 'my-app/**/*'
            dest: '/my-app/'
      docker:
        dockerfile: Dockerfile
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        deploy: example
    spec:
      containers:
        - name: my-app
          image: my-app-image
We have a Tekton pipeline and want to replace the image tag in our deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-api-spring-boot
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
    spec:
      containers:
        - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot@sha256:5d8a03755d3c45a3d79d32ab22987ef571a65517d0edbcb8e828a4e6952f9bcd
          name: microservice-api-spring-boot
          ports:
            - containerPort: 8098
      imagePullSecrets:
        - name: gitlab-container-registry
Our Tekton pipeline uses the yq Task from Tekton Hub to replace the .spec.template.spec.containers[0].image with the "$(params.IMAGE):$(params.SOURCE_REVISION)" name like this:
- name: substitute-config-image-name
  taskRef:
    name: yq
  runAfter:
    - fetch-config-repository
  workspaces:
    - name: source
      workspace: config-workspace
  params:
    - name: files
      value:
        - "./deployment/deployment.yml"
    - name: expression
      value: .spec.template.spec.containers[0].image = \"$(params.IMAGE)\":\"$(params.SOURCE_REVISION)\"
Sadly the yq Task doesn't seem to work: it produces a green Step completed successfully, but shows the following errors:
16:50:43 safelyRenameFile [ERRO] Failed copying from /tmp/temp3555913516 to /workspace/source/deployment/deployment.yml
16:50:43 safelyRenameFile [ERRO] open /workspace/source/deployment/deployment.yml: permission denied
Here's also a screenshot from our Tekton Dashboard:
Any idea on how to solve the error?
The problem seems to be related to the way the Dockerfile of https://github.com/mikefarah/yq now handles file permissions (see for example this fix among others). Version 0.3 of the Tekton yq Task uses the image https://hub.docker.com/layers/mikefarah/yq/4.16.2/images/sha256-c6ef1bc27dd9cee57fa635d9306ce43ca6805edcdab41b047905f7835c174005 which produces the error.
One workaround is to use version 0.2 of the yq Task, which you can apply via:
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/yq/0.2/yq.yaml
This one uses the older docker.io/mikefarah/yq:4@sha256:34f1d11ad51dc4639fc6d8dd5ade019fe57cf6084bb6a99a2f11ea522906033b and works without the error.
Alternatively, you can create your own yq-based Task that doesn't have this problem, like this:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: replace-image-name-with-yq
spec:
  workspaces:
    - name: source
      description: A workspace that contains the file which needs to be dumped.
  params:
    - name: IMAGE_NAME
      description: The image name to substitute
    - name: FILE_PATH
      description: The file path relative to the workspace dir.
    - name: YQ_VERSION
      description: Version of https://github.com/mikefarah/yq
      default: v4.2.0
  steps:
    - name: substitute-with-yq
      image: alpine
      workingDir: $(workspaces.source.path)
      command:
        - /bin/sh
      args:
        - '-c'
        - |
          set -ex
          echo "--- Download yq & add to path"
          wget https://github.com/mikefarah/yq/releases/download/$(params.YQ_VERSION)/yq_linux_amd64 -O /usr/bin/yq &&\
          chmod +x /usr/bin/yq
          echo "--- Run yq expression"
          yq e ".spec.template.spec.containers[0].image = \"$(params.IMAGE_NAME)\"" -i $(params.FILE_PATH)
          echo "--- Show file with replacement"
          cat $(params.FILE_PATH)
      resources: {}
This custom Task simply uses the alpine image as its base and installs yq via a plain wget download of the binary. It also runs yq exactly as you would on the command line locally, which makes developing your expression much easier!
As a bonus it outputs the file contents so you can check the replacement results right in the Tekton pipeline!
You need to apply it with
kubectl apply -f tekton-ci-config/replace-image-name-with-yq.yml
You should now be able to use it like this:
- name: replace-config-image-name
  taskRef:
    name: replace-image-name-with-yq
  runAfter:
    - dump-contents
  workspaces:
    - name: source
      workspace: config-workspace
  params:
    - name: IMAGE_NAME
      value: "$(params.IMAGE):$(params.SOURCE_REVISION)"
    - name: FILE_PATH
      value: "./deployment/deployment.yml"
Inside the Tekton dashboard it will look something like this and output the processed file:
I am trying to make Skaffold work with Helm.
Below is my skaffold.yml file:
apiVersion: skaffold/v2beta23
kind: Config
metadata:
  name: test-app
build:
  artifacts:
    - image: test.common.repositories.cloud.int/manager/k8s
      docker:
        dockerfile: Dockerfile
deploy:
  helm:
    releases:
      - name: my-release
        artifactOverrides:
          image: test.common.repositories.cloud.int/manager/k8s
        imageStrategy:
          helm: {}
Here is my values.yaml:
image:
  repository: test.common.repositories.cloud.int/manager/k8s
  tag: 1.0.0
Running the skaffold command results in:
...
Starting deploy...
Helm release my-release not installed. Installing...
Error: INSTALLATION FAILED: failed to download ""
deploying "my-release": install: exit status 1
Does anyone have an idea what is missing here?
I believe this is happening because you have not specified a chart to use for the helm release. I was able to reproduce your issue by commenting out the chartPath field in the skaffold.yaml file of the helm-deployment example in the Skaffold repo.
You can specify a local chart using the release's chartPath field or a remote chart using its remoteChart field.
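For example, here is a minimal sketch of the deploy section with a local chart path added (the ./charts/my-release path is an assumption; point it at wherever your chart actually lives):

deploy:
  helm:
    releases:
      - name: my-release
        chartPath: ./charts/my-release   # local chart directory (assumed path)
        artifactOverrides:
          image: test.common.repositories.cloud.int/manager/k8s
        imageStrategy:
          helm: {}

For a chart pulled from a chart repository you would set remoteChart instead of chartPath.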
I want to deploy Helm charts, which are stored in a repository in AWS ECR, to the Kubernetes cluster using ArgoCD, but I am getting a 401 Unauthorized error. I have pasted the entire error below:
Unable to create application: application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = Unknown desc = `helm chart pull <aws account id>.dkr.ecr.<region>.amazonaws.com/testrepo:1.1.0` failed exit status 1: Error: unexpected status code [manifests 1.1.0]: 401 Unauthorized
Yes, you can use ECR for storing helm charts (https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html)
I have managed to add the repo to ArgoCD, but the token expires so it is not a complete solution.
argocd repo add XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com --type helm --name some-helmreponame --enable-oci --username AWS --password $(aws ecr get-login-password --region us-east-1)
Using the declarative repository definition (see https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#repositories, or just override .argo-cd.configs.repositories in the Helm chart) it is actually quite easy to create a cron-job that updates the ECR credentials:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: argocd-ecr-credentials
spec:
  schedule: '0 */6 * * *' # every 6 hours, since credentials expire every 12 hours
  jobTemplate:
    metadata:
      name: argocd-ecr-credentials
    spec:
      template:
        spec:
          serviceAccountName: argocd-server
          restartPolicy: OnFailure
          containers:
            - name: update-secret
              image: alpine/k8s # Anything that contains kubectl + aws cli
              command:
                - /bin/bash
                - "-c"
                - |
                  PASSWORD=$(aws ecr get-login-password --region [your aws region] | base64 -w 0)
                  kubectl patch secret -n argocd argocd-repo-[name of your repository] --type merge -p "{\"data\": {\"password\": \"$PASSWORD\"}}"
ArgoCD repository secrets are usually called argocd-repo-* suffixed with the key of the repository entry in the values.yaml.
This starts a pod every 6 hours that performs an ECR login and updates the Kubernetes secret containing the repository definition for ArgoCD.
Make sure to use the argocd-server service account (or create your own) since the container will not be able to modify the secret otherwise.
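For reference, the declarative repository secret that this job patches could look roughly like the following sketch (based on the Argo CD declarative setup docs; the my-ecr name, account id and region are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: argocd-repo-my-ecr              # placeholder; follows the argocd-repo-<key> naming pattern
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: my-ecr
  type: helm
  enableOCI: "true"                     # ECR serves Helm charts as OCI artifacts
  url: <aws account id>.dkr.ecr.<region>.amazonaws.com
  username: AWS
  password: <output of aws ecr get-login-password>   # the value the CronJob keeps refreshing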
I'm experimenting with the following (not yet complete):
Create a secret with AWS IAM credentials that allow you to get an ECR login password.
apiVersion: v1
kind: Secret
metadata:
  name: aws-ecr-get-login-password-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  AWS_ACCESS_KEY_ID: <Fill In>
  AWS_SECRET_ACCESS_KEY: <Fill In>
Now create an Argo Workflow that either runs every 12 hours or runs on a PreSync hook (completely untested; I will try to keep this updated, and anyone is welcome to update it for me).
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: aws-ecr-get-login-password-
  annotations:
    argocd.argoproj.io/hook: PreSync
spec:
  entrypoint: update-ecr-login-password
  templates:
    # This is what will run.
    # First the awscli
    # Then the resource creation using the stdout of the previous step
    - name: update-ecr-login-password
      steps:
        - - name: awscli
            template: awscli
        - - name: argocd-ecr-credentials
            template: argocd-ecr-credentials
            arguments:
              parameters:
                - name: password
                  value: "{{steps.awscli.outputs.result}}"

    # Create a container that has awscli in it
    # and run it to get the password using `aws ecr get-login-password`
    - name: awscli
      script:
        image: amazon/aws-cli:latest
        command: [bash]
        source: |
          aws ecr get-login-password --region us-east-1
        # We need aws secrets that can run `aws ecr get-login-password`
        envFrom:
          - secretRef:
              name: aws-ecr-get-login-password-creds

    # Now we can create the secret that has the password in it
    - name: argocd-ecr-credentials
      inputs:
        parameters:
          - name: password
      resource:
        action: create
        manifest: |
          apiVersion: v1
          kind: Secret
          metadata:
            name: argocd-ecr-credentials
            namespace: argocd
            labels:
              argocd.argoproj.io/secret-type: repository
          stringData:
            url: 133696059149.dkr.ecr.us-east-1.amazonaws.com
            username: AWS
            password: {{inputs.parameters.password}}
I was trying to deploy a very basic Express app, a small server listening on 8080, on an EC2 server (Ubuntu 16.04), following this tutorial. On that server, a Kubernetes cluster was created with kops 1.8.0.
After that, I created a Dockerfile like the following:
FROM node:carbon
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
# At the end, set the user to use when running this image
USER node
After that, I built the image with docker build -t ccastelli/stupid_server:test1, specified my credentials with docker login -u ccastelli, copied the image ID from docker images, tagged it with docker tag c549618dcd86 org/test:first_try and pushed it with docker push org/test to a private repository on cloud.docker.com.
After that I created a cluster secret with kubectl create secret docker-registry ccastelli-regcred --docker-server=docker.com --docker-username=ccastelli --docker-password='pass' --docker-email=myemail@gmail.com
After that I created a deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stupid-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: stupid-server
    spec:
      containers:
        - name: stupid-server
          image: org/test:first_try
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: ccastelli-regcred
I see from kubectl get pods that the image transitioned from ErrImagePull to ImagePullBackOff and is not ready. The Docker container works on the client instance but not in the cluster. At this point, I'm a bit lost. What am I doing wrong?
Thanks
Edit: error message:
Failed to pull image "org/test:first_try": rpc error: code =
Unknown desc = Error response from daemon: repository pycomio/test not
found: does not exist or no pull access
Your --docker-server should be index.docker.io:
DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
DOCKER_USER=Type your dockerhub username, same as when you `docker login`
DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login`
DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login`
kubectl create secret docker-registry myregistrykey \
--docker-server=$DOCKER_REGISTRY_SERVER \
--docker-username=$DOCKER_USER \
--docker-password=$DOCKER_PASSWORD \
--docker-email=$DOCKER_EMAIL
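Once the secret exists with the correct server, make sure the pod template spec references it by name; a minimal sketch reusing the Deployment from the question, with myregistrykey swapped in as the pull secret:

spec:
  containers:
    - name: stupid-server
      image: org/test:first_try
      imagePullPolicy: Always
  imagePullSecrets:
    - name: myregistrykey   # must match the docker-registry secret created above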