Spiffe error while deploying client-agent pods - kubernetes

I am using this guide for deploying SPIFFE on a Kubernetes cluster: https://spiffe.io/docs/latest/try/getting-started-k8s/
One of the steps in this process is running the command `kubectl apply -f client-deployment.yaml`, which deploys the SPIFFE client agent.
But the pods keep going into an error state:
Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "sleep": executable file not found in $PATH: unknown
Image used : ghcr.io/spiffe/spire-agent:1.5.1

It seems connected to this PR from 3 days ago (there is no longer a "sleep" executable in the image).
SPIRE is moving away from the alpine Docker release images in favor of scratch images that contain only the release binary to minimize the size of the images and include only the software that is necessary to run in the container.
You should report the issue and, in the meantime, use
gcr.io/spiffe-io/spire-agent:1.2.3
(the last image they used).
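As a sketch, pinning the older image in client-deployment.yaml would look roughly like this (the container name and the sleep arguments are assumptions modeled on the getting-started guide, not the exact file contents):

```yaml
# client-deployment.yaml (excerpt, hypothetical field values)
spec:
  containers:
    - name: client                               # name is an assumption
      # was: ghcr.io/spiffe/spire-agent:1.5.1 -- scratch-based, no "sleep" binary
      image: gcr.io/spiffe-io/spire-agent:1.2.3
      command: ["sleep"]
      args: ["1000000"]
```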

Related

CreateContainerError with microk8s, ghcr.io image

The error message is CreateContainerError
Error: failed to create containerd container: error unpacking image: failed to extract layer sha256:b9b5285004b8a3: failed to get stream processor for application/vnd.in-toto+json: no processor for media-type: unknown
The image pull was successful with the token I supplied (jmtoken).
I am testing on an AWS EC2 t2.medium; the Docker image was tested on my local machine.
Has anybody experienced this issue? How did you solve it?
deployment yaml file
I found a bug in my YAML file.
I supplied both `command` in the Kubernetes manifest and `CMD` in the Dockerfile. The `command` in the manifest overrides the Dockerfile's `CMD`, so the actual command never ran, which caused side effects including this issue.
Another tip: adding a `sleep 3000` command in the Kubernetes manifest sometimes resolves other issues like crashes.
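To illustrate the override behaviour with a hypothetical manifest (not the asker's actual file): when `command` and `args` are set in the pod spec, they replace the image's ENTRYPOINT and CMD respectively, so a CMD baked into the Dockerfile is ignored:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cmd-override-demo          # hypothetical name
spec:
  containers:
    - name: app
      image: myrepo/myapp:latest   # hypothetical image whose Dockerfile defines a CMD
      command: ["sleep"]           # replaces the image's ENTRYPOINT
      args: ["3000"]               # replaces the image's CMD; keeps the pod alive for debugging
```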

Concourse Tutorial : create resource config: base resource type not found: docker-image

I was following along with the Concourse tutorial at https://concoursetutorial.com/basics/task-hello-world/ after setting up Concourse 7.1 using `docker-compose up -d`. I tried a few different hello-world examples, but all of them failed with the same error message.
Command :
fly -t tutorial execute -c task_hello_world.yml
Output :
executing build 7 at http://localhost:8080/builds/7
initializing
create resource config: base resource type not found: docker-image
create resource config: base resource type not found: docker-image
errored
I am new to Concourse and unable to understand the cause or how to fix it. I am on Debian (5.10 kernel) with Docker version 20.10.4.
The key to understanding what is going on is in the error message:
create resource config: base resource type not found: docker-image
^^^^
A "base" resource type is a resource embedded in the Concourse worker, so that a task needing it doesn't have to download the corresponding image.
Examples of base resource types still embedded in Concourse workers of the 7.x series are `git` and `s3`.
The Concourse tutorial you are following is outdated and has been written for a version of Concourse that embedded the docker-image resource type.
Since you are following the examples in the tutorial with a new Concourse, you get this (confusing) error.
The fix is easy: in the pipeline, replace docker-image with registry-image. See https://github.com/concourse/registry-image-resource.
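As a sketch, an updated task_hello_world.yml for Concourse 7.x might look like this (busybox is just an example image; the tutorial's file may use a different one):

```yaml
platform: linux
image_resource:
  type: registry-image   # was: docker-image in the outdated tutorial
  source:
    repository: busybox
run:
  path: echo
  args: ["hello world"]
```

Run it the same way as before: `fly -t tutorial execute -c task_hello_world.yml`.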
I take this opportunity also to mention a project of mine, marco-m/concourse-in-a-box: an all-in-one Concourse CI/CD system based on Docker Compose, with Minio S3-compatible storage and HashiCorp Vault secret management. It makes it possible to learn Concourse pipelines from scratch in a simple and complete environment.

ERROR: (gcloud.compute.instance-templates.create) Could not fetch image resource:

The cluster was running fine for 255 days. I brought the cluster down, and after that I was unable to bring it back up. It gives the following error while bringing the cluster up.
Creating minions.
Attempt 1 to create kubernetes-minion-template
ERROR: (gcloud.compute.instance-templates.create) Could not fetch image resource:
- The resource 'projects/google-containers/global/images/container-vm-v20170627' was not found
Attempt 1 failed to create instance template kubernetes-minion-template. Retrying.
The attempt keeps retrying and always fails. Am I missing something?
The Kubernetes version is v1.7.2.
It looks like the image you are trying to use to create the machines has been deprecated and/or is no longer available.
You should try specifying an alternative image to create these machines from Google's current public images.
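For example (a sketch, assuming gcloud is configured; you would also need to carry over the machine type, network, and any other flags from the original template):

```shell
# List currently available public images to pick a replacement:
gcloud compute images list --project cos-cloud

# Recreate the template against an existing image family, e.g. Container-Optimized OS:
gcloud compute instance-templates create kubernetes-minion-template \
  --image-family cos-stable \
  --image-project cos-cloud
```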

Active Deploy `begin` step fails after upgrade to devops toolchain

We recently upgraded our IBM Bluemix DevOps project to a toolchain as recommended by IBM, and it doesn't deploy anymore. The pipeline configuration seems to have migrated over correctly, and the first step of the deploy process even works, creating a new instance of the app. However, when it gets to the active-deploy-begin step, it fails with the error:
--- ERROR: Unknown status:
--- ERROR: label: my-app_220-to-my-app_2 space: my-space routes: my-app.mybluemix.net
phase: rampup start group: my-app_220 app (1) successor group: my-app_2 app (1) algorithm: rb
deployment id: 84630da7-8663-466a-bb99-e02d2eb17a90 transition type: manual
rampup duration: 4% of 2m test duration: 1s
rampdown duration: 2m status: in_progress status messages: <none>
It appears to have started the build number from 1 instead of continuing from the previous number of 220. I've tried deleting the service at the app level from the Bluemix web interface to no avail. Any help or pointers will be much appreciated.
UPDATE:
Things I've tried:
- Deleting the app and running the build process to create a new instance. This worked the first time, as it detected it was just the initial build, but the second time it ran it failed with the same Unknown Status error.
- Deleting all the previous deployment records to eliminate the possibility that it was caused by a deployment label name conflict, i.e. my-app_1-to-my-app_2.
Also, interestingly, the active deploy command works from the cf command line using the `active-deploy-create my-app_1 my-app_2` command. So it seems the issue might be with the script that runs the active deploy commands for the pipeline.
This issue was reported also at https://github.com/Osthanes/update_service/issues/54. There you will find instructions how to get the issue fixed.

Build multiple images with Docker Compose?

I have a repository which builds three different images:
powerpy-base
powerpy-web
powerpy-worker
Both powerpy-web and powerpy-worker inherit from powerpy-base using the FROM keyword in their Dockerfile.
I'm using Docker Compose in the project to run a Redis and RabbitMQ container. Is there a way for me to tell Docker Compose that I'd like to build the base image first and then the web and worker images?
You can use depends_on to enforce an order; however, that order will also be applied during "runtime" (docker-compose up), which may not be correct.
If you're only using Compose to build images, it should be fine.
You could also split it into two Compose files: a docker-compose.build.yml which has depends_on for the build, and a separate one for running the images as services.
There is a related issue: https://github.com/docker/compose/issues/295
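A sketch of the build-only file, assuming each image has its own build context directory (the base/, web/, and worker/ paths are assumptions):

```yaml
# docker-compose.build.yml (hypothetical)
version: "2"
services:
  base:
    image: powerpy-base
    build: ./base
  web:
    image: powerpy-web
    build: ./web
    depends_on:
      - base
  worker:
    image: powerpy-worker
    build: ./worker
    depends_on:
      - base
```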
About running containers:
This was a bug before, but it has been fixed since Docker 1.10.
https://blog.docker.com/2016/02/docker-1-10/
Start linked containers in correct order when restarting daemon: This is a little thing, but if you’ve run into it you’ll know what a headache it is. If you restarted a daemon with linked containers, they sometimes failed to start up if the linked containers weren’t running yet. Engine will now attempt to start up containers in the correct order.
About build:
You need to build the base image first.
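A minimal sketch of that order, assuming the base image's Dockerfile lives in ./base (the paths are assumptions):

```shell
# Build the shared base image first so the "FROM powerpy-base" lines
# in the web/worker Dockerfiles resolve against a locally built image:
docker build -t powerpy-base ./base
# Then let Compose build the remaining images:
docker-compose build
```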