Concourse Tutorial: create resource config: base resource type not found: docker-image - concourse

I was following along with the Concourse tutorial from https://concoursetutorial.com/basics/task-hello-world/ after setting up Concourse version 7.1 using docker-compose up -d. I tried a few different hello world examples, but all of them failed with the same error message.
Command:
fly -t tutorial execute -c task_hello_world.yml
Output:
executing build 7 at http://localhost:8080/builds/7
initializing
create resource config: base resource type not found: docker-image
create resource config: base resource type not found: docker-image
errored
I am new and unable to understand the cause or how to fix it. I am on Debian (5.10 kernel) with Docker version 20.10.4.

The key to understanding what is going on is in the error message:
create resource config: base resource type not found: docker-image
                        ^^^^
A "base" resource type is a resource embedded in the Concourse worker, so that the task needing it doesn't need to download the corresponding image.
Examples of base resource types still embedded in Concourse workers of the 7.x series are git and s3.
The Concourse tutorial you are following is outdated and has been written for a version of Concourse that embedded the docker-image resource type.
Since you are following the examples in the tutorial with a new Concourse, you get this (confusing) error.
The fix is easy: in the task (and in pipelines), replace docker-image with registry-image. See https://github.com/concourse/registry-image-resource.
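For example, a minimal hello-world task in the spirit of the tutorial could look like this (the busybox image is just an assumption for illustration, not something the tutorial mandates):
# task_hello_world.yml -- sketch using registry-image instead of docker-image
platform: linux

image_resource:
  type: registry-image        # was: docker-image in the old tutorial
  source:
    repository: busybox       # assumption: any image reachable by the worker works

run:
  path: echo
  args: ["hello world"]
Running it again with fly -t tutorial execute -c task_hello_world.yml should then fetch the image through the registry-image resource instead of failing on the missing docker-image base type.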
I take this opportunity also to mention a project of mine, marco-m/concourse-in-a-box: an all-in-one Concourse CI/CD system based on Docker Compose, with Minio S3-compatible storage and the HashiCorp Vault secret manager. It makes it possible to learn Concourse pipelines from scratch in a simple and complete environment.

Related

Spiffe error while deploying client-agent pods

I am using this guide for deploying SPIFFE on a K8s cluster: https://spiffe.io/docs/latest/try/getting-started-k8s/
One of the steps in this process is running the command kubectl apply -f client-deployment.yaml, which deploys the SPIFFE client agent.
But the pods keep ending up in an error state:
Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "sleep": executable file not found in $PATH: unknown
Image used: ghcr.io/spiffe/spire-agent:1.5.1
It seems connected to this PR from 3 days ago (there is no longer a "sleep" executable in the image).
SPIRE is moving away from the alpine Docker release images in favor of scratch images that contain only the release binary to minimize the size of the images and include only the software that is necessary to run in the container.
You should report the issue and, in the meantime, use
gcr.io/spiffe-io/spire-agent:1.2.3
(the last image they used).
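As a minimal sketch, the change in client-deployment.yaml would look something like the excerpt below (the deployment/container names and the sleep command follow the general shape of the guide's manifest and may differ in your copy):
# client-deployment.yaml (excerpt, names are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          # pin to the last alpine-based release so the "sleep" binary exists
          image: gcr.io/spiffe-io/spire-agent:1.2.3   # was ghcr.io/spiffe/spire-agent:1.5.1
          command: ["sleep"]
          args: ["1000000"]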

Spark Operator and jmx_exporter failing

I've just migrated k8s to 1.22, and with this version spark-operator:1.2.3 didn't work.
I followed the information available online and upgraded to 1.3.3; however, all my Spark apps are failing with the same error:
Caused by: java.io.FileNotFoundException: /etc/metrics/conf/prometheus.yaml (No such file or directory)
    at java.base/java.io.FileInputStream.open0(Native Method)
    at java.base/java.io.FileInputStream.open(FileInputStream.java:219)
    at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
    at java.base/java.io.FileReader.<init>(FileReader.java:75)
    at io.prometheus.jmx.shaded.io.prometheus.jmx.JmxCollector.<init>(JmxCollector.java:78)
    at io.prometheus.jmx.shaded.io.prometheus.jmx.JavaAgent.premain(JavaAgent.java:29)
    ... 6 more
*** java.lang.instrument ASSERTION FAILED ***: "result" with message agent load/premain call failed at ./src/java.instrument/share/native/libinstrument/JPLISAgent.c line: 422
FATAL ERROR in native method: processing of -javaagent failed, processJavaStart failed
It used to work on the previous version...
Unfortunately, I cannot downgrade k8s.
Can you please assist?
PS: there are no additional options passed to the executor, just a path to jmx_exporter_0.15.
I think your new application requires that Prometheus be running in your cluster, and it also expects to find the configuration file for the exporter at the path /etc/metrics/conf/prometheus.yaml. Such files are generally set up by creating a ConfigMap in your cluster and then mounting it into every pod that needs it.
My guess is that during the Spark upgrade a step was missed/not provided, namely installing Prometheus in your cluster before installing your Spark applications, which use that installation as a dependency. Since you are using a Prometheus exporter, it will not work if a Prometheus installation doesn't already exist.
You can try going through the installation again, checking where Prometheus comes into play, and ensuring that this configuration file is provided to your applications.
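As a rough sketch (all names here are placeholders and not taken from the spark-operator chart), providing the file the JMX exporter is looking for could be done with a ConfigMap like this:
# Hypothetical ConfigMap holding a minimal jmx_exporter configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: spark-metrics-conf   # placeholder name
data:
  prometheus.yaml: |
    lowercaseOutputName: true
    rules:
      - pattern: ".*"
It would then have to be mounted at /etc/metrics/conf in the driver and executor pod templates (a volumes/volumeMounts pair referencing spark-metrics-conf) so the file exists before the -javaagent premain runs.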

gitlab-ci cache on kubernetes with minio-service not working

I'm running gitlab with the current gitlab-runner 10.3.0 as a kubernetes deployment, with a minio server for caching. Everything is deployed using helm. The gitlab runner's helm chart is customized using this values.yml:
cache:
  cacheType: s3
  s3ServerAddress: http://wizened-tortoise-minio:9000
  s3BucketName: runners
  s3CacheInsecure: false
  cacheShared: true
  secretName: s3access
  # s3CachePath: gitlab_runner
The s3access secret is defined as a cluster secret, and the runners bucket exists on minio. The problem is that the cache is not being populated, although the build log doesn't show any issues:
Checking cache for onekey-6
Successfully extracted cache
...
Creating cache onekey-6...
.m2/repository/: found 5909 matching files
Created cache
Looking into the minio bucket, it is empty. I'm confident that the gitlab runner's s3ServerAddress is correct, as changing it produces errors in the build process (here e.g. when using https):
Checking cache for onekey-6...
WARNING: Retrying...
WARNING: Retrying...
Failed to extract cache
Creating cache onekey-6...
.m2/repository/: found 5909 matching files
Uploading cache.zip to https://wizened-tortoise-minio/runners/gitlab_runner/runner/b87d7697/project/1644/onekey-6
WARNING: Retrying...
Uploading cache.zip to https://wizened-tortoise-minio/runners/gitlab_runner/runner/b87d7697/project/1644/onekey-6
WARNING: Retrying...
Failed to create cache
Uploading cache.zip to https://wizened-tortoise-minio/runners/gitlab_runner/runner/b87d7697/project/1644/onekey-6
FATAL: Put https://wizened-tortoise-minio
I've also added echo $S3_SERVER_ADDRESS to the build and it's empty.
So: how do I need to configure gitlab-runner to use minio for caching?
Note: I'm aware of gitlab-ci cache on kubernetes with minio-service not working anymore
For the sake of completeness, the problem is with:
s3ServerAddress: http://wizened-tortoise-minio:9000
While gitlab apparently does some "presence" check that accepts the http:// prefix, it doesn't accept it when actually using the cache. Unfortunately, it seems to silently swallow the error. The working version needs:
s3ServerAddress: wizened-tortoise-minio:9000
Opened gitlab issue at https://gitlab.com/gitlab-org/gitlab-runner/issues/3539#note_103371588
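For completeness, a working cache section then looks roughly like this (bucket and secret names as above; whether s3CacheInsecure has to be true for a plain-http minio is an assumption to verify in your setup):
cache:
  cacheType: s3
  s3ServerAddress: wizened-tortoise-minio:9000   # no http:// prefix
  s3BucketName: runners
  s3CacheInsecure: true                          # assumption: minio served over plain http
  cacheShared: true
  secretName: s3access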

registering ECS tasks with codeship

I am trying to deploy an application to AWS ECS using codeship. I have my docker-compose file and everything is ready to be deployed. Codeship documentation says to do something like this in the codeship-steps.yml file:
aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
aws ecs update-service --service my-backend-service --task-definition backend
My question is: is this file file:///deploy/tasks/backend.json something I have to provide manually, or is it created automatically along with the ECS task? I keep getting this error from codeship:
An error occurred (ClientException) when calling the RunTask operation: TaskDefinition not found.
file:///deploy/tasks/backend.json is something you provide to aws ecs register-task-definition.
This command will generate the structure of the file you want:
aws ecs register-task-definition --generate-cli-skeleton
You can then redirect the output of that command into a file called backend.json and modify it as needed.
If you look at the suggested service definition:
awsdeployment:
  image: codeship/aws-deployment
  encrypted_env_file: aws-deployment.env.encrypted
  environment:
    - AWS_DEFAULT_REGION=us-east-1
  volumes:
    - ./:/deploy
You can see that ./ is mapped to the /deploy mount point. This means that if you create a directory called tasks in your repo and place your json file there, you should be all set.
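Putting it together, the corresponding codeship-steps.yml could look roughly like this (the step names are made up; the service name awsdeployment matches the definition above):
# codeship-steps.yml (sketch) -- run the aws cli inside the aws-deployment service
- name: register-task-definition
  service: awsdeployment
  command: aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
- name: update-service
  service: awsdeployment
  command: aws ecs update-service --service my-backend-service --task-definition backend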

Create VM Azure REST API in Java - The specified deployment slot Production is occupied

I got the error "The specified deployment slot Production is occupied" when I try to create a VM with the REST API.
In my XML, I have <DeploymentSlot>Production</DeploymentSlot>, but I can't find any information for resolving this issue.
Any ideas?