gitlab-ci cache on kubernetes with minio-service not working - kubernetes

I'm running GitLab with the current gitlab-runner 10.3.0 as a Kubernetes deployment, with a MinIO server for caching. Everything is deployed using Helm. The GitLab Runner Helm chart is customized using this values.yml:
cache:
  cacheType: s3
  s3ServerAddress: http://wizened-tortoise-minio:9000
  s3BucketName: runners
  s3CacheInsecure: false
  cacheShared: true
  secretName: s3access
  # s3CachePath: gitlab_runner
The s3access secret is defined as a cluster secret, and the runners bucket exists on MinIO. The problem is that the cache is not being populated, although the build log doesn't show any issues:
Checking cache for onekey-6
Successfully extracted cache
...
Creating cache onekey-6...
.m2/repository/: found 5909 matching files
Created cache
Looking into the MinIO bucket, it is empty. I'm confident that the gitlab-runner s3ServerAddress is correct, as changing it produces errors in the build process (here e.g. when using https):
Checking cache for onekey-6...
WARNING: Retrying...
WARNING: Retrying...
Failed to extract cache
Creating cache onekey-6...
.m2/repository/: found 5909 matching files
Uploading cache.zip to https://wizened-tortoise-minio/runners/gitlab_runner/runner/b87d7697/project/1644/onekey-6
WARNING: Retrying...
Uploading cache.zip to https://wizened-tortoise-minio/runners/gitlab_runner/runner/b87d7697/project/1644/onekey-6
WARNING: Retrying...
Failed to create cache
Uploading cache.zip to https://wizened-tortoise-minio/runners/gitlab_runner/runner/b87d7697/project/1644/onekey-6
FATAL: Put https://wizened-tortoise-minio
I've also added echo $S3_SERVER_ADDRESS to the build and it's empty.
So: how do I need to configure gitlab-runner to use minio for caching?
Note: I'm aware of gitlab-ci cache on kubernetes with minio-service not working anymore

For the sake of completeness, the problem is with:
s3ServerAddress: http://wizened-tortoise-minio:9000
While GitLab apparently does some "presence" check that accepts the http:// prefix, it doesn't accept it when actually cloning the cache. Unfortunately it seems to silently swallow that error. The working version needs:
s3ServerAddress: wizened-tortoise-minio:9000
Opened a GitLab issue at https://gitlab.com/gitlab-org/gitlab-runner/issues/3539#note_103371588
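For reference, the working cache block in values.yml is then the one from the question with only the scheme dropped:
cache:
  cacheType: s3
  s3ServerAddress: wizened-tortoise-minio:9000   # no http:// prefix
  s3BucketName: runners
  s3CacheInsecure: false
  cacheShared: true
  secretName: s3access
  # s3CachePath: gitlab_runner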

Related

CreateContainerError with microk8s, ghcr.io image

The error message is CreateContainerError:
Error: failed to create containerd container: error unpacking image: failed to extract layer sha256:b9b5285004b8a3: failed to get stream processor for application/vnd.in-toto+json: no processor for media-type: unknown
The image pull was successful with the token I supplied (jmtoken).
I am testing on an AWS EC2 t2.medium; the Docker image was tested on my local machine.
Has anybody experienced this issue? How did you solve it?
deployment yaml file
I found a bug in my YAML file.
I had supplied both a command in the Kubernetes manifest and a CMD in the Dockerfile. The command in the manifest overrides the image's CMD, so the actual command from the Dockerfile never runs, which caused side effects including this issue.
Another tip: adding a sleep 3000 command in the Kubernetes manifest sometimes resolves other issues, such as crashes.
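A minimal sketch of the kind of conflict described above; the deployment name, image, and commands are hypothetical placeholders:
# Dockerfile:
#   CMD ["node", "server.js"]      # the command that is supposed to run
#
# deployment.yaml: specifying command here overrides the image's CMD,
# so server.js never starts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: ghcr.io/example/app:latest
          # Removing this line lets the Dockerfile's CMD run as intended;
          # something like sleep is only useful for keeping the pod alive while debugging.
          command: ["sleep", "3000"]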

Concourse Tutorial: create resource config: base resource type not found: docker-image

I was following along with the Concourse tutorial from https://concoursetutorial.com/basics/task-hello-world/ after setting up Concourse version 7.1 using docker-compose up -d. I tried a few different hello world examples, but all of them failed with the same error message.
Command :
fly -t tutorial execute -c task_hello_world.yml
Output :
executing build 7 at http://localhost:8080/builds/7
initializing
create resource config: base resource type not found: docker-image
create resource config: base resource type not found: docker-image
errored
I am new to Concourse and unable to understand the cause or how to fix it. I am on Debian (5.10 kernel) with Docker version 20.10.4.
The key to understanding what is going on is in the error message:
create resource config: base resource type not found: docker-image
                        ^^^^
A "base" resource type is a resource embedded in the Concourse worker, so that the task needing it doesn't need to download the corresponding image.
Examples of base resource types still embedded in Concourse workers of the 7.x series are git and s3.
The Concourse tutorial you are following is outdated and has been written for a version of Concourse that embedded the docker-image resource type.
Since you are following the examples in the tutorial with a new Concourse, you get this (confusing) error.
The fix is easy: in the pipeline or task configuration, replace docker-image with registry-image. See https://github.com/concourse/registry-image-resource.
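For example, a hello-world task in the spirit of the tutorial, with registry-image swapped in (the busybox image and echo command are only illustrative):
# task_hello_world.yml
platform: linux

image_resource:
  type: registry-image      # was: docker-image
  source:
    repository: busybox

run:
  path: echo
  args: ["hello world"]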
I take this opportunity also to mention a project of mine, marco-m/concourse-in-a-box: an all-in-one Concourse CI/CD system based on Docker Compose, with MinIO S3-compatible storage and HashiCorp Vault as secret manager. It makes it possible to learn Concourse pipelines from scratch in a simple and complete environment.

How to sync user directory on bitbucket server to jira with both running on aks?

When trying to sync the user directories of Jira to other Atlassian products (Confluence and Bitbucket Server running on AKS), a 403 error is returned.
While looking into this error, the following steps have been attempted:
https://confluence.atlassian.com/stashkb/unable-to-connect-to-jira-for-authentication-forbidden-403-323391874.html
The IP addresses have been added to the Jira whitelist. The next step in the solutions found online is to restart the Jira service.
This, however, causes issues: after running the stop/start-jira.sh scripts inside the pod, the service comes back with none of the previous settings, and all configuration including backups is gone, taking us back to square one.
Cluster size / current set-up:
3 x Standard D8 v3 (8 vcpus, 32 GiB memory) cluster on AKS
Used the following images, installed through the UI:
atlassian/jira-software
cptactionhank/docker-atlassian-jira
To reproduce: exec into the pod, go to /opt/atlassian/jira/bin
and run ./(start/stop)-jira.sh
What then happens is that, when going back to the URL, the Jira instance has been reset and all configuration files in the pod for the service are lost.
The pod logs commonly show error code 137 when restarting.
Update: the following Helm chart has also been used and achieved the same result:
https://github.com/int128/devops-kompose/tree/master/atlassian-jira-software
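For illustration only, a minimal sketch of mounting a persistent volume at the Jira home directory so configuration would survive pod restarts; that ephemeral pod storage is the cause here is an assumption, the claim name and size are placeholders, and the mount path assumes the default home of the atlassian/jira-software image:
# jira-home-pvc.yaml (hypothetical names)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jira-home
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
# relevant part of the Jira pod spec
containers:
  - name: jira
    image: atlassian/jira-software
    volumeMounts:
      - name: jira-home
        mountPath: /var/atlassian/application-data/jira   # assumed default Jira home
volumes:
  - name: jira-home
    persistentVolumeClaim:
      claimName: jira-home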

Google cloud build: No space left on device

I have been using Google's Cloud Build for building my artifacts/Docker images for deployment, but I am suddenly getting the following error when submitting a build:
Creating temporary tarball archive of 1103 file(s) totalling 99.5 MiB before compression.
ERROR: gcloud crashed (IOError): [Errno 28] No space left on device
I have increased diskSizeGB as well, but I am still getting this error. Where does the build happen in the cloud, and on which VM? How do I get rid of this error?
Cloud Build is a service. While its builds run on GCE VMs, these are VMs managed by the service and opaque to you; you cannot access the build service's resources directly.
What value did you try for diskSizeGB?
Please update your question to include the (salient parts of your) cloudbuild.yaml and the gcloud command that you're using to submit the job.
I'm wondering whether the error corresponds to a lack of space locally (on your host) rather than on the service's VM.
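For reference, a minimal sketch of where diskSizeGB lives and how the build is submitted; note that the "Creating temporary tarball archive" step runs on the machine where gcloud is invoked, so diskSizeGB (which sizes the build VM's disk) would not help if the local disk is full. The build step below is just a placeholder:
# cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
options:
  diskSizeGB: 200   # sizes the Cloud Build VM's disk, not the local machine

# submitted with:
#   gcloud builds submit --config cloudbuild.yaml .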

registering ECS tasks with codeship

I am trying to deploy an application to AWS ECS using codeship. I have my docker-compose file and everything is ready to be deployed. Codeship documentation says to do something like this in the codeship-steps.yml file:
aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
aws ecs update-service --service my-backend-service --task-definition backend
My question is: is this file, file:///deploy/tasks/backend.json, something I have to provide manually, or is it created automatically along with the ECS task? I keep getting this error from Codeship:
An error occurred (ClientException) when calling the RunTask operation: TaskDefinition not found.
file:///deploy/tasks/backend.json is something you provide to aws ecs register-task-definition.
Looked it up here; this will generate the structure of the file you want:
aws ecs register-task-definition --generate-cli-skeleton
You can then redirect the output of that command into a file, backend.json for instance, and modify it from there.
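A stripped-down backend.json could then look roughly like this; the image URI, names, and sizes below are placeholders, not values from the question:
{
  "family": "backend",
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/backend:latest",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 8080
        }
      ]
    }
  ]
}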
If you look at the suggested service definition:
awsdeployment:
  image: codeship/aws-deployment
  encrypted_env_file: aws-deployment.env.encrypted
  environment:
    - AWS_DEFAULT_REGION=us-east-1
  volumes:
    - ./:/deploy
You can see that ./ is mapped to the /deploy mount point. This means that if, in your repo, you create a directory called tasks and place your JSON file there, you should be all set.
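Putting it together, the codeship-steps.yml entries from the question would then run against that service, along these lines (the step names are arbitrary):
- name: register_task_definition
  service: awsdeployment
  command: aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
- name: update_service
  service: awsdeployment
  command: aws ecs update-service --service my-backend-service --task-definition backend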