Registering ECS tasks with Codeship - amazon-ecs

I am trying to deploy an application to AWS ECS using codeship. I have my docker-compose file and everything is ready to be deployed. Codeship documentation says to do something like this in the codeship-steps.yml file:
aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
aws ecs update-service --service my-backend-service --task-definition backend
My question is: is this file, file:///deploy/tasks/backend.json, something I have to provide manually, or is it created automatically along with the ECS task? I ask because I keep getting this error from Codeship:
An error occurred (ClientException) when calling the RunTask operation: TaskDefinition not found.

file:///deploy/tasks/backend.json is something you provide to aws ecs register-task-definition.
If you look it up in the AWS CLI reference, this command will generate the structure of the file you want:
aws ecs register-task-definition --generate-cli-skeleton
You can then redirect the output of that command into a file, for instance backend.json, and modify it to describe your task.
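For reference, a minimal backend.json accepted by register-task-definition could look roughly like this (the family name, image URI, memory, and ports here are illustrative placeholders, not values from the question):
{
  "family": "backend",
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/backend:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 8080 }
      ]
    }
  ]
}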
If you look at the suggested service definition:
awsdeployment:
  image: codeship/aws-deployment
  encrypted_env_file: aws-deployment.env.encrypted
  environment:
    - AWS_DEFAULT_REGION=us-east-1
  volumes:
    - ./:/deploy
You can see that ./ is mapped to the /deploy mount point. This means that if you create a directory called tasks in your repo and place your JSON file there, you should be all set.
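Putting it together, a sketch of the deployment steps in codeship-steps.yml might look like this (the step name is illustrative; the commands are the ones from the documentation):
- name: deploy_backend
  type: serial
  steps:
    - service: awsdeployment
      command: aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
    - service: awsdeployment
      command: aws ecs update-service --service my-backend-service --task-definition backend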

Related

Deregister oldest task definition in ecs with aws cli

I have to remove old task definitions using the AWS CLI. Does anyone have an idea, or any useful links that I can follow?
From the console we can do this manually, but is there any automation we can set up here?
e.g. aws ecs deregister-task-definition --task-definition curler:1 ?
https://docs.aws.amazon.com/cli/latest/reference/ecs/deregister-task-definition.html
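As a rough sketch of such automation (using the curler family from your example; adjust the family prefix and make sure you are not deregistering a revision your service still uses), you could list the revisions oldest-first and deregister the first one:
# fetch the ARN of the oldest ACTIVE revision of the family
OLDEST=$(aws ecs list-task-definitions --family-prefix curler --status ACTIVE \
  --sort ASC --max-items 1 --query 'taskDefinitionArns[0]' --output text)
aws ecs deregister-task-definition --task-definition "$OLDEST"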

Concourse Tutorial: create resource config: base resource type not found: docker-image

I was following along with the Concourse tutorial at https://concoursetutorial.com/basics/task-hello-world/ after setting up Concourse 7.1 using docker-compose up -d. I tried a few different hello world examples, but all of them failed with the same error message.
Command :
fly -t tutorial execute -c task_hello_world.yml
Output :
executing build 7 at http://localhost:8080/builds/7
initializing
create resource config: base resource type not found: docker-image
create resource config: base resource type not found: docker-image
errored
I am new to Concourse and unable to understand the cause or how to fix it. I am on Debian (5.10 kernel) with Docker version 20.10.4.
The key to understanding what is going on is in the error message:
create resource config: base resource type not found: docker-image
                        ^^^^
A "base" resource type is a resource embedded in the Concourse worker, so that the task needing it doesn't need to download the corresponding image.
Examples of base resource types still embedded in Concourse workers of the 7.x series are git and s3.
The Concourse tutorial you are following is outdated and has been written for a version of Concourse that embedded the docker-image resource type.
Since you are following the examples in the tutorial with a new Concourse, you get this (confusing) error.
The fix is easy: in the pipeline, replace docker-image with registry-image. See https://github.com/concourse/registry-image-resource.
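For example, a hello-world task config using registry-image might look like this (the busybox repository is just an illustrative image):
platform: linux

image_resource:
  type: registry-image   # the outdated tutorial uses: docker-image
  source:
    repository: busybox

run:
  path: echo
  args: ["hello world"]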
I also take this opportunity to mention a project of mine, marco-m/concourse-in-a-box: an all-in-one Concourse CI/CD system based on Docker Compose, with Minio S3-compatible storage and HashiCorp Vault as secret manager. It makes it possible to learn Concourse pipelines from scratch in a simple and complete environment.

How to force a Fargate service to be launched only from AWS Lambda

I've created a simple task to print a hello world. I've created an ECR image, a docker-compose file, and an ecs-params.yml.
I get the CloudWatch log for the print, but the task keeps relaunching every minute, which I guess is due to the REPLICA service type.
How can I stop this from happening? I want to launch this Fargate task ONLY from a Lambda, and when it finishes I don't want it to be relaunched.
Thanks in advance
If you want a one-shot / one-off / standalone task to be launched by ECS and have it run until it finishes, you wouldn't use an ECS service definition but merely a task.
You can run tasks on their own without packaging them as an ECS service.
See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_run_task.html
If you are using the ECS CLI, there is also ecs-cli compose create. So you would use that call, and not the one that also creates an ECS service along with it.
You can then use AWS Lambda and send an ecs:RunTask AWS API call to invoke/start the ECS task.
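As a sketch (the cluster, task definition, and subnet names below are placeholders, not values from the question), a Python Lambda handler that starts the one-off Fargate task could look like this:
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    # start a single one-off Fargate task; ECS will not restart it when it exits
    ecs.run_task(
        cluster="my-cluster",
        taskDefinition="hello-world",        # family or family:revision
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
    )
The Lambda's execution role needs permission for ecs:RunTask (and iam:PassRole for the task's execution/task roles).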

Spring Cloud DataFlow server running locally pointing to Skipper in Kubernetes

I am working on a Spring Cloud Data Flow stream app. I am able to run the Spring Cloud Data Flow server locally with Skipper running in Cloud Foundry, using the configuration below. Now I am trying to do the same with Skipper running in a Kubernetes cluster. How can I specify that?
manifest.yml
---
applications:
- name: poc-scdf-server
  memory: 1G
  instances: 1
  path: ../target/scdf-server-1.0.0-SNAPSHOT.jar
  buildpacks:
    - java_buildpack
  env:
    JAVA_VERSION: 1.8.0_+
    JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '{enabled: false}'
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_URL:
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_ORG: <org>
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_SPACE: <space>
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_DOMAIN: <url>
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_USERNAME: <user>
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_PASSWORD: <pwd>
    SPRING_CLOUD_DATAFLOW_TASK_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[default]_CONNECTION_SKIPSSLVALIDATION: true
    SPRING_CLOUD_SKIPPER_CLIENT_SERVER_URI: <skipper_url>
    SPRING_CLOUD_GAIA_SERVICES_ENV_KEY_PREFIX: spring.cloud.dataflow.task.platform.cloudfoundry.accounts[default].connection.
In v2.3, we have recently added the platform-specific docker-compose.yml experience for the Local mode. You can find the new files here.
With this infrastructure, you could start SCDF locally, but also bring the platform accounts for CF, K8s, or even both! See docs.
You can also use the DockerComposeIT.java to bring things up and running with automation as well.
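If you just want to point a locally started server at a remote Skipper and a Kubernetes task platform, a rough sketch would look along these lines (property names follow the documented SCDF conventions; the version, Skipper host, and namespace are placeholders I'm assuming):
# start SCDF locally, pointing at Skipper running in the Kubernetes cluster
java -jar spring-cloud-dataflow-server-2.x.x.jar \
  --spring.cloud.skipper.client.serverUri=http://<skipper-host>:7577/api \
  "--spring.cloud.dataflow.task.platform.kubernetes.accounts[default].namespace=default"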

gitlab-ci cache on kubernetes with minio-service not working

I'm running GitLab with the current gitlab-runner 10.3.0 as a Kubernetes deployment, with a Minio server for caching. Everything is deployed using Helm. The GitLab runner's Helm chart is customized using this values.yml:
cache:
  cacheType: s3
  s3ServerAddress: http://wizened-tortoise-minio:9000
  s3BucketName: runners
  s3CacheInsecure: false
  cacheShared: true
  secretName: s3access
  # s3CachePath: gitlab_runner
The s3access secret is defined as a cluster secret, and the runners bucket exists on Minio. The problem is that the cache is not being populated, although the build log doesn't show any issues:
Checking cache for onekey-6
Successfully extracted cache
...
Creating cache onekey-6...
.m2/repository/: found 5909 matching files
Created cache
Looking into the Minio bucket, it is empty. I'm confident that the GitLab runner's s3ServerAddress is correct, as changing it produces errors in the build process (here e.g. when using https):
Checking cache for onekey-6...
WARNING: Retrying...
WARNING: Retrying...
Failed to extract cache
Creating cache onekey-6...
.m2/repository/: found 5909 matching files
Uploading cache.zip to https://wizened-tortoise-minio/runners/gitlab_runner/runner/b87d7697/project/1644/onekey-6
WARNING: Retrying...
Uploading cache.zip to https://wizened-tortoise-minio/runners/gitlab_runner/runner/b87d7697/project/1644/onekey-6
WARNING: Retrying...
Failed to create cache
Uploading cache.zip to https://wizened-tortoise-minio/runners/gitlab_runner/runner/b87d7697/project/1644/onekey-6
FATAL: Put https://wizened-tortoise-minio
I've also added echo $S3_SERVER_ADDRESS to the build and it's empty.
So: how do I need to configure gitlab-runner to use minio for caching?
Note: I'm aware of "gitlab-ci cache on kubernetes with minio-service not working anymore".
For the sake of completeness, the problem is with:
s3ServerAddress: http://wizened-tortoise-minio:9000
While GitLab apparently does some "presence" check that accepts the http:// prefix, it doesn't accept it when actually using the cache. Unfortunately it seems to silently swallow the error. The working version needs:
s3ServerAddress: wizened-tortoise-minio:9000
Opened gitlab issue at https://gitlab.com/gitlab-org/gitlab-runner/issues/3539#note_103371588
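For reference, the cache section of the values.yml from above then reads, with only the scheme removed:
cache:
  cacheType: s3
  s3ServerAddress: wizened-tortoise-minio:9000
  s3BucketName: runners
  s3CacheInsecure: false
  cacheShared: true
  secretName: s3access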