Using AWS CLI, how to get a list of task IDs per ECS service - amazon-ecs

Using the AWS CLI, how do I get the list of task IDs per ECS service? When I use describe-services it does not list task ID details, only the count of the number of tasks.

You would use the aws ecs list-tasks --cluster <cluster-name> --service-name <service-name> command to get a list of tasks for a specific service.
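For example, assuming a cluster named my-cluster and a service named my-service (placeholder names), a rough sketch that returns just the task IDs by stripping the ARN prefix:
aws ecs list-tasks --cluster my-cluster --service-name my-service \
  --query 'taskArns[*]' --output text | tr '\t' '\n' | awk -F/ '{print $NF}'
list-tasks returns full task ARNs; the awk step keeps only the last path segment, which is the task ID.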

Related

gcloud scheduler create jobs when they already exist

I create my Cloud Scheduler job on the command line with
gcloud scheduler jobs create
but when I redeploy via my GitLab CI, I get an "already exists" error.
Is it possible to overwrite the job directly in my GitLab CI if it already exists?
Suppose you create a Cloud Scheduler job with the following attribute values:
gcloud scheduler jobs create JOB --location=LOCATION
with JOB = my-job and LOCATION = us-west1, i.e.:
gcloud scheduler jobs create my-job --location=us-west1
To verify whether the job already exists, you may use the gcloud scheduler jobs describe JOB command in the gcloud CLI, e.g. https://cloud.google.com/sdk/gcloud/reference/scheduler/jobs/describe
gcloud scheduler jobs describe my-job --location=us-west1
If it does already exist, there is no direct way of "overwriting" the existing job. What you can do is
either delete the previous job and re-create it from scratch, e.g.
gcloud scheduler jobs delete my-job
gcloud scheduler jobs create my-job
or modify the existing job; for instance, when you deploy a new version of a service to App Engine, you can simply reflect this on your existing Cloud Scheduler job without needing to re-create it entirely:
gcloud scheduler jobs update app-engine my-job --version=VERSION
For more information, please refer to the official Cloud SDK documentation for Cloud Scheduler: https://cloud.google.com/sdk/gcloud/reference/scheduler
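If you want the GitLab CI step to be idempotent, a rough sketch (using a hypothetical HTTP-target job named my-job in us-west1; adjust the target type and flags to your actual job) is to describe first and branch between update and create:
if gcloud scheduler jobs describe my-job --location=us-west1 >/dev/null 2>&1; then
  gcloud scheduler jobs update http my-job --location=us-west1 \
    --schedule="*/10 * * * *" --uri="https://example.com/task"
else
  gcloud scheduler jobs create http my-job --location=us-west1 \
    --schedule="*/10 * * * *" --uri="https://example.com/task"
fi
Note that create (and update) take a target type such as http, pubsub, or app-engine together with that target's flags.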

limit job create count for a pod (service account? RBAC? OPA?)

I want to limit the number of Job creations in my Kubernetes namespace per specific service account. The Jobs are created by another pod using this service account. Do you think this is possible?
My Kubernetes version: v1.21.11
I tried to do it with a ResourceQuota resource, but I can't apply it to a particular service account.

Jenkins cron job to run selenium & k8s

I am working on a project in which I have created a k8s cluster to run Selenium Grid locally. I want to schedule the tests to run, and so far I have tried to create a Jenkins cron job to do so. For that I am using the Kubernetes plugin in Jenkins.
However, I am not sure about the steps to follow. Where should I be uploading the kubeconfig file? There are a few options here:
Build Environment in Jenkins
Any ideas or suggestions?
Thanks
Typically, you can choose any option, depending on how you want to manage the system, I believe:
The secret text or file option will allow you to copy/paste a secret (with a token) into Jenkins, which will be used to access the k8s cluster. Token-based access works by adding an HTTP header to your requests to the k8s API server as follows: Authorization: Bearer $YOUR_TOKEN. This authenticates you to the server. This is the programmatic way to access the k8s API (see the curl sketch after this list).
The configure kubectl option will allow you to specify the config file within the Jenkins UI, where you can set the kubeconfig. This is the imperative/scripted way of configuring access to the k8s API. The kubeconfig itself contains a set of key-pair-based credentials that are issued to a username and signed by the API server's CA.
Either way would work fine! Hope this helps!
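As a rough illustration of the token-based approach (the API server address, CA certificate path, and namespace are placeholders):
curl -sS --cacert /path/to/ca.crt \
  -H "Authorization: Bearer $YOUR_TOKEN" \
  https://<api-server>:6443/api/v1/namespaces/default/pods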
If Jenkins is running in Kubernetes as well, I'd create a service account, create the necessary Role and RoleBinding that only allow creating CronJobs, and attach the service account to your Jenkins Deployment or StatefulSet. Then you can use the service account's token (mounted by default under /var/run/secrets/kubernetes.io/serviceaccount/token) and query your API endpoint to create your CronJobs.
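A minimal sketch of that RBAC setup, assuming a namespace called ci and a service account called jenkins (both placeholder names):
kubectl create serviceaccount jenkins -n ci
kubectl create role cronjob-creator -n ci --verb=create,get,list --resource=cronjobs
kubectl create rolebinding jenkins-cronjob-creator -n ci \
  --role=cronjob-creator --serviceaccount=ci:jenkins
You would then set serviceAccountName: jenkins in the Jenkins pod spec so that this service account's token is the one mounted into the pod.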
However, if Jenkins is running outside of your Kubernetes cluster, I'd authenticate against your cloud provider in Jenkins using one of the available plugins, with:
Service account (GCP)
Service principal (Azure)
AWS access and secret keys, or an instance profile (AWS)
and then run one of the following CLI commands to generate a kubeconfig file:
gcloud container clusters get-credentials
az aks get-credentials
aws eks update-kubeconfig
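For example, with placeholder cluster, region, and resource-group names:
gcloud container clusters get-credentials my-cluster --region us-central1
az aks get-credentials --resource-group my-rg --name my-cluster
aws eks update-kubeconfig --name my-cluster --region us-east-1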

Access AWS Tags inside ECS docker containers

I am creating an ECS service with a few resource tags through a CloudFormation template. Once the service is up and running, is there a way I can access these AWS tags from within the container?
I was wondering if there is a way to make them available as environment variables in the container.
Run one of the following commands from within the container:
The list-tags-for-resource CLI command with the task ARN:
aws ecs list-tags-for-resource --resource-arn arn:aws:ecs:<region>:<Account_Number>:task/test/186de825c8EXAMPLE10bf1c3bb142
The list-tags-for-resource CLI command with the service ARN:
aws ecs list-tags-for-resource --resource-arn arn:aws:ecs:<region>:<Account_Number>:service/test/service
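If you want to do this from inside the task without hard-coding the ARN, a rough sketch (assuming the task metadata endpoint v4 is available, jq is installed in the image, and the task role allows ecs:ListTagsForResource):
TASK_ARN=$(curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task" | jq -r '.TaskARN')
aws ecs list-tags-for-resource --resource-arn "$TASK_ARN" --query 'tags' --output json
You could then parse the key/value pairs and export them as environment variables in an entrypoint script before starting your application.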

AWS ECS scheduled task with CloudWatch

I am trying to create a scheduled task with CloudWatch.
I am using this page:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-events-rule-target.html
The problem I see is that when I run a task normally, AWS asks for:
VPC
Subnets
Launch type
BUT when I use a CloudWatch target, it doesn't ask for VPC, subnets, etc. Why is that?
CloudFormation has not been updated to accommodate some Fargate functionality yet. If you get an error while trying to deploy an ECS task from CloudFormation,
try using the command line interface (aws events put-targets) instead, which allows you to add a target that contains the required ECS parameters for launch type and network configuration.
Here is an example of how I configured my ECS tasks to be deployed from the CLI instead of CloudFormation:
1. Add the VPC/subnet config to a variable, NETWORK_CONFIGURATION:
NETWORK_CONFIGURATION='{"awsvpcConfiguration":{"AssignPublicIp":"ENABLED","SecurityGroups": \["'${AWS_NETWORKCONFIG_SECURITY_GROUP}'"],"Subnets":["'${AWS_NETWORKCONFIG_SUBNET}'"]}}'
2. Run the following command to deploy your task, which will take the VPC config from the variable declared above:
aws events put-targets \
--rule events-rule--${TASK_NAME} \
--targets '[{"Arn":"arn:aws:ecs:'${AWS_REGION}':'${AWS_ACCOUNT_ID}':cluster/ecs-cluster-1","EcsParameters":{"LaunchType":"FARGATE","NetworkConfiguration":'${NETWORK_CONFIGURATION}',"TaskCount":1,"TaskDefinitionArn":"arn:aws:ecs:'${AWS_REGION}':'${AWS_ACCOUNT_ID}':task-definition/ecs-task-'${TASK_NAME}'"},"Id":"ecs-targets-'${TASK_NAME}'","RoleArn":"arn:aws:iam::'${AWS_ACCOUNT_ID}':role/ecsEventsRole"}]'
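Note that the rule referenced above has to exist before targets can be attached to it; a minimal sketch, assuming a rate-based schedule:
aws events put-rule \
--name events-rule--${TASK_NAME} \
--schedule-expression "rate(1 hour)"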