Execute AWS ECS run-task with network configuration overrides

I'm running a task in AWS ECS using the CLI command run-task.
I'm successfully running a task as follows:
aws ecs run-task --cluster ${stackName}-cluster \
--task-definition ${stackName}-${tag} \
--launch-type="FARGATE" \
--network-configuration '{ "awsvpcConfiguration": { "assignPublicIp":"DISABLED", "securityGroups": ["sg-......"], "subnets": ["subnet-.....","subnet-.....","subnet-......"]}}' \
--count 1 \
--profile ${profile} \
--overrides file://overrides.json
The way I understand it, if you're using FARGATE you must have NetworkMode: awsvpc in your TaskDefinition, and you need to specify the awsvpcConfiguration every time you run a task. This is all fine.
However, to make the above invocation tidier, is there a way to pass the --network-configuration above as an override? The documentation says you can pass environment variables, but it's not clear if this includes the network configuration.
I would be very grateful to anybody who could shed some light on this.

No, you can't do that. Here's the full list of things you can specify in ECS Task Overrides. Network configuration is not in that list.
The documentation says you can pass environment variables, but it's not clear if this includes network.
The network configuration is not an environment variable.
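For reference, an overrides file can only touch container-level settings (environment, command, cpu/memory) and the task/execution roles. A minimal sketch of what overrides.json can legally contain; the container name and values here are hypothetical:
{
  "containerOverrides": [
    {
      "name": "my-container",
      "command": ["python", "run.py"],
      "environment": [
        { "name": "STACK_NAME", "value": "dev" }
      ]
    }
  ]
}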
If you just want to simplify the command line by passing in more arguments from a file, you can use the --cli-input-json or --cli-input-yaml arguments.
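For example, you can have the CLI generate a full parameter skeleton once, fill in the cluster, network configuration, and overrides there, and then run the task from the file (a sketch; run-task.json is just a name chosen here):
# generate a template of all run-task parameters once
aws ecs run-task --generate-cli-skeleton > run-task.json
# edit run-task.json, then run the task from the file
aws ecs run-task --cli-input-json file://run-task.json --profile ${profile}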

Related

Running bash commands in CFN template

I have completed the first few steps as mentioned in this article.
https://aws.amazon.com/blogs/mt/running-bash-commands-in-aws-cloudformation-templates/
But I am getting an error at this:
aws cloudformation deploy \
--stack-name comandrunner-test-iops \
--template-file ./examples/commandrunner-example-iopscalc-template.yaml
The following resource(s) failed to create: [IopsCalculator]. Rollback
requested by user.
How do I know why the stack is not successfully created in this case?
I checked the documentation for this command:
https://docs.aws.amazon.com/cli/latest/reference/cloudformation/deploy/index.html
There is nothing really helpful there, and nothing in the parent command either.
The best option is to look at the CloudFormation console. Sometimes CloudFormation's CLI output doesn't help much with this type of error.
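If you prefer to stay on the command line, the stack events usually contain the underlying failure reason; for example, filtering for the failed resources:
aws cloudformation describe-stack-events \
  --stack-name comandrunner-test-iops \
  --query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" \
  --output table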

AWS CLI 2: can't update-service using the CLI

I have a cluster on ECS and everything works well! When I used AWS CLI v1, I could update my service with a command like this: aws ecs update-service --cluster [cluster-name] --service [service-name] --task-definition [task-name] --force-new-deployment. After updating the CLI to v2, I try to use this command and everything just gets stuck! I didn't find any changes in the AWS documentation. Do you have any ideas?
Update (my screenshot): the problem is that everything starts well, without errors or warnings; it just gets stuck!
The AWS CLI version 2 uses a client-side pager. As stated in the documentation, you can "use the --no-cli-pager command line option to disable the pager for a single command use" (or use any of the other options described there).
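For example, either of these avoids the apparent hang (cluster/service names are placeholders):
aws ecs update-service --cluster my-cluster --service my-service \
  --task-definition my-task --force-new-deployment --no-cli-pager
# or disable the pager for the whole shell session:
export AWS_PAGER=""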
It did work. What you are seeing is the documented output, shown in a pager. You can scroll through the output using the up and down arrow keys or hit the Q key to dismiss it.
Alternately, you can redirect the output to a file for review later:
aws ecs update-service ... > result.json
Or just ignore the output entirely:
aws ecs update-service ... > /dev/null

GCP Dataproc: Directly working with Spark over Yarn Cluster

I'm trying to minimize changes in my code so I'm wondering if there is a way to submit a spark-streaming job from my personal PC/VM as follows:
spark-submit --class path.to.your.Class --master yarn --deploy-mode client \
[options] <app jar> [app options]
without using GCP SDK.
I also have to specify a directory with configuration files (HADOOP_CONF_DIR), which I was able to download from Ambari.
Is there a way to do the same?
Thank you
Setting up an external machine as a YARN client node is generally difficult to do and not a workflow that will work easily with Dataproc.
In a comment you mention that what you really want to do is
Submit a Spark job to the Dataproc cluster.
Run a local script on each "batchFinish" (StreamingListener.onBatchCompleted?).
The script has dependencies that mean it cannot run inside of the Dataproc master node.
Again, configuring a client node outside of the Dataproc cluster and getting it to work with spark-submit is not going to work directly. However, you can configure your network such that the Spark driver (running within Dataproc) has access to the service/script you need to run, and invoke it when desired.
If you run your service on a VM that has access to the network of the Dataproc cluster, then your Spark driver should be able to access the service.
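For the job-submission half, the supported route from a personal machine is Dataproc's own submission API rather than spark-submit against YARN, even though that does mean using Google's tooling after all; a sketch with placeholder cluster, region, and jar names:
gcloud dataproc jobs submit spark \
  --cluster=my-cluster \
  --region=us-central1 \
  --class=path.to.your.Class \
  --jars=gs://my-bucket/app.jar \
  -- [app options]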

How to schedule ECS tasks on AWS Fargate

I have created a Task Definition on Elastic Container Service and have successfully run it in a Fargate cluster. However, when I create a Scheduled Task in said cluster, the option for "Launch Type" is hardcoded to EC2. Is there a way, perhaps through the command line, to schedule the task to run on Fargate?
Heads up! This is now supported in AWS:
https://aws.amazon.com/about-aws/whats-new/2018/08/aws-fargate-now-supports-time-and-event-based-task-scheduling/
Although not in all regions: as of April 2019 it still wasn't supported in eu-west-2 (London). Check the table at the top of this page to see if it's supported in the region you want: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduled_tasks.html
There seems to be no way of scheduling a task on FARGATE.
The only way it can be done right now seems to be by having your 'scheduler' external to ECS. I did it with a Lambda. You can also use something like Jenkins or a simple cron job that fires the aws-cli command at ECS, though in both of those cases you will need an instance that is always running.
I wrote a Lambda that accepts the params (overrides) to be sent to the ECS task and has the schedule the task was supposed to have; the cron equivalent is sketched below.
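The cron variant is just a crontab entry that fires run-task (a sketch; every ID here is a placeholder, and the Lambda does the equivalent through the SDK):
# launch the Fargate task every weekday at 09:00
0 9 * * 1-5 aws ecs run-task --cluster my-cluster --launch-type FARGATE --task-definition my-task --network-configuration 'awsvpcConfiguration={subnets=[subnet-12345],securityGroups=[sg-12345],assignPublicIp=DISABLED}'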
Update:
It seems there is a Schedule tab in the Fargate cluster details now that will allow you to set cron schedules on ECS tasks.
While the AWS documentation gives you ways to do this through CloudFormation, it seems like they had not actually released this feature yet. I have been trying to do something similar and ran into the same issue.
Once it does become available, this link from the aws docs should be useful. Here's how they suggest doing it, but I keep running into errors saying NetworkConfiguration is not recognized and LaunchType is not recognized.
"EcsParameters": {
"Group": "string",
"LaunchType": "string",
"NetworkConfiguration": {
"awsvpcConfiguration": {
"AssignPublicIp": "string",
"SecurityGroups": [ "string" ],
"Subnets": [ "string" ]
}
},
Update: Here is an alternative that did end up working for me through the aws events put-targets command on the aws cli!
Make sure your AWS CLI is up to date; this method fails for older versions of the CLI. Run this to update: pip install awscli --upgrade --user
After that, you should be good to go. Use the aws events put-targets --rule <value> --targets <value> command. Make sure that before you run this command you have a rule already defined; if not, you can do that with the aws events put-rule command. Refer to the AWS docs for put-rule and for put-targets.
An example of a rule from the documentation is given below:
aws events put-rule --name "DailyLambdaFunction" --schedule-expression "cron(0 9 * * ? *)"
The put-targets command that worked for me is this:
aws events put-targets --rule cli-RS-rule --targets '{
  "Arn": "arn:aws:ecs:1234/cluster/clustername",
  "EcsParameters": {
    "LaunchType": "FARGATE",
    "NetworkConfiguration": {
      "awsvpcConfiguration": {
        "AssignPublicIp": "ENABLED",
        "SecurityGroups": [ "sg-id1233" ],
        "Subnets": [ "subnet-1234" ]
      }
    },
    "TaskCount": 1,
    "TaskDefinitionArn": "arn:aws:ecs:1234:task-definition/taskdef"
  },
  "Id": "sampleID111",
  "RoleArn": "arn:aws:iam:1234:role/eventrole"
}'
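To confirm the target actually got attached to the rule, a quick check:
aws events list-targets-by-rule --rule cli-RS-rule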
You can create a CloudWatch rule that uses a schedule as the event source and an ECS task as the target.
No this is not supported yet unfortunately. There is an open issue here. Hopefully it gets done soon as I would like to use it as well!
Disclosure: I work for SenseDeep, which provides PowerDown at https://www.powerdown.io
Other services provide this functionality. PowerDown gives you the ability to schedule Fargate services. This is at the service level, not the task level, but it is easy to create services for tasks. For example, you could schedule a CI/CD pipeline container to run 9-5, M-F.
It's not possible to have EC2 instances and Fargate instances in the same cluster.
It's possible to schedule a Fargate task, though. Create a dedicated service and update it from the AWS tools, e.g. (completing the example from the docs linked below):
aws ecs update-service --service my-http-service --task-definition amazon-ecs-sample
https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html
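If the goal is time-based on/off rather than a new deployment, a sketch at the service level is to toggle the desired count on whatever schedule fires the command (cluster/service names are placeholders):
# morning: bring the service up
aws ecs update-service --cluster my-cluster --service my-http-service --desired-count 1
# evening: stop all tasks in the service
aws ecs update-service --cluster my-cluster --service my-http-service --desired-count 0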
Useful resources:
You could use the ECS AWS tools and execute them from Lambda or Travis.
Check out this medium post:
https://medium.com/@joseignaciocastelli92/how-to-create-a-continuous-deployment-process-using-ecs-fargate-docker-travis-410d84b4d99e
At the bottom it links this repository, which has the AWS commands:
https://github.com/JicLotus/ecs-farate-scripts-to-deploy-and-build
Best

How to run kubernetes e2e tests?

I have a Kubernetes cluster running with one master and 2 nodes. I want to run e2e tests on this cluster. How should I run them? I tried doing go run hack/e2e.go -v --test but that command wants to create a cluster first and then run the tests, while I want to run the tests on my already-present cluster. Any idea how I should go ahead with it, or what parameters I should pass to the e2e tests?
TIA.
If what you want to do is run the conformance tests and verify your cluster, you might also consider looking into the tool that Heptio created, called Sonobuoy, which was built specifically to run the non-destructive conformance tests for Kubernetes 1.7 (or later) in a consistent fashion. Lachlan Everson posted a 6-minute YouTube video showing how to use it that I thought was pretty easy to follow, and it will get you up and running very quickly.
It's configuration driven, so you can turn on/off tests that interest you easily, and includes some plugin driven "get more data about this cluster" sort of setup if you find you want or need to dig more in specific areas.
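A minimal run against the cluster in your current kubeconfig context looks something like this (Sonobuoy CLI installed locally; flags per its docs):
# launch the conformance plugin and wait for completion
sonobuoy run --wait
# check progress (useful without --wait) and fetch the results tarball
sonobuoy status
sonobuoy retrieve ./results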
Use the conformance test, described here:
https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md#conformance-tests
An updated link can be found here: https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md Alternatively, you can now use kubetest to run e2e tests.
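With kubetest against an already-running cluster, a sketch (the skeleton provider skips cluster bring-up; run from a kubernetes/kubernetes checkout):
go get -u k8s.io/test-infra/kubetest
kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\]"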
Update: The easiest way to run e2e tests is by using Heptio's scanner
I use this command:
docker run -v $HOME/.kube/config:/kubeconfig \
--env KUBECONFIG=/kubeconfig \
k8s.gcr.io/conformance-amd64:v1.14.1 \
/usr/local/bin/ginkgo \
--focus="\[Conformance\]" \
--skip="Alpha|\[(Disruptive|Feature:[^\]]+|Flaky)\]" \
--noColor=false \
--flakeAttempts=2 \
/usr/local/bin/e2e.test -- \
--repo-root=/kubernetes \
--provider="skeleton" \
--kubeconfig="/kubeconfig" \
--allowed-not-ready-nodes=1
You can run the conformance e2e tests as described here:
https://github.com/cncf/k8s-conformance/blob/master/instructions.md
If your cluster is running 1.7.x or 1.8.x, this approach is easy.
Basically you can run
curl -L https://raw.githubusercontent.com/cncf/k8s-conformance/master/sonobuoy-conformance.yaml | kubectl apply -f -