I have completed the first few steps as mentioned in this article.
https://aws.amazon.com/blogs/mt/running-bash-commands-in-aws-cloudformation-templates/
But I am getting an error at this:
aws cloudformation deploy \
--stack-name comandrunner-test-iops \
--template-file ./examples/commandrunner-example-iopscalc-template.yaml
The following resource(s) failed to create: [IopsCalculator]. Rollback
requested by user.
How do I find out why the stack was not created successfully in this case?
I checked the documentation for this command:
https://docs.aws.amazon.com/cli/latest/reference/cloudformation/deploy/index.html
There is nothing really helpful there, and nothing in the parent command either.
The best option is to look at the CloudFormation console. Sometimes CloudFormation itself isn't very helpful with this type of error.
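If you prefer the CLI to the console, here is a minimal sketch of pulling the failure reason for the stack from the question (the stack name is taken from the deploy command above):
# Show only the events where a resource failed to create, with the reason
aws cloudformation describe-stack-events \
--stack-name comandrunner-test-iops \
--query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" \
--output table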
I'm running a task in AWS ECS using the CLI command run-task.
I'm successfully running a task as follows:
aws ecs run-task --cluster ${stackName}-cluster \
--task-definition ${stackName}-${tag} \
--launch-type="FARGATE" \
--network-configuration '{ "awsvpcConfiguration": { "assignPublicIp":"DISABLED", "securityGroups": ["sg-......"], "subnets": ["subnet-.....","subnet-.....","subnet-......"]}}' \
--count 1 \
--profile ${profile} \
--overrides file://overrides.json
The way I understand it, if you're using FARGATE you must have NetworkMode: awsvpc in your TaskDefinition, and you need to specify the awsvpcConfiguration every time you run a task. This is all fine.
However, to make the above invocation tidier, is there a way to pass the --network-configuration above as an override? The documentation says you can pass environment variables, but it's not clear if this includes network.
I would be very grateful to anybody who could shed some light on this.
No, you can't do that. Here's the full list of things you can specify in ECS Task Overrides. Network configuration is not in that list.
The documentation says you can pass environment variables, but it's not clear if this includes network.
The network configuration is not an environment variable.
If you just want to simplify the command line by passing in more arguments from a file, you can use the --cli-input-json or --cli-input-yaml arguments.
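For example (a sketch; the file name run-task.json is arbitrary), you can generate a skeleton once, fill in the cluster, task definition, network configuration and overrides, and then reuse it:
# Generate an input skeleton, edit it, then run the task from the file
aws ecs run-task --generate-cli-skeleton > run-task.json
aws ecs run-task --cli-input-json file://run-task.json --profile ${profile}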
There are Scala Spark jobs that run daily in GCP. I am trying to set up a notification to be sent when a run is completed. One way I thought of doing that was to get the logs and grep for a specific completion message (not sure if there's a better way). But I found out that the logs are only shown in the console, inside the job details page, and are not being saved to a file.
Is there a way to route these logs to a file in a bucket so that I can search in it? Do I have to specify where to send these logs in the log4j properties file, e.g. give a bucket location to
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
I tried to submit a job with this, but it gives me this error: grep:**-2022-07-08.log: No such file or directory
...
gcloud dataproc jobs submit spark \
--project $PROJECT --cluster=$CLUSTER --region=$REGION --class=***.spark.offer.Main \
--jars=gs://**.jar\
--properties=driver-memory=10G,spark.ui.filters="",spark.memory.fraction=0.6,spark.sql.files.maxPartitionBytes=5368709120,spark.memory.storageFraction=0.1,spark.driver.extraJavaOptions="-Dcq.config.name=gcp.conf",spark.executor.extraJavaOptions="-Dlog4j.configuration=log4j-executor.properties -Dcq.config.name=gcp.conf" \
--gcp.conf > gs://***-$date.log 2>&1
By default, Dataproc job driver logs are saved in GCS at the Dataproc-generated driverOutputResourceUri of the job. See this doc for more details.
But IMHO, a better way to determine if a job has finished is through gcloud dataproc jobs describe <job-id> or the jobs.get REST API.
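For example, a sketch of polling the job state with gcloud (the job ID is a placeholder):
# Prints the current state, e.g. RUNNING, DONE, ERROR or CANCELLED
gcloud dataproc jobs describe <job-id> --project=$PROJECT --region=$REGION --format="value(status.state)"
Alternatively, gcloud dataproc jobs wait <job-id> --region=$REGION blocks until the job finishes and streams the driver output, which may be simpler for a notification script.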
I have a cluster on ECS and everything works well! When I used AWS CLI v1, I could update my service with a command like this: aws ecs update-service --cluster [cluster-name] --service [service-name] --task-definition [task-name] --force-new-deployment. After updating the CLI to v2, I try to use this command and everything just gets stuck! I didn't find any changes in the AWS documentation. Do you have any ideas?
Update: as shown in my screenshot, the problem is that everything starts well, without errors or warnings; it just gets stuck!
The AWS CLI version 2 uses a client-side pager. As stated in the documentation, you can "use the --no-cli-pager command line option to disable the pager for a single command use" (or use any of the other options described there).
It worked. What you are seeing is the output as documented. You can scroll through the output using up and down arrow keys or hit the Q key to dismiss the output.
Alternately, you can redirect the output to a file for review later:
aws ecs update-service ... > result.json
Or just ignore the output entirely:
aws ecs update-service ... > /dev/null
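And, as mentioned above, the pager itself can be turned off, either per command or for the whole shell session via the AWS_PAGER environment variable:
aws ecs update-service ... --no-cli-pager
# or, for the whole session:
export AWS_PAGER=""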
I am running CloudFormation updates to ECS. Triggered by CodePipeline. I would like to abort the CloudFormation deployment and rollback to the previous version after a timeout.
What is the best way to accomplish this? I saw something about WaitConditions but I'm not sure that is the right mechanism.
I also found that you can configure a TimeoutInMinutes on nested stacks (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html#cfn-cloudformation-stack-timeoutinminutes), but it sounds like you cannot apply a similar property at the top level of the stack or to an arbitrary resource?
Is there another way, using this combination, to abort the CodePipeline -> CloudFormation -> ECS deployment after a few minutes if it doesn't succeed?
This is a general gripe with the CodePipeline ECS deploy action (standard ECS, not ECS blue/green): if you push a bad image, you have to wait 1hr for the timeout to occur before you can retry the pipeline.
At the moment, CodePipeline doesn't support rollbacks. You can detect a failed pipeline using CloudWatch [1] and take some action. The action will probably be roll-forward to a good version.
[1] Detect and React to Changes in Pipeline State with Amazon CloudWatch Events - https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html
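As a sketch, a CloudWatch Events / EventBridge rule matching failed pipeline executions could be created like this (the rule name is a placeholder, and you would still need to attach a target such as an SNS topic or Lambda with aws events put-targets):
aws events put-rule --name pipeline-failed \
--event-pattern '{"source":["aws.codepipeline"],"detail-type":["CodePipeline Pipeline Execution State Change"],"detail":{"state":["FAILED"]}}'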
We don't use CodePipeline; we're using Sceptre. But I guess my workaround could still work.
My workaround for this problem is to run a script in the background before triggering a deployment.
./deployment-breaker.sh &
And for the script
#!/bin/bash
# Wait before checking the stack status, then cancel the update if needed
sleep 600
deploymentStatus=$(aws cloudformation describe-stacks --stack-name STACK_NAME | jq XXX)
if [[ $deploymentStatus == YOUR_TERMINATE_CONDITION ]]; then
    aws cloudformation cancel-update-stack --stack-name STACK_NAME
fi
I've deployed a function to gcloud using the following command line script:
gcloud functions deploy my_new_function --runtime python37 \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource projects/my_project_name/databases/default/documents/experiences/{eventId}
This worked successfully, and my function was deployed. Here is what I expected to happen as a result:
Any time a new document was created within the experiences firestore collection, the function my_new_function would be invoked.
What is actually happening:
my_new_function is never being invoked as a result of a new document being created within experiences
The --source parameter is for deploying from source control, which you are not trying to do. You will want to deploy from your local machine instead: run gcloud from the directory you want to deploy.
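A minimal sketch of what that looks like, reusing the deploy command from the question (the local path is hypothetical):
# Run from the directory containing main.py and requirements.txt;
# with --source omitted, gcloud uploads the current directory.
cd /path/to/function-source
gcloud functions deploy my_new_function --runtime python37 \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource projects/my_project_name/databases/default/documents/experiences/{eventId}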