Why is the Auto Scaling group not being created in CloudFormation? - aws-cloudformation

While creating an AutoScalingGroup using a CloudFormation template I am facing this error:
Failed to receive 1 resource signal(s) for the current batch. Each resource
signal timeout is counted as a FAILURE.
How can I resolve this issue? Please advise.
Thanks in advance
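This error is raised when the Auto Scaling group has a CreationPolicy (or UpdatePolicy) that expects success signals from the new instances, and nothing calls cfn-signal before the signal timeout expires, so CloudFormation counts the timeout as a failure and rolls back. A minimal sketch of the instance-side signal, assuming Amazon Linux with the aws-cfn-bootstrap helpers and an ASG logical ID of AutoScalingGroup (the stack name, region and logical ID below are placeholders, not taken from your template):

#!/bin/bash -xe
# Hypothetical UserData excerpt: do the bootstrap work, then report the result
# back to CloudFormation so the CreationPolicy receives its signal.
yum install -y aws-cfn-bootstrap   # provides /opt/aws/bin/cfn-signal

# ... application setup goes here ...

# Signal success or failure using the exit code of the last command
/opt/aws/bin/cfn-signal -e $? \
  --stack my-stack-name \
  --resource AutoScalingGroup \
  --region us-east-1

Also make sure the CreationPolicy's ResourceSignal Count and Timeout are large enough to cover the bootstrap time, and that the instances have outbound access to the CloudFormation endpoint; otherwise the signal can never arrive and the same error appears.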

Related

ADF Dataflow stuck in progress and fails with the errors below

The ADF pipeline's Data Flow task is stuck in progress. It was working seamlessly for the last couple of months, but suddenly the data flow gets stuck in progress and times out after a certain time. We are using an IR with managed Virtual Network. I am using a ForEach loop to run the data flow for multiple entities in parallel, and it always gets stuck at random on the last entity.
What can I try to resolve this?
Error in Dev environment:
Error code: 4508
Spark cluster not found
Error in Prod environment:
Error code: 5000
Failure type: User configuration issue
Details: [plugins.*** ADF.adf-ir-001 WorkspaceType:<ADF> CCID:<f289c067-7c6c-4b49-b0db-783e842a5675>] [Monitoring] Livy Endpoint=[https://hubservice1.eastus.azuresynapse.net:8001/api/v1.0/publish/815b62a1-7b45-4fe1-86f4-ae4b56014311]. Livy Id=[0] Job failed during run time with state=[dead].
I tried the steps below:
Changing the IR configuration
DF retry and retry interval
Running the ForEach loop one batch at a time instead of 4 batches in parallel
None of the above troubleshooting steps worked. These pipelines have been running for the last 3-4 months without a single failure; suddenly they started failing consistently 3 days ago. The data flow always gets stuck in progress at random for different entities and times out at some point with the errors above.
Error Code 4508 Spark cluster not found.
This error can occur for two reasons:
The debug session is closed before the data flow finishes its transformations; in this case the recommendation is to restart the debug session.
The second reason is a resource problem or an outage in that particular region.
Error code 5000 Failure type User configuration issue Details [plugins.*** ADF.adf-ir-001 WorkspaceType: CCID:] [Monitoring] Livy Endpoint=[https://hubservice1.eastus.azuresynapse.net:8001/api/v1.0/publish/815b62a1-7b45-4fe1-86f4-ae4b56014311]. Livy Id=[0] Job failed during run time with state=[dead].
"Livy job state dead caused by unknown error" is usually a transient error. The data flow runs on a Spark cluster in the backend, and this error is generated by that cluster; to get more information about the error, look at the StdOut of the Spark pool execution.
The backend cluster may be experiencing a network problem, a resource problem, or an outage.
If the error persists, my suggestion is to raise a Microsoft support ticket.
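If you want to pull the failure details without clicking through the portal, one possible way is the Azure CLI datafactory extension; this is only a sketch and assumes the extension is installed and exposes these commands in your CLI version, and the resource group, factory name, run ID and time window are placeholders:

# Hypothetical example: show a pipeline run and query its activity runs for error details
az extension add --name datafactory
az datafactory pipeline-run show \
  --resource-group my-rg --factory-name my-adf \
  --run-id 00000000-0000-0000-0000-000000000000
az datafactory activity-run query-by-pipeline-run \
  --resource-group my-rg --factory-name my-adf \
  --run-id 00000000-0000-0000-0000-000000000000 \
  --last-updated-after 2024-01-01T00:00:00Z \
  --last-updated-before 2024-01-02T00:00:00Z

The activity-run output includes the error code and message for the failed data flow activity, which is the same detail shown in the monitoring view.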

ECS + ALB - My applications only respond some of the time

I've developed two Spring Boot microservice applications and I've used ECS to deploy them into containers.
To do this, I followed the official pet clinic example (https://github.com/aws-samples/amazon-ecs-java-microservices/tree/master/3_ECS_Java_Spring_PetClinic_CICD).
Everything seems to work correctly, but when I make a request to the ALB I very often receive a 502 or 503 HTTP error, and only occasionally do I get the correct response from the applications.
Can someone help me?
Thanks in advance.
You receive a 502 when you have no healthy task running and a 503 while a task is starting or restarting.
Both mean that your task was stopped and the cluster then restarted it, so you should find out what is making your task fail.
It can be something in your code that makes it crash, or it can be the health check defined in your target group that is failing.
First, look at your task in the AWS ECS console and see what error it reports when it is stopped.
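If you prefer the CLI, a small sketch of the same check (the cluster name and task ARN are placeholders):

# List recently stopped tasks, then show why they stopped
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[].{stoppedReason:stoppedReason,containerReasons:containers[].reason}'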
But since you are able to make requests for some time before it fails, I am fairly sure your problem comes from the health check. Go to the target group used by the task (in the AWS EC2 console) and make sure the configured health check path exists and returns a 200 status code.
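The equivalent check and fix from the CLI, as a sketch (the target group ARN is a placeholder, and /actuator/health is only an example path that applies if the Spring Boot Actuator is enabled in your services):

# See which targets are unhealthy and why
aws elbv2 describe-target-health --target-group-arn <target-group-arn>
# Point the health check at a path that really returns 200
aws elbv2 modify-target-group --target-group-arn <target-group-arn> \
  --health-check-path /actuator/health \
  --matcher HttpCode=200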

Error related to DataStage master job

I created a server job that is fed from an Oracle table.
The job is attached to a master job. When launching the master, it aborts with the error message "Abnormal termination". I found a workaround: recompiling the job before launching the master makes it run.
I'd like to solve this issue properly; any help will be appreciated.
Thank you

Failed to pull image "gcr.io/blah/blah":

We started getting an error when trying to update the image tag of a deployment and its pod
Failed to pull image "gcr.io/blah/blah": rpc error: code = Unknown desc = Error: Status 429 trying to pull repository gcr.io/blah/blah: "Quota Exceeded." Error syncing pod
It started randomly yesterday in Google Container Builder, happened twice (the same error), and then stopped. Then it started happening during our deployment to two different pods. Any ideas on how to debug this? It is currently blocking all deployments.
Thanks
Mark
According to the error message, it seems one of your quotas has been exceeded.
Select your project in the Google Cloud Platform console and, in the menu, go to
IAM & admin -> Quotas
In the Used column on the right, find the service whose quota is exceeded,
then press EDIT QUOTAS at the top and request an increase.
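To confirm that the pull failure is really what is stopping the pods (and that the image itself is fine), a quick sketch with kubectl and gcloud; the pod, namespace, project and image names are placeholders:

# Show the image-pull events for the failing pod
kubectl describe pod <pod-name> -n <namespace>
kubectl get events -n <namespace> --sort-by=.lastTimestamp | grep -i pull
# Verify the image and tag actually exist in the registry
gcloud container images list-tags gcr.io/<project>/<image>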

A monitoring stack for containers (Grafana+Heapster+InfluxDB+cAdvisor) on bare metal

Experts,
I have a question about a monitoring system for containers.
The picture below shows my idea for the monitoring setup.
I'd like to run the monitoring stack (Grafana, Heapster, InfluxDB, cAdvisor) on bare metal as daemon processes instead of running them in containers.
When I configure those 4 components, I get the error below.
It seems to come from linking InfluxDB to Heapster (e.g. the --sink option).
Below is my command to run Heapster:
"./heapster --source=cadvisor:external?cadvisorPort=8081 --sink=influxdb:http://192.168.56.20:8086/db=k8s&user=root&pw=root"
Then I get the following message:
driver.go:326 Database creation failed : Server returned (404): 404 page not found
What does this symptom mean?
If someone has a solution, please advise.
Thanks in advance
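Not a definitive answer, but two things worth checking. Heapster's InfluxDB sink options are normally passed as query parameters after a "?" rather than as a URL path, so the "/db=k8s&..." form may be what produces the bad endpoint behind the 404, and 192.168.56.20:8086 should really be the InfluxDB HTTP API. A sketch, assuming InfluxDB 0.9 or later and that the db/user/pw option names match your Heapster version:

# Check that the InfluxDB HTTP API answers (expect a 204 from /ping)
curl -i http://192.168.56.20:8086/ping
# Run Heapster with the sink options passed as query parameters
./heapster --source=cadvisor:external?cadvisorPort=8081 \
  --sink="influxdb:http://192.168.56.20:8086?db=k8s&user=root&pw=root"

If /ping does not answer, Heapster is talking to the wrong port or service, which would also explain the "404 page not found" on database creation.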