Pause the template execution for a few minutes - aws-cloudformation

I have a CloudFormation template that works correctly. It creates an Elasticsearch domain along with CloudTrail.
datameetgeobk.s3.amazonaws.com/cftemplates/audit_trail.yaml
I have to wait about 10 minutes for the endpoint to become available. The endpoint is used in a Lambda function that is part of another template:
stream logs to elastic using cloudformation template
Is there a WaitCondition in CloudFormation that I can use before joining these 2 templates?

Yes, there is a WaitCondition; it uses the properties Count, Handle, and Timeout.
More information is available in the AWS documentation for AWS::CloudFormation::WaitCondition.
Example Code
Declare an `AWS::CloudFormation::WaitConditionHandle` resource in the stack's template:
"myWaitHandle" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {
}
}
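For completeness, here is a minimal sketch of the companion AWS::CloudFormation::WaitCondition that pauses stack creation until a success signal is posted to the handle's presigned URL. The DependsOn target is a hypothetical name for your Elasticsearch resource, and Timeout is in seconds:
"myWaitCondition" : {
    "Type" : "AWS::CloudFormation::WaitCondition",
    "DependsOn" : "myElasticsearchDomain",
    "Properties" : {
        "Handle" : { "Ref" : "myWaitHandle" },
        "Timeout" : "900",
        "Count" : "1"
    }
}
Anything that needs the endpoint can then reference the wait condition (for example via DependsOn), so it is not created until the signal arrives.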

Related

Terraform: add a message to queue

I create a queue inside a storage account in the following way:
resource "azurerm_storage_queue" "myqueue" {
name = "myqueue"
storage_account_name = "${azurerm_storage_account.sto1.name}"
}
but I want to add an init message to this queue (it acts as a marker for when I last synced with the external resource, so the next run knows to fetch data since the last execution). How can I configure this in the Terraform file?
This sounds like a use case for a provisioner, such as local-exec (the command can run from the machine executing Terraform).
Generally, when using provisioners like this, attaching them to a null_resource from the null provider is recommended. This lets you have closer control over what triggers the provisioner, rather than being dependent on the lifecycle of the cloud resource you're integrating it with.
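A minimal sketch of that approach, assuming the Azure CLI (az) is installed and authenticated on the machine running Terraform; the resource names follow the question, and the message content is a placeholder:
resource "null_resource" "init_message" {
  # Re-run (and thus re-send the init message) only when the queue is recreated
  triggers = {
    queue_id = "${azurerm_storage_queue.myqueue.id}"
  }

  # Depending on your setup you may also need --account-key or --auth-mode login
  provisioner "local-exec" {
    command = "az storage message put --content 'init' --queue-name ${azurerm_storage_queue.myqueue.name} --account-name ${azurerm_storage_account.sto1.name}"
  }
}
Note that the message is put at apply time only; Terraform will not detect or restore it if it is later consumed, which fits the "marker" use case described above.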

Is there a way to capture the name of a task that has been executed in SnapLogic?

We have a lot of triggered Tasks that run on the same pipelines, but with different parameters.
My question: is there a function or an expression to capture the name of the triggered task, so that we could use that information when writing the reports and emails about which task started the failing pipeline? I can't find anything even close to it.
Thank you in advance.
This answer addresses the requirement of uniquely identifying the invoker task in the invoked pipeline.
For triggered tasks, there isn't anything provided out of the box by SnapLogic. (In the case of Ultra tasks, however, you can get $['task_name'] from the input document of the pipeline.)
Out of the box, SnapLogic provides the following headers, which can be captured and used in the pipeline being initiated by the triggered task:
PATH_INFO - The path elements after the Task part of the URL.
REMOTE_USER - The name of the user that invoked this request.
REMOTE_ADDR - The IP address of the host that invoked this request.
REQUEST_METHOD - The method used to invoke this request.
None of these contains the task name.
In your case, as a workaround, to uniquely identify the invoker task in the invoked pipeline, you could do one of the following three things:
Pass the task name as a parameter
Add the task name to the URL, like https://elastic.snaplogic.com/.../task-name
Add a custom header to the REST call
All three methods can help you capture the task name in the invoked pipeline.
In your case, I would suggest you go for a custom header, because the parameters you pass to the pipeline could be task-specific, and it is redundant to add the task name again in the URL.
Following is how you can add a custom header in your triggered task.
From SnapLogic Docs -
Custom Headers
To pass a custom HTTP header, specify a header and its value through the parameter fields in Pipeline Settings. The request matches any header with Pipeline arguments and passes those to the Task, while the Authorization header is never passed to the Pipeline.
Guidelines
The header must be capitalized in its entirety. Headers are case-sensitive.
Hyphens must be changed to underscores.
The HTTP custom headers override both the Task and Pipeline parameters, but the query string parameter has the highest precedence.
For example, if you want to pass a tenant ID (X-TENANT-ID) in a header, add the parameter X_TENANT_ID and provide a default or leave it blank. When you configure the expression, refer to the Pipeline argument following standard convention: _X_TENANT_ID. In the HTTP request, you add the header X-TENANT-ID: 123abc, which results in the value 123abc being substituted for the Pipeline argument X_TENANT_ID.
The setup involves:
1. Creating a task-name parameter in the Pipeline Settings
2. Using the task-name parameter in the pipeline
3. Calling the triggered task with the custom header
Note: Hyphens must be changed to underscores.
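As an illustration, the call could look like the following; the task URL and bearer token are hypothetical placeholders, and per the guidelines above the header TASK-NAME surfaces inside the pipeline as the argument _TASK_NAME (defined as the parameter TASK_NAME in Pipeline Settings):
curl -X GET \
    -H "Authorization: Bearer <bearer-token>" \
    -H "TASK-NAME: my-triggered-task" \
    "https://elastic.snaplogic.com/api/1/rest/slsched/feed/<org>/<project>/my-triggered-task"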
References:
SnapLogic Docs - Triggered Tasks
I'm adding this as a separate answer because it addresses the specific requirement of logging an executed triggered task separately from the pipeline. This solution has to be a separate process (or pipeline) instead of being part of the triggered pipeline itself.
The Pipeline Monitoring API doesn't have any explicit log entry for the task name of a triggered task; invoker is all you have to go on.
However, the main API used by SnapLogic to populate the Dashboard is more verbose. You can inspect its response in your browser's developer tools while the Dashboard is open.
You can use the invoker_name and pipe_invoker fields for identifying a triggered task.
Following is the API that is being used.
POST https://elastic.snaplogic.com/api/2/<org snode id>/rest/pm/runtime
Body:
{
    "state": "Completed,Stopped,Failed,Queued,Started,Prepared,Stopping,Failing,Suspending,Suspended,Resuming",
    "offset": 0,
    "limit": 25,
    "include_subpipelines": false,
    "sort": {
        "create_time": -1
    },
    "start_ts": null,
    "end_ts": null,
    "last_hours": 1
}
You can have a pipeline that periodically fires this API, then parses the response and populates a log table (or creates a log file).
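Here is a minimal command-line sketch of such a poll; the basic-auth credentials and the response shape (response_map.entries and the field names inside it) are assumptions, so inspect a real response in the developer tools first and adjust:
curl -s -X POST "https://elastic.snaplogic.com/api/2/<org snode id>/rest/pm/runtime" \
    -u "user@example.com:password" \
    -H "Content-Type: application/json" \
    -d '{"state": "Completed,Failed", "offset": 0, "limit": 25, "include_subpipelines": false, "sort": {"create_time": -1}, "last_hours": 1}' \
    | jq '.response_map.entries[] | {invoker_name, pipe_invoker}'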

During update of an Azure Stream Analytics Job, I get HTTP 422 Unprocessable Entity

During an update of streaming jobs (via the REST API; we use the generic one that allows updating any kind of resource: https://learn.microsoft.com/en-us/rest/api/resources/resources/updatebyid), I get a 422 without any additional information. Could anyone help with identifying the problem?
Although there is very little useful information in your question, I eventually reproduced your issue on my side.
The reason is described clearly by the error message:
PATCH of Inputs, Transformation, Functions, Outputs or Devices is not allowed using the Streaming Job level API. Please use the API for the corresponding resources.
This means you cannot include the Inputs, Transformation, Functions, Outputs, or Devices in your request body, because they are different resources from the streamingjobs resource.
Solution:
To fix the issue, just use the API for the corresponding resource, as mentioned in the error message.
1. Update Input: PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.StreamAnalytics/streamingjobs/{job-name}/inputs/{input-name}?api-version={api-version}
2. Update Function: PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.StreamAnalytics/streamingjobs/{job-name}/functions/{function-name}?api-version={api-version}
3. Update Output: PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.StreamAnalytics/streamingjobs/{job-name}/outputs/{output-name}?api-version={api-version}
4. Update Transformation: PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.StreamAnalytics/streamingjobs/{job-name}/transformations/{transformation-name}?api-version={api-version}
For more details, you could refer to Stream Analytics REST API.
Sample:
I tested updating an input.
PATCH https://management.azure.com/subscriptions/xxxxxx/resourceGroups/joywebapp/providers/Microsoft.StreamAnalytics/streamingjobs/joyteststream/inputs/joyinput?api-version=2018-11-01
Request body:
{
    "properties": {
        "type": "Stream",
        "serialization": {
            "type": "JSON",
            "properties": {
                "encoding": "UTF8"
            }
        }
    }
}
Result: the input is updated successfully.
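For reference, a sketch of the same call as a curl invocation; acquiring the bearer token (for example via az account get-access-token) is not shown, and the subscription ID remains a placeholder:
curl -X PATCH "https://management.azure.com/subscriptions/xxxxxx/resourceGroups/joywebapp/providers/Microsoft.StreamAnalytics/streamingjobs/joyteststream/inputs/joyinput?api-version=2018-11-01" \
    -H "Authorization: Bearer <access-token>" \
    -H "Content-Type: application/json" \
    -d '{"properties": {"type": "Stream", "serialization": {"type": "JSON", "properties": {"encoding": "UTF8"}}}}'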

Adding Lambda@Edge IncludeBody field in CloudFront using a CloudFormation template?

I am trying to add a Lambda@Edge association in CloudFront using CloudFormation. As per the AWS docs, there are only two fields, EventType and LambdaFunctionARN, but I want to add IncludeBody so that my Lambda@Edge function can read the body of the request. When I try to add IncludeBody in CloudFormation, I get an "invalid property" error.
"LambdaFunctionAssociations":
[
{
"EventType": "origin-response",
"IncludeBody":"true" -- Invalid property error
"LambdaFunctionARN": "arn:aws:lambda:us-east-1:134952096518:function:LambdaEdge:1"
}
]
So, can't I add this through CloudFormation, or do I need to do it manually from the console?
Any help is appreciated.
Thanks
According to the AWS docs, there is an IncludeBody property for LambdaFunctionAssociations, but it can only be used with the "viewer-request" and "origin-request" event types. You have an "origin-response" event type, so IncludeBody wouldn't be applicable here anyway. Beyond that, the official CloudFormation reference makes no mention of IncludeBody, so I can only guess that CloudFormation is missing this feature right now and you may only be able to set IncludeBody via the API.
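For reference, if and when your tooling does accept the property, a valid association pairs IncludeBody (as a boolean) with a request-type event; a hypothetical sketch:
"LambdaFunctionAssociations":
[
    {
        "EventType": "origin-request",
        "IncludeBody": true,
        "LambdaFunctionARN": "arn:aws:lambda:us-east-1:134952096518:function:LambdaEdge:1"
    }
]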

SoapUI Pro: Transferring an XML node from one TestCase to another

When using the Property Transfer window to transfer an XML node (with children nodes) taken from the response of a first Soap request to a second Soap request, and both requests are in the same TestCase, it works great:
TestCase 1 :
Source : [First Soap Request] Property : [Response]
declare namespace ns='http://xxx.com';
//ns:xxxxx[1]/ns:return[1]
-------------------------------------------
Target : [Second Soap Request] Property : [Request]
declare namespace ser='http://xxx.com';
//ser:xxxxx[1]/ser:someobject[1]
But if the two requests are in different TestCases, I guess it is required to save the XML node to a TestSuite property first, and then transfer this property to the new Soap request:
TestCase 1 :
Source : [First Soap Request] Property : [Response]
declare namespace ns='http://xxx.com';
//ns:xxxxx[1]/ns:return[1]
-------------------------------------------
Target : [TestSuite1] Property : [myVariableToTransfert]
TestCase 2 :
Source : [TestSuite1] Property : [myVariableToTransfert]
-------------------------------------------
Target : [Second Soap Request] Property : [Request]
declare namespace ser='http://xxx.com';
//ser:xxxxx[1]/ser:someobject[1]
This doesn't work!
It seems I'm unable to get valid XML in the second request when it is taken from the TestSuite as a property. Sometimes the value is null, sometimes it is wrapped in CDATA tags, or the XML is entitized ("<" becomes "&lt;", for example). I'm unable to get the value as real XML, like when both requests are in the same TestCase!
I played with the "Transfer Text Content", "Entitize transferred value(s)" and "Transfer Child Nodes" options, but without success!
How can I transfer an XML node from a request in one TestCase to a request in a second TestCase?
Set your response value as a custom property at the test suite level, and then you can use it for further testing. Remember, it will save your value as string data, so if you are saving integer data you have to convert it back to an integer before further use, like:
testRunner.testCase.testSuite.getPropertyValue("").toInteger()
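For the XML case specifically, a Groovy Script test step along these lines is a sketch of the same idea; the step and property names follow the question, and the use of groovy.xml.XmlUtil to serialize the node (so it is stored as plain, un-entitized markup) is an assumption to adapt:
import groovy.xml.XmlUtil

// In TestCase 1: grab the node from the first response and store it raw
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def holder = groovyUtils.getXmlHolder("First Soap Request#Response")
holder.namespaces["ns"] = "http://xxx.com"
def node = holder.getDomNode("//ns:xxxxx[1]/ns:return[1]")
testRunner.testCase.testSuite.setPropertyValue("myVariableToTransfert", XmlUtil.serialize(node))

// In TestCase 2: read it back as a plain string
def xml = testRunner.testCase.testSuite.getPropertyValue("myVariableToTransfert")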
Here is the detailed explanation:
Create test suite 1 with the below steps:
test case 1
DataGen step
test case 2
Within the DataGen step, open the editor, add a new property by clicking the "+" button, and select Script as the type (you should also give the property a name; assume the name is yourprop).
At the bottom of the screen you should see the script editor. Create a script like the one below (please notice that you should change the variables according to your web service and XML):
def testXML = context.expand( '${Test Request#Response#declare namespace ns1=\'http://namespace/\'; //ns1:WebServiceNameResponse[1]/ns1:nodeName[1]}' )
Now you have created a SoapUI property named yourprop via DataGen. You can use this property within the test suite for the following test cases.
I hope this helps. If you are not satisfied or you face any problem, please let me know and I will do my best.