Unable to wait for VPC creation - pulumi

I am trying to create a VPC with Pulumi Crosswalk and then use the output's vpc_id as an argument to fetch security groups. However, since Pulumi is natively async, the security group lookup is apparently executed before the VPC is created, causing it to throw an error:
Exception: invoke of aws:ec2/getSecurityGroup:getSecurityGroup failed: invocation of aws:ec2/getSecurityGroup:getSecurityGroup returned an error: invoking aws:ec2/getSecurityGroup:getSecurityGroup: 1 error occurred:
* multiple Security Groups matched; use additional constraints to reduce matches to a single Security Group
I am unable to figure out the following:
Why does it say there are multiple matches when there aren't?
Why does it throw an error during preview? Does preview also make an AWS call?
How do I hold off the query until the VPC is created, considering depends_on won't work for the get_security_group function? Is there a Pulumi way to handle this situation?
Following is the code snippet:
import pulumi_aws as aws
import pulumi_awsx as awsx

vpc = awsx.ec2.Vpc(
    "pulumi-test",
    cidr_block="10.2.0.0/16",
    subnet_specs=[
        awsx.ec2.SubnetSpecArgs(
            type=awsx.ec2.SubnetType.PRIVATE,
            cidr_mask=26,
        ),
        awsx.ec2.SubnetSpecArgs(
            type=awsx.ec2.SubnetType.PUBLIC,
            cidr_mask=26,
        ),
    ],
    number_of_availability_zones=1,
)
security_group = aws.ec2.get_security_group(vpc_id=vpc.vpc_id)

1.
You should probably not assume that there is only a single security group. Use the get_security_groups function to fetch them all. Example:
security_groups = aws.ec2.get_security_groups(
    filters=[aws.ec2.GetSecurityGroupsFilterArgs(
        name="vpc-id",
        values=[vpc.vpc_id],
    )],
)
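If your pulumi_aws SDK is recent enough, there is also an output form of the same function, get_security_groups_output, which accepts vpc.vpc_id directly and defers the AWS call until the VPC id is known. A minimal sketch under that assumption:

import pulumi
import pulumi_aws as aws

# Output form (recent pulumi_aws versions): filters may contain Outputs,
# and the invoke only runs once vpc.vpc_id has resolved.
security_groups = aws.ec2.get_security_groups_output(
    filters=[aws.ec2.GetSecurityGroupsFilterArgs(
        name="vpc-id",
        values=[vpc.vpc_id],
    )],
)

pulumi.export("security_group_ids", security_groups.ids)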
2.
Yes, pulumi preview will execute functions if possible (get_security_group in your case). Even function calls that are Output-based (see 3. for clarification) can be executed during preview.
This happens when the Output that the function consumes belongs to a resource that already exists, i.e. one that was created in a preceding pulumi up.
For example:
You add the VPC code.
You execute pulumi up successfully (VPC is created and its Pulumi state is stored in the backend).
You add the code that uses one of the VPC outputs (get_security_group(vpc.vpc_id)).
You execute pulumi preview and the above function is executed with the real VPC id (vpc.vpc_id).
3.
There is no need for depends_on. Pulumi functions are different from resources: each function is available in two invocation forms. The direct form accepts plain arguments and either blocks until the result value is available or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result, so it automatically waits for its inputs to resolve. The form you are using is Output-based.
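To make the two forms concrete, here is a minimal sketch (get_security_group_output assumes a reasonably recent pulumi_aws SDK; the security group id in the direct-form call is hypothetical):

import pulumi_aws as aws

# Direct form: plain arguments only; blocks until the result is available.
# Only usable when the value is already a plain string.
sg = aws.ec2.get_security_group(id="sg-0123456789abcdef0")  # hypothetical id

# Output form: accepts Inputs/Outputs and returns an Output, so the call
# waits for vpc.vpc_id (from the question's code) to resolve.
sg_out = aws.ec2.get_security_group_output(vpc_id=vpc.vpc_id)

# The same deferral, spelled out manually with apply():
sg_applied = vpc.vpc_id.apply(lambda vid: aws.ec2.get_security_group(vpc_id=vid))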

Related

Pulumi DigitalOcean: different name for droplet

I'm creating a droplet in DigitalOcean with Pulumi. I have the following code:
name = "server"
droplet = digitalocean.Droplet(
name,
image=_image,
region=_region,
size=_size,
)
The server gets created successfully on DigitalOcean but the name in the DigitalOcean console is something like server-0bbc405 (upon each execution, it's a different name).
Why isn't it just the name I provided? How can I achieve that?
This is a result of auto-naming, which is explained here in the Pulumi docs:
https://www.pulumi.com/docs/intro/concepts/resources/names/#autonaming
The extra characters tacked onto the end of the resource name allow you to use the same "logical" name (your "server") with multiple stacks without risk of a collision (cloud providers often require resources of the same kind to be named uniquely). Auto-naming looks a bit strange at first, but it's incredibly useful in practice, and once you start working with multiple stacks, you'll almost surely appreciate it.
That said, you can generally override this name by providing a name in your list of resource arguments:
...
name = "server"
droplet = digitalocean.Droplet(
    name,
    name="my-name-override",  # <-- Override auto-naming
    image="ubuntu-18-04-x64",
    region="nyc2",
    size="s-1vcpu-1gb",
)
...which would yield the following result:
+ pulumi:pulumi:Stack: (create)
    ...
    + digitalocean:index/droplet:Droplet: (create)
        ...
        name : "my-name-override"  # <-- As opposed to "server-0bbc405"
        ...
...but again, it's usually best to go with auto-naming for the reasons given in the docs. Quoting here:
It ensures that two stacks for the same project can be deployed without their resources colliding. The suffix helps you to create multiple instances of your project more easily, whether because you want, for example, many development or testing stacks, or to scale to new regions.
It allows Pulumi to do zero-downtime resource updates. Due to the way some cloud providers work, certain updates require replacing resources rather than updating them in place. By default, Pulumi creates replacements first, then updates the existing references to them, and finally deletes the old resources.
Hope it helps!

gcloud deploy ... --trigger-resource for triggering on Firestore write 'CloudEvents' needs a bit of clarification

I read and followed the examples found in the Google Cloud Firestore Triggers documentation. I have been able to deploy these examples as follows:
gcloud functions deploy my-second-event \
  --entry-point CloudEventFunction2.Function \
  --runtime dotnet3 \
  --trigger-event "providers/cloud.firestore/eventTypes/document.create" \
  --trigger-resource "projects/my-projectId/databases/(default)/documents/messages/{pushId}"
NOTE that the /documents/messages/{pushId} part of this resource aligns with the "Deploy your function" section.
HOWEVER a little further down --trigger-resource 'NAME' is described as...
The fully qualified database path to which the function will listen. This should conform to the following format: "projects/YOUR_PROJECT_ID/databases/(default)/documents/PATH". The {pushId} text is a wildcard parameter described above in Specifying the document path.
Now we get to my confusion when we follow the link to "Specifying the document path". I believe I understand what is meant by "Functions only respond to document changes, and cannot monitor specific fields or collections." HOWEVER, if we look at /documents/messages/{pushId} above, 'documents' is a collection and 'messages' is a document. Following from the limitation that functions only respond to document changes, I would NOT expect the event to be triggered at {pushId}, because {pushId} would then be either a collection or a field (it sits directly on the document 'messages').
What seems to be indicated is that the {pushId} wildcard should be placed directly under the collection 'documents', resulting in...
--trigger-resource "projects/my-projectId/databases/(default)/documents/{pushId}"
Meaning that when a new message is pushed to the documents collection, the cloud event is triggered.
However, the above change yields the following:
ERROR: (gcloud.functions.deploy) ResponseError: status=[400], code=[Ok], message=[
The request has errors
Problems:
event_trigger:
Expected value {pushId} to match regular expression [^/]+/[^/]+(/[^/]+/[^/]+)*
]
While I am sure I am doing something wrong, I am struggling to make sense of the above observations; my function is also not being triggered.
I would really appreciate any hints as to how this is to be understood, and/or at least how to get my function to trigger on create.
FYI: the 'Function' I am using is from the dotnet template provided by Visual Studio (2022).
I am late to the party, but if the collection in your database is literally named documents then your path is going to look like this:
--trigger-resource "projects/my-projectId/databases/(default)/documents/documents/{pushId}"
The second instance of documents is the literal name of the collection you are watching.
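The regular expression in the error message tells the same story: after the fixed "projects/YOUR_PROJECT_ID/databases/(default)/documents/" prefix, the remaining path must consist of collection/document pairs, i.e. an even number of segments. A quick sketch reusing the regex from the error:

import re

# Regex from the gcloud error: one collection/document pair, optionally
# followed by further pairs -- i.e. an even number of path segments.
pattern = re.compile(r"[^/]+/[^/]+(/[^/]+/[^/]+)*")

for path in ("{pushId}", "messages/{pushId}", "documents/{pushId}"):
    ok = pattern.fullmatch(path) is not None
    print(f"{path!r} -> {'valid' if ok else 'invalid'}")

# '{pushId}' -> invalid            (a single segment is not a pair)
# 'messages/{pushId}' -> valid     (collection 'messages', document wildcard)
# 'documents/{pushId}' -> valid    (collection literally named 'documents')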

Rundeck implicit variable

Rundeck gives us the ability to define options to be entered via its GUI. Is there any capability to derive a job variable from that input without the end user of the job seeing it?
E.g., if the user chooses the product and the environment, and the product sits behind a load balancer, I want to use a script internally to define a new job variable and assign it the port number to be used later within the job steps.
Yes, there is.
You can add a new option with Allowed Values set to Remote URL, then point the URL at your script, which returns the actual value based on the other options.
For example:
http://localhost/cgi-bin/getPort.py?environment=${option.environment.value}&product=${option.product.value}
Rundeck Manual option-model-provider
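For reference, here is a minimal sketch of what such an endpoint could look like in Python (getPort.py is hypothetical; Rundeck expects the remote option provider to return a JSON array of allowed values):

#!/usr/bin/env python3
# Hypothetical getPort.py CGI script. Rundeck's remote option provider
# expects a JSON array of allowed values; here we return a single port
# derived from the options the user already picked.
import json
import os
from urllib.parse import parse_qs

PORTS = {  # illustrative mapping only
    ("prod", "webshop"): "8443",
    ("test", "webshop"): "8080",
}

query = parse_qs(os.environ.get("QUERY_STRING", ""))
environment = query.get("environment", [""])[0]
product = query.get("product", [""])[0]

print("Content-Type: application/json")
print()
print(json.dumps([PORTS.get((environment, product), "")]))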

How to suppress default outputs on serverless cloudformation yml?

I'm using serverless-stack-output to save my serverless output to a file with some custom values that I set up. It works well, but serverless has some other default outputs such as these:
FunctionQualifiedArn (one for each function)
ServiceEndpoint
ServerlessDeploymentBucketName
I don't want these to show up in my file; how do I stop Serverless/CloudFormation from outputting them?
This is not possible at this stage.
I've dug through the code and there's no switch to suppress these outputs.
Unfortunate, as I have the exact same requirement.
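One workaround, since serverless-stack-output ultimately just writes a file, is to post-process that file and drop the defaults. A rough sketch, assuming the plugin writes JSON to a hypothetical stack-output.json and that the per-function outputs follow the usual <Name>LambdaFunctionQualifiedArn naming:

import json

# Hypothetical post-processing step: strip the default outputs that
# Serverless/CloudFormation always emit and keep everything else.
DEFAULT_KEYS = {"ServiceEndpoint", "ServerlessDeploymentBucketName"}

with open("stack-output.json") as f:  # hypothetical output path
    outputs = json.load(f)

cleaned = {
    key: value
    for key, value in outputs.items()
    if key not in DEFAULT_KEYS
    and not key.endswith("LambdaFunctionQualifiedArn")
}

with open("stack-output.json", "w") as f:
    json.dump(cleaned, f, indent=2)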

Can I enable / Disable an Azure Service Bus Topic using Powershell

I have spent a couple of hours searching for a solution to disable my Azure Service Bus Topics using PowerShell.
The background for this is we want to force a manual failover to our other region.
Obviously I could click through the Portal, but I want a script to do this.
Here is my current attempt:
Any help would be great.
Assuming your $topic contains the full topic description, modify its Status property and then pass it back using the UpdateTopic method. I'm afraid I can't test this at present.
$topic.Status = "Disabled"                          # mark the topic description as disabled
$topicdesc = $NamespaceManager.UpdateTopic($topic)  # push the updated description back to the namespace
I don't think you'll need to set the entity type for the Status, nor do you need semicolons after each line of code in your loop.
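If PowerShell is not a hard requirement, the same operation can be sketched in Python as well. This is untested and assumes the azure-servicebus package's management client and a connection string with Manage rights:

from azure.servicebus.management import ServiceBusAdministrationClient

# A sketch only: assumes the azure-servicebus Python package and a
# connection string with Manage rights on the namespace.
CONN_STR = "<your-servicebus-connection-string>"  # placeholder

client = ServiceBusAdministrationClient.from_connection_string(CONN_STR)
for topic in client.list_topics():
    topic.status = "Disabled"   # set back to "Active" to re-enable
    client.update_topic(topic)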
References
PowerShell Service Bus creation sample script (which this appears to be based off): https://blogs.msdn.microsoft.com/paolos/2014/12/02/how-to-create-service-bus-queues-topics-and-subscriptions-using-a-powershell-script/
UpdateTopic method: https://msdn.microsoft.com/en-us/library/azure/microsoft.servicebus.namespacemanager.updatetopic.aspx
Additional note: please don't screenshot the code - paste it in. I'd rather copy-and-paste than type things out.