Possible to associate an azure subnet in one resource group to a virtual network in another resource group? - powershell

In Azure, is it possible to associate a subnet in one resource group with a virtual network in a different resource group? I can't get my PowerShell scripts to do it, and my guess is that it's not possible, but I thought I'd check around for an official answer.
Thanks,
Casie

This article explains what you can and cannot move between Azure resource groups, though it does not detail the limitations for the majority of items.
When you move items with PowerShell, you have to supply the ResourceId of the item you want to move. If you run Get-AzureRmResource and Get-AzureRmVirtualNetwork, you will see that the Virtual Network has its own ResourceId, while subnets are just sub-configuration with an Etag. Combined with the article above, my take is that you might be able to move an entire Virtual Network, but not a subnet.
Still, I consider moving network-related resources in Azure extremely risky.
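To make the distinction concrete, here is a small Python sketch. The dictionary is a hypothetical, trimmed ARM-style shape of a virtual network (not real output, and the subscription ID is a placeholder): the VNet carries a full resource "id", while each subnet is only a nested property with an "etag", so there is no standalone subnet ResourceId you could hand to a move operation.

```python
# Hypothetical, trimmed ARM-style representation of a virtual network.
vnet = {
    "name": "my-vnet",
    "id": ("/subscriptions/xxxx/resourceGroups/rg1"
           "/providers/Microsoft.Network/virtualNetworks/my-vnet"),
    "properties": {
        "subnets": [
            {
                "name": "default",
                "etag": 'W/"abc123"',
                "properties": {"addressPrefix": "10.0.0.0/24"},
            }
        ]
    },
}

def movable_ids(resource):
    """Collect the ResourceIds a move operation could accept."""
    return [resource["id"]] if "id" in resource else []

print(movable_ids(vnet))  # the VNet exposes a ResourceId
for subnet in vnet["properties"]["subnets"]:
    # subnets carry an etag but no id of their own
    print("id" in subnet, "etag" in subnet)
```

The subnet entries come back empty from `movable_ids`, which mirrors why a subnet cannot be moved independently of its parent VNet.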

Related

Azure front door resource not appearing in azure dns when adding apex domain

I'm having a problem adding the apex as a custom domain for a Front Door instance in Azure DNS. According to this documentation, it should be possible to select the specific Front Door resource, but in my case nothing appears.
Both the DNS zone and the Front Door instance are in the same subscription and the same resource group.
Is anyone having the same problem?
Many thanks in advance!

Prevent users from creating new work items in Azure DevOps

I've been looking at organisation and project settings but I can't see a setting that would prevent users from creating work items in an Azure DevOps project.
I have a number of users who refuse to follow the guidelines we set out for our projects, so I'd like to inconvenience them and the wider project team enough that following the guidelines becomes the easier option. At the moment we've got one-word user stories and/or tasks with estimates of 60-70 hours, which isn't reflective of the way we should be planning.
I'd still want them to be able to edit the stories or tasks and move statuses, but that initial creation should be off-limits for them (for a time at least). Is there a way to do this?
The Azure DevOps Aggregator project allows you to write simple scripts that get triggered when a work item is created or updated. It uses a service hook to trigger when such an event occurs and abstracts most of the API specific stuff away, providing you with an instance of the work item to directly interact with.
You can't block the creation or update with such a policy (Azure DevOps informs the aggregator too late in the creation process for that), but you can revert changes, close the work item, and so on. There are also a few utility functions to send email.
You need to install the aggregator somewhere; it can be hosted in Azure Functions, and we provide a Docker container you can spin up anywhere you want. Then link it to Azure DevOps using a PAT with sufficient permissions and write your first policy.
A few sample rules can be found in the aggregator docs.
store.DeleteWorkItem(self);
should put the work item in the Recycle Bin in Azure DevOps. You can create a code snippet around it that checks the creator of the work item (self.CreatedBy.Id) against a list of known bad identities.
Be mindful that when Azure DevOps creates a new work item, the Created and Updated events may fire in rapid succession (this is caused by the mechanism that sets the backlog order on work items), so you may need a way to tell from the metadata whether a work item should be deleted. I generally check for a low Revision number (say, < 5) and that the last few revisions didn't change any field other than Backlog Priority.
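The actual aggregator policies are written as C#-style rule scripts, but the detection heuristic itself is language-agnostic. Here is a standalone Python sketch of it (all names and the revision-list shape are illustrative, not the aggregator's API): treat a work item as freshly created when its revision count is low and the recent revisions changed nothing but the backlog-ordering field.

```python
def looks_like_fresh_creation(revisions, max_revision=5,
                              ordering_field="Backlog Priority"):
    """Heuristic sketch: a work item looks 'just created' if its revision
    count is low and consecutive revisions only differ in the field that
    stores the backlog order.

    `revisions` is a list of dicts mapping field names to values, oldest
    first - a simplified stand-in for the real revision history.
    """
    if len(revisions) >= max_revision:
        return False
    for prev, curr in zip(revisions, revisions[1:]):
        changed = {f for f in curr if curr.get(f) != prev.get(f)}
        changed |= {f for f in prev if f not in curr}
        if changed - {ordering_field}:
            return False
    return True
```

In a rule, an item passing this check (combined with the CreatedBy check above) would be a candidate for `store.DeleteWorkItem(self)`.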
I'd still want them to be able to edit the stories or tasks and move statuses, but that initial creation should be off-limits for them (for a time at least). Is there a way to do this?
I'm afraid there is no out-of-the-box setting to do this.
That's because the permission settings for work items are not yet granular enough to cover this scenario.
There is a related setting:
Project Settings->Team configuration->Area->Security:
Setting this value to Deny will prevent users from creating new work items, but it will also prevent them from modifying existing ones.
You could add a request for this feature on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21 ), which is our main forum for product suggestions.

How can I look up an existing Internet Gateway in CDK?

I'm using the FromLookup() method on the Vpc construct to get a reference to the default VPC in an account like this:
Vpc.FromLookup(this, "Default VPC", new VpcLookupOptions {IsDefault = true}); (C#)
Is there a way to do something similar for the Internet Gateway (IGW) that's created by default in that VPC? Alternatively, can I list the IGWs for an existing VPC? I need to get a reference to that IGW in order to add routes to it.
I came across this GitHub issue which shows a workaround using a Cfn escape hatch to get a reference to the existing IGW using its ID, but the need to manually look up and provide the ID breaks the automation we're trying to achieve. We need to spin up copies of these stacks in dozens of isolated accounts and having a manual lookup step is a deal breaker.
Also, the PR that addresses that issue only allows getting references to IGWs in new VPCs created as part of the stack, not existing ones.
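One way to avoid the manual step is to discover the IGW ID programmatically before synthesizing the stack. This is a hedged sketch using boto3 (assumed available wherever you run the CDK app; `attachment.vpc-id` is a documented EC2 describe filter), whose result could then be fed into the Cfn escape hatch from the GitHub issue:

```python
def igw_attachment_filter(vpc_id):
    """Build the EC2 filter selecting internet gateways attached to a VPC."""
    return [{"Name": "attachment.vpc-id", "Values": [vpc_id]}]

def find_internet_gateway_id(vpc_id, ec2_client=None):
    """Return the ID of the first internet gateway attached to `vpc_id`.

    `ec2_client` defaults to a boto3 EC2 client and is injectable so the
    lookup logic can be exercised without AWS credentials.
    """
    if ec2_client is None:
        import boto3  # assumed available alongside the CDK app
        ec2_client = boto3.client("ec2")
    resp = ec2_client.describe_internet_gateways(
        Filters=igw_attachment_filter(vpc_id)
    )
    gateways = resp["InternetGateways"]
    if not gateways:
        raise LookupError(f"no internet gateway attached to {vpc_id}")
    return gateways[0]["InternetGatewayId"]
```

The returned ID could then be passed to a Cfn-level route (e.g. a CfnRoute's gateway ID property), so each of the dozens of accounts resolves its own IGW automatically instead of hard-coding it.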

I want to link multiple domains to one bucket with gcs

I want to link multiple domains to one bucket with gcs
However, according to the official documentation the bucket name must exactly match the domain, so it seems you cannot associate multiple domains with one bucket.
Does anyone know a way to do this?
GCS does not support this directly. Instead, you'd likely need to use Google Cloud Load Balancing with your GCS bucket as a backing store. With it, you can obtain a dedicated, static IP address that you can map several domains to; it also lets you serve static and dynamic content under the same domain and swap out which bucket is being served at a given path. The main downside is added complexity and cost.

Multiple pods and nodes management in Kubernetes

I've been digging through the Kubernetes documentation to try to figure out the recommended approach for this case.
I have a private movie API with the following microservices (pods).
- summary
- reviews
- popularity
Also I have accounts that can access these services.
How do I restrict access to services per account, e.g. account A can access all the services but account B can only access summary?
Account A could be making 100x more requests than account B. Is it possible to scale services for specific accounts?
Should I setup the accounts as Nodes?
I feel like I'm missing something basic here.
Any thoughts or animated gifs are very welcome.
It sounds like this level of control should be implemented at the application level.
Access to particular parts of your application, in this case the services, should probably be controlled via user permissions. A similar line of thought applies to scaling out the services: allow everything to scale, but rate limit up front, e.g., account A can get 10 requests per second and account B can do 100x. Designating accounts to nodes might also be possible, but should be avoided. You don't want to end up micromanaging the orchestration layer :)
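As a sketch of the "rate limit up front" idea, here is a minimal per-account token bucket in Python (the account names and rates are illustrative; in practice this would sit in your API gateway or ingress layer):

```python
import time

class AccountRateLimiter:
    """Minimal token-bucket limiter keyed by account."""

    def __init__(self, rates):
        # rates: account -> allowed requests per second
        self.rates = rates
        self.state = {}  # account -> (tokens, last_seen_timestamp)

    def allow(self, account, now=None):
        """Return True if this request fits within the account's budget."""
        now = time.monotonic() if now is None else now
        rate = self.rates.get(account, 0)  # unknown accounts get nothing
        tokens, last = self.state.get(account, (rate, now))
        # Refill based on elapsed time, capped at one second's worth.
        tokens = min(rate, tokens + (now - last) * rate)
        if tokens >= 1:
            self.state[account] = (tokens - 1, now)
            return True
        self.state[account] = (tokens, now)
        return False

# Illustrative budgets: account A gets 100x account B's rate.
limiter = AccountRateLimiter({"account-a": 1000, "account-b": 10})
```

Each service then only needs to check permissions, while the limiter absorbs the traffic imbalance between accounts; Kubernetes scales pods on load as usual, with no per-account node assignment.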