Moving resources between Azure subscriptions - PowerShell

I set up an area in Azure that contains a cloud service, storage accounts, and a web app. I need to move all of that to a separate subscription.
I know how to do that, but my question is: is there an order that I need to follow when moving all the resources?
When I go directly to the cloud service and click Move to a new subscription, it tells me it can't do that because it's a classic service. I'm guessing that's because I have a classic storage account under that service along with a Resource Manager (RM) storage account as well, so I need to move those first?
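For the Resource Manager side of such a move, the steps can be scripted. Below is a minimal, hedged sketch using the Azure SDK for Python (azure-mgmt-resource); the subscription IDs, group names, and resource names are placeholders, and classic (ASM) resources are rejected by this API, so they must be handled separately:

```python
# Sketch only: validate and then move ARM resources between subscriptions.
# All IDs and names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SOURCE_SUB = "<source-subscription-id>"  # placeholder
TARGET_SUB = "<target-subscription-id>"  # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), SOURCE_SUB)

# Full resource IDs of the ARM resources to move (classic resources fail here)
resource_ids = [
    f"/subscriptions/{SOURCE_SUB}/resourceGroups/source-rg/providers/"
    "Microsoft.Storage/storageAccounts/mystorageacct"
]
move_info = {
    "resources": resource_ids,
    "target_resource_group": f"/subscriptions/{TARGET_SUB}/resourceGroups/target-rg",
}

# Dry-run the move first, then perform it
client.resources.begin_validate_move_resources("source-rg", move_info).wait()
client.resources.begin_move_resources("source-rg", move_info).wait()
```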

Related

Use only a domain and disable https://storage.googleapis.com URL access

I am a newbie at cloud servers and I've opened a Google Cloud Storage bucket to host image files. I've verified my domain and configured it to view images via my domain. The problem is that the same file is accessible both via my domain, example.com/images/tiny.png, and via storage.googleapis.com/example.com/images/tiny.png. Is there any solution to disable access via storage.googleapis.com and use only my domain?
Google Cloud Platform Support Version:
NOTE: This is the reply from Google Cloud Platform Support when contacted via email...
I understand that you have set up a domain name for one of your Cloud Storage buckets and you want to make sure only URLs starting with your domain name have access to this bucket.
I am afraid that this is not possible because of how Cloud Storage permissions work.
Making a Cloud Storage bucket publicly readable also gives each of its files a public link. And currently this public link can’t be disabled.
A workaround would be to implement a proxy program and run it on a Compute Engine virtual machine. The VM will need a static external IP so that you can map your domain to it. The proxy program will be in charge of returning the requested file from a predefined Cloud Storage bucket while the bucket itself remains inaccessible to the public.
You may find these documents helpful if you are interested in this workaround:
1. Quick start to set up a Linux VM (1).
2. Python API for accessing Cloud Storage files (2).
3. How to download service account keys to grant a program access to a set of services (3).
4. Pricing calculator for getting a picture on how much a VM may cost (4).
(1) https://cloud.google.com/compute/docs/quickstart-linux
(2) https://pypi.org/project/google-cloud-storage/
(3) https://cloud.google.com/iam/docs/creating-managing-service-account-keys
(4) https://cloud.google.com/products/calculator/
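To make the proxy workaround concrete, here is a minimal sketch using Flask and the google-cloud-storage client (2). The bucket name, route, and port are assumptions for illustration; the VM's service account (or a downloaded key, see (3)) provides the bucket access:

```python
# Minimal proxy sketch: serve objects from a private GCS bucket over HTTP.
# Bucket name and URL layout are placeholders.
from flask import Flask, Response, abort
from google.cloud import storage

app = Flask(__name__)
bucket = storage.Client().bucket("example.com")  # private bucket (placeholder)

@app.route("/images/<path:name>")
def serve_image(name):
    blob = bucket.blob(f"images/{name}")
    if not blob.exists():
        abort(404)
    blob.reload()  # fetch metadata such as content_type
    return Response(blob.download_as_bytes(),
                    mimetype=blob.content_type or "application/octet-stream")

if __name__ == "__main__":
    # Point the domain's A record at the VM's static external IP
    app.run(host="0.0.0.0", port=80)
```

With the bucket itself kept private, example.com/images/tiny.png is served by the proxy while the storage.googleapis.com URL returns an access error.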
My Version:
It seems the solution to this question is really simple: just mount Google Cloud Storage on the VM instance with Cloud Storage FUSE.
Once FUSE is set up, private files from GCS can be accessed through the VM's IP address; it makes the Google Cloud Storage bucket act like a local directory.
The detailed documentation about how to set up FUSE in Google Cloud is here.
There is, but it requires you to do more work.
Your current solution works because you've made access to the GCS bucket (example.com) public, and you're then DNS-aliasing it from your domain.
An alternative approach would be for you to limit access to the GCS bucket to one (or possibly several) accounts, and then run a web server that uses one of those accounts to access your image files. You could then either permit access to your web server to anyone, or limit access to it as well.
More work for you (and possibly more cost), but more control.

Data Transfer between Google Storage Buckets under Different Service Accounts

I have two Google service credentials and a bucket on each account. I have to transfer files from one bucket to another. How can I do this programmatically?
Can I achieve this with two Storage client objects, or by using the Cloud Storage Transfer Service?
Yes, with Storage Transfer Service you can create a transfer job and send the data to a destination bucket (in another project); keep in mind that it is documented that:
To access the data source and the data sink, this service account must
have source permissions and sink permissions.
Meaning that you can't use two different service accounts; you will need to grant access to only one of the two service accounts you have.
If you want to transfer files from one bucket to another programmatically, you must first grant permission to the service account associated with the Storage Transfer Service so it can access the data sink (destination bucket); please follow these steps.
Please note that if you are not creating the transfer job in the same project where the source bucket is located, then you must also grant permissions to access the source.
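As a hedged illustration of that grant with the google-cloud-storage Python client: the service-account email and bucket name below are placeholders (the real account is returned by the Storage Transfer API's googleServiceAccounts.get method), and the exact roles to grant are listed in the steps linked above:

```python
# Sketch: give the Storage Transfer Service account access to the sink bucket.
from google.cloud import storage

sts_account = "project-123456@storage-transfer-service.iam.gserviceaccount.com"  # placeholder

sink = storage.Client().bucket("sink-bucket")  # placeholder
policy = sink.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.admin",  # or the narrower roles listed in the docs
    "members": {f"serviceAccount:{sts_account}"},
})
sink.set_iam_policy(policy)
```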
With Storage Transfer Service you can create a transfer job programmatically in Java or Python; the examples cover creating the transfer job and checking the status of the transfer operation. Full code examples can be found for Java and Python.
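For reference, here is a minimal Python sketch of such a transfer job via the google-api-python-client discovery interface; the project ID, bucket names, and one-off schedule are placeholders:

```python
# Sketch: create a one-off Storage Transfer Service job between two buckets.
import googleapiclient.discovery

sts = googleapiclient.discovery.build("storagetransfer", "v1")
job = sts.transferJobs().create(body={
    "description": "copy source-bucket to sink-bucket",   # placeholder
    "status": "ENABLED",
    "projectId": "my-project",                            # placeholder
    "schedule": {   # same start and end date => run once
        "scheduleStartDate": {"year": 2024, "month": 1, "day": 1},
        "scheduleEndDate": {"year": 2024, "month": 1, "day": 1},
    },
    "transferSpec": {
        "gcsDataSource": {"bucketName": "source-bucket"}, # placeholder
        "gcsDataSink": {"bucketName": "sink-bucket"},     # placeholder
    },
}).execute()
print(job["name"])  # poll transferOperations with this name to check status
```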

Moving a resource from one Active Directory to another within the same subscription

I have an Azure resource group under an Active Directory AC1 which I would like to move to another Active Directory AC2 within the same subscription. How can I achieve this using the UI or PowerShell (or any other means)? To make things easier, we can forget about the resource group itself; just for the sake of argument, let's say I have a resource R1 in AC1 which I would like to move to AC2 within the same subscription.
How can I do this without recreating the resource in the destination directory?
Given the way information is presented in the Azure Portal, one is led to believe that an Azure subscription contains one or more Azure ADs, which is not correct. From https://azure.microsoft.com/en-in/documentation/articles/active-directory-how-subscriptions-associated-directory/:
Every Azure subscription has a trust relationship with an Azure AD
instance. This means that it trusts that directory to authenticate
users, services, and devices. Multiple subscriptions can trust the
same directory, but a subscription trusts only one directory.
So to answer your question: you can move resources from one Azure subscription to another provided both subscriptions trust the same Azure AD. There's no automated way of moving resources from an Azure subscription that trusts one Azure AD to a different subscription that trusts a different Azure AD.

Azure HTTPS POST and GET

I am a new user of the Azure platform, and am having trouble understanding how the different parts are connected. I have data in a Storage blob that I would like to use to make HTTPS POST requests to a web service. My question therefore is as follows: how can I send data from my Azure storage blob to a REST API endpoint?
First, let's start with a little background:
Azure Resource Manager (ARM)
ARM is the REST API that you interface with, using the Azure Portal, PowerShell module, or cross-platform (xPlat) CLI tool, in order to provision and manage cloud resources inside your Azure subscription (account). In order to provision resources, you must first create a Resource Group, essentially a management container for various cloud resource instances.
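For instance, a Resource Group can be provisioned through ARM in a few lines; a hedged sketch with the Azure SDK for Python, where the name, location, and subscription ID are placeholders:

```python
# Sketch: create a Resource Group via the ARM API.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg = client.resource_groups.create_or_update("my-rg", {"location": "eastus"})
print(rg.id)  # /subscriptions/<subscription-id>/resourceGroups/my-rg
```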
Azure Storage (Blob)
Microsoft Azure Storage offers several different services:
Blob (unstructured, flat data storage)
Files (cloud-based SMB share for Azure VMs)
Queue (FIFO-style message queues, similar to Azure Service Bus)
Table (NoSQL partitioned storage)
Of these types of storage, Blob storage is arguably the most common. In order to utilize any of these storage services, you must first provision a Storage Account inside an ARM Resource Group (see above). To specifically utilize blob storage, you create a Blob Container inside your Storage Account, and then create or upload blobs into that container. Once data is stored in an Azure Blob Container, it does not move unless a service explicitly requests the data.
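As a hedged sketch of that flow with the azure-storage-blob Python package (the connection string, container name, and file name are placeholders):

```python
# Sketch: create a blob container and upload a blob into it.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.create_container("images")  # fails if it already exists
with open("tiny.png", "rb") as data:
    container.upload_blob(name="tiny.png", data=data)
```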
Azure App Service
If you're deploying a Web App (with a front end) or a REST API App (no front end), you'll be using Microsoft Azure's App Service offering. One unique feature of Azure App Service's Web App offering (I know, it's a mouthful) is WebJobs. WebJobs essentially allow you to run arbitrary code in the cloud, kind of like a background worker process. You can trigger WebJobs when blobs are created or uploaded, as described in this document.
Essentially, you use the [BlobTrigger()] .NET attribute, from the Azure WebJobs SDK, to designate code that will be executed inside Azure WebJobs whenever a new blob is created. The code that executes could grab the blob data, and send it off to your REST API endpoint.
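The paragraph above describes the .NET WebJobs SDK attribute; as a comparable, hedged sketch, the same pattern is available through an Azure Functions blob trigger in Python (v2 programming model). The container path and endpoint URL here are placeholders:

```python
# Sketch: run code whenever a new blob lands, and forward it to a REST endpoint.
import urllib.request
import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="blob", path="uploads/{name}",
                  connection="AzureWebJobsStorage")
def forward_blob(blob: func.InputStream):
    req = urllib.request.Request(
        "https://api.example.com/ingest",  # placeholder REST API endpoint
        data=blob.read(),
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    urllib.request.urlopen(req)
```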

Best practices for setting up developer access to Azure Resources

I would like to find out what the best practices are for managing developers' access to a subset of resources on a client's subscription.
I've searched Google and the Azure documentation looking for definitive answers, but I have yet to come across an article that puts it all together. Because Azure is still developing so rapidly I often find it difficult to determine whether a particular article may still be relevant.
To sum up our situation:
I've been tasked with researching and implementing the Azure infrastructure for a web site our company is developing for a client. At the moment our manager and I have access to the client's entire subscription on the Azure Portal by means of the Service Administrator's credentials, even though we're managing only:
Azure Cloud Service running a Web Role (2 instances, with Production and Staging environments).
Azure SQL Database.
Azure Blob Storage for deployments, diagnostics etc.
We're now moving into a phase where more of the developers in the team will require access to perform maintenance-type tasks such as performing a VIP swap, retrieving diagnostic info, etc.
What is the proper way to manage developer's access on such a project?
The approach I've taken was to implement Role-Based Access Control (https://azure.microsoft.com/en-us/documentation/articles/role-based-access-control-configure/):
Moving 1, 2, and 3 above into a new Resource Group according to http://blog.kloud.com.au/2015/03/24/moving-resources-between-azure-resource-groups/
Creating a new User Group for our company, say "GroupXYZ".
Adding "GroupXYZ" to the Contributor role (see the sketch after this list).
Adding the particular developers' company accounts to "GroupXYZ".
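The Contributor assignment above can also be scripted; a hedged sketch with today's Azure SDK for Python (azure-mgmt-authorization), where the subscription ID, resource group, and the AAD group's object ID are placeholders (the GUID is the built-in Contributor role definition):

```python
# Sketch: grant an AAD group the Contributor role on a Resource Group.
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

SUB = "<subscription-id>"  # placeholder
client = AuthorizationManagementClient(DefaultAzureCredential(), SUB)

scope = f"/subscriptions/{SUB}/resourceGroups/project-rg"  # placeholder RG
contributor = (f"/subscriptions/{SUB}/providers/Microsoft.Authorization/"
               "roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c")

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # assignment name must be a new GUID
    {
        "role_definition_id": contributor,
        "principal_id": "<GroupXYZ-object-id>",  # placeholder AAD group object ID
    },
)
```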
Motivation for taking the role-based approach
From what I understand, giving everyone access as a Co-Administrator would mean that they have full access to every subscription in the portal.
Account-based authentication is preferable to certificate-based authentication due to the complexity added by managing the certificates.
What caused me to question my approach was the fact that I could not perform a VIP swap against the Cloud Service using PowerShell; I received an error message stating that a certificate could not be found.
Do such role-based accounts only have access to Azure by means of the Resource Manager cmdlets?
I had to switch PowerShell to Azure Service Manager (ASM) mode before having access to the Move-AzureDeployment cmdlet.
Something else I'm not sure of is whether or not Visual Studio will have access to those resources (in the Resource Group) when using Role Based Access Control.
When you apply RBAC to Azure as you have, or in general give an account access via RBAC, those accounts can only access Azure via the Azure Resource Manager APIs, whether that's PowerShell, REST, or VS.
VS 2015 can access Azure resources via RBAC when using the 2.7 SDK. VS 2013 will have support for it soon.