Programmatically download RDP file of an Azure Resource Manager VM via REST

I am able to create a VM from a custom image using the Azure Resource Management SDK for .NET. Now I want to download the RDP file for the virtual machine programmatically. I have searched and found the REST API for Azure 'Classic' deployments, which contains an API call to download the RDP file, but I can't find an equivalent in the REST API for 'ARM' deployments. I also can't find any such method in the .NET SDK for Azure.
Is there any way to achieve this? Please guide.

I don't know of a way to get the RDP file itself, but you can get all the information you need from the deployment. On the deployment, you can set outputs for the values you need, such as the public IP's DNS name. See this:
https://github.com/bmoore-msft/AzureRM-Samples/blob/master/VMCSEInstallFilePS/azuredeploy.json#L213-215
If your environment is more complex (load balancers, network security groups), you also need to account for port numbers, etc.
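Since an .rdp file is just a small text file, you can also generate one yourself once you have those values. Here is a minimal sketch, assuming you already have the VM's public DNS name (e.g. from a deployment output) and the RDP port; the host name, port, and file name below are placeholders:

// Minimal sketch: an .rdp file is plain text, so we can write one ourselves.
// The host name and port are placeholders; take them from your deployment
// outputs (public IP DNS label) and any load balancer NAT rules.
using System.IO;

class RdpFileWriter
{
    static void Main()
    {
        string host = "myvm.westus.cloudapp.azure.com"; // assumed DNS label
        int port = 3389;                                // default RDP port

        string rdpContent =
            "full address:s:" + host + ":" + port + "\r\n" +
            "prompt for credentials:i:1\r\n";

        File.WriteAllText("myvm.rdp", rdpContent);
    }
}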

Related

Trying to run VSTS agent through a proxy which limits sites

I have installed the VSTS agent in a very locked-down environment. It makes a connection to VSTS and picks up the job, but fails when downloading the artifact, giving the error:
Error: in getBuild, so retrying => retries pending : 4.
It retries 4 times and then fails.
The agent is going through a proxy. I have set up the proxy using ./config --proxyurl and have also set the HTTP_PROXY and HTTPS_PROXY system environment variables.
The proxy is very limiting in that URLs are locked down, though no authentication is required. Does anybody know which URLs the agent accesses? I am hoping that a definitive list will solve the issue. If anybody knows how to get such a list, that would be great. Or have I misconfigured something?
Any ideas?
According to the documentation section I'm running a firewall and my code is in Azure Repos. What URLs does the agent need to communicate with?:
To ensure your organization works with any existing firewall or IP
restrictions, ensure that dev.azure.com and *.dev.azure.com are open
and update your allow-listed IPs to include the following IP
addresses, based on your IP version. If you're currently allow-listing
the 13.107.6.183 and 13.107.9.183 IP addresses, leave them in place,
as you don't need to remove them.
Also, with just the organization's name or ID, you can get its base URL using the global Resource Areas REST API (https://dev.azure.com/_apis/resourceAreas). This API doesn't require authentication and provides information about the location (URL) of the organization as well as the base URLs for REST APIs, which can live on different domains.
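For example, a quick unauthenticated lookup against that endpoint might look like this (a sketch only; "fabrikam" is a placeholder organization name, and the GUID is the core resource area ID from the linked doc):

// Sketch: resolve an organization's base URL via the Resource Areas API.
// No authentication is required; "fabrikam" is a placeholder org name.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ResourceAreaLookup
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            string url = "https://dev.azure.com/_apis/resourceAreas/"
                + "79134C72-4A58-4B42-976C-04E7115F32BF" // core resource area
                + "?accountName=fabrikam&api-version=5.0-preview.1";
            string json = await http.GetStringAsync(url);
            Console.WriteLine(json); // the response contains a locationUrl for the org
        }
    }
}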
Please check the document Best practices for working with URLs in Azure DevOps extensions and integrations for more details.
Hope this helps.

Use only a domain and disable https://storage.googleapis.com URL access

I am a newbie with cloud servers, and I've opened a Google Cloud Storage bucket to host image files. I've verified my domain and configured it to serve images via my domain. The problem is that the same file is accessible both via my domain, example.com/images/tiny.png, and via storage.googleapis.com/example.com/images/tiny.png. Is there any solution to disable access via storage.googleapis.com and use only my domain?
Google Cloud Platform Support Version:
NOTE: This is the reply from Google Cloud Platform Support when contacted via email...
I understand that you have set up a domain name for one of your Cloud Storage buckets and you want to make sure only URLs starting with your domain name have access to this bucket.
I am afraid that this is not possible because of how Cloud Storage permissions work.
Making a Cloud Storage bucket publicly readable also gives each of its files a public link, and currently this public link can't be disabled.
A workaround would be to implement a proxy program and run it on a Compute Engine virtual machine. The VM will need a static external IP so that you can map your domain to it. The proxy program will be in charge of returning requested files from a predefined Cloud Storage bucket, while the bucket itself remains inaccessible to the public.
You may find these documents helpful if you are interested in this workaround:
1. Quick start to set up a Linux VM: https://cloud.google.com/compute/docs/quickstart-linux
2. Python API for accessing Cloud Storage files: https://pypi.org/project/google-cloud-storage/
3. How to download service account keys to grant a program access to a set of services: https://cloud.google.com/iam/docs/creating-managing-service-account-keys
4. Pricing calculator for getting a picture of how much a VM may cost: https://cloud.google.com/products/calculator/
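For illustration, here is a stripped-down version of such a proxy (a sketch only, using the Google.Cloud.Storage.V1 .NET client to stay consistent with the other examples in this thread; the bucket name and port are placeholders, and a real deployment would need error handling, content types, and caching):

// Sketch of a tiny proxy: serves objects from a private GCS bucket over HTTP.
// Assumes a service account key is available via GOOGLE_APPLICATION_CREDENTIALS.
// The "example.com" bucket name and port 8080 are placeholders.
using System;
using System.Net;
using Google.Cloud.Storage.V1;

class GcsProxy
{
    static void Main()
    {
        var storage = StorageClient.Create();
        var listener = new HttpListener();
        listener.Prefixes.Add("http://*:8080/");
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();
            string objectName = ctx.Request.Url.AbsolutePath.TrimStart('/');
            try
            {
                // Stream the object from the private bucket to the caller.
                storage.DownloadObject("example.com", objectName, ctx.Response.OutputStream);
            }
            catch (Exception)
            {
                ctx.Response.StatusCode = 404; // object missing or inaccessible
            }
            ctx.Response.Close();
        }
    }
}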
My Version:
It seems the solution to this question is really simple: just mount Google Cloud Storage on the VM instance with Cloud Storage FUSE.
After mounting, private files from GCS can be accessed through the VM's IP address; it makes the Google Cloud Storage bucket act like a directory.
The detailed documentation about how to set up FUSE in Google Cloud is here.
There is, but it requires you to do more work.
Your current solution works because you've made access to the GCS bucket (example.com) public, and you're then DNS-aliasing from your domain.
An alternative approach would be to limit access to the GCS bucket to one (or possibly several) accounts, and then run a web server that uses one of those accounts to access your image files. You could then either permit access to your web server to anyone or limit that access as well.
More work for you (and possibly more cost), but more control.

Azure Service Fabric REST API - how to copy application package to image store?

I am looking for the Service Fabric REST API method for copying an application package to the image store of a Service Fabric cluster, i.e. a method similar to the PowerShell cmdlet Copy-ServiceFabricApplicationPackage and the Service Fabric Client .NET API method FabricClient.ApplicationManagementClient.CopyApplicationPackage.
I can't find such a method in the Service Fabric Client REST API Reference.
How should a similar operation be done using Service Fabric REST API methods?
I managed to copy the manifest files using the ImageStore REST API method Upload File. In this case only the manifest files are uploaded, as they define the Azure Container Registry location where the container packages are stored. After the manifest files were uploaded to the image store, I succeeded in provisioning the application type to the Service Fabric cluster.
Details that caused me some headache:
Upload File: the manifest files were uploaded into a folder (with subfolders) in the image store. An empty file named '_.dir' needed to be uploaded into each folder; this is a marker file used by the image store service internally to indicate the availability of the linked folder. See the API reference and the GitHub discussion 'Provisioning application type throws exception'. A sketch of these uploads follows this list.
The image store contents can be checked with the REST API method Get Image Store Content. However, the uploaded files are not visible via this method until the application type is provisioned.
If you Provision Application Type using the 'ImageStorePath' option, the value given to the body parameter ApplicationTypeBuildPath is relative to 'fabric:ImageStore'. I spent some quality time using 'fabric:ImageStore/MyAppType' before realizing it should simply be 'MyAppType'.
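To make the first point concrete, here is a rough sketch of uploading one manifest file plus the '_.dir' marker via the ImageStore Upload File REST method (the cluster address, folder name, and api-version are assumptions; a secured cluster would also need certificate or token authentication):

// Sketch: upload a manifest file and the '_.dir' marker to the image store.
// "http://mycluster:19080" and the MyAppType folder are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ImageStoreUpload
{
    static async Task Main()
    {
        string cluster = "http://mycluster:19080";
        using (var http = new HttpClient())
        {
            // Upload the application manifest into the MyAppType folder.
            byte[] manifest = System.IO.File.ReadAllBytes("ApplicationManifest.xml");
            await http.PutAsync(
                cluster + "/ImageStore/MyAppType/ApplicationManifest.xml?api-version=6.0",
                new ByteArrayContent(manifest));

            // Upload the empty '_.dir' marker so the folder is seen as complete.
            await http.PutAsync(
                cluster + "/ImageStore/MyAppType/_.dir?api-version=6.0",
                new ByteArrayContent(new byte[0]));

            Console.WriteLine("Upload done; now provision with ApplicationTypeBuildPath = MyAppType");
        }
    }
}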
AFAIK, both CopyApplicationPackage and Copy-ServiceFabricApplicationPackage use the ImageStore API under the hood, so I think the ImageStore REST API is what you are looking for.

List Azure Virtual Machines via REST API

I am currently attempting to programmatically get a list of all of the Virtual Machines that I have running under a Windows Azure subscription. For this I am attempting to use the Azure REST API (https://management.core.windows.net) rather than the PowerShell cmdlets.
Using the cmdlets I can run Get-AzureVM and get a listing of all of the VMs with ServiceName, Name, and Status without any modifications. The problem is that I cannot find anywhere in the documentation how to list the VMs via the API.
I have looked through the various Azure REST APIs but have not been able to find anything. The documentation for the VM REST API does not show or provide a list function.
Am I missing the fundamentals somewhere?
// Create the request.
// https://management.core.windows.net/<subscription-id>/services/hostedservices
requestUri = new Uri("https://management.core.windows.net/"
    + subscriptionId
    + "/services/"
    + operation);
This is what I am using for the base of the request. I can get a list of hosted services but not the Virtual Machines.
You would need to get a list of all the Cloud Services (Hosted Services), and then the deployment properties for each. Look for the deployment in the Production environment/slot, then check for a role type of "PersistentVMRole".
VMs are really just a type of Cloud Service, along with Web and Worker roles. The Windows Azure management portal and PowerShell cmdlets abstract this away to make things a little easier to understand and view.
Follow these steps to list VMs:
1. List the hosted services using ListHostedServices.
2. For each service from the above, either:
a) GetDeployment by environment (production or staging), or
b) GetDeployment by name.
In either case, get the value of Deployment.getRoleInstanceList().getRoleInstance().getInstanceName().
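A rough sketch of those steps against the classic Service Management REST API (the subscription ID and certificate path are placeholders, the x-ms-version value is an assumption, and the XML handling is reduced to the essentials):

// Sketch: list hosted services, then check each Production deployment
// for roles of type PersistentVMRole. Classic Service Management auth
// uses a management certificate; all identifiers are placeholders.
using System;
using System.Linq;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;
using System.Xml.Linq;

class ListClassicVms
{
    static async Task Main()
    {
        string baseUri = "https://management.core.windows.net/<subscription-id>";
        XNamespace ns = "http://schemas.microsoft.com/windowsazure";

        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(new X509Certificate2("management.pfx", "password"));

        using (var http = new HttpClient(handler))
        {
            http.DefaultRequestHeaders.Add("x-ms-version", "2014-05-01");

            // Step 1: list hosted services.
            var services = XDocument.Parse(
                    await http.GetStringAsync(baseUri + "/services/hostedservices"))
                .Descendants(ns + "ServiceName").Select(e => e.Value);

            foreach (string service in services)
            {
                // Step 2: get the Production deployment, if any.
                var response = await http.GetAsync(baseUri
                    + "/services/hostedservices/" + service + "/deploymentslots/Production");
                if (!response.IsSuccessStatusCode) continue; // nothing deployed in this slot

                var deployment = XDocument.Parse(await response.Content.ReadAsStringAsync());
                foreach (var role in deployment.Descendants(ns + "Role")
                    .Where(r => (string)r.Element(ns + "RoleType") == "PersistentVMRole"))
                {
                    Console.WriteLine(service + ": " + (string)role.Element(ns + "RoleName"));
                }
            }
        }
    }
}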
You can use the Azure Node SDK to list all VMs in your subscription:
computeClient.virtualMachines.listAll(function (err, result))
More details on Azure Node SDK here: https://github.com/Azure-Samples/compute-node-manage-vm

Create New Windows Azure Hosted Service from a Worker Role

What is the best way to create a new Windows Azure hosted service from a running role, using a package and configuration that I have stored in blob storage?
I am guessing that I could use the Service Management REST API Create Deployment request; however, running a cmdlet from my worker role might be better. Any thoughts? If the cmdlet route is better, bonus points if you can point me in the right direction on how to run cmdlets from a worker role.
Not sure what is 'best' here, because it depends on what you are trying to do. If you just need to create a hosted service programmatically, it would be about the same effort to create a REST client, upload a certificate, and just do it versus using the cmdlets or anything else.
As the creator of the cmdlets, they have a special place in my heart, but I would probably stick to using those for IT admin tasks. They rock for command-line automation.
That being said, it is not terribly hard to roll your own client, but I typically recommend that you download the Service Management contracts from csmanage. That way you have a simple wrapper around this to get going. While it does use WCF, it is not too onerous.
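For reference, the REST route boils down to a single POST against the Create Deployment operation. A minimal sketch under stated assumptions (subscription ID, service name, blob URL, and certificate are placeholders, and the payload omits optional elements):

// Sketch: create a deployment from a package in blob storage via the
// classic Service Management REST API. All identifiers are placeholders;
// the request is authenticated with a management certificate.
using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

class CreateDeploymentSketch
{
    static async Task Main()
    {
        string uri = "https://management.core.windows.net/<subscription-id>"
            + "/services/hostedservices/myservice/deploymentslots/production";

        // Label and Configuration must be base64-encoded.
        string label = Convert.ToBase64String(Encoding.UTF8.GetBytes("my deployment"));
        string config = Convert.ToBase64String(Encoding.UTF8.GetBytes(
            System.IO.File.ReadAllText("ServiceConfiguration.cscfg")));

        string body =
            "<CreateDeployment xmlns=\"http://schemas.microsoft.com/windowsazure\">"
            + "<Name>mydeployment</Name>"
            + "<PackageUrl>https://myaccount.blob.core.windows.net/packages/app.cspkg</PackageUrl>"
            + "<Label>" + label + "</Label>"
            + "<Configuration>" + config + "</Configuration>"
            + "<StartDeployment>true</StartDeployment>"
            + "</CreateDeployment>";

        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(new X509Certificate2("management.pfx", "password"));

        using (var http = new HttpClient(handler))
        {
            http.DefaultRequestHeaders.Add("x-ms-version", "2014-05-01");
            var content = new StringContent(body, Encoding.UTF8, "application/xml");
            var response = await http.PostAsync(uri, content);
            Console.WriteLine(response.StatusCode); // expect 202 Accepted on success
        }
    }
}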