I am trying to set up my environment to access Azure resources from outside Azure.
While looking at different options, I came across mainly the two below (among many others):
Option 1:
Create a service principal with the Azure CLI and use client secrets for token retrieval and resource access
Get Client secrets Run Time
Option 2:
Use DefaultAzureCredential (Azure.Identity) for token retrieval and resource access
DefaultAzureCredential
I am currently trying out the DefaultAzureCredential option to access Azure resources such as ADF, Blob Storage, etc.
I am able to do this using Visual Studio credentials (VS 2019). However, the challenge remains to perform the same action via a pipeline running outside Azure. I do not want to save any secrets in the code. Does this mean that I cannot use environment variables for this purpose?
If this is indeed possible, I need help with the code.
Environment:
.NET Framework 4.8 / .NET Core 3.1
Desired Flow:
Use Visual Studio credentials for local development and testing.
Use environment variables OR another credential type supported by DefaultAzureCredential, set via a DevOps pipeline task.
Code:
var tokenCredential = new DefaultAzureCredential();
var accessToken = await tokenCredential.GetTokenAsync(
    new TokenRequestContext(scopes: new string[] { ResourceId + "/.default" })
);
I was able to solve this using DefaultAzureCredential. We followed the approach below:
Add code to read the secrets from appsettings.json.
Add the secrets to environment variables.
Use DefaultAzureCredential to reach the correct override.
Add a replace-tokens task in the Build/Release pipelines to replace the client secret placeholders with secrets from pipeline parameters (a sketch of the placeholders is shown below).
When executed from Visual Studio, the code does not find real values for the secret placeholders in appsettings.json and therefore falls back to Visual Studio credentials.
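For reference, the placeholders in appsettings.json would look roughly like this before the replace-tokens task substitutes the real values (the exact file structure is an assumption here; it depends on how ConfigurationHelper reads the settings):

{
  // Illustrative structure only: these are the token placeholders the pipeline replaces at deploy time.
  "AZURE_CLIENT_ID": "{{AZURE_CLIENT_ID}}",
  "AZURE_CLIENT_SECRET": "{{AZURE_CLIENT_SECRET}}",
  "AZURE_TENANT_ID": "{{AZURE_TENANT_ID}}"
}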
Read values
string AZURE_CLIENT_SECRET = ConfigurationHelper.GetByName("AZURE_CLIENT_SECRET");
string AZURE_CLIENT_ID = ConfigurationHelper.GetByName("AZURE_CLIENT_ID");
string AZURE_TENANT_ID = ConfigurationHelper.GetByName("AZURE_TENANT_ID");

// Check whether the placeholders were replaced with real values
// (i.e. the code is running in the pipeline, not locally).
if (AZURE_CLIENT_SECRET != "{{AZURE_CLIENT_SECRET}}"
    && AZURE_CLIENT_ID != "{{AZURE_CLIENT_ID}}"
    && AZURE_TENANT_ID != "{{AZURE_TENANT_ID}}")
{
    Environment.SetEnvironmentVariable("AZURE_CLIENT_SECRET", AZURE_CLIENT_SECRET);
    Environment.SetEnvironmentVariable("AZURE_CLIENT_ID", AZURE_CLIENT_ID);
    Environment.SetEnvironmentVariable("AZURE_TENANT_ID", AZURE_TENANT_ID);
    Console.WriteLine("Setting environment variables");
}
Call DefaultAzureCredential
// Keep the environment, Visual Studio and VS Code credentials in the chain
// and exclude the rest.
var objDefaultAzureCredentialOptions = new DefaultAzureCredentialOptions
{
    ExcludeEnvironmentCredential = false,
    ExcludeManagedIdentityCredential = true,
    ExcludeSharedTokenCacheCredential = true,
    ExcludeVisualStudioCredential = false,
    ExcludeVisualStudioCodeCredential = false,
    ExcludeAzureCliCredential = true,
    ExcludeInteractiveBrowserCredential = true
};

var tokenCredential = new DefaultAzureCredential(objDefaultAzureCredentialOptions);

AccessToken accessToken = await tokenCredential.GetTokenAsync(
    new TokenRequestContext(scopes: new[] { "https://management.azure.com/.default" }));
If the environment variables are present in the active session, the code uses them; otherwise it falls back to the Visual Studio credentials.
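For completeness, the same credential can be passed straight to an Azure SDK resource client instead of requesting a raw token. A minimal sketch, assuming the Azure.Storage.Blobs package is referenced; the account URL is a placeholder, not something from the original post:

// using Azure.Storage.Blobs;
// Sketch only: replace the placeholder account URL with your own.
var blobServiceClient = new BlobServiceClient(
    new Uri("https://<your-storage-account>.blob.core.windows.net"),
    tokenCredential);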
I've been using Terraform for some time, but I'm new to Terraform Cloud. I have a piece of code that, when run locally, creates a .tf file under a folder that I specify, but when I run it with the Terraform CLI on Terraform Cloud this doesn't happen. I'll show it here so it's clearer for everyone.
resource "genesyscloud_tf_export" "export" {
directory = "../Folder/"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
So basically, when I run this code with terraform apply locally, it creates a .tf file with everything I need. Where? It goes up one folder and stores the file under the folder "Folder".
But when I execute the same code on Terraform Cloud, this obviously doesn't happen. Do any of you have a workaround for this kind of trouble? How can I manage to store this file, for example in a GitHub repo, when executing GitHub Actions? Thanks in advance.
The Terraform Cloud remote execution environment has an ephemeral filesystem that is discarded after a run is complete. Any files you instruct Terraform to create there during the run will therefore be lost after the run is complete.
If you want to make use of this information after the run is complete then you will need to arrange to either store it somewhere else (using additional resources that will write the data to somewhere like Amazon S3) or export the relevant information as root module output values so you can access it via Terraform Cloud's API or UI.
I'm not familiar with genesyscloud_tf_export, but from its documentation it sounds like it will create either one or two files in the given directory:
genesyscloud.tf or genesyscloud.tf.json, depending on whether you set export_as_hcl. (You did, so I assume it'll generate genesyscloud.tf.)
terraform.tfstate if you set include_state_file. (You didn't, so I assume that file isn't important in your case.)
Based on that, I think you could use the hashicorp/local provider's local_file data source to read the generated file into memory once the MyPureCloud/genesyscloud provider has created it, like this:
resource "genesyscloud_tf_export" "export" {
directory = "../Folder"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
data "local_file" "export_config" {
filename = "${genesyscloud_tf_export.export.directory}/genesyscloud.tf"
}
You can then refer to data.local_file.export_config.content to obtain the content of the file elsewhere in your module and declare that it should be written into some other location that will persist after your run is complete.
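For example, the simplest way to persist that content is to expose it as a root module output value, which Terraform Cloud retains after the run (a sketch only; the output name is arbitrary):

# Sketch: surface the exported configuration so it can be read from the
# Terraform Cloud UI/API after the run finishes.
output "genesyscloud_export_config" {
  value = data.local_file.export_config.content
}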
This genesyscloud_tf_export resource type seems unusual in that it modifies data on local disk and so its result presumably can't survive from one run to the next in Terraform Cloud. There might therefore be some problems on the next run if Terraform thinks that genesyscloud_tf_export.export.directory still exists but the files on disk don't, but hopefully the developers of this provider have accounted for that somehow in the provider logic.
I'm working on automating some software installation on Windows AWS EC2. My infrastructure has a dev environment and a prod environment. To handle that, I'm trying to pass some variables through user data:
data "template_file" "template_name" {
template = file("user-data/user-data-software.ps1")
vars = {
aws_region = var.region
bucket_key = module.s3_access_key.bucket_id
bucket_sim_files = module.s3_sim_files.bucket_id
}
}
My issue is that I'm not able to access these variables; I couldn't get them through 'args' or with param.
Have you ever encountered this?
I'm having great difficulty getting Kerberos Auth working with Vault using VaultSharp.
I don't have control over the Vault server, but I've been informed that it is configured and ready to use.
I'm using .NET running in IIS, and I want to make use of the service account that IIS is running under so that I don't need to store additional secrets or usernames/passwords.
Here is the code I'm using and the error:
public string GetSecretWithKerberosAuthUsingVaultSharp(string keyName, string vaultBaseAddress, string vaultResourcePath, string mountPoint)
{
    IAuthMethodInfo authMethod = new KerberosAuthMethodInfo(); // uses network credential by default.
    var vaultClientSettings = new VaultClientSettings(vaultBaseAddress, authMethod);
    IVaultClient vaultClient = new VaultClient(vaultClientSettings);

    var result = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(vaultResourcePath, mountPoint: mountPoint).Result;
    // The line above gives this error message:
    // {"request_id":"a85dfbb3-b283-3513-7cd3-01ad757eed1b","lease_id":"","renewable":false,"lease_duration":0,"data":null,"wrap_info":null,"warnings":["Unauthorised.\n\n"],"auth":null}

    var resultData = result.Data;
    string secret = resultData.Data[keyName].ToString();
    return secret;
}
I have managed to get it working using token auth as well as through the CLI but that is not quite what I want.
authMethod.Credentials.UserName and Domain are both empty strings.
I don't know whether they are supposed to be populated in this case, but the documentation states that it "uses network credentials by default".
Any help appreciated.
Is your web application running in integrated Windows Auth mode, with anonymous auth disabled?
If not, please make your web app run in that mode so that it has the Windows Integrated Auth context and the calls from VaultSharp to the Vault API carry that security context.
If yes, then can you please try a couple of things?
var kerberosAuthInfo = new KerberosAuthMethodInfo(CredentialCache.DefaultCredentials);
If the above doesn't work, can you try explicit credentials?
var kerberosAuthInfo = new KerberosAuthMethodInfo(new NetworkCredential(userName, password, domain));
Ideally, the web app context should carry the integrated Windows context so that you don't need to provide explicit credentials, but it might be worth ensuring that it works first; then we can backtrack as to why the context is not being passed.
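Put together with the original method, the first suggestion would look roughly like this (a sketch only, reusing the parameter names from the question):

public string GetSecretWithKerberosAuthUsingVaultSharp(string keyName, string vaultBaseAddress, string vaultResourcePath, string mountPoint)
{
    // Sketch: pass the integrated Windows context explicitly; swap in
    // new NetworkCredential(userName, password, domain) if this still fails.
    IAuthMethodInfo authMethod = new KerberosAuthMethodInfo(CredentialCache.DefaultCredentials);
    var vaultClientSettings = new VaultClientSettings(vaultBaseAddress, authMethod);
    IVaultClient vaultClient = new VaultClient(vaultClientSettings);

    var result = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(vaultResourcePath, mountPoint: mountPoint).Result;
    return result.Data.Data[keyName].ToString();
}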
I am using the AzureBlobCache config and would like to set the CacheStorageAccount parameter (and other parameters) at runtime because I don't want to commit my storage account credentials into source control in the config file.
I am deploying to Azure App Service and would like to set my CacheStorageAccount in Azure App Service's AppSettings so it can be read at runtime instead of from the config file.
How can/should I achieve this? Should I modify the web.config in Global.asax?
I managed to find the solution. I set this in the Global.asax Application_Start() event to overwrite the settings from the config files.
var appSettings = ConfigurationManager.AppSettings;
var config = ImageProcessorConfiguration.Instance;

// Read the value from appSettings (in Azure App Service, the application
// setting of the same name overrides the one in web.config at runtime).
var cachedStorageAccount = appSettings["CachedStorageAccount"];
if (!string.IsNullOrEmpty(cachedStorageAccount))
{
    // Overwrite the value that was loaded from the config file.
    config.ImageCacheSettings["CachedStorageAccount"] = cachedStorageAccount;
}
I want to automate the queueing of Azure Pipelines with an API call and get information on the pipeline/build/job status.
The Azure Pipelines docs only mention "API" for the "Invoke HTTP REST API" task: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/http-rest-api?view=vsts That might come in handy, but it is not what I am looking for.
There is a "Azure DevOps Services REST API":
https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-5.1
But I couldn't find any mention of "Pipeline" there, so this doesn't seem to be the right thing either.
The Stack Overflow tag azure-devops-rest-api also only mentions VSTS and TFS:
Visual Studio Team Services REST APIs is a set of APIs allowing management of a Visual Studio Team Services accounts as well as TFS 2015 and 2017 servers.
Besides these two results, I only find other versions or translations of various copies of these - and a lot of unrelated documents that are about Azure in general.
Am I just using the wrong words to search?
Is there an actual API for Azure DevOps Pipelines?
Does it have a usable API Explorer?
Does it have proper clients for languages like JavaScript, Ruby or PHP?
Seems I was bad at googling:
"Trigger Azure Pipelines build via API" and "Start a build and passing variables through VSTS Rest API" (found via searching for [azure-pipelines] api here on Stack Overflow) point me to the Azure DevOps Services REST API that I had mentioned above.
I too have been working on automating DevOps pipelines and keep winding up back here. Some of this information appears to be outdated. As of the time of writing this, I believe this article in the Microsoft Docs is the most recent. I did have to scratch my head a bit to make it work, but I wound up with this code:
public static async Task InitiatePipeline(CancellationToken cancellationToken = default)
{
    using (HttpClient client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        // A PAT is sent as the password of a Basic auth header with an empty user name.
        var token = Convert.ToBase64String(System.Text.ASCIIEncoding.ASCII.GetBytes(string.Format("{0}:{1}", "", AppSettings.DevOpsPAT)));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        var repoGuid = "Put GUID Here"; // You can get the GUID for the repo from the URL when you select the repo of interest under Repos in Project Settings.
        var bodyJson = @"{
            ""parameters"": {
                ""parameterName"": ""parameterValue""
            },
            ""variables"": {},
            ""resources"": {
                ""repositories"": {
                    ""self"": {
                        ""repository"": {
                            ""id"": """ + repoGuid + @""",
                            ""type"": ""azureReposGit""
                        },
                        ""refName"": ""refs/heads/master""
                    }
                }
            }
        }";
        var bodyContent = new StringContent(bodyJson, Encoding.UTF8, "application/json");

        var pipeLineId = 61; // Can get this from the URL when you open the pipeline of interest in Azure DevOps.
        var response = await client.PostAsync($"https://dev.azure.com/ORG_NAME/PROJECT_NAME/_apis/pipelines/{pipeLineId}/runs?api-version=6.0-preview.1", bodyContent, cancellationToken);
        response.EnsureSuccessStatusCode();
    }
}
I ran into these problems as well and wound up writing a PowerShell wrapper around the API and then wrapping that into an Azure DevOps pipeline template. I've just published it for anyone to use. I hope anyone who finds this thread finds the template useful.
With AzFunc4DevOps this can be done in an event-driven way. And in C#.
E.g. here is how to trigger a build when another build succeeds:
[FunctionName(nameof(TriggerBuildWhenAnotherBuildSucceeds))]
public static async Task Run(
    [BuildStatusChangedTrigger
    (
        Project = "%TEAM_PROJECT_NAME%",
        BuildDefinitionIds = "%BUILD_DEFINITION_ID%",
        ToValue = "Completed"
    )]
    BuildProxy build,

    [BuildClient]
    BuildHttpClient buildClient,

    [BuildDefinition(Project = "%TEAM_PROJECT_NAME%", Id = "%NEXT_BUILD_DEFINITION_ID%")]
    BuildDefinitionProxy nextbuildDefinition
)
{
    await buildClient.QueueBuildAsync(new Build
    {
        Definition = nextbuildDefinition,
        Project = nextbuildDefinition.Project
    });
}
Here are some more examples.