Azure Function: Release pipeline to include appsettings.json values - azure-devops

I have two settings files in my Azure Functions project:
1) local.settings.json
2) appsettings.json
In Startup.cs I combine both into a single configuration, and everything works fine in the code with local debugging.
Config = new ConfigurationBuilder()
    .SetBasePath(currentDirectory)
    .AddConfiguration(configuration)
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .Build();
The appsettings.json contains nested values, e.g.:
{
  "Test": {
    "abc": "def"
  }
}
I am deploying my code to a Function App using a release pipeline in Azure DevOps.
I am using the "Azure App Service Settings" task in my release pipeline to substitute the values in the final deployment to the Function App.
So the values that go into the release task mentioned above are:
[
  { "name": "FUNCTIONS_WORKER_RUNTIME", "value": "dotnet", "slotSetting": false }, // from local.settings.json
  { "name": "Test__abc", "value": "def", "slotSetting": false }                    // from appsettings.json
]
ISSUE:
The code gets deployed without any issue, and the local.settings.json value passes through fine. But the issue is with the appsettings.json value: it gets added to the Configuration tab in the Azure portal, but it does not seem to be used by the app. The app errors out because that value is null.
How do I pass the appsettings value through the release task to the Azure Function?

The app started to work after I added .AddEnvironmentVariables() in Startup. App Service application settings are exposed to the app as environment variables, and the double underscore in Test__abc maps to the ':' configuration hierarchy separator, so the environment-variables provider is what makes the overridden value visible:
Config = new ConfigurationBuilder()
    .SetBasePath(currentDirectory)
    .AddConfiguration(configuration)
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddEnvironmentVariables()
    .Build();
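With that in place, the nested value can be read through the usual hierarchical key syntax. A minimal sketch for verification (Config is the IConfigurationRoot built above):

// "Test__abc" set as an App Service application setting surfaces as an
// environment variable; the environment-variables provider exposes it
// under the hierarchical key "Test:abc".
string abc = Config["Test:abc"];                 // "def"
string same = Config.GetSection("Test")["abc"];  // equivalent lookup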

Related

Need to report on data from Retrospectives - Azure DevOps

We need a way to access the data contained within the Retrospectives Azure DevOps extension in an automated way (either a REST API or some SDK). Currently there is an option to export CSV, but the process is manual and limited to each retrospective. Any ideas/thoughts?
You can try the following steps:
1. Run the API to get the information of the project teams in a project.
Request URL:
POST https://dev.azure.com/{organization_Name}/_apis/Contribution/HierarchyQuery?api-version=5.0-preview.1
Request Body:
{
  "contributionIds": ["ms.vss-admin-web.org-admin-groups-data-provider"],
  "dataProviderContext": {
    "properties": {
      "teamsFlag": true,
      "sourcePage": {
        "url": "https://dev.azure.com/{organization_Name}/{project_Name}/_settings/teams",
        "routeId": "ms.vss-admin-web.project-admin-hub-route",
        "routeValues": {
          "project": "{project_Name}",
          "adminPivot": "teams",
          "controller": "ContributedPage",
          "action": "Execute",
          "serviceHost": "{organization_Id} ({organization_Name})"
        }
      }
    }
  }
}
2. Run the API to list the retrospectives for a specified project team in the project.
GET https://extmgmt.dev.azure.com/{organization_Name}/_apis/ExtensionManagement/InstalledExtensions/ms-devlabs/team-retrospectives/Data/Scopes/Default/Current/Collections/{projectTeam_identityId}/Documents?api-version=3.1-preview.1
3. Run the API to get more details about a specified retrospective.
GET https://extmgmt.dev.azure.com/{organization_Name}/_apis/ExtensionManagement/InstalledExtensions/ms-devlabs/team-retrospectives/Data/Scopes/Default/Current/Collections/{retrospective_Id}?api-version=3.1-preview.1
However, there is no available interface (API or CLI) to export the CSV content.
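The raw documents can still be pulled programmatically, though. A minimal C# sketch of step 2 above, assuming a personal access token (PAT) with read access to the extension data; the organization name, team identity id, and token are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class RetrospectiveList
{
    static async Task Main()
    {
        const string org = "{organization_Name}";          // placeholder, as in the URLs above
        const string teamId = "{projectTeam_identityId}";  // placeholder, as in the URLs above
        const string pat = "<personal-access-token>";      // placeholder

        using var client = new HttpClient();
        // Azure DevOps REST APIs accept a PAT as the password of a Basic auth
        // header with an empty user name.
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

        var url = $"https://extmgmt.dev.azure.com/{org}/_apis/ExtensionManagement/" +
                  $"InstalledExtensions/ms-devlabs/team-retrospectives/Data/Scopes/" +
                  $"Default/Current/Collections/{teamId}/Documents?api-version=3.1-preview.1";

        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}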

What is the valid value for destinationTable for the Data Factory Diagnostic settings in the ARM JSON Template?

I want to use the recommended diagnostic settings for Azure Data Factory with the "Resource specific" destination table. I'm using ARM templates to deploy the change, but none of the values I put in seem to work.
What is the correct value to use for the "resource specific" destination table?
Thanks!
I have tried these different values in the ARM template: resourceSpecific, ResourceSpecific, Resource-Specific:
"properties": {
"name": "[variables('LogAnalyticsSettingName')]",
"storageAccountId": null,
"eventHubAuthorizationRuleId": null,
"eventHubName": null,
"workspaceId": "[resourceId('microsoft.operationalinsights/workspaces',parameters('OMSWorkspaceName'))]",
"destinationTable": "resourceSpecific",
"logs": [
{
"category": "PipelineRuns",
"enabled": true,
"retentionPolicy": {
"enabled": false,
"days": 0
}
}
After deploying the ARM template:
Expected result: destination table is Resource specific in the ADF diagnostic settings.
Actual result: destination table remains Azure diagnostics.
I was able to find the solution by reviewing the Activity Log of the Data Factory after changing the diagnostic settings manually from the portal.
In the "Create or update resource diagnostic setting" JSON request body, I saw a property called "logAnalyticsDestinationType" with a value of "Dedicated".
I removed the destinationTable property from the ARM JSON template, replaced it with "logAnalyticsDestinationType": "Dedicated" instead, and redeployed the ARM template. It worked as expected.
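For reference, the corrected properties block then looks like this (a sketch of the change described above; everything else is unchanged from the template in the question):

"properties": {
  "name": "[variables('LogAnalyticsSettingName')]",
  "workspaceId": "[resourceId('microsoft.operationalinsights/workspaces', parameters('OMSWorkspaceName'))]",
  "logAnalyticsDestinationType": "Dedicated",
  "logs": [
    {
      "category": "PipelineRuns",
      "enabled": true,
      "retentionPolicy": {
        "enabled": false,
        "days": 0
      }
    }
  ]
}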

How to use code first migrations with Azure App Service

When I run locally, I run the commands below manually and then package and publish the app onto my IIS server.
Add-Migration Initial
Update-Database
When I publish to an Azure App Service, will these commands run automatically? If so, how does it know to use a different connection string when I publish it to Azure?
I added the connection string for Azure in appsettings.json, but I don't understand how I can tell my controllers etc. to use that one when I publish to Azure App Service.
"ConnectionStrings": {
"AzureTestConnection": "Data Source=tcp:xxxxxx-test.database.windows.net,1433;Initial Catalog=xxxxx;User Id=xxx#yyyy.database.windows.net;Password=xxxxxx",
"NWMposBackendContext": "Server=(localdb)\\mssqllocaldb;Database=NWMposBackendContext-573f6261-6657-4916-b5dc-1ebd06f7401b;Trusted_Connection=True;MultipleActiveResultSets=true"
}
I am trying to have three profiles with different connection strings
Local
Published to AzureApp-Test
Published to AzureApp-Prod
When I want to publish to an Azure App Service, will these commands run automatically?
EF Core does not support automatic migrations; you need to manually execute Add-Migration or dotnet ef migrations add to create the migration files. You can then explicitly run the command to apply the migrations, or apply them from code.
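For reference, the .NET CLI equivalents of the Package Manager Console commands above (assuming the dotnet-ef tooling is installed):
dotnet ef migrations add Initial
dotnet ef database update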
To apply them from code, you can add the following to the Configure method of the Startup.cs file:
using (var scope = app.ApplicationServices.GetService<IServiceScopeFactory>().CreateScope())
{
    // Applies any pending migrations on startup; creates the database if it does not exist.
    scope.ServiceProvider.GetRequiredService<ApplicationDbContext>().Database.Migrate();
}
I am trying to have three profiles with different connection strings
You can dynamically choose a connection string based on the environment. Here are the main steps:
1. Set the ASPNETCORE_ENVIRONMENT value to azure under web app > Properties > Debug.
2. Follow ASP.NET Core MVC with Entity Framework Core to get started.
3. Set appsettings.json with your two connection strings:
{
  "ConnectionStrings": {
    "DefaultConnection": "connectiondefault",
    "azure": "connectionazure"
  },
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  }
}
Note: you could also set the connection string in the portal (App Service > Configuration > Connection strings); that way you can still test locally and use the debugger for troubleshooting.
Also, you could first test with one connection string to make sure there is no problem connecting to the database.
4. Enable the developer exception page, using app.UseDeveloperExceptionPage() and the app.UseExceptionHandler methods in your Startup class, to display any errors:
public Startup(IHostingEnvironment env)
{
    Configuration = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .Build();
    HostingEnvironment = env;
}

public IConfigurationRoot Configuration { get; }
public IHostingEnvironment HostingEnvironment { get; }

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    if (HostingEnvironment.IsDevelopment())
    {
        // Local development: use the default connection string.
        services.AddDbContext<SchoolContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    }
    else
    {
        // Any non-development environment (e.g. ASPNETCORE_ENVIRONMENT=azure): use the Azure connection string.
        services.AddDbContext<SchoolContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("azure")));
    }
    services.AddMvc();
}
For more details, you could refer to this thread.

Passing secureObject array as VSTS variable

I have an ARM template that deploys a Key Vault and populates it with secrets. It creates as many secrets as there are entries in the secrets array of the secretsObject parameter. For example, if I have:
"secretsObject": {
"type": "secureObject",
"defaultValue": {
"secrets": [
{
"secretName": "exampleSecret1",
"secretValue": "secretVaule1"
},
{
"secretName": "exampleSecret2",
"secretValue": "secretValue2"
}
]
}
}
the template will create two secrets. So this is the snippet that I put into .parameters.json to deploy the template from Visual Studio:
"secrets": [
{
"secretName": "exampleSecret1",
"secretValue": "secretVaule1"
},
{
"secretName": "exampleSecret2",
"secretValue": "secretValue2"
}
]
The problem is I can't figure out how to pass such a snippet into VSTS as a variable (to override the parameter). This is the ARM template I'm using. The deployment fails with:
There were errors in your deployment. Error code: InvalidDeploymentParameterKey.
One of the deployment parameters has an empty key. Please see https://aka.ms/arm-deploy/#parameter-file for details.
Processed: ##vso[task.issue type=error;]One of the deployment parameters has an empty key. Please see https://aka.ms/arm-deploy/#parameter-file for details.
task result: Failed
Task failed while creating or updating the template deployment.
This is an issue in the Azure Resource Group Deployment task, and I submitted feedback here: VSTS build/release task: Override template parameters of Azure Resource Group Deployment.
The workaround is to update the parameter file during the build/release (e.g. parameters.json) and specify that parameter file in the Azure Resource Group Deployment task.
There are many ways to update the file, such as the Replace Tokens task; see the sketch below.
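For example, the parameter file can carry a token that Replace Tokens substitutes at release time. A sketch, assuming the task's default #{...}# token delimiters; the secretsJson variable name is made up for illustration, and its release-variable value would be the whole secrets array as JSON:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "secretsObject": {
      "value": {
        "secrets": #{secretsJson}#
      }
    }
  }
}

Note that the file is only valid JSON after substitution, so the variable's value must itself be well-formed JSON.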
Update:
Feedback on GitHub: https://github.com/Microsoft/vsts-tasks/issues/6108

How to deploy an opsworks application by cloudformation?

In a CloudFormation template, I create an OpsWorks stack, a layer, an instance, and an application. This template sets up and configures the instance via a Chef cookbook of recipes and scripts. How can I deploy the application automatically from the template, without manually clicking Deploy inside the stack? After the deploy, the Deploy recipes defined in the cookbook are executed:
"MyLayer": {
"Type": "AWS::OpsWorks::Layer",
"DependsOn" : "OpsWorksServiceRole",
"Properties": {
"AutoAssignElasticIps" : false,
"AutoAssignPublicIps" : true,
"CustomRecipes" : {
"Setup" : ["cassandra::setup","awscli::setup","settings::setup"],
"Deploy": ["imports::deploy"]
},
"CustomSecurityGroupIds" : { "Ref" : "SecurityGroupIds" },
"EnableAutoHealing" : true,
"InstallUpdatesOnBoot": false,
"LifecycleEventConfiguration": {
"ShutdownEventConfiguration": {
"DelayUntilElbConnectionsDrained": false,
"ExecutionTimeout": 120 }
},
"Name": "script-node",
"Shortname" : "node",
"StackId": { "Ref": "MyStack" },
"Type": "custom",
"UseEbsOptimizedInstances": true,
"VolumeConfigurations": [ {
"Iops": 10000,
"MountPoint": "/dev/sda1",
"NumberOfDisks": 1,
"Size": 20,
"VolumeType": "gp2"
}]
}
}
An application looks like the sketch below (the original screenshot is omitted here). Any ideas? Thank you.
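A minimal AWS::OpsWorks::App sketch standing in for the screenshot; the resource name, app name, type, and source URL are placeholders rather than the original values:

"MyApp": {
  "Type": "AWS::OpsWorks::App",
  "Properties": {
    "StackId": { "Ref": "MyStack" },
    "Name": "script-node-app",
    "Type": "other",
    "AppSource": {
      "Type": "git",
      "Url": "git://github.com/example/example-app.git"
    }
  }
}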
The CreateDeployment API call generates a one-off event that executes the Deploy actions within your OpsWorks stack. I don't think any official CloudFormation resource maps to this directly, but here are some ideas on how to call it within the context of a CloudFormation template:
Write a Custom Resource that calls CreateDeployment (e.g., via the AWS SDK for Node.js) when created; see the sketch after this list.
Add an AWS::CodePipeline::Pipeline resource to your template that's configured to deploy your OpsWorks app as part of a Deploy Stage. See Using AWS CodePipeline with AWS OpsWorks Stacks for documentation on this integration. (Though it's an extra service + layer of complexity, I think CodePipeline is a better layer of abstraction for modeling deployment actions in your application stack anyway.)
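For the first option, the core of the custom resource is a single CreateDeployment call. A minimal sketch using the AWS SDK for .NET (AWSSDK.OpsWorks package) rather than Node.js; the ids are placeholders, and the cfn-response plumbing a real Lambda-backed custom resource needs is omitted:

using System;
using System.Threading.Tasks;
using Amazon.OpsWorks;
using Amazon.OpsWorks.Model;

class DeployOpsWorksApp
{
    static async Task Main()
    {
        // Placeholder ids; a custom resource would receive these via its Properties.
        const string stackId = "<opsworks-stack-id>";
        const string appId = "<opsworks-app-id>";

        using var client = new AmazonOpsWorksClient();

        // Triggers the stack's Deploy lifecycle event, which runs the recipes
        // listed under CustomRecipes.Deploy (imports::deploy in the layer above).
        var response = await client.CreateDeploymentAsync(new CreateDeploymentRequest
        {
            StackId = stackId,
            AppId = appId,
            Command = new DeploymentCommand { Name = DeploymentCommandName.Deploy }
        });

        Console.WriteLine($"Deployment started: {response.DeploymentId}");
    }
}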
I believe this can be done within the recipes. So in your recipes you'll have a function to validate the app name, and if it exists then proceed with the deployment.
For example, your deploy recipe would look something like this:
if validator(node[:app][:name]) == true
  # proceed with the deployment steps
end
and this validator function can reside in your Chef library:
def validator(app_name)
  app = search("aws_opsworks_app", "name:#{app_name}").first
  if app && app[:deploy] == true
    Chef::Log.warn("PROCEEDING: Deploy initiated for #{app[:name]}")
    return true
  end
  false
end