Azure Functions swap functionality is not working after enabling private endpoint for function app linked storage - azure-devops

Azure Functions swap functionality is not working after enabling a private endpoint (with the selected-networks option) for the function app's linked storage account (AzureWebJobsStorage).
I created private endpoints for blob, file, and table storage.
Below are the additional app settings I am adding:
{
  "name": "WEBSITE_CONTENTOVERVNET",
  "value": "1",
  "slotSetting": false
},
{
  "name": "WEBSITE_CONTENTSHARE",
  "value": "production",
  "slotSetting": false
},
{
  "name": "WEBSITE_DNS_SERVER",
  "value": "168.63.129.16",
  "slotSetting": false
},
{
  "name": "WEBSITE_VNET_ROUTE_ALL",
  "value": "1",
  "slotSetting": false
}
I referred to this article: Secure storage account linked to Function App with private endpoint.
From Azure DevOps I deploy the code to the staging slot first, and then swap it with the production slot; it is failing at the swap step.
I also tried to swap from the portal, and that failed as well.
I am getting the error below.
From the DevOps swap task:
##[error]Error: Failed to swap App Service 'testmgmt-fa-min-go' slots - 'staging' and 'production'. Error: InternalServerError - There was an unexpected error swapping slots 'staging' and 'production' for site 'testmgmt-fa-min-go(staging)'. Please try to cancel your swap operation. (CODE: 500)
From the portal: (the screenshot showed the same swap failure)

This was caused by an internal platform component, and I'll update this question when the component fix has been fully released. Unfortunately, the ETA for a full rollout is within the next 3 to 4 months.

Thanks to @UBK; your comment helped me resolve the same swapping issue in my private-endpoint function app.
I tried to reproduce the issue by following the given documentation: Secure storage account linked to Function App with private endpoint - Microsoft Tech Community.
I solved the swapping issue by allowing access from all networks under Networking on the storage account.

The fix is deployed, but we had to introduce a new app setting that you should set on your production slot (or the target slot if you're swapping between two non-production slots) called WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS, set to 0 (zero). I.e.,
WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS=0
This will allow you to swap the slots when the storage account is network restricted. Here is our documentation on app settings. This should not have any impact on your Azure Monitor diagnostics settings configuration; it relates to the legacy Application Log Settings configuration, which was preventing Premium Functions slot swaps from occurring.
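Expressed in the same app-settings JSON format as the question, the new setting would look like this (a sketch; whether you make it a sticky slot setting depends on your setup):
{
  "name": "WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS",
  "value": "0",
  "slotSetting": false
}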
Next steps on our side are:
We will add a work item to our backlog to make this setting the default for Premium Functions, so you won't have to add it. There is currently no ETA for this, so the above is the current solution.
We will add the app setting to our App Settings list documentation.

Related

How to make Amplify CloudFormation aware of changes made outside of it

I reached a point where Amplify fails to push any change I make, with a non-existent UserPool clientId exception.
Something like
Resource Name: XXXXXXXXXXX (AWS::Cognito::UserPoolClient)
Event Type: update
Reason: User pool client does not exist. (Service: AWSCognitoIdentityProviderService; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: YYYYYYYYYYYYYYYYYY)
URL: https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/xxxxxxxxxxx
I have explained my whole journey in a GitHub issue for Amplify CLI that you can see here; unfortunately, I'm not getting much support from the Amplify team, as you can see there.
I have also created a Stack Overflow question with the initial problem I was facing, which you can check here.
After digging into this issue for 3-4 long days (it is blocking my deployment), I came to a guess at what happened:
I added auth to my Amplify project months ago.
Eventually, I noticed one of the created clients was not being used, so I deleted it using the Cognito console.
I did not update the auth category for months.
Now that I have introduced social authentication, Amplify tried to update the auth resource; because the client ID no longer exists, the update fails with the error above.
Now anything I try to push fails, and I guess the reason is this mismatch between what Amplify expects and what the infrastructure actually is.
Every time I run amplify pull --restore on my environment, my amplify-meta.json is updated with this invalid client ID (and yes, I have tried changing it in the local amplify-meta.json and pushing), something like:
"auth": {
"myproject": {
"service": "Cognito",
"providerPlugin": "awscloudformation",
"output": {
"GoogleWebClient": "111111111.apps.googleusercontent.com",
"AppClientSecret": "aaaaaaaaaaa",
"UserPoolId": "region-pooId",
"AppClientIDWeb": "VALID ID",
"AppClientID": "INVALID ID",
"FacebookWebClient": "2222222222",
"IdentityPoolId": "region:Id",
"IdentityPoolName": "myproject__env",
"UserPoolName": "mypoolname"
},
"lastPushTimeStamp": "2020-05-13T20:48:29.797Z",
"providerMetadata": {
"s3TemplateURL": "https://s3.amazonaws.com/myproject-deployment/amplify-cfn-templates/auth/lexis-cloudformation-template.yml",
"logicalId": "authmyproject"
},
"lastPushDirHash": "XXXXXXXXXXXXXX="
}
},
I have a different, valid client ID in my Cognito pool, so as a last resort I tried going directly to the s3TemplateURL referenced in this config and updating the ID there to the valid one; my guess was that this file was the single source of truth for Amplify.
But no success: I still get the same wrong ID after pull --restore.
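As a sanity check (a minimal sketch using boto3; the region is an assumption and the pool ID is the placeholder from the config above), you can list the app clients that actually exist in the pool and compare them against the IDs in amplify-meta.json:
import boto3

# List the app clients that actually exist in the user pool, to compare
# against AppClientID / AppClientIDWeb in amplify-meta.json.
cognito = boto3.client("cognito-idp", region_name="us-east-1")  # region is an assumption
response = cognito.list_user_pool_clients(UserPoolId="region-pooId", MaxResults=60)
for app_client in response["UserPoolClients"]:
    print(app_client["ClientId"], app_client["ClientName"])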
Any idea how I can get Amplify in sync again, making it aware that this client ID no longer exists and just getting rid of it in the CloudFormation templates?
Amplify CLI does not support this feature yet.
I had the same problem.
I updated AppSync and Cognito in the cloud and could not pull the changes into my project.
When I ran amplify status, it said there were no changes.
So I contacted AWS support, and they said this is a coming feature.
The solution is to make every change through the Amplify CLI and manage Amplify from there; don't change anything directly in the cloud console.

Heroku Review Apps not deploying at all

I'm trying to automatically create review apps as part of my pipeline and testing procedure when pull requests are created on the corresponding GitHub repository. When the PR is created, it appears as a review app, but doesn't actually get created.
In the DevTools console, there is a 404 error about the review-app-config. I'm not sure if this is directly related, as I've successfully created a review app on a different pipeline (with a different owner) that showed the same error.
This 404 error alternates between the file not being available at all and the file returning an error. In the latter case, the file contains the following:
{"id":"missing_version","error":"Please specify a version along with Heroku's API MIME type. For example, `Accept: application/vnd.heroku+json; version=3`.\n"}
I'm creating and managing all of the apps/pipelines with the GUI on dashboard.heroku.com. The version Accept header appears to be needed for the Heroku API, but I've no idea how to implement it (a sketch of sending it directly is below). Any help would be greatly appreciated!
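For what it's worth, this is how that header is sent when calling the Heroku Platform API directly (a minimal sketch in Python; the API token is a placeholder, not something from the original setup):
import requests

# Heroku's Platform API requires the version to be pinned in the Accept header.
headers = {
    "Accept": "application/vnd.heroku+json; version=3",
    "Authorization": "Bearer <your-heroku-api-token>",  # placeholder
}
resp = requests.get("https://api.heroku.com/apps", headers=headers)
resp.raise_for_status()
print(resp.json())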
Firstly, check that your app.json file is valid JSON. If it isn't, that will cause the deployment to fall over.
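A quick way to check that, assuming Python is at hand (any JSON validator works just as well):
import json

# json.load raises JSONDecodeError with the line and column of the problem.
with open("app.json") as f:
    json.load(f)
print("app.json is valid JSON")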
Secondly, check whether you have anything in the scripts key of app.json. If you have any scripts there and they are incorrect, this will also cause it to hang and fall over with no warning displayed.
{
  "name": "App name",
  "scripts": {
    "deploy": "command that won't work!!"
  },
  ...
}
You may not need any scripts here, so it can also be empty:
{
  "name": "App name",
  "scripts": {},
  ...
}

Extending S/4HANA OData service to SCP

I want to extend a custom OData service created in an S/4HANA system. I added a Cloud Connector to my machine, but I don't know how to go on from there. The idea is that people access the service from SCP, so that I don't need multiple accounts accessing the service on the S/4 system, just the one coming from SCP. Any ideas?
OK, I feel silly doing this, but it seems to work. My test is actually inconclusive because I don't have a Cloud Connector handy, but it works proxying Google.
I'm still thinking about how to make it publicly accessible; there might be people with better answers than this.
Create the Cloud Connector destination.
Make a new folder in Web IDE.
Create a file neo-app.json with this content:
{
  "routes": [{
    "path": "/google",
    "target": {
      "type": "destination",
      "name": "google"
    },
    "description": "google"
  }],
  "sendWelcomeFileRedirect": false
}
path is the proxy path in your app, so myapp.scp-account/google here. The target name is your destination; I called mine just google, but you'll put your Cloud Connector destination there.
Deploy.
My test app with a destination google pointing to https://www.google.com came out with Google's page proxied. Paths are relative, so it doesn't fully work, but Google is clearly being proxied.
You'll still have to authenticate etc.

How to Set IP to Static with Powershell and Azure

I have an Azure DevTest Lab that I am deploying to Azure via PowerShell. I am able to deploy the ARM templates and join the test domain (not Azure AD) with no issues. The next step is to set the IP to static. I can think of three ways to do this: figure out the IP structure beforehand and deploy with those settings; let DHCP assign the settings and programmatically change them from dynamic to static using PowerShell DSC; or use some type of preferred lease from the DHCP server. These labs are meant to be stood up and torn down ad hoc. The IPs are internal, not public, and it is possible for me to know them beforehand. Could someone recommend which approach makes the most sense to pursue?
Well, there are several ways of looking at it. First of all, you can define the IP at deployment time by setting it to static instead of dynamic:
{
  "name": "xxx",
  "type": "Microsoft.Network/networkInterfaces",
  "apiVersion": "2016-10-01",
  "location": "loc",
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "ipgoeshere",
          "subnet": {
            "id": "subnetgoeshere"
          }
        }
      }
    ]
  }
}
but this method is only valid if you know the available IP addresses beforehand; you will have to look those up and pass them to the template.
Another way of doing this is creating the NIC as dynamic, getting its IP address, and then setting it to static. All of this can be done with an ARM template. The example is a bit too much to paste here; you can check it here. Look for the deployments called "[concat(variables('vmNamePrefix'),'setStaticIp')]" and "[concat(variables('vmNamePrefix'),copyIndex(1),'-primaryIp')]" and their corresponding templates, getip and setip.
You can do pretty much the same with PowerShell. I don't have a script handy, but the logic is the same: deploy > get IP > set IP.
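In the same spirit, here is the get IP > set IP step sketched with the Azure Python SDK instead, since I don't have the PowerShell at hand (azure-mgmt-network; the subscription, resource group, and NIC names are placeholders):
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Get the NIC after DHCP has handed out a dynamic address.
nic = client.network_interfaces.get("<resource-group>", "<nic-name>")
ip_config = nic.ip_configurations[0]

# Pin the currently leased address by switching the allocation method.
ip_config.private_ip_allocation_method = "Static"

client.network_interfaces.begin_create_or_update(
    "<resource-group>", "<nic-name>", nic
).result()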

VSTS Web Hooks get silently disabled?

How does one configure VSTS to never auto-disable service hooks that encounter errors? Looking through the UI, there's no checkbox for 'always run, regardless of errors'.
Occasionally we have to take down the receiving service for maintenance, and we need VSTS to continue sending the requests regardless of any errors encountered (past or present).
No, there is no way to configure it to never auto-disable service hooks that encounter errors.
Also, continuing to send requests regardless of any errors encountered (past or present) would affect performance.
You can build an app (e.g., a Windows service) to check and re-enable web hooks through the REST API: Update a subscription.
For example:
PUT https://[account].visualstudio.com/_apis/hooks/subscriptions/[subscription id]?api-version=1.0
Body (Content-Type: application/json):
{
  "publisherId": "tfs",
  "eventType": "build.complete",
  "resourceVersion": "1.0-preview.1",
  "consumerId": "webHooks",
  "consumerActionId": "httpRequest",
  "scope": 1,
  "status": 0,
  "publisherInputs": {
    "buildStatus": "",
    "definitionName": "ClassTestVNext",
    "projectId": "578ca584-4268-4ba2-b579-7aaee499c306"
  },
  "consumerInputs": {
    "url": "http://XXXX/"
  }
}
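A rough sketch of such a checker in Python (the account name and PAT are placeholders; it lists all subscriptions and re-submits any that are not enabled, assuming status is reported as a string such as "disabledBySystem"):
import requests

ACCOUNT = "youraccount"          # placeholder
PAT = "<personal-access-token>"  # placeholder
BASE = f"https://{ACCOUNT}.visualstudio.com/_apis/hooks/subscriptions"

# List every service hook subscription in the account.
resp = requests.get(f"{BASE}?api-version=1.0", auth=("", PAT))
resp.raise_for_status()

for sub in resp.json()["value"]:
    # Re-enable anything the platform disabled after repeated delivery failures.
    if sub.get("status") != "enabled":
        sub["status"] = "enabled"
        requests.put(
            f"{BASE}/{sub['id']}?api-version=1.0", json=sub, auth=("", PAT)
        ).raise_for_status()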