I've set up Azure CDN (Verizon Premium) and have it serving static files from Blob Storage.
The domain is http://abc.domain.com
I also have an azure function at
POST https://flows1.azurewebsites.net/api/survey
Now, from my page
http://abc.domain.com/index.html
I need to call my Azure Function.
Am I able to use the CDN for a URL rewrite to take
POST http://abc.domain.com/api/survey
and have the CDN pass through to
POST https://flows1.azurewebsites.net/api/survey
I want to avoid cross-domain scripting issues and am hoping this configuration will allow me to do so.
Thanks for your help,
I have done some research and concluded that this is not possible. At least, not without sacrificing the benefits of using the Azure CDN and Azure Storage in the first place.
To address the issue of cross-domain scripting: once you have set up another domain to point to the Azure Function (e.g. http://api.domain.com), make sure you add Access-Control-Allow-Origin: https://abc.domain.com to the response headers in the Azure Function. Do not use Access-Control-Allow-Origin: *. See here why.
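For illustration, here is a minimal sketch of an HTTP-triggered Azure Function (JavaScript) that sets that header explicitly; the survey-handling logic is stubbed out, and in practice the Function App's platform CORS settings can add the header as well:

```javascript
// Sketch of the function behind https://flows1.azurewebsites.net/api/survey
module.exports = async function (context, req) {
  const survey = req.body; // the posted survey payload

  // ... validate and store/forward the survey here ...

  context.res = {
    status: 200,
    headers: {
      // Echo only the origin you trust; never use "*" here.
      "Access-Control-Allow-Origin": "https://abc.domain.com",
      "Content-Type": "application/json"
    },
    body: { received: Boolean(survey) }
  };
};
```

Keep in mind that a cross-origin POST with a JSON body also triggers a preflight OPTIONS request, which has to be answered with the same header.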
I am just wondering what the best approach is to implement a simple form with file upload on a static website without any backend.
Scenario:
I have a static website (NuxtJS) where a form can be filled out and files can be uploaded.
To protect this form I wanted to use Google's reCAPTCHA, but as I read a little further in their documentation it seems that I need a backend, which is overkill for a static website.
Furthermore, I wanted to support file uploads... quite complicated without a backend.
What I thought of:
Maybe there is an existing product which does exactly what I am looking for? Or should I build an AWS Lambda pipeline (of course with an S3 bucket) to function as my "backend" for reCAPTCHA and file upload?
Is there any approach which makes this scenario simpler, or am I overcomplicating things at the moment?
Use Case / Flow Chart:
User enters the website.
Fills out the form.
(Optional) Uploads files.
Checks the reCAPTCHA.
Clicks Send - sends a "message" to our company's Slack channel or email.
However, I solved this "common" task with a custom "backend" hosted on AWS Lambda, which makes the whole setup "serverless".
For those who are interested in how to set up a serverless backend, here's the current flow chart I made use of.
As you can see, after the reCAPTCHA is validated on the client side and a token is generated, it is sent to the AWS API Gateway, which triggers a Lambda function (a Node.js implementation of a backend) where the token is validated and, for file uploads, pre-signed URLs are generated.
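As a rough sketch of that Lambda (Node.js with aws-sdk v2), assuming the environment variables RECAPTCHA_SECRET and UPLOAD_BUCKET and a JSON request body containing token and fileName (the Slack/email notification step is left out):

```javascript
const https = require("https");
const AWS = require("aws-sdk");

const s3 = new AWS.S3();

// Verify the client-side reCAPTCHA token against Google's siteverify endpoint.
function verifyRecaptcha(token) {
  const body =
    `secret=${encodeURIComponent(process.env.RECAPTCHA_SECRET)}` +
    `&response=${encodeURIComponent(token)}`;
  return new Promise((resolve, reject) => {
    const req = https.request(
      {
        hostname: "www.google.com",
        path: "/recaptcha/api/siteverify",
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" }
      },
      (res) => {
        let data = "";
        res.on("data", (chunk) => (data += chunk));
        res.on("end", () => resolve(JSON.parse(data)));
      }
    );
    req.on("error", reject);
    req.end(body);
  });
}

exports.handler = async (event) => {
  const { token, fileName } = JSON.parse(event.body);

  const verification = await verifyRecaptcha(token);
  if (!verification.success) {
    return { statusCode: 403, body: JSON.stringify({ error: "reCAPTCHA validation failed" }) };
  }

  // Hand back a pre-signed URL so the browser can PUT the file straight to S3.
  const uploadUrl = s3.getSignedUrl("putObject", {
    Bucket: process.env.UPLOAD_BUCKET,
    Key: `uploads/${Date.now()}-${fileName}`,
    Expires: 300 // seconds
  });

  return { statusCode: 200, body: JSON.stringify({ uploadUrl }) };
};
```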
Note: the API Gateway and the S3 bucket need a valid CORS configuration to communicate with each other and with the world.
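For reference, a sketch of the S3 side of that CORS configuration, applied once with aws-sdk v2 (bucket name and origin are placeholders; API Gateway CORS is configured separately on the API's resources):

```javascript
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

// Allow the static site's origin to upload directly to the bucket.
s3.putBucketCors({
  Bucket: "my-upload-bucket", // placeholder
  CORSConfiguration: {
    CORSRules: [
      {
        AllowedOrigins: ["https://www.example.com"], // the static site's origin
        AllowedMethods: ["GET", "PUT", "POST"],
        AllowedHeaders: ["*"],
        MaxAgeSeconds: 3000
      }
    ]
  }
})
  .promise()
  .then(() => console.log("CORS rules applied"))
  .catch(console.error);
```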
We use VSTS dashboards and would like to use the "embedded webpage" widget to display customized information. We do this by linking to a server where we put some code that calls the VSTS REST API. We authenticate using Personal Access Tokens (PATs) stored on the server.
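For context, the server-side call is roughly this shape (a Node.js sketch; the account name and API version are placeholders, and the PAT is read from the environment):

```javascript
const https = require("https");

// VSTS accepts the PAT as the password of HTTP Basic auth with an empty user name.
const auth = Buffer.from(":" + process.env.VSTS_PAT).toString("base64");

https.get(
  {
    hostname: "myaccount.visualstudio.com", // placeholder account
    path: "/_apis/projects?api-version=4.1", // placeholder API version
    headers: { Authorization: "Basic " + auth }
  },
  (res) => {
    let data = "";
    res.on("data", (chunk) => (data += chunk));
    res.on("end", () => console.log(JSON.parse(data)));
  }
);
```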
To simplify this process we could skip the server and PATs altogether by using the embedded webpage widget and pointing it at an HTML file. This HTML file would contain JavaScript to perform the API calls to VSTS and display the information. This, however, is not possible because of CORS restrictions: we would need to provide a PAT to perform CORS, which complicates things.
One workaround is to host the HTML page in Git in VSTS. If we do this the CORS policy would match, but it is not possible to get the file from Git with content type text/html, so the HTML is not rendered when put in the widget.
I also tried the IFrame extension, which allows an iframe from a data: URI, but data URIs seem to have a different origin, so the cookie is not transferred, which means it won't authenticate.
I understand there is a security risk, since it would be possible to perform API calls on behalf of whoever is viewing the dashboard, so it may be by design that this is not possible.
Is it possible to make a VSTS widget in pure HTML that calls the VSTS API without using PATs?
No, you can't; you need to do it in an extension HTML file directly.
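To illustrate the extension route, here is a hedged sketch of the script an extension's HTML file might contain, using the VSS web extension SDK (the widget registration, manifest, and the element/work item ids are omitted or placeholders):

```javascript
// Runs inside the extension's own HTML page, which loads VSS.SDK.min.js.
VSS.init({ usePlatformScripts: true });

VSS.require(["TFS/WorkItemTracking/RestClient"], function (witRestClient) {
  // The SDK authenticates as the signed-in user, so no PAT is needed.
  var client = witRestClient.getClient();

  client.getWorkItem(1).then(function (workItem) {
    document.getElementById("output").innerText = workItem.fields["System.Title"];
  });
});
```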
Using only the REST API, I am able to upload a file to Azure Media Services from my local machine and start an encoding job. Then I need to poll the job for status to see when it is done. But, what I really want is for Azure Media Services to send a request to my callback URL when it is done. Is there way to do this?
Take a look at our Notifications feature, which supports WebHooks.
https://learn.microsoft.com/en-us/azure/media-services/media-services-dotnet-check-job-progress-with-webhooks
It also integrates well with Azure Functions - if you want to host your callback in Azure Functions, you can just leverage the WebHook trigger there.
We have some examples of doing that up here:
https://github.com/Azure-Samples/media-services-dotnet-functions-integration/tree/master/101-notify-webhooks
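As a rough sketch of such a callback (a JavaScript Azure Function with an HTTP trigger), assuming the notification body exposes EventType and Properties.JobId / Properties.NewState as in the linked sample; verifying the webhook's signing key is omitted here:

```javascript
module.exports = async function (context, req) {
  const message = req.body || {};
  // Property names are taken from the notification message shown in the linked
  // sample – double-check them against the sample before relying on them.
  const props = message.Properties || {};

  context.log(`Notification ${message.EventType}: job ${props.JobId} is now ${props.NewState}`);

  if (props.NewState === "Finished") {
    // The encoding job is done – trigger whatever should happen next.
  }

  context.res = { status: 200, body: "" };
};
```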
I want to call an external REST resource from within an Atlassian Confluence wiki.
Any examples?
Can this be achieved via the CLI in the backend?
Please kindly share your thoughts.
The fact that you need this is a warning sign about the design of your app. The plugin API is way more powerful than the REST API, and you should learn to use it.
Technically, what you want is possible, but you may have a problem with authentication. When you try to reach the web interface from the backend, you have to log in as a user; you will not be automatically logged in as the backend user. You also need to have access to the URL, which is not automatic in a corporate environment with all kinds of complex network solutions.
If the REST service is unauthenticated, then you could look at enabling the html-include macro, which would allow you to do an HTML include of the GET REST service call within the page.
It would look like this once enabled:
{html-include:url=http://www.example.com/rest/myservice?param1=1}
However, I suggest using their whitelist feature if you do this.
Also note that this only works for self-hosted instances and not for on-demand.
Say we want a REST API to support file uploads, and we want uploads to be done directly on S3.
According to this solution, Amazon S3 direct file upload from client browser - private key disclosure, we have to create a POLICY and SIGNATURE for the user to be allowed to upload to S3.
However, we want a single entry point for the API, including uploads.
Can we:
1. in our API, catch POST https://www.example.org/users/1234/objects
2. calculate POLICY and SIGNATURE to allow direct upload to S3
3. return a 307 "Temporary Redirect" to https://s3-bucket.s3.amazonaws.com
How to pass POLICY and SIGNATURE in the redirect?
What is best practice here?
You don't redirect; instead, your API should return the policy and signature in the response (say, as JSON).
The browser can then use these values to upload directly to S3 as in the linked document. This is a two-step process.
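A minimal sketch of that two-step flow, assuming aws-sdk v2 on the API side and fetch/FormData in the browser (bucket and key names are placeholders):

```javascript
// Step 1 (API): build the policy and signature with createPresignedPost and
// return them as JSON instead of redirecting.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

function getUploadForm(objectKey, callback) {
  s3.createPresignedPost(
    {
      Bucket: "my-bucket", // placeholder
      Fields: { key: objectKey },
      Expires: 300, // seconds
      Conditions: [["content-length-range", 0, 10485760]] // max 10 MB
    },
    callback // callback(err, { url, fields }) – fields carry the policy and signature
  );
}

// Step 2 (browser): POST the returned fields plus the file directly to S3.
async function uploadToS3(presigned, file) {
  const form = new FormData();
  Object.entries(presigned.fields).forEach(([name, value]) => form.append(name, value));
  form.append("file", file); // the file field must come last
  const res = await fetch(presigned.url, { method: "POST", body: form });
  if (!res.ok) {
    throw new Error("Upload failed: " + res.status);
  }
}
```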