Web API inaccessible from APIM when both services have Private Link enabled

I'm currently tasked with setting up a secure, non-public connection between APIM and a Web API, and I've opted to use Private Endpoints for both services. The problem is that when Private Link is enabled on both, APIM can no longer connect to the Web API.
I've searched for similar questions online, but none of them seem to have Private Link enabled on APIM. Here's what I've done so far:
I created a virtual network called VNET1 with two subnets: PrivateLink-Subnet and VM-Subnet.
I deployed a simple Web API as a Web App, enabled private link, and used PrivateLink-Subnet.
Microsoft automatically created a private DNS zone for it. After this setup, the Web App is not accessible to the public, as expected.
To test VNET resources and Private Link, I used a Windows Virtual Machine, and from within the VM, I could access the Web API through "myapi.azurewebsites.net". So far, everything seems to be working well, as the app is only accessible from within the VNET.
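For reference, this is roughly how I tested it from the VM (a minimal sketch; the hostname is the one from my setup and the path is a placeholder):

```
# Run from the test VM inside VNET1. The name should resolve (via the privatelink
# CNAME) to a private IP in PrivateLink-Subnet, not to a public address.
nslookup myapi.azurewebsites.net

# A plain HTTPS call to the API should succeed from inside the VNET.
curl -v https://myapi.azurewebsites.net/
```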
For API Management, I selected "None" for the Virtual network settings, as per the documentation, and instead created a Private Endpoint. I chose the same VNET1 and PrivateLink-Subnet for the private endpoint, and added a single API in APIM whose backend points to "myapi.azurewebsites.net".
The issue arises when I try to call the API through the APIM gateway: it returns a 403 error saying that the app has blocked my access. When I do an NSLOOKUP from within the VM, both APIM and the Web App resolve to addresses in the same subnet, which is expected as both private endpoints use the same subnet.
I believe that, for some reason, APIM still resolves the API to its public IP address, even though the private DNS zone created for the Web App's private endpoint has A records that should sort that out!
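To double-check the DNS side, something like this can list the A records in the auto-created private DNS zone and show what each hostname resolves to from the VM (a sketch; the resource group and APIM name are placeholders):

```
# List the A records Microsoft created in the privatelink zone for the Web App.
az network private-dns record-set a list \
  --resource-group MyResourceGroup \
  --zone-name privatelink.azurewebsites.net \
  --output table

# From the VM, compare what the APIM gateway and the Web App resolve to.
nslookup myapim.azure-api.net
nslookup myapi.azurewebsites.net
```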
I tried putting the private links on different subnets, but still no luck.
And if I go to the Networking section of the Web App and enable public access, everything works like a charm, but that's not what we want. We need this to be accessible via the VNET only; later we'll add a VPN so people can access the APIs through APIM only when connected through the VPN.
FYI, if I choose a Virtual Network type of External or Internal on APIM, everything works fine. But we're supposed to use Private Link for both the Web App and APIM: no exposure to the internet!

Related

How to limit access in Cloud Foundry

I am new to Cloud Foundry.
Is there any way that only specific users can view and update an app deployed in Cloud Foundry?
1. I deployed an app in Cloud Foundry using the “cf push” command.
2. After entering the “cf push” command, I got the message below.
Using manifest file /home/stevemar/node-hello-world/manifest.yml
Creating app node-hello-world-example...
name: node-hello-world-example
requested state: started
routes: {route-information}
last uploaded: Mon 14 Sep 13:46:54 UTC 2020
stack: cflinuxfs3
buildpacks: sdk-for-nodejs
type: web
instances: 1/1
memory usage: 256M
3. Using the {route-information} above, I can see the deployed app in a browser by entering the URL below.
https://{route-information}
This way, anyone can see the app from a browser, but I don't want it to be visible to everyone; I want to limit access to specific users.
I heard that a global IP is allocated to {route-information} by default.
Is there any way to limit access to only between specific users?
(For example, is there any feature in Cloud Foundry like Kubernetes' “private registry” that is not open to the public?)
Since I am using Cloud Foundry on IBM Cloud, a solution using IBM Cloud would be preferable.
I've already granted a Cloud Foundry role to the other user.
Thank you.
The CloudFoundry platform itself does not provide any access controls for applications. If you assign a public route to your application, where the DNS is publicly resolvable and the foundation is on the public Internet, like IBM Bluemix, then anyone can access your app.
There are a number of things you can do to limit access, but they do require some work on your part.
Use a private DNS. You can add any domain you want to Cloud Foundry, even ones that don't resolve. That means you could add my-cool-domain.local, which does not resolve anywhere. You could then add a record to /etc/hosts for this domain, or perhaps run DNS on your local network to resolve this domain and direct traffic to Cloud Foundry.
With this setup, most people cannot access your application because the DNS domain for the route to your application does not resolve anywhere. It's important to understand that this isn't really security, but obscurity. It would stop most traffic from making it to your app, but if someone knew the domain, they could add their own /etc/hosts entry or send fake Host headers to access your application.
This type of setup can work well if you have light security requirements, like when you just want to hide something while you work on it, or it can work well paired with the other options below.
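A rough sketch of that setup with the cf CLI (the org, app, domain, and IP below are just placeholders; on the v7/v8 CLI the first command is cf create-private-domain):

```
# Add a non-resolvable private domain to Cloud Foundry and map a route on it.
cf create-domain my-org my-cool-domain.local
cf map-route my-app my-cool-domain.local --hostname api

# On each client machine, point that hostname at the platform's router/load balancer IP.
echo "203.0.113.10  api.my-cool-domain.local" | sudo tee -a /etc/hosts
```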
You can set up access controls in your application. Many application servers & frameworks can do things like restrict access by IP address or require user authentication (Basic auth is easy, and it is OK if you're only allowing HTTPS traffic to your app, which you should always do anyway).
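If you go the Basic auth route, the access pattern from a client looks like this (just an illustration; the route and credentials are placeholders, and the 401/200 behavior depends on how your app or framework enforces it):

```
# Without credentials the app should reply 401; with them, 200.
curl -i https://node-hello-world-example.mybluemix.net/
curl -i -u apiuser:s3cret https://node-hello-world-example.mybluemix.net/
```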
You can use OAuth2 to secure apps too. Again, many app servers & frameworks have support for this and make it relatively simple to secure your apps. If you don't have a corporate OAuth2 solution, there are public providers you can use. Exactly how you do OAuth2 in your app is beyond the scope of this question, but there's plenty of material out there on how to do this; search for your application language/framework of choice.
You could set up an access Gateway. This would be an application whose job is to proxy traffic to other applications on the foundation. The Gateway could be something like Nginx, Apache HTTPD, or Spring Cloud Gateway. The idea is that the gateway would be publicly accessible, and would almost certainly apply access controls/restrictions (see the access-control option above; many of these proxies have access-control options that only take a few lines of config). Your actual applications would not be deployed publicly, though. When you deploy your actual applications, they would only be on the internal Cloud Foundry domain.
CloudFoundry has local domains, often apps.internal (run cf domains to see if that shows up), which you can use to easily route traffic across the internal container-to-container network. Using this domain and the C2C network, you can have apps deployed to CF that are not accessible to the public Internet, except through your Gateway.
Again, how you configure this exactly is outside the scope of this question, but check out the Cloud Foundry docs for info on using the C2C network & internal routes. Then check out your proxy server of choice's documentation.
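As a rough sketch of the routing side of this (app names, the public domain, and the port are placeholders; on the v6 CLI, add-network-policy takes --destination-app instead of a positional argument):

```
# Put the real app on the internal domain only, and remove its public route.
cf map-route my-backend apps.internal --hostname my-backend
cf unmap-route my-backend apps.example.com --hostname my-backend

# Allow the gateway to reach the backend over the container-to-container network.
cf add-network-policy my-gateway my-backend --port 8080 --protocol tcp

# The gateway would then proxy requests to http://my-backend.apps.internal:8080
```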

How to use private IP-based backends with Google Cloud API Gateway?

So I am trying to make Google Cloud's API Gateway serve requests from a private IP-based backend. Currently, the backend is a Kubernetes-based service. However, I couldn't find it explicitly mentioned whether that's possible or not.
Has anyone else encountered this, given that it's a pretty common use case? It seems possible only when the API Gateway infrastructure has a link to the VPC network (route table) or an explicit private connection.
After looking for a while, I think that the best way to do what you are asking is to use Private Service Connect. It allows private consumption of services across VPC networks that belong to different groups, teams, projects, or organizations, and it also lets you connect to service producers using endpoints with internal IP addresses in your VPC network.
There is a guide in the Google documentation on how to use Private Service Connect to access Google APIs.
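A minimal sketch of creating such an endpoint for the Google APIs bundle with gcloud (the network name, IP, and endpoint names are placeholders):

```
# Reserve an internal IP for the Private Service Connect endpoint.
gcloud compute addresses create psc-ip \
  --global \
  --purpose=PRIVATE_SERVICE_CONNECT \
  --network=my-vpc \
  --addresses=10.10.0.5

# Create the PSC forwarding rule that fronts the Google APIs bundle.
gcloud compute forwarding-rules create pscgoogleapis \
  --global \
  --network=my-vpc \
  --address=psc-ip \
  --target-google-apis-bundle=all-apis
```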
Google API Gateway exists only for serverless products and is intended to be used only against serverless backend(s). It is possible to configure it against public IPs hosted on Google backends, because they leverage the same x-google-backend configuration key-value pairs in the openapi.yaml for API Gateway, but more niche features, like authorization on behalf of backend services or limiting access to backend services hosted on non-serverless platforms like GKE, are currently not supported. A possible workaround could be to set up Cloud Endpoints directly with your GKE cluster; the documentation on that could help you.
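For reference, the x-google-backend wiring and the deployment commands look roughly like this (a sketch; the backend address, API/config/gateway IDs, and region are placeholders):

```
# Fragment of the API Gateway OpenAPI (2.0) spec routing one path to a backend address.
cat >> openapi.yaml <<'EOF'
paths:
  /hello:
    get:
      operationId: hello
      x-google-backend:
        address: https://my-backend.example.com/hello
      responses:
        '200':
          description: OK
EOF

# Create the API config and deploy a gateway from it.
gcloud api-gateway api-configs create my-config \
  --api=my-api --openapi-spec=openapi.yaml
gcloud api-gateway gateways create my-gateway \
  --api=my-api --api-config=my-config --location=us-central1
```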
Best regards.

Is Google Cloud Run Service to Service Communication internal like k8s's cluster.local?

Cloud Run provides a *.run.app domain to access the deployed service. I am wondering how Google Cloud Run handles requests from one Cloud Run service to another. Is all the service-to-service communication internal, even if we use a custom domain instead of *.run.app?
The definition of "internal" is not clear.
Your request stays within the Google network. Is that internal or external?
To resolve the custom domain, a DNS resolution request (port 53) is performed on the public network, but the content of the request stays in the Google network and is forwarded after the resolution. Is that internal or external?
So, as long as you use Google services (with the premium network tier), you don't leave the Google network, and thus you can consider this highly secure.
I admit my answer isn't very clear; in the end it all depends on whether or not you trust the Google Cloud network.

Dialogflow fulfillment URL issue

I am creating a voice bot using Dialogflow with Google Assistant. My client has provided access to his network, which is not via a URL but an IP address instead, and it does not have an SSL certificate. I get two errors:
Only public URLs are allowed and
You can use only https:// in fulfillment url when "Google Assistant" integration enabled
Any workaround for this? What other options do I have? I can access the client's API from within his network only, so I cannot replace this IP address. Please advise how to proceed.
You can use an IP address, as long as it is a public IP address. The machine doesn't need a DNS entry.
Actions on Google does, however, require an HTTPS connection using a valid certificate (i.e., not self-signed). This is to protect your client and their users' data.
One possible workaround is to look into a tunnel/proxy service such as ngrok. They provide a public HTTPS address that securely tunnels to an ngrok client you run on the same machine as the webhook fulfillment server. They have a free service that will change hostnames periodically, or you can subscribe to a commercial service which will give you a fixed name which you can use for the fulfillment URL.
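As a rough illustration of the ngrok approach (assuming the webhook fulfillment server listens locally on port 8080):

```
# Start a tunnel to the local webhook server. ngrok prints a public https
# forwarding URL, which you would paste into the fulfillment URL field.
ngrok http 8080
# A paid plan lets you reserve a fixed hostname instead of the rotating one;
# see ngrok's docs for the exact flag in your version.
```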
You have to make the URL https://.
You can try https://letsencrypt.org/.
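If you go that route, a typical way to get a free certificate is certbot (a sketch; it assumes nginx and a publicly resolvable domain, which example.com stands in for):

```
# Obtain and install a Let's Encrypt certificate for an nginx site.
sudo certbot --nginx -d example.com
```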

Bluemix public CF App protect/private REST Endpoint

I have a public Bluemix CF app which exposes a REST service. I would like to have the option for the public URL bound to the CF app to be inaccessible from outside. The REST service itself should only be usable from other CF apps in my org, for example via API Management. I don't want to implement my own security mechanism for it, because API Management already provides everything I need to control which clients can access my service. So: some kind of private route inside public Bluemix, only available to runtimes and services in my Bluemix organisation.
This is not currently possible with IBM Bluemix, due to limitations in Cloud Foundry.
All bound routes are accessible from the external network.
If you want to have a private API exposed, you have the following options.
Add authentication to the REST API, managing the credentials as a user-provided service bound to all the apps (see the sketch after these options). The API will be accessible externally, but only by users with the credentials.
Use an application service, like a message queue, to expose an internal RPC-style API. Applications can bind to the same service and it will only be accessible internally.
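A sketch of the first option with the cf CLI (the service name, app names, and credentials are placeholders):

```
# Store the shared credentials as a user-provided service...
cf create-user-provided-service api-creds -p '{"username":"apiuser","password":"s3cret"}'

# ...and bind it to every app that needs to present or verify those credentials.
cf bind-service my-api api-creds
cf bind-service my-consumer api-creds
cf restage my-api
```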