I've configured a custom domain and certificate and hooked up the cloud functions api to my actions and this works fine.
Endpoints work over both https and http.
But I'd like to enforce HTTPS only, something like "FORCE_HTTPS: true" in the static buildpack. Is there some way that I can do this?
You should get an X-Forwarded-Url header in the action itself that you could inspect to force HTTPS. Using that in conjunction with secure actions via the web_key annotation should make it enforceable.
In the future, the API Gateway may be able to enforce this for you via the configuration specified in the Open API doc.
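As a rough illustration of that header check, here is a minimal sketch of a Python web action that refuses plain-HTTP requests. It assumes the request headers arrive under __ow_headers (as they do for web actions) and that X-Forwarded-Url is populated as described above; the lower-cased header key and response shape are assumptions about how your runtime delivers them.

```python
# Minimal sketch of an HTTPS-only guard in a Python web action.
# Assumes request headers are exposed via __ow_headers (as for web actions)
# and that X-Forwarded-Url is populated by the platform as described above.
def main(args):
    headers = args.get('__ow_headers', {})
    forwarded_url = headers.get('x-forwarded-url', '')

    # Refuse anything that did not arrive over HTTPS (a 301 redirect to the
    # https:// equivalent would be another option).
    if not forwarded_url.lower().startswith('https://'):
        return {'statusCode': 403, 'body': {'error': 'HTTPS required'}}

    # ... normal action logic for HTTPS requests ...
    return {'statusCode': 200, 'body': {'ok': True}}
```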
I am using Azure Kubernetes for the backend deployment. I have two URLs: the API URL (api.project.com) and the BFF URL (bff.project.com).
From the web application, instead of calling the API URL (api.project.com) directly, they use the BFF URL (bff.project.com), which internally calls the API URL (api.project.com) and sends back the response.
I now want to restrict direct usage of the API URL (api.project.com), even from REST API clients (like Postman, Insomnia, ...); it should only work when triggered from the BFF URL (bff.project.com).
We have used nginx-ingress for subdomain creation and both URLs (BFF and API) are in the same cluster.
Is there any firewall or built-in Azure service to resolve the above-mentioned problem?
Thanks in Advance :)
You want to keep your API private, accessible only from another K8s service, so don't expose it through your ingress controller; then it simply won't be reachable outside K8s by any client.
This means that you lose the api.project.com address (although you can get that back if you really want to, it seems unnecessary). The BFF would then access the API via the URL http://<service-name>.<namespace>.svc.cluster.local:<service-port>, which in your case might be:
http://api.api-ns.svc.cluster.local
This assumes you haven't used TLS (http rather than https), the Service is called api, it's running on port 80 (which it should be, so the port can be omitted) and the namespace is called api-ns.
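Purely for illustration, here is a minimal sketch of the BFF making that call (shown in Python with requests; the question doesn't say what the BFF is written in, the /projects path is a made-up example, and the service/namespace names follow the assumptions above):

```python
import requests

# Cluster-internal DNS name of the API Service; only resolvable from inside
# the cluster, so nothing external (Postman, browsers, ...) can reach it.
# Names follow the assumptions above: Service "api" on port 80 in namespace "api-ns".
API_BASE = "http://api.api-ns.svc.cluster.local"

def fetch_projects():
    # Hypothetical endpoint, purely to show the call shape from the BFF.
    resp = requests.get(f"{API_BASE}/projects", timeout=5)
    resp.raise_for_status()
    return resp.json()
```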
Should you need to provide temporary access to the API for developers to use, say, Postman, they can use port-forwarding in a dev environment, without allowing external access all the time.
However, this won't restrict access to the BFF alone: any service running in the cluster could still reach the API. If you need or want to restrict things further (with NetworkPolicies, for example), you have plenty of options.
We're developing a web application (SPA) consisting of the following parts:
NextJS container
Django backend for user management
Data API (FastAPI) protected with API keys, for which we also provide 3rd party access
The NextJS container uses an API key to access the data API. We don't want to expose the API key to the client (browser), so the browser sends the API requests to the NextJS container, which then relays them to the data API (see here). This seems secure, but it is more complicated and slower than sending requests from the browser to the data API directly.
I'm wondering if it's possible to whitelist the web application in the data API, so that the client (browser) can call the data API directly without API key, but 3rd parties can't. FastAPI provides a TrustedHostMiddleware, but it's insecure because it's possible to spoof the host header. It has been suggested to whitelist IPs instead, but we don't have a dedicated IP for our web application. I looked into using the referer header, but it's not available in the FastAPI request object for some reason (I suspect some config problem in our hosting). Also, the referer header could be spoofed as well.
Is there even a safe way to whitelist our web application for data API access, or do we need to relay the request via NextJS container and use an API key?
Is there even a safe way to whitelist our web application for data API access?

No, in all cases you need an authentication mechanism: something in front of the backend that checks whether the client is an authorized client.
The simplest pattern is using the NextJS container as the proxy: a proxy that holds an API key to call the backend (which is what you are currently doing).
There are many ways to implement a secured proxy for a backend, but this authentication logic should not live inside the backend itself; put it in a separate service (like Envoy, nginx, ...).
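For illustration only, here is a minimal sketch of that relay pattern in Python with FastAPI and httpx (the proxy in the question is actually a Next.js container; the route, environment variables and X-API-Key header name are assumptions):

```python
import os

import httpx
from fastapi import FastAPI, Request, Response

app = FastAPI()

# The API key stays server-side; the browser never sees it.
DATA_API_URL = os.environ["DATA_API_URL"]   # e.g. the data API's base URL
DATA_API_KEY = os.environ["DATA_API_KEY"]

@app.api_route("/data/{path:path}", methods=["GET", "POST"])
async def relay(path: str, request: Request) -> Response:
    # Forward the browser's request to the data API, attaching the API key.
    async with httpx.AsyncClient() as client:
        upstream = await client.request(
            request.method,
            f"{DATA_API_URL}/{path}",
            params=dict(request.query_params),
            content=await request.body(),
            headers={"X-API-Key": DATA_API_KEY},
        )
    return Response(content=upstream.content,
                    status_code=upstream.status_code,
                    media_type=upstream.headers.get("content-type"))
```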
I'm reading through https://kubernetes.io/docs/reference/access-authn-authz/authentication/, but it doesn't give any concrete commands and it mostly focuses on the case where you want to create everything from scratch. It also covers auth for engineers using Kubernetes itself.
I have an existing deployment and service (with an exposed external IP) and would like to create the simplest possible authentication (preferably token-based) for an external user accessing the exposed IP. I can't add authentication to the services themselves since I don't have access to their code. If somebody could help me with some commands I would be grateful.
The documentation you referred to is about authentication with the Kubernetes API itself (for API access); it is not about application-layer authentication.
However, I can suggest one way to implement application-layer authentication without changing the service at all: route the traffic through nginx (or any other reverse proxy), which performs the authentication and forwards authenticated requests on to the service. It can also perform some kind of authorization.
There are various resources available that can help you choose among the authentication mechanisms available in nginx, such as a password-file-based mechanism (link) or JWT-based authentication (link).
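The linked resources cover the nginx configuration itself; purely to illustrate the pattern (authenticate in a proxy, then forward to the untouched service), here is a minimal sketch of the same idea in Python with FastAPI and httpx instead of nginx, where the token handling and upstream address are assumptions:

```python
import os

import httpx
from fastapi import FastAPI, Header, HTTPException, Response

app = FastAPI()

# Hypothetical names: the unmodified service's cluster address and a shared token.
UPSTREAM = os.environ.get("UPSTREAM", "http://my-service.default.svc.cluster.local")
API_TOKEN = os.environ["API_TOKEN"]

@app.get("/{path:path}")
async def authenticated_proxy(path: str, authorization: str = Header(default="")) -> Response:
    # Reject requests that don't carry the expected bearer token.
    if authorization != f"Bearer {API_TOKEN}":
        raise HTTPException(status_code=401, detail="invalid or missing token")

    # Forward the authenticated request to the unmodified service.
    async with httpx.AsyncClient() as client:
        upstream = await client.get(f"{UPSTREAM}/{path}")
    return Response(content=upstream.content,
                    status_code=upstream.status_code,
                    media_type=upstream.headers.get("content-type"))
```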
Kubernetes surfaces an API proxy, which allows querying internal services via e.g. https://myhost.com/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/
This is all well and good. However, for security and compliance reasons, all of our services expose an HTTPS endpoint. Attempting to access them by going to https://myhost/api/v1/proxy/namespaces/default/services/myhttpsservice:3000/ results in
Error: 'read tcp 172.20.122.129:48830->100.96.29.113:3000: read: connection reset by peer'
Trying to reach: 'http://100.96.29.113:3000/'
Because the endpoint, 100.96.29.113:3000, is in fact serving HTTPS.
Is there any way to configure the proxy to apply SSL to specific service endpoints?
(Edit: if this is not currently possible, a link to a relevant GitHub issue tracking the feature request is also an acceptable answer until it is.)
As documented at https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls (and pointed out on Slack), you can access services behind HTTPS by prefixing the service name with "https:".
Using the example from above, the correct URL would be: https://myhost/api/v1/proxy/namespaces/default/services/https:myhttpsservice:3000/
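For example, a hedged sketch of calling such a proxy URL from Python with requests (the bearer token, host, and CA bundle handling are assumptions that depend on how you authenticate to your API server):

```python
import requests

API_SERVER = "https://myhost"   # your Kubernetes API server / proxy host
TOKEN = "..."                   # a bearer token allowed to use the service proxy

# Note the "https:" prefix on the service name, which tells the apiserver
# proxy to speak HTTPS to the backing endpoint.
url = f"{API_SERVER}/api/v1/proxy/namespaces/default/services/https:myhttpsservice:3000/"

resp = requests.get(url,
                    headers={"Authorization": f"Bearer {TOKEN}"},
                    verify="/path/to/ca.crt")   # or verify=False in a throwaway dev setup
print(resp.status_code, resp.text[:200])
```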
I've added a certificate with a custom domain name mapping in AWS API Gateway, but it allows HTTP automatically. How can I block plain HTTP and allow only HTTPS?
All API Gateway APIs are fronted with a CloudFront distribution. Each of these CloudFront distributions (whether it's a custom domain like yours or the default *.execute-api distribution) is configured to redirect all HTTP requests to HTTPS. Although CloudFront has the option to strictly require HTTPS and return 403 on HTTP requests, we currently don't expose this option, for simplicity.
If you feel you have valid use case for requiring HTTPS without a redirect please open a support ticket and the team can evaluate your request.