I am using Azure Kubernetes Service for backend deployment. I have two URLs: an API URL (api.project.com) and a BFF URL (bff.project.com).
Instead of calling the API URL (api.project.com) directly, the web application uses the BFF URL (bff.project.com), which internally calls the API URL (api.project.com) and returns the response.
I now want to restrict direct usage of the API URL (api.project.com), even from REST API clients (Postman, Insomnia, ...); it should only work when called through the BFF (bff.project.com).
We have used nginx-ingress for the subdomains, and both URLs (BFF and API) are in the same cluster.
Is there any firewall or built-in Azure service that can solve the above problem?
Thanks in Advance :)
You want to keep your API private, accessible only from another K8s service, so don't expose it through your ingress controller and it simply won't be reachable from outside the cluster by any client.
This means you lose the api.project.com address (you can get it back if you really want to, but it seems unnecessary). The BFF would then access the API via the URL http://<service-name>.<namespace>.svc.cluster.local:<service-port>, which in your case might be:
http://api.api-ns.svc.cluster.local
This assumes you aren't using TLS (http rather than https), the service is called api, it's listening on port 80 (which it should be), and the namespace is called api-ns (note that a Kubernetes namespace name can't contain an underscore, so api_ns wouldn't be valid).
Should you need to provide temporary access to the API for developers to use, say, Postman, they can use port-forwarding to get that in a dev environment without allowing external access all the time.
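For example, a developer with kubectl access to the dev cluster could temporarily forward the in-cluster service to their own machine (a quick sketch, assuming the service name api and namespace api-ns from above):

kubectl port-forward svc/api 8080:80 -n api-ns

The API is then reachable from Postman at http://localhost:8080 for as long as the port-forward is running, without ever being exposed through the ingress.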
However, this won't restrict access to the BFF alone; any service running in the cluster could reach the API. If you need or want to restrict things further, you have a lot of options.
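One option worth calling out: a NetworkPolicy can limit which pods are allowed to reach the API at all. The sketch below is only an illustration and makes several assumptions: the API pods are labelled app: api in namespace api-ns, the BFF pods are labelled app: bff in a namespace labelled name: bff-ns, and the AKS cluster runs a network plugin that actually enforces NetworkPolicy (e.g. Azure or Calico network policies).

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-bff-only
  namespace: api-ns
spec:
  # apply the policy to the API pods only
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        # allow traffic only from BFF pods in the BFF namespace
        - namespaceSelector:
            matchLabels:
              name: bff-ns
          podSelector:
            matchLabels:
              app: bff
      ports:
        - protocol: TCP
          port: 80

Any other pod (and anything outside the cluster) is then denied ingress to the API pods.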
I'm reading through https://kubernetes.io/docs/reference/access-authn-authz/authentication/, but it doesn't give any concrete commands and mostly focuses on the case where you want to create everything from scratch. It also focuses on authentication for engineers using Kubernetes itself.
I have an existing deployment and service (with an exposed external IP) and would like to create the simplest possible authentication (preferably token-based) for an external user accessing the exposed IP. I can't add authentication to the services themselves, since I don't have access to their code. If somebody could help me with some commands, I would be grateful.
The documentation you referred to is for authentication with Kubernetes itself (for API server access), not for application-layer authentication.
However, I can suggest one way to implement application-layer authentication without changing the service at all. You can route the traffic through nginx (or any other reverse proxy), which performs the authentication and forwards requests from authenticated users to the service. It can also perform some kind of authorization.
There are various resources that can help you choose among the authentication mechanisms available in nginx, such as a password-file-based mechanism (link) or JWT-based authentication (link).
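As a rough sketch of that reverse-proxy approach (all names are placeholders, and the upstream address would be your own service), a password-file-based setup in nginx could look like this:

server {
    listen 80;
    server_name myservice.example.com;

    location / {
        # require a valid username/password before anything reaches the backend
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;   # created with: htpasswd -c /etc/nginx/.htpasswd someuser

        # forward authenticated requests to the unmodified service
        proxy_pass http://my-service.default.svc.cluster.local:8080;
        proxy_set_header Host $host;
    }
}

The JWT-based option has the same shape; it just swaps auth_basic for a JWT module (for example the ngx_http_auth_jwt_module available in NGINX Plus, or an equivalent third-party module).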
When trying to bind a Bluemix app to a pre-configured Secure Gateway service, the Secure Gateway is not in the list of services that can be bound to apps. Is there a different way to bind a Node.js app to a Secure Gateway instance?
Applications can no longer be bound to the Secure Gateway service. Binding was possible in previous versions but provided no additional functionality to the application.
To have your application use the connectivity provided by Secure Gateway, your application simply needs to call the cloud host:port provided by your destination.
After watching the BUILD conference videos for Azure Service Fabric, I'm left imagining how this might be a good fit for our current microservice-based architecture. There is one thing I'm not entirely sure how I would go about solving, however - the API gateway/proxy.
Consider a less-than-trivial microservice architecture where you have N services running within Azure Service Fabric, each exposing REST endpoints. In many situations you want to package these fragmented API endpoints into a single entry-point API for consumers, to avoid having them connect to the Service Fabric instances directly. The Azure Service Fabric solution seems so complete in every way that I wonder whether I missed something obvious, since I don't see a way to trivially solve this within the capabilities mentioned during the BUILD talks.
Services like Vulcan aim to solve this problem by having the services register the paths they want routed to them in etcd. I'm guessing one way of solving this may be to create a separate stateful web service that other services can register themselves with, providing service name and the paths they need routed to them. The stateful web service can then route traffic to the correct instance based on its state. This doesn't seem entirely ideal, though, with stuff like removing routes when applications are removed and generally keeping the state in sync with the services deployed within the cluster. Has anybody given this any thought, or have any ideas how one might go about solving this within Azure Service Fabric?
The service registration/discoverability you need to do this is actually already there. There's a stateful system service called the Naming Service, which is basically a registrar of service instances and the endpoints they're listening on. So when you start up a service - either stateless or stateful - and open some listener on it, the address gets registered with the Naming Service.
Now the part you'd need to fill in is the "gateway" that users interact with. This doesn't have to be stateful because the Naming Service manages the stateful part. But you'd have to come up with an addressing scheme that works for you, and then it would just forward requests along to the right place. Basically something like this:
Receive request.
Use NS to find the service that can take the request.
Forward the request to it and the response back to the user.
If the service doesn't exist anymore, 404.
In general we don't like to dictate anything about how your services talk to each other, but we are thinking of ways to solve this problem for HTTP as a complete built-in solution.
We implemented an HTTP gateway service for this purpose as well. To make sure we can have one HTTP gateway for any internal protocol, we implemented the gateway for HTTP-based internal services (like ASP.NET Web APIs) using an ASP.NET 5 middleware. It routes requests from e.g. /service to an internal Service Fabric address like fabric:/myapp/myservice by using the ServicePartitionClient and some retry logic from CommunicationClientFactoryBase.
We open-sourced this middleware and you can find it here:
https://github.com/c3-ls/ServiceFabric-HttpServiceGateway
There's also some more documentation in the wiki of the project.
This feature is built in for HTTP endpoints starting with release 5.0 of Service Fabric. The documentation is available at https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reverseproxy/
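With the reverse proxy enabled, clients address services through a fixed URI scheme instead of resolving endpoints themselves; roughly (the port is whatever you configure for the reverse proxy, commonly 19081, and the names below are placeholders):

http://<cluster-fqdn>:19081/<ApplicationName>/<ServiceName>/<suffix-path>

For partitioned stateful services you additionally pass PartitionKey and PartitionKind as query-string parameters.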
We have used an open-source project called Traefik with amazing success. There is an Azure Service Fabric wrapper around it: it's essentially a Go executable that is deployed onto the cluster as a managed executable.
It supports circuit breakers, weighted round-robin load balancing, path and header version routing (awesome for hosting multiple API versions), and more. And it's got a handy portal to view the config and health stats.
The real power lies in how you configure it. It's done via the service itself in ServiceManifest.xml, which allows you to deploy new services and have them immediately routable, with no need to update a routing table, etc.
Example
<StatelessServiceType ServiceTypeName="WebServiceType">
  <Extensions>
    <Extension Name="Traefik">
      <Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
        <Label Key="traefik.frontend.rule.example">PathPrefixStrip: /a/path/to/service</Label>
        <Label Key="traefik.enable">true</Label>
        <Label Key="traefik.frontend.passHostHeader">true</Label>
      </Labels>
    </Extension>
  </Extensions>
</StatelessServiceType>
Highly recommended!
Azure Service Fabric makes it easy to implement the standard architecture for this scenario: a gateway service that acts as the front end for clients to connect to, with all N backend services communicating with that gateway.
There are a few communication API stacks available as part of Service Fabric that make it easy to communicate from clients to services and between the services themselves. They hide the details of discovering, connecting, and retrying connections so that you can focus on the actual exchange of information. When using the Service Fabric communication APIs, services do not have to implement a mechanism for registering their names and endpoints with a specific routing service beyond the usual steps that are part of creating the service itself; the APIs take the service URI and partition key and automatically resolve and connect to the right service instance.
This article provides a good starting point for deciding which communication APIs best suit your particular case, depending on whether you are using Reliable Actors or Reliable Services, protocols such as HTTP or WCF, and the programming language the services are written in. At the end of the article you will find links to more detailed articles and tutorials for different communication APIs. For a tutorial on communication in Web API services, see this.
We are using Service Fabric with a gateway pattern and about 13 services behind the gateway. We use the built-in DNS service that SF provides (see https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-dnsservice); this allows internal service-to-service calls with known (internal to SF) DNS names, including calls from the gateway service to the internal services. There are some well-known ASP.NET Core gateways (Ocelot, ProxyKit) to use, but we rolled our own. We have an external load balancer that routes to multiple gateway instances in SF.
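For reference, the DNS name is assigned per service, e.g. via the ServiceDnsName attribute in ApplicationManifest.xml; a minimal sketch (the service and application names here are made up) looks like:

<DefaultServices>
  <Service Name="OrdersService" ServiceDnsName="orders.myapp">
    <StatelessService ServiceTypeName="OrdersServiceType" InstanceCount="3">
      <SingletonPartition />
    </StatelessService>
  </Service>
</DefaultServices>

The gateway (or any other service in the cluster) can then simply call http://orders.myapp:<port> without talking to the Naming Service explicitly.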
When a service is started, it registers its endpoint with the Fabric Naming Service. Using the Fabric client APIs you can then ask Fabric for the registered endpoints associated with a registered service name.
So yes, just as you described, you would have a gateway that accepts an incoming URI, uses the path information to look up the service name, and then proxies the incoming request to the actual internal endpoint location.
It looks like the team has posted one of the samples that shows how to do this: https://github.com/Azure/servicefabric-samples/tree/master/samples/Services/VS2015/WordCount
I get that it won't be possible to bind the service, and therefore VCAP_SERVICES can't be used, so credentials would need to be managed in another way.
Since the communication would go via the internet, I guess the question is really:
Does the SSO service have an API that can be reached from outside of Bluemix?
Yes, the SSO service can be reached from outside Bluemix and therefore also from apps deployed in the UK region.
However, to retrieve the credentials you need to create an SSO service instance in the US region, bind an app to it, and inspect VCAP_SERVICES. This is due to how Cloud Foundry works. Read more here