Azure Service Fabric: Make endpoint Input and Internal for IdentityServer4

I want to run an application on Azure Service Fabric. One service should act as an identity provider, so I installed the IdentityServer4 package on that 'usermanager' service. I also have two other services which should use this usermanager for authentication and authorization.
This works on localhost. But on Azure I have the problem that an endpoint must be either 'Input' or 'Internal' in my service manifest, and my usermanager needs both.
<Endpoint Protocol="http" Name="IdentityServerEndpoint" Type="Input" Port="5000" />
/.well-known/openid-configuration needs 'Internal' and
/connect/authorize?xxxxxx needs 'Input'
I found that for Input endpoints Azure Service Fabric uses the fully qualified domain name, and for Internal endpoints it uses the IP address of the local network, like 10.0.0.4.
Is there a solution to make an endpoint both input and internal?
Or is there a way to make IdentityServer4 handle two endpoints?
Any ideas to solve this problem?

Believe it or not, the "Type" field in the Endpoint config doesn't actually do anything on any hosting platform. It's just metadata that you can configure and use in your code (basically a way for you to set your own policies). It doesn't matter what you put there otherwise.
Ultimately, you're opening an endpoint on a process on a VM. That endpoint will be open on the VM's IP and the port you choose, e.g., 10.0.0.1:5000.
If you want that endpoint to also be available on your cluster's VIP and FQDN, that configuration is external to Service Fabric. In Azure you just need to configure the Azure Load Balancer to forward external traffic on the port your service is listening on. See here for more info on that: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-connect-and-communicate-with-services#connections-from-external-clients
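As a rough illustration with the AzureRM PowerShell cmdlets (the load balancer and resource group names are hypothetical, and in practice you would attach a matching health probe as well), forwarding external traffic on port 5000 to the nodes might look like this:

$lb = Get-AzureRmLoadBalancer -Name "LB-mycluster" -ResourceGroupName "myResourceGroup"

# Forward external traffic on port 5000 to the same port on the cluster nodes
$lb | Add-AzureRmLoadBalancerRuleConfig -Name "IdentityServerRule" `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Protocol Tcp -FrontendPort 5000 -BackendPort 5000

$lb | Set-AzureRmLoadBalancer

Internal callers can keep using the node IP and port directly (or resolve it via the naming service), while external clients come in through the cluster's VIP/FQDN on the same port.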

Related

Kubernetes - route static IP to multiple services (Google Cloud Platform)

I have a small application comprising three services:
A single page application (SPA) served from nginx
A simple Node.js HTTP API used by the SPA
An MQTT broker exposing ports 1883 and 9001
Ideally I'd like them all to be served from the same subdomain and static IP address, and have been trying to configure this in Kubernetes on the Google Cloud Platform.
I've created deployments for each of the services, with the SPA exposing port 80, the API port 3000 and the MQTT broker ports 1883/9001. I've then followed the instructions here to set up a static IP and a Service to route to the SPA, then created similar Services for the API and the MQTT app. (I initially adapted these from deployments and services generated from a docker-compose file using Kompose.)
The SPA and API seem to work fine but the MQTT service does not. When I run kubectl get events I see:
Error creating load balancer (will retry): failed to ensure load balancer for service default/mqtt-broker: failed to create forwarding rule for load balancer (a5529f2a9bdaf11e8b35d42010a84005(default/mqtt-broker)): googleapi: Error 400: Invalid value for field 'resource.IPAddress': '35.190.221.113'. Specified IP address is in-use and would result in a conflict., invalid
So I'm wondering if I should be creating a single Service to route to the three deployments, but I can't find any documentation or examples that explain how to do this for a non-HTTP service.
I guess I could put the MQTT service on a separate IP address, but this seems to be hacking around the problem rather than solving it.
Thanks in advance for any advice.
I eventually found an almost identical use case to my own in this GitHub repository.
In essence, they are creating the MQTT broker on a separate static IP and using Kubernetes API calls to expose the details to the front end, which they explain in the following comment at the top of the web.yaml file:
This needs a bit of trickery
as it needs to expose the LB ip address for the MQTT server. That
requires kubernetes API calls to look it up, and the ability to
store it somewhere (we put it in a secret). To be secure this is
done with a dedicated service account and an init container.
https://github.com/IBM/ny-power
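In other words, the MQTT broker gets its own LoadBalancer Service bound to a second reserved static IP, alongside the HTTP one. A minimal sketch of what that Service might look like (the IP, name and labels are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
spec:
  type: LoadBalancer
  loadBalancerIP: 35.190.0.42   # a second reserved regional static IP (hypothetical)
  selector:
    app: mqtt-broker
  ports:
  - name: mqtt
    port: 1883
  - name: mqtt-websockets
    port: 9001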

Hostname verification failed in OpenShift when integrating an external service using an external domain name

I want to call a REST service running outside OpenShift via a Service and an external domain name. This works perfectly with an http:// request. The mechanism is described in the documentation: https://docs.okd.io/latest/dev_guide/integrating_external_services.html#saas-define-service-using-fqdn
However, the external service is secured with https. In this case I got the following exception:
Host name 'external-test-service' does not match the certificate subject provided by the peer (CN=.xxx, O=xxx, L=xxx, ST=GR, C=CH); nested exception is javax.net.ssl.SSLPeerUnverifiedException: Host name 'external-test-service' does not match the certificate subject provided by the peer (CN=.xxx, O=xxx, L=xxx, ST=GR, C=CH)
The exception is clear to me because we use the Service name from OpenShift. This name does not correspond to the origin host name in the certificate. So currently I see three possibilities to solve this issue:
Add the name of the OpenShift Service to the certificate
Deactivate hostname verification before calling the external REST service
Configure OpenShift (don't know this is possible)
Has anybody solved this or a similar issue?
Currently we use OpenShift v3.9, running a simple Spring Boot application in a pod that accesses REST services outside OpenShift.
Any hint will be appreciated.
Thank you
Markus
Regarding your three options:
Adding the name of the OpenShift Service to the certificate: ugly, and might cost you extra $$.
Deactivating hostname verification: defeats the purpose of TLS.
On Kubernetes 1.10 and earlier you can use an ExternalName Service (sketched below), which you can also use with OpenShift. Alternatively, you can use a Kubernetes Ingress with TLS, which is documented for OpenShift as well.
Hope it helps!
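For reference, a minimal sketch of such an ExternalName Service (the externalName value is a hypothetical stand-in for the real external host):

apiVersion: v1
kind: Service
metadata:
  name: external-test-service
spec:
  type: ExternalName
  externalName: api.example.com   # hypothetical external FQDN

With this in place, cluster DNS answers lookups for external-test-service with a CNAME record pointing at the external host, rather than proxying any traffic.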

How do you deploy Identity Server on Kubernetes?

I want to deploy Identity Server 4 on Kubernetes 1.8, and use this as a Federation Gateway between my web application and Azure Active Directory (to begin with).
If I call Identity Server from my web application using the local k8s service name, my users are redirected to the wrong Identity Server URL (containing the local k8s service name) during sign-in, which clearly won't work. We are using the implicit flow.
I therefore set up an Azure load balancer with a DNS name and configured Identity Server to be externally accessible, with the domain name as the PublicOrigin URL.
However, my web application which runs in the same cluster cannot access Identity Server using the external URL of the Identity Server (discovery fails).
If I run Identity Server on another Kubernetes cluster then everything works fine.
My question is:
How do you properly deploy Identity Server in Kubernetes? Do I really need another Kubernetes cluster?
Note: I am using Kubernetes on Azure, created with ACS engine (because we have mixed Windows and Linux containers).
I'm using AKS (Azure managed Kubernetes) and have a single client ASP.NET Core 2 web app in the same cluster as my IS4 service with no issues. Both web apps are fronted by Nginx with kube-lego for Let's Encrypt TLS support, and DNS is provided by Azure DNS.
I'm not using PublicOrigin; instead, the client app's Authority (in the OpenID Connect setup) uses the full external (Azure) DNS name of the IS4 service. You can use PublicOrigin if you want your clients to use the cluster service naming.
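For illustration, a minimal sketch of that client-side setup in ASP.NET Core 2 (the scheme names, client id and external DNS name are hypothetical):

// Startup.ConfigureServices of the client web app
services.AddAuthentication(options =>
{
    options.DefaultScheme = "Cookies";
    options.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies")
.AddOpenIdConnect("oidc", options =>
{
    // Point at the *external* DNS name of the IS4 service, not the
    // cluster-internal service name, so browser redirects and
    // discovery resolve to the same host.
    options.Authority = "https://identity.example.com"; // hypothetical
    options.ClientId = "webapp";                        // hypothetical
    options.ResponseType = "id_token";                  // implicit flow
});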

Difference in ServiceManifest for owin hosted API controller vs regular stateless service

When you create an OWIN-hosted API controller in Service Fabric with VS 2015, the following line appears in the ServiceManifest.xml file (under Resources/Endpoints):
<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8726" />
But in the case of a regular stateless service, the following line appears in the related ServiceManifest.xml file:
<Endpoint Name="ServiceEndpoint" />
Since both are stateless services under the hood, why the difference in the Endpoint definition? What does it signify? Also, how would I call the second service (from the first service) over HTTP transport?
Thanks.
A web API is normally used as a gateway to the application, so it requires a fixed port that the load balancer can map to an external port (in contrast to the default: a random port assigned by the Fabric).
In addition, this ensures a correct HTTP endpoint registration in Windows, as described in the documentation:
This step is important because the service host process runs under
restricted credentials (Network Service on Windows). This means that
your service won't have access to set up an HTTP endpoint on its own.
By using the endpoint configuration, Service Fabric knows to set up
the proper access control list (ACL) for the URL that the service will
listen on. Service Fabric also provides a standard place to configure
endpoints.
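For illustration, a sketch of both sides in C# (service and endpoint names are hypothetical). Reading the declared endpoint inside the service:

var endpoint = serviceContext.CodePackageActivationContext.GetEndpoint("ServiceEndpoint");
int port = endpoint.Port; // 8726 when fixed in the manifest, otherwise assigned by Fabric
string listenUrl = $"http://+:{port}/";

And calling the second service from the first by resolving its current address through the naming service (Microsoft.ServiceFabric.Services.Client):

var resolver = ServicePartitionResolver.GetDefault();
ResolvedServicePartition partition = await resolver.ResolveAsync(
    new Uri("fabric:/MyApp/MyStatelessService"), // hypothetical application/service name
    ServicePartitionKey.Singleton,
    CancellationToken.None);
string addressJson = partition.GetEndpoint().Address; // JSON containing the listen URL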

How to call an app with no-route from another app in Bluemix?

Here is the use case:
I have two apps in Bluemix: app1 and app2
app1 is accessible through the internet using its route (e.g. app1.mybluemix.net)
app2 doesn't have any route, to prevent it from being accessible through the internet.
app2 may expose a REST API.
How do I call app2 from app1 inside Bluemix?
An example of communicating to an application without a route is implemented in this Microservice Shipping sample.
This is an EJB Liberty application that runs on Bluemix without a route and subscribes to the Bluemix MQ Light service. The sender of the messages is the Microservice Orders sample application, which binds to the same MQ Light service.
Going the REST API route will mean you must have an externally accessible route. However, you could secure it using keys and tokens.
It would be easier to use one of the services in Bluemix as an "RPC" layer between the two applications. You could use one of the queue services (MQLight, RabbitMQ) or Redis to pass messages between the applications to execute commands.
These service bindings are internal and won't be exposed externally, unlike the REST API; a sketch of this messaging approach follows below.
Alternatively, you could expose the REST API from App2 and use authentication to control access.
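As a sketch of the queue-based approach using the MQ Light Node.js client (the topic name and connection URL are placeholders; on Bluemix both apps would bind the same MQ Light service and read the real credentials from VCAP_SERVICES):

// app2 (no route): listen for commands on a topic
var mqlight = require('mqlight');
var receiver = mqlight.createClient({ service: 'amqp://localhost' }); // placeholder URL
receiver.on('started', function () {
  receiver.subscribe('app2/commands');
  receiver.on('message', function (data, delivery) {
    console.log('app2 received: ' + data);
  });
});

// app1 (public route): send a command to app2 over the same service
var sender = mqlight.createClient({ service: 'amqp://localhost' }); // placeholder URL
sender.on('started', function () {
  sender.send('app2/commands', JSON.stringify({ action: 'ping' }));
});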
There are several ways you can prevent access:
Put your microservice inside a Bluemix Container and utilize private IPs: https://new-console.ng.bluemix.net/docs/containers/container_security_network.html#container_cli_ips_byoip
Use API Connect as an API gateway/proxy to the private IP of your container microservice.
Use Bluemix Dedicated to deploy app2. Bluemix Dedicated provides firewall capabilities, and you could set it up so that it only accepts requests from app1's IP address.
Use Bluemix Local, when it becomes available, with the same approach: your corporate firewall only accepts requests that come from your app1 IP address. This is an expensive alternative compared to a public PaaS.
Use the API Connect service (which replaced the API Management service) to:
Specify which users can access your APIs
Limit the number of requests per day or other unit of time
Provide an API gateway to securely call the other service (app2)
I expect that at some point a software-defined networking solution will be considered as part of the offering.