I want to deploy Identity Server 4 on Kubernetes 1.8, and use this as a Federation Gateway between my web application and Azure Active Directory (to begin with).
If I call Identity Server from my web application using the local k8s service name, my users are redirected to the wrong Identity Server URL (one containing the local k8s service name) during sign-in, which clearly won't work. We are using the implicit flow.
I therefore set up an Azure Load Balancer with a DNS name and configured Identity Server to be externally accessible, using that domain name as the PublicOrigin URL.
However, my web application, which runs in the same cluster, cannot access Identity Server through its external URL (discovery fails).
If I run Identity Server on another Kubernetes cluster then everything works fine.
My question is:
How do you properly deploy Identity Server in Kubernetes? Do I really need another Kubernetes cluster?
Note: I am using Kubernetes on Azure created with acs-engine (because we have mixed Windows and Linux containers).
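For reference, this is roughly how I set PublicOrigin; a minimal sketch assuming an IdentityServer4 2.x-era host, with a placeholder external host name:

// Startup.ConfigureServices of the IdentityServer host (IdentityServer4 2.x era).
// PublicOrigin makes the discovery document and issued URLs use the external
// host name instead of whatever host header the incoming request carried.
services.AddIdentityServer(options =>
{
    // placeholder DNS name of the Azure Load Balancer in front of the cluster
    options.PublicOrigin = "https://ids.example.com";
})
.AddDeveloperSigningCredential(); // plus clients, resources, etc. as usual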
I'm using AKS (Azure managed Kubernetes) and have a single client ASP.NET Core 2 web app in the same cluster as my IS4 service, with no issues. Both web apps are fronted by Nginx with kube-lego for Let's Encrypt TLS support, and DNS is provided by Azure DNS.
I'm not using PublicOrigin; instead, the client app's Authority (in the OpenID Connect setup) uses the full (external Azure) DNS name of the IS4 service. You can use PublicOrigin if you want your clients to use the cluster-internal service name.
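For illustration, the client side of that looks roughly like this in an ASP.NET Core 2 app (host name and client id are placeholders):

// Startup.ConfigureServices of the client web app.
services.AddAuthentication(options =>
{
    options.DefaultScheme = "Cookies";
    options.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies")
.AddOpenIdConnect("oidc", options =>
{
    // Full external DNS name of the IS4 service, not the in-cluster service
    // name, so browser redirects land on a URL the user can actually reach.
    options.Authority = "https://ids.example.com";
    options.ClientId = "webapp";        // placeholder client id
    options.ResponseType = "id_token";  // implicit flow, as in the question
    options.SaveTokens = true;
});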
Related
We have a cluster that runs a number of .NET apps, one of which runs the identity server. All the other apps need to authenticate with the identity server. If the identity server were external this wouldn't be an issue, as it would have an HTTPS endpoint; internally, however, they are all running plain HTTP.
With Istio adding mTLS security to all the communication, do all the apps just get configured with RequireHttpsMetadata = false?
Is this the correct way to set up the network, with internal requests being sent as http://auth-server.default.svc.cluster.local/...?
Or should they be sent as https://auth-server.default.svc.cluster.local/... and, if so, how?
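To make that concrete, the setup I'm describing looks roughly like this on one of the consuming apps (the audience value is a placeholder); this is a sketch of the scenario, not a recommendation:

// Startup.ConfigureServices of one of the apps that validates tokens.
services.AddAuthentication("Bearer")
    .AddJwtBearer("Bearer", options =>
    {
        // Plain-HTTP, cluster-internal address; Istio mTLS protects the hop.
        options.Authority = "http://auth-server.default.svc.cluster.local";
        // Without this, metadata discovery over http:// is rejected.
        options.RequireHttpsMetadata = false;
        options.Audience = "api1"; // placeholder audience/scope name
    });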
I'm trying to set up Identity-Aware Proxy for my backend services, parts of which reside in GCP and others on-prem, according to the instructions given in the following links:
Enabling IAP for on-premises apps
Overview of IAP for on-premises apps
After following the guides I ended up in a partial state: the services running in GCP and serving at an HTTPS endpoint are perfectly accessible via IAP. However, the app running on-prem is not reachable through the pods* or the external load balancer*.
Current Architecture followed:
Steps Followed
On the GCP project
Created a VPC network with one subnet, in my case in asia-southeast1
Used the IAP connector https://github.com/GoogleCloudPlatform/iap-connector
Configured the mappings for two domains.
For the app in GCP
source: gcp.domain.com
destination: app1.domain.com (serving at https endpoint)
For the app on-prem (another GCP project)
source: onprem.domain.com
destination: app2.domain.com (serving at https endpoint but not exposed to internet)
Configured a VPN tunnel between both projects so the networks get peered
Enabled IAP for the load balancer which is created by the deployment.
Added the corresponding accounts with the IAP web user role to allow access to the services.
On-prem
Created a VPC network in a region with one subnet (asia-southeast1)
Created a VM on that VPC in the same region
Assigned that VM to an instance group
Created an internal HTTPS load balancer and chose the instance group as the backend
Secured the load balancer's HTTP frontend with SSL
Set up a VPN tunnel to the first project
What have I tried?
Logged in to pods and pinged different pods. All pods were reachable.
Logged in to nodes and pinged the remote VM on ports 80 and 443; both are reachable.
Pinged the remote VM from inside the pods. Not reachable.
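As a minimal sketch of that last check (a plain TCP connect from inside a pod to the on-prem VM; the address is a placeholder for the VM's actual internal IP):

// Quick TCP reachability check, runnable from inside a pod with the dotnet SDK.
// Replace host/port with the on-prem VM's internal IP and 443 or 80.
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class ReachabilityCheck
{
    static async Task Main()
    {
        const string host = "10.1.0.4"; // placeholder: on-prem VM address
        const int port = 443;

        using var client = new TcpClient();
        var connect = client.ConnectAsync(host, port);
        var finished = await Task.WhenAny(connect, Task.Delay(TimeSpan.FromSeconds(5)));
        Console.WriteLine(finished == connect && client.Connected
            ? $"Reachable: {host}:{port}"
            : $"Not reachable within timeout: {host}:{port}");
    }
}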
Expected Behaviour:
The user requests app1.domain.com through the load balancer; IAP authenticates and authorizes the user with OAuth and grants access to the web app.
The user requests app2.domain.com through the load balancer; IAP authenticates and authorizes the user with OAuth and grants access to the web app running on-prem.
Actual Behaviour
A request to app1.domain.com prompts the OAuth screen; after authenticating, the website is returned to the user.
A request to app2.domain.com prompts the OAuth screen; after authenticating, the browser returns 503 - "No healthy upstream".
Note:
I am using a separate GCP project to simulate on-premise.
Both projects are peered via VPN tunnel.
Both peering projects have subnets in the same region.
I have used an internal HTTPS load balancer in my on-prem project to make my VM visible to my host project, so that the external load balancer can route requests to the VM's HTTPS endpoint.
* I suspect that if the pods could reach the remote VM, the problem might be resolved. It's just a wild guess.
Thank you so much, guys. I'm looking forward to your responses.
I am running a web application packaged as a WAR inside WildFly, with authentication configured via a secure deployment managed by the Keycloak adapter subsystem.
The corresponding client in Keycloak is configured with a service account. Now, I'd like to send requests to Keycloak (and possibly other services) using the service account and associated roles.
What is the best way to obtain a token for authentication "as the service", i.e. using the service account?
Is there a way to access the client secret specified in the secure deployment definition from the runtime context of my WAR?
Am I doing things wrong? What is the optimal approach here?
Note that I still need to be able to authenticate requests from the web inbound to the service with Keycloak.
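For reference, the kind of call I mean is the standard OAuth2 client_credentials request against the realm's token endpoint. A minimal sketch (realm, host, client id and secret are placeholders; the /auth prefix depends on the Keycloak version, and the same request can be made from Java just as well):

// Sketch of the client_credentials grant against Keycloak's token endpoint.
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> GetServiceAccountTokenAsync(HttpClient http)
{
    var form = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["grant_type"] = "client_credentials",
        ["client_id"] = "my-client",      // the client that has the service account
        ["client_secret"] = "<secret>",   // same secret as in the secure-deployment config
    });

    var response = await http.PostAsync(
        "https://keycloak.example.com/auth/realms/myrealm/protocol/openid-connect/token",
        form);
    response.EnsureSuccessStatusCode();

    // The JSON body contains access_token, expires_in, etc.
    return await response.Content.ReadAsStringAsync();
}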
I have been using the SQL Server 2017 Linux image for quite some time. I am able to deploy it to an Azure Kubernetes Service (AKS) cluster with a StatefulSet and a service name exposed via a Service object. I can then connect to the SQL Server instance using the service name from the web API or frontend application. A complete working example of this can be found in the repo:
https://github.com/NileshGule/AKS-learning-series/tree/master/k8s/AKS
I upgraded to the SQL Server 2019 Linux image following the docs at https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-kubernetes-manage?view=sqlallproducts-allversions. With the operator and the availability group deployment, I am able to create the database. However, I cannot access the database from the frontend and Web API projects without supplying the load balancer IP address of the availability group primary service.
Is there any way to access SQL Server 2019 using service discovery, without specifying the IP address?
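To make the question concrete, this is the kind of connection the apps use today against the 2017 Service, next to the hypothetical equivalent I would like for the 2019 availability-group primary (every name here is a placeholder, not the operator's documented naming):

using System.Data.SqlClient;

// Works with the 2017 StatefulSet: resolve SQL Server by its Service name.
var sql2017 = new SqlConnection(
    "Server=mssql-service,1433;Database=MyDb;User Id=sa;Password=<password>;");

// Desired for 2019: resolve the AG primary via an in-cluster service DNS name
// instead of the load balancer IP (the service name below is an assumption).
var sql2019 = new SqlConnection(
    "Server=ag1-primary.ag1.svc.cluster.local,1433;Database=MyDb;User Id=sa;Password=<password>;");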
I want to run an application on Azure Service Fabric. One service should act as the identity provider, so I installed the IdentityServer4 package in that 'usermanager' service. I also have two other services which should use this usermanager for authentication and authorization.
That works on localhost. But on Azure I have the problem that an endpoint must be either 'Input' or 'Internal' in my service manifest, while my usermanager needs both.
<Endpoint Protocol="http" Name="IdentityServerEndpoint" Type="Input" Port="5000" />
/.well-known/openid-configuration needs 'Internal' and
/connect/authorize?xxxxxx needs 'Input'
I found that for Input endpoints Azure Service Fabric uses the fully qualified domain name, and for Internal endpoints it uses the IP address of the local network, like 10.0.0.4.
Is there a solution to make an endpoint both input and internal?
Or is there a solution to make Identity Server 4 handle two endpoints?
Any ideas to solve this problem?
Believe it or not, the "Type" field in the Endpoint config doesn't actually do anything on any hosting platform. It's just metadata that you can configure and use in your code (basically a way for you to set your own policies). It doesn't matter what you put there otherwise.
Ultimately, you're opening an endpoint on a process on a VM. That endpoint will be open on the VM's IP and the port you choose, e.g., 10.0.0.1:5000.
If you want that endpoint to also be available on your cluster's VIP and FQDN, that configuration is external to Service Fabric. In Azure you just need to configure the Azure Load Balancer to forward external traffic on the port your service is listening on. See here for more info on that: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-connect-and-communicate-with-services#connections-from-external-clients
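As a sketch of what that means inside the service: the listener is built from the single endpoint resource in the manifest and binds to the port from that resource, so the very same listener is what internal callers hit on the node IP (e.g. 10.0.0.x:5000) and what external callers hit through the load balancer's FQDN. Class and Startup names below are illustrative, based on the standard Service Fabric ASP.NET Core template:

using System.Collections.Generic;
using System.Fabric;
using Microsoft.AspNetCore.Hosting;
using Microsoft.ServiceFabric.Services.Communication.AspNetCore;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class UserManager : StatelessService
{
    public UserManager(StatelessServiceContext context) : base(context) { }

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        return new[]
        {
            new ServiceInstanceListener(serviceContext =>
                new KestrelCommunicationListener(serviceContext, "IdentityServerEndpoint", (url, listener) =>
                    // "IdentityServerEndpoint" is the endpoint resource from ServiceManifest.xml;
                    // Service Fabric builds `url` from its protocol and port (http, 5000).
                    new WebHostBuilder()
                        .UseKestrel()
                        .UseStartup<Startup>() // the existing IdentityServer Startup class
                        .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                        .UseUrls(url)
                        .Build()))
        };
    }
}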