Restricting access to Keycloak console from the internet on Kubernetes

Context: I am working on an application deployed in a CaaS, and it has two ingresses for Keycloak, each with a specific hostname; one of them is reachable from the internet.
What I want is to NOT be able to access the Keycloak admin console from the internet.
I am trying this: https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource but can't seem to make it work.

Not sure if anyone else has this problem, but what I did was change the internet-facing ingress so that its path matches the prefix of the application realm instead of the master one, and that is enough for us.
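For illustration, a minimal sketch of that ingress change, assuming an ingress in the standard networking.k8s.io/v1 form; the hostname, service name and realm name here are hypothetical, and the path prefix depends on the Keycloak version (/realms/... on newer Quarkus-based releases, /auth/realms/... on older ones):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak-public            # the internet-facing ingress
spec:
  rules:
  - host: auth.example.com         # hypothetical public hostname
    http:
      paths:
      - path: /realms/myrealm      # expose only the application realm, not /admin or the master realm
        pathType: Prefix
        backend:
          service:
            name: keycloak         # hypothetical Keycloak service
            port:
              number: 8080

The internal ingress, with the other hostname, can keep the broader path so admins still reach the console from inside.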

This can also be done by changing rules in the external firewall (if one is available) or by using Kubernetes-native options like namespace isolation, network isolation and API isolation. In your case both network isolation and namespace isolation would work. Follow this document for more information.
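As a rough sketch of the NetworkPolicy approach from the linked docs, assuming the Keycloak pods carry an app: keycloak label and that only an internal ingress controller namespace should reach them (all labels, names and the port are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: keycloak-allow-internal-only
  namespace: keycloak                      # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: keycloak                        # hypothetical pod label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: internal-ingress   # hypothetical namespace of the allowed ingress controller
    ports:
    - protocol: TCP
      port: 8080

Keep in mind this only takes effect if the cluster's CNI plugin enforces NetworkPolicy, and it filters pod-level traffic; anything allowed or blocked at an external load balancer in front of the cluster still needs firewall rules there.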

Related

Keycloak internal and external link

I understand that the question has been asked and discussed in different formats before. However, I still miss clear guidelines on how to handle the situation.
Our Keycloak setup has multiple Keycloak replicas and sits behind a load balancer without a fixed IP in a separate infrastructure, so our DNS records look like:
CNAME keycloak.acme.com public-lb.acme.com
And public-lb.acme.com forwards requests to specific Keycloak instances.
One of our end-user applications is located in a completely different infrastructure with strict access controls. The end-user application is built with Java and uses the Keycloak integration org.keycloak:keycloak-servlet-filter-adapter. We do not have any custom adapters and simply follow the "standard" configuration:
{
"auth-server-url" : "https://keycloak.acme.com",
..
However, this does not work, since the keycloak.acme.com IP address has to be whitelisted in that "special" infrastructure, so validation requests from the application inside the "special" infrastructure never reach Keycloak. And we cannot whitelist the IP, since the IP of our load balancer public-lb.acme.com is not fixed and changes over time.
We have a "tunnel" between the Keycloak infrastructure and that "special" infrastructure, with a dedicated IP CIDR range which is whitelisted.
Hence we have created a special internal load balancer that is in the tunnel's CIDR range and forwards requests to the Keycloak replicas. Unfortunately that internal load balancer does not have a fixed IP address either, and it can change over time.
Since we do not have a fixed IP address, is the only correct method to add a DNS record inside the "special" infrastructure pointing to the internal load balancer? Something like:
CNAME keycloak.acme.com internal-lb.acme.com
Or are there any alternative solutions? I understand the historical reasons behind this.
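To make the idea concrete, the override we are considering would be a record served only by the resolver inside the "special" infrastructure (split-horizon DNS), roughly like this zone-file fragment; the low TTL is a guess to cope with the internal load balancer changing:

; internal view of acme.com, visible only inside the "special" infrastructure
keycloak.acme.com.    300    IN    CNAME    internal-lb.acme.com.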

Restricting communication from a service which is Consul Connect enabled to a non-Connect service through intentions?

If we have two services, for example:
Front-end (which is Consul Connect enabled)
Back-end (which is not Consul Connect enabled)
is it possible to restrict communication between them through an intention? We use consul-sync to move the k8s services into the Consul catalog, so Back-end, which is not Connect enabled, also shows up in intentions. I tried setting a deny intention for Front-end -> Back-end, but it is not working: Front-end can still hit Back-end. Am I missing something, or can authorization only happen between two Connect-enabled services?
This question was recently answered in https://stackoverflow.com/a/68432317/12384224.
Consul intentions are authorization policies that allow you to control access between applications within a service mesh. You must use a sidecar proxy, or natively integrate your application with the mesh, in order to use intentions. They are not applicable if you are only using Consul for service discovery, or if your application is not part of the service mesh.
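For completeness, intentions do work once both services are in the mesh, i.e. Back-end also gets a Connect sidecar. With consul-k8s that deny rule could be written as a ServiceIntentions resource; a sketch under that assumption (service names taken from the question):

apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: front-end-to-back-end
spec:
  destination:
    name: back-end                 # enforced by back-end's sidecar proxy, so it must be in the mesh
  sources:
  - name: front-end
    action: deny                   # block front-end -> back-end inside the mesh

Services that are only synced into the catalog (no sidecar) will still appear in the intentions UI, but nothing enforces the rule for them, which matches the behaviour described in the question.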

How to limit access in Cloud Foundry

I am new to Cloud Foundry.
Is there any way that only specific users can view and update an app deployed in Cloud Foundry?
1. I deployed an app in Cloud Foundry using the “cf push” command.
2. After entering the “cf push” command, I got the message below.
Using manifest file /home/stevemar/node-hello-world/manifest.yml
Creating app node-hello-world-example...
name: node-hello-world-example
requested state: started
routes: {route-information}
last uploaded: Mon 14 Sep 13:46:54 UTC 2020
stack: cflinuxfs3
buildpacks: sdk-for-nodejs
type: web
instances: 1/1
memory usage: 256M
3. Using the {route-information} above, I can see the deployed app in a browser by entering the URL below:
https://{route-information}
This way, anyone can see the app from a browser, but I don’t want it to be visible to everyone; I want to limit access to specific users.
I heard that a global IP is allocated to {route-information} by default.
Is there any way to limit access to only specific users?
(For example, is there any feature in Cloud Foundry like a “private registry” in Kubernetes, which is not open to the public?)
Since I am using Cloud Foundry on IBM Cloud, it would be better if there is a solution using IBM Cloud.
I’ve already granted a Cloud Foundry role to the other user.
Thank you.
The CloudFoundry platform itself does not provide any access controls for applications. If you assign a public route to your application, where the DNS is publicly resolvable and the foundation is on the public Internet, like IBM Bluemix, then anyone can access your app.
There's a number of things you can do to limit access, but they do require some work on your part.
Use a private DNS. You can add any domain you want to Cloud Foundry, even ones that don't resolve. That means you could add my-cool-domain.local, which does not resolve anywhere. You could then add a record to /etc/hosts for this domain, or perhaps run DNS on your local network to resolve this domain and direct traffic to the Cloud Foundry foundation.
With this setup, most people cannot access your application because the DNS domain for the route to your application does not resolve anywhere. It's important to understand that this isn't really security, but obscurity. It would stop most traffic from making it to your app, but if someone knew the domain, they could add their own /etc/hosts entry or send fake Host headers to access your application.
This type of setup can work well if you have light security requirements, like you just want to hide something while you work on it, or it can work well paired with the other options below.
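For example, someone who should be able to reach the app during that phase could add a single line to /etc/hosts; the IP here is a placeholder for whatever the foundation's router or load balancer resolves to:

203.0.113.10   my-app.my-cool-domain.local    # hypothetical router/LB IP and app route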
You can set up access controls in your application. Many application servers & frameworks can do things like restrict access by IP address or require user authentication (Basic auth is easy, and it is OK if you're only allowing HTTPS traffic to your app, which you should always do anyway).
You can use OAuth2 to secure apps too. Again, many app servers & frameworks have support for this and make it relatively simple to secure your apps. If you don't have a corporate OAuth2 solution, there are public providers you can use. Exactly how you do OAuth2 in your app is beyond the scope of this question, but there's plenty of material out there on how to do this. Google information for your application language/framework of choice.
You could set up an access gateway. This would be an application whose job is to proxy traffic to other applications on the foundation. The gateway could be something like Nginx, Apache HTTPD, or Spring Cloud Gateway. The idea is that the gateway would be publicly accessible and would almost certainly apply access controls/restrictions (see #2; many of these proxies have access control options that only take a few lines of config). Your actual applications would not be deployed publicly though. When you deploy your actual applications, they would only be on the internal Cloud Foundry domain.
CloudFoundry has local domains, often apps.internal (run cf domains to see if that shows up), which you can use to easily route traffic across the internal container-to-container network. Using this domain and the C2C network, you can have apps deployed to CF that are not accessible to the public Internet, except through your Gateway.
Again, how you configure this exactly is outside the scope of this question, but check out the docs I linked to for info on using the C2C network & internal routes. Then check out your proxy server of choice's documentation.
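A rough sketch of the internal-route part with the cf CLI (v7 syntax; the app names and port are hypothetical):

# give the backend an internal route instead of a public one
cf map-route my-backend apps.internal --hostname my-backend
# allow only the gateway to reach the backend over the container-to-container network
cf add-network-policy my-gateway my-backend --protocol tcp --port 8080

The gateway would then proxy requests to http://my-backend.apps.internal:8080 across the C2C network, while only the gateway's own route stays public.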

How to create authentication with Kubernetes when service is already existing?

I'm reading through https://kubernetes.io/docs/reference/access-authn-authz/authentication/, but it does not give any concrete commands and it mostly focuses on the case where we want to create everything from scratch. It also explains auth for engineers using Kubernetes.
I have an existing deployment and service (with an exposed external IP) and would like to create the simplest possible authentication (preferably token based) for an external user accessing the exposed IP. I can't add authentication to the services themselves since I don't have access to their code. If somebody could help me with some commands I would be grateful.
The documentation you referred to is for authentication with k8s itself (for API access). It is not for application-layer authentication.
However, I can suggest one way to implement application-layer authentication without changing the service at all. You can redirect the traffic to nginx (or any other reverse proxy), which can perform the authentication and forward authenticated users to the service directly. It can also perform some kind of authorization.
There are various resources available which can help you choose among the authentication mechanisms available in nginx, such as a password-file-based mechanism (link) or JWT-based authentication (link).
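One hedged way to realize that, if the cluster runs the ingress-nginx controller, is to attach basic auth directly to an Ingress in front of the existing service; the hostname and resource names below are hypothetical, and the annotations are specific to ingress-nginx:

# htpasswd -c auth myuser
# kubectl create secret generic basic-auth --from-file=auth
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-auth
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: my-service.example.com          # hypothetical hostname replacing the raw external IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service               # the existing, unmodified service
            port:
              number: 80

Token-style (JWT) checks would instead use the controller's auth-url annotation pointing at a small verification service, but the idea is the same: the proxy authenticates, and the service stays untouched.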

Grafana | Auth Proxy - Security

I am trying to implement Grafana Auth Proxy as documented at
https://grafana.com/docs/grafana/latest/auth/auth-proxy/
https://community.grafana.com/t/django-auth-valid-session-on-grafana-behind-nginx/2793/6
Based on how it works, it seems X-WEBAUTH-USER is set as a plain-text header, so anyone who can spoof it can get logged in.
Grafana does have an IP whitelist, BUT I don't think it is practical to maintain IP addresses of Docker containers (Django and Grafana are running in separate Docker containers).
Questions:
Is there a better implementation to achieve something more secure?
Can the whitelist have an easier value?
That is by design. AuthProxy offloads authentication to your own legacy "auth" server. Of course you will need to secure the connection between the auth server and Grafana so that no one is able to spoof it. For example, you may create a dedicated Docker network (or use a mutual-TLS connection, a VPN, ...) that users don't have access to. The best approach depends on the infrastructure used. If you are not able to secure this communication properly, then AuthProxy is not the best auth method for you.
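For reference, the Grafana side of that setup is just a few lines of grafana.ini; the whitelist value below is an assumption and only helps if the proxy has a stable address, e.g. a fixed IP on a dedicated, user-defined Docker network:

[auth.proxy]
enabled = true
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true
# accept the header only from the reverse proxy / auth server, not from arbitrary clients
whitelist = 172.18.0.2

On a user-defined Docker network you can pin the proxy container to a static address (create the network with an explicit --subnet and start the container with --ip), which keeps that whitelist entry stable.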
IMHO the best authentication (and single sign-on) protocol also supported by Grafana is OpenID Connect (or SAML for Grafana Enterprise). But you will need an identity provider which supports these standards.