JWT Authentication between services running in a Kubernetes cluster

I am using JWT authentication between two services written in Django. Authentication works on my local machine, but when I run the same services in a Kubernetes cluster, I get an authentication error.
Also, when I change the decorator above the API view to @permission_classes([AllowAny]) to skip the authentication check, but still pass the token in the header, I still get a 401 Unauthorized error in the Kubernetes cluster.
Does anyone have any idea how to do JWT authentication between two Django services running in a Kubernetes cluster?
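A frequent cause of "works locally, 401 in the cluster" is the two Deployments ending up with different signing secrets (e.g. each pod generating its own SECRET_KEY instead of both mounting the same Kubernetes Secret). A minimal, stdlib-only sketch of HS256 service-to-service JWT illustrating why the secret must match on both sides (names and values are illustrative; in Django you would normally use a library such as PyJWT or djangorestframework-simplejwt rather than hand-rolling this):

```python
import base64
import hashlib
import hmac
import json
import time

# Both services must share the SAME secret. In Kubernetes, mount it from
# one Secret into both Deployments; two independently generated keys is
# the most common reason cluster-side verification fails.
SHARED_SECRET = b"example-secret"  # illustrative value


def b64url(data: bytes) -> bytes:
    """Base64url without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")


def issue_token(issuer: str) -> str:
    """Issue a short-lived HS256 token identifying the calling service."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"iss": issuer, "exp": int(time.time()) + 300}).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(SHARED_SECRET, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()


def verify_token(token: str) -> dict:
    """Verify signature and expiry on the receiving service."""
    header, payload, sig = token.encode().split(b".")
    signing_input = header + b"." + payload
    expected = b64url(hmac.new(SHARED_SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = b"=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload + pad))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims


# The caller sends the token as:  Authorization: Bearer <token>
token = issue_token("service-a")
claims = verify_token(token)  # succeeds only if both sides share the secret
```

If the secret does match, the next things to check are whether an ingress or sidecar in the cluster strips the Authorization header before it reaches Django.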

Related

Deploying 2 gateway API pods results in "token expired" errors

I deploy my gateway API on Kubernetes and share session tokens with Spring Session Hazelcast. It works fine until I scale to 2 replicas: the Hazelcast session cluster works, but after I log in (with Keycloak) and then call an API, some calls return a 500 internal error and the gateway logs "Expired token".
I don't understand why: it's a new session in the session cluster, so why is it expired?
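One common explanation when the error only appears with a second replica is clock skew: a token that is valid on the issuing node can look expired to a replica whose clock runs slightly ahead. Most JWT validators accept a leeway for exactly this reason. A small Python sketch of the idea (the leeway value and timestamps are illustrative, not specific to Keycloak or Spring):

```python
LEEWAY_SECONDS = 30  # tolerance for clock skew between replicas


def is_expired(exp: int, now: float, leeway: int = LEEWAY_SECONDS) -> bool:
    """Treat a token as expired only if `now` is past `exp` by more than the leeway."""
    return now > exp + leeway


issued_exp = 1_000_000  # token expiry as a Unix timestamp

# A replica whose clock is 10 s ahead of the issuer still accepts the token:
assert not is_expired(issued_exp, now=issued_exp + 10)

# Well past expiry, the token is still rejected:
assert is_expired(issued_exp, now=issued_exp + 60)
```

It is worth verifying that the two pods' clocks agree (or that NTP is working on the nodes) and checking whether the validator in use exposes a clock-skew / leeway setting.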

How To Configure PrestoDB internal communication in Kubernetes

I'm testing PrestoDB on Kubernetes, and I want to configure username and password authentication, but that requires HTTPS to be configured first.
Could you help me with how to do that on Kubernetes?
I'm following this page:
https://prestodb.io/docs/current/security/internal-communication.html
but I don't understand which domain I should issue a certificate for.
Today I'm using Kubernetes service name resolution as the discovery URI:
discovery.uri=http://prestodb-coordinator-service:8080
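Since the workers reach the coordinator through that Service name, the certificate generally needs to cover the cluster-internal DNS name of the Service, e.g. prestodb-coordinator-service.default.svc.cluster.local (the "default" namespace here is an assumption; substitute your own). A sketch of what the coordinator's config.properties could look like under that assumption, using property names from the linked internal-communication docs (paths and passwords are placeholders):

```properties
# config.properties (coordinator) -- illustrative values
http-server.https.enabled=true
http-server.https.port=8443
http-server.https.keystore.path=/etc/presto/keystore.jks
http-server.https.keystore.key=changeit

internal-communication.https.required=true
internal-communication.https.keystore.path=/etc/presto/keystore.jks
internal-communication.https.keystore.key=changeit

# Point discovery at the HTTPS port, using the exact name the
# certificate was issued for:
discovery.uri=https://prestodb-coordinator-service.default.svc.cluster.local:8443
```

The key point is that the hostname in discovery.uri, the DNS name the workers resolve, and the certificate's subject (or a SAN entry) must all agree, otherwise TLS verification between the nodes will fail.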

Two-factor Authentication for Service Fabric Explorer?

Anyone have insight on how to implement Two-Factor Authentication when using Service Fabric Explorer to access a Service Fabric cluster in Azure?
I currently have it secured with a client certificate but I haven't found ways to add another type of authentication to go with it.
per the official documentation here:
When a client connects to a Service Fabric cluster node, the client can be authenticated and secure communication established using certificate security or Azure Active Directory (AAD). This authentication ensures that only authorized users can access the cluster and deployed applications and perform management tasks. Certificate or AAD security must have been previously enabled on the cluster when the cluster was created. For more information on cluster security scenarios, see Cluster security. If you are connecting to a cluster secured with certificates, set up the client certificate on the computer that connects to the cluster.
It doesn't support MFA; I'd recommend checking out Service Fabric cluster security scenarios.
You could also implement MFA at the AAD level and then use AAD to authenticate to Service Fabric.

Restrict access to Kubernetes UI via VPN or other on GKE

GKE currently exposes Kubernetes UI publicly and by default is only protected by basic auth.
Is there a better method for securing access to the UI? It appears to me this should be accessed behind a secure VPN to prevent various types of attacks. If someone could access the Kubernetes UI, they could cause a lot of damage to the cluster.
GKE currently exposes Kubernetes UI publicly and by default is only protected by basic auth.
The UI is running as a Pod in the Kubernetes cluster with a service attached so that it is accessible from inside of the cluster. If you want to access it remotely, you can use the service proxy running in the apiserver, which means that you would authenticate with the apiserver to access the UI.
The apiserver accepts three forms of client authentication: basic auth, bearer token, and client certificate. The basic auth password should have high entropy, and it is only transmitted over SSL. It is provided to make access via a browser simpler, since OAuth integration does not yet exist (although you should only pass your credentials over the SSL connection after verifying the server certificate in your web browser, so that your credentials aren't stolen by a man-in-the-middle attack).
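For the two header-based forms, this is what the Authorization header looks like on the wire. A stdlib-only Python sketch (the username, password, and token are placeholders):

```python
import base64


def basic_auth_header(user: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {creds}"


def bearer_auth_header(token: str) -> str:
    """Build a Bearer-token Authorization header value."""
    return f"Bearer {token}"


# Sent as the Authorization header, over HTTPS only:
print(basic_auth_header("admin", "s3cret"))  # Basic YWRtaW46czNjcmV0
print(bearer_auth_header("my-token"))        # Bearer my-token
```

The third form, client certificates, is not a header at all: the client presents its certificate during the TLS handshake, so it is configured in the HTTP client's TLS settings rather than in the request.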
Is there a better method for securing access to the UI?
There isn't a way to tell GKE to disable the service proxy in the master, but if an attacker had credentials, then they could access your cluster using the API and do as much harm as if they could get to the UI. So I'm not sure why you are particularly concerned with securing the UI via the service proxy vs. securing the apiserver's API endpoint.
It appears to me this should be accessed behind a secure VPN to prevent various types of attacks.
Which types of attacks are you concerned about specifically?

Secure Kubernetes API

I'm a bit unsure about how to secure the Kubernetes API for calls and access; also, kube-ui is available to everybody.
How can I set up credentials to secure all the services?
Thank you
The Kubernetes API supports multiple forms of authentication: http basic auth, bearer token, client certificates. When launching the apiserver, you can enable / disable each of these authentication methods with command line flags.
You should also be running the apiserver where the insecure port is only accessible to localhost, so that all connections coming across the network use https. By having your api clients verify the TLS certificate presented by the apiserver, they can verify that the connection is both encrypted and not susceptible to man-in-the-middle attacks.
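At the time of writing, those options were controlled by flags on the apiserver process. A rough sketch of an invocation that keeps the insecure port on localhost and enables the three authentication methods (paths are illustrative, and flag names have changed across Kubernetes versions, so check the docs for your release):

```shell
# Bind the insecure (plain-HTTP) port to localhost only; all remote
# clients must come in over the TLS-secured port.
kube-apiserver \
  --insecure-bind-address=127.0.0.1 \
  --secure-port=6443 \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --token-auth-file=/srv/kubernetes/known_tokens.csv \
  --basic-auth-file=/srv/kubernetes/basic_auth.csv
```

Omitting any of the last three flags disables the corresponding authentication method.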
By default, anyone who has access credentials to the apiserver has full access to the cluster. You can also configure more fine grained authorization policies which will become more flexible and configurable in future Kubernetes releases.
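The fine-grained authorization available at the time was ABAC, enabled with --authorization-mode=ABAC and --authorization-policy-file=<path>. The policy file is one JSON object per line; users and resources below are illustrative:

```json
{"user": "alice"}
{"user": "bob", "resource": "pods", "readonly": true}
{"user": "kubelet", "resource": "events"}
```

The first line grants alice full access, while bob can only read pods; a request is allowed if any line matches it.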