How to set secondary key for kubernetes ingress basic-auth - kubernetes

I want to have an ingress for all my services in the k8s cluster and give the ingress basic auth. But to support auth rotation, I want to allow a secondary credential per user so the endpoint can still be reached while they re-generate the primary key.
I can currently follow this guide to set up an ingress with a single basic auth.

Adapting the guide, you can put multiple usernames and passwords in the auth file you use to generate the basic-auth secret. Specifically, if you run the htpasswd command without the -c flag, e.g. htpasswd <filename> <username>, it adds an entry to the file rather than creating a new file from scratch:
$ htpasswd -c auth foo
New password: <bar>
Re-type new password: <bar>
Adding password for user foo
$ cat auth
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
$ htpasswd auth user2
New password: <pass2>
Re-type new password: <pass2>
Adding password for user user2
$ cat auth
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
user2:$apr1$.FsOzlqA$eFxym7flDnoDtymRLraA2/
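If you're scripting the rotation rather than typing passwords interactively, htpasswd also has a batch mode; a sketch (the -b flag takes the password on the command line, so be aware it ends up in your shell history):
$ htpasswd -b auth user2 pass2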
If you've already created the secret via the given command:
$ kubectl create secret generic basic-auth --from-file=auth
You can then update the secret with this trick:
$ kubectl create secret generic basic-auth --from-file=auth \
    --dry-run -o yaml | kubectl apply -f -
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
secret/basic-auth configured
You can confirm setting the secret worked:
$ kubectl get secret basic-auth -ojsonpath={.data.auth} | base64 -D
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
user2:$apr1$.FsOzlqA$eFxym7flDnoDtymRLraA2/
Finally, you can test that basic auth works with both username/password pairs:
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-s -w"%{http_code}" -o /dev/null
401
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'wronguser:wrongpass' \
-s -w"%{http_code}" -o /dev/null
401
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'foo:bar' \
-s -w"%{http_code}" -o /dev/null
200
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'user2:pass2' \
-s -w"%{http_code}" -o /dev/null
200
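Once users have migrated off the old key, you can finish the rotation by deleting the old entry and re-applying the secret the same way (htpasswd's -D flag removes the named user):
$ htpasswd -D auth foo
$ kubectl create secret generic basic-auth --from-file=auth \
    --dry-run -o yaml | kubectl apply -f -
Note the ingress itself never changes during rotation; assuming the NGINX ingress controller from the linked guide, it keeps pointing at the same secret via annotations like:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-realm: Authentication Required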

Related

How can I configure encryption TLS/SSL for Cassandra in K8ssandra?

I am new to kubernetes/helm.
I'm trying to configure TLS/SSL encryption for Cassandra in K8ssandra.
I've seen this example but can't get any further:
Click here!
I perform the following steps as described in this tutorial:
Click here!
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update
cd ~/github
git clone https://github.com/k8ssandra/k8ssandra-operator.git
cd k8ssandra-operator
scripts/setup-kind-multicluster.sh --clusters 1 --kind-worker-nodes 4
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl config use-context kind-k8ssandra-0
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager --create-namespace --set installCRDs=true
helm install k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator --create-namespace
cat <<EOF | kubectl -n k8ssandra-operator apply -f -
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.1"
    datacenters:
      - metadata:
          name: dc1
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: standard
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
        config:
          jvmOptions:
            heapSize: 512M
        stargate:
          size: 1
          heapSize: 256M
EOF
CASS_USERNAME=$(kubectl get secret demo-superuser -n k8ssandra-operator -o=jsonpath='{.data.username}' | base64 --decode)
echo $CASS_USERNAME
CASS_PASSWORD=$(kubectl get secret demo-superuser -n k8ssandra-operator -o=jsonpath='{.data.password}' | base64 --decode)
echo $CASS_PASSWORD
kubectl exec -it demo-dc1-default-sts-0 -n k8ssandra-operator -c cassandra -- nodetool -u $CASS_USERNAME -pw $CASS_PASSWORD status
kubectl exec -it demo-dc1-default-sts-0 -n k8ssandra-operator -c cassandra -- cqlsh -u $CASS_USERNAME -p $CASS_PASSWORD -e "CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
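Note that the inserts below assume a test.users table exists; the tutorial presumably creates it with something along these lines (the column definitions here are inferred from the insert statements):
kubectl exec -it demo-dc1-default-sts-0 -n k8ssandra-operator -c cassandra -- cqlsh -u $CASS_USERNAME -p $CASS_PASSWORD -e "CREATE TABLE test.users (email text PRIMARY KEY, name text, state text);"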
kubectl exec -it demo-dc1-default-sts-0 -n k8ssandra-operator -c cassandra -- cqlsh -u $CASS_USERNAME -p $CASS_PASSWORD -e "insert into test.users (email, name, state) values ('john@gamil.com', 'John Smith', 'NC');"
kubectl exec -it demo-dc1-default-sts-0 -n k8ssandra-operator -c cassandra -- cqlsh -u $CASS_USERNAME -p $CASS_PASSWORD -e "insert into test.users (email, name, state) values ('joe@gamil.com', 'Joe Jones', 'VA');"
kubectl exec -it demo-dc1-default-sts-0 -n k8ssandra-operator -c cassandra -- cqlsh -u $CASS_USERNAME -p $CASS_PASSWORD -e "insert into test.users (email, name, state) values ('sue@help.com', 'Sue Sas', 'CA');"
kubectl exec -it demo-dc1-default-sts-0 -n k8ssandra-operator -c cassandra -- cqlsh -u $CASS_USERNAME -p $CASS_PASSWORD -e "insert into test.users (email, name, state) values ('tom@yes.com', 'Tom and Jerry', 'NV');"
The files truststore.jks and keystore.jks are stored locally on my PC under the ./mnt/keystore/... and ./mnt/truststore/... directories.
Here I create the secrets (referenced as keystoreSecret and truststoreSecret in the values below):
kubectl create secret generic keystore --from-file=./mnt/keystore/keystore.jks -n k8ssandra-operator
kubectl create secret generic truststore --from-file=./mnt/truststore/truststore.jks -n k8ssandra-operator
Now I have run the examples above. This is the value.yaml:
cassandra:
  version: 4.0.1
  cassandraYamlConfigMap: cassandra-config
  encryption:
    keystoreSecret: keystore
    keystoreMountPath: /mnt/keystore
    truststoreSecret: truststore
    truststoreMountPath: /mnt/truststore
  heap:
    size: 512M
  datacenters:
    - name: dc1
      size: 1
I'm trying to run this example as follows.
helm upgrade k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator -f value.yaml
Now I'm trying to apply the configmap file, config-file.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cassandra-config
data:
  cassandra.yaml: |-
    server_encryption_options:
      internode_encryption: all
      keystore: /mnt/keystore/keystore.jks
      keystore_password: cassandra
      truststore: /mnt/truststore/truststore.jks
      truststore_password: cassandra
kubectl apply -f config-file.yaml -n k8ssandra-operator
I can open a shell in the cassandra container without any problems:
kubectl exec -it demo-dc1-default-sts-0 -n k8ssandra-operator -- /bin/bash
Inside the container, however, I can't find the truststore and keystore files at /mnt/truststore/truststore.jks and /mnt/keystore/keystore.jks.
I'm also unable to log into cassandra with SSL:
cassandra#demo-dc1-default-sts-0:/$ cqlsh --ssl -u demo-superuser -p JKv59QPynp3s0qGSf1DZ demo-dc1-stargate-service
Validation is enabled; SSL transport factory requires a valid certfile to be specified. Please provide path to the certfile in [ssl] section as 'certfile' option in /home/cassandra/.cassandra/cqlshrc (or use [certfiles] section) or set SSL_CERTFILE environment variable.
Have you checked this doc?
It applies to k8ssandra-operator (considered K8ssandra v2), while the links you found apply to K8ssandra v1 (where everything was helm charts).
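For reference, in v2 the encryption stores are declared on the K8ssandraCluster resource itself rather than through chart values. A rough sketch based on that doc (check it for the exact secret format, since the operator expects the stores and their passwords under specific keys):
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.1"
    serverEncryptionStores:
      keystoreSecretRef:
        name: keystore
      truststoreSecretRef:
        name: truststore
    config:
      cassandraYaml:
        server_encryption_options:
          internode_encryption: all
    datacenters:
      - metadata:
          name: dc1
        size: 1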
Let me know if that works for you.

Prometheus datasource: client_error: client error: 403

Hi, I am trying to add the built-in OpenShift (v4.8) Prometheus data source to a local Grafana server. I have configured basic auth with a username and password, and for now I have also enabled skip TLS verify. Still, I'm getting this error.
Prometheus URL = https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com
This is the Grafana log:
logger=tsdb.prometheus t=2022-04-12T17:35:23.47+0530 lvl=eror msg="Instant query failed" query=1+1 err="client_error: client error: 403"
logger=context t=2022-04-12T17:35:23.47+0530 lvl=info msg="Request Completed" method=POST path=/api/ds/query status=400 remote_addr=10.100.95.27 time_ms=36 size=65 referer=https://grafana.xxxx.xxxx.com/datasources/edit/6TjZwT87k
You cannot authenticate to the OpenShift prometheus instance using basic authentication. You need to authenticate using a bearer token, e.g. one obtained from oc whoami -t:
curl -H "Authorization: Bearer $(oc whoami -t)" -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/
Or from a ServiceAccount with appropriate privileges:
secret=$(oc -n openshift-monitoring get sa prometheus-k8s -o jsonpath='{.secrets[1].name}')
token=$(oc -n openshift-monitoring get secret $secret -o jsonpath='{.data.token}' | base64 -d)
curl -H "Authorization: Bearer $token" -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/

Use the Kubernetes REST API without kubectl

You can interact with K8s directly through its REST API. For example, to get pods:
curl http://IPADDR/api/v1/pods
However, I can't find any example of authentication based only on curl or REST. All the examples show kubectl being used as a proxy or as a way to get credentials.
If all I have is the kubeconfig file and nothing else, is there any way to send the HTTP requests directly (e.g. with a token) without using kubectl?
The kubeconfig file you download when you first install the cluster includes a client certificate and key. For example:
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://api.cluster1.ocp.virt:6443
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: admin
  name: admin
current-context: admin
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: ...
    client-key-data: ...
If you extract the client-certificate-data and client-key-data to files, you can use them to authenticate with curl. To extract the data:
$ yq -r '.users[0].user."client-certificate-data"' kubeconfig | base64 -d > cert
$ yq -r '.users[0].user."client-key-data"' kubeconfig | base64 -d > key
And then using curl:
$ curl -k --cert cert --key key \
'https://api.cluster1.ocp.virt:6443/api/v1/namespaces/default/pods?limit=500'
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "22022"
  },
  "items": []
}
Alternately, if your .kubeconfig has tokens in it, like this:
[...]
users:
- name: your_username/api-clustername-domain:6443
  user:
    token: sha256~...
Then you can use that token as a bearer token:
$ curl -k https://api.mycluster.mydomain:6443/ -H 'Authorization: Bearer sha256~...'
...but note that those tokens typically expire after some time, while the certificates should work indefinitely (unless they are revoked somehow).
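If you'd rather script it, the token can be extracted with the same yq pattern as the certificates (a sketch, assuming the token-based kubeconfig layout above):
$ token=$(yq -r '.users[0].user.token' kubeconfig)
$ curl -k -H "Authorization: Bearer $token" 'https://api.mycluster.mydomain:6443/api/v1/pods?limit=10'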

How to Add Users to Kubernetes (kubectl)?

I've created a Kubernetes cluster on AWS with kops and can successfully administer it via kubectl from my local machine.
I can view the current config with kubectl config view as well as directly access the stored state at ~/.kube/config, such as:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.{CLUSTER_NAME}
  name: {CLUSTER_NAME}
contexts:
- context:
    cluster: {CLUSTER_NAME}
    user: {CLUSTER_NAME}
  name: {CLUSTER_NAME}
current-context: {CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: {CLUSTER_NAME}
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: REDACTED
    username: admin
- name: {CLUSTER_NAME}-basic-auth
  user:
    password: REDACTED
    username: admin
I need to enable other users to also administer the cluster. This user guide describes how to define these on another user's machine, but doesn't describe how to actually create the user's credentials within the cluster itself. How do you do this?
Also, is it safe to just share the cluster.certificate-authority-data?
For a full overview of authentication, refer to the official Kubernetes docs on Authentication and Authorization.
For users, ideally you use an identity provider for Kubernetes (OpenID Connect).
If you are on GKE / ACS, you integrate with the respective Identity and Access Management frameworks.
If you self-host Kubernetes (which is the case when you use kops), you may use coreos/dex to integrate with LDAP / OAuth2 identity providers - a good reference is this detailed 2-part SSO for Kubernetes article.
kops (1.10+) now has built-in authentication support, which eases the integration with AWS IAM as an identity provider if you're on AWS.
For Dex there are a few open-source CLI clients:
Nordstrom/kubelogin
pusher/k8s-auth-example
If you are looking for a quick and easy way to get started (not the most secure or easiest to manage in the long run), you may abuse service accounts - with 2 options for specialised policies to control access (see below).
NOTE: since 1.6, Role Based Access Control is strongly recommended! This answer does not cover RBAC setup.
EDIT: A great, but outdated (2017-2018), guide by Bitnami on user setup with RBAC is also available.
Steps to enable service account access are (depending on whether your cluster configuration includes RBAC or ABAC policies, these accounts may have full admin rights!):
EDIT: Here is a bash script to automate service account creation - see the steps below.
Create service account for user Alice
kubectl create sa alice
Get related secret
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
Get ca.crt from secret (using OSX base64 with -D flag for decode)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt
Get service account token from secret
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -D)
Get information from your kubectl config (current-context, server..)
# get current context
c=$(kubectl config current-context)
# get cluster name of context
name=$(kubectl config get-contexts $c | awk '{print $3}' | tail -n 1)
# get endpoint of current context
endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")
On a fresh machine, follow these steps (given the ca.crt and $endpoint information retrieved above):
Install kubectl
brew install kubectl
Set cluster (run in directory where ca.crt is stored)
kubectl config set-cluster cluster-staging \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=./ca.crt
Set user credentials
kubectl config set-credentials alice-staging --token=$user_token
Define the combination of alice user with the staging cluster
kubectl config set-context alice-staging \
--cluster=cluster-staging \
--user=alice-staging \
--namespace=alice
Switch current-context to alice-staging for the user
kubectl config use-context alice-staging
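At this point you can sanity-check the new context (this assumes the alice namespace exists and the service account is allowed to list pods in it):
kubectl get pods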
To control user access with policies (using ABAC), you need to create a policy file (for example):
{
"apiVersion": "abac.authorization.kubernetes.io/v1beta1",
"kind": "Policy",
"spec": {
"user": "system:serviceaccount:default:alice",
"namespace": "default",
"resource": "*",
"readonly": true
}
}
Provision this policy.json on every master node and add the --authorization-mode=ABAC --authorization-policy-file=/path/to/policy.json flags to the API servers.
This would give Alice (through her service account) read-only rights to all resources in the default namespace only.
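While this answer doesn't cover full RBAC setup, for orientation: on an RBAC cluster the equivalent read-only grant would be a Role plus RoleBinding instead of a policy file. A minimal sketch (apiGroups/resources trimmed for brevity):
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-only
  namespace: default
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: alice-read-only
  namespace: default
subjects:
- kind: ServiceAccount
  name: alice
  namespace: default
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io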
You say:
I need to enable other users to also administer.
But according to the documentation:
Normal users are assumed to be managed by an outside, independent service. An admin distributing private keys, a user store like Keystone or Google Accounts, even a file with a list of usernames and passwords. In this regard, Kubernetes does not have objects which represent normal user accounts. Regular users cannot be added to a cluster through an API call.
You have to use a third-party tool for this.
== Edit ==
One solution could be to manually create a user entry in the kubeconfig file. From the documentation:
# create kubeconfig entry
# (if TLS verification is not needed, replace --certificate-authority and
# --embed-certs with --insecure-skip-tls-verify=true)
$ kubectl config set-cluster $CLUSTER_NICK \
    --server=https://1.1.1.1 \
    --certificate-authority=/path/to/apiserver/ca_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config

# create user entry
# (use either a bearer token, generated on the kube master, OR a
# username/password pair - not both)
$ kubectl config set-credentials $USER_NICK \
    --token=$token \
    --username=$username \
    --password=$password \
    --client-certificate=/path/to/crt_file \
    --client-key=/path/to/key_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config

# create context entry
$ kubectl config set-context $CONTEXT_NAME \
    --cluster=$CLUSTER_NICK \
    --user=$USER_NICK \
    --kubeconfig=/path/to/standalone/.kube/config
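Then switch to it:
$ kubectl config use-context $CONTEXT_NAME --kubeconfig=/path/to/standalone/.kube/config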
The Bitnami guide works for me, even if you use minikube. Most important is that your cluster supports RBAC:
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/

Access Pod information using Master Public IP in Kubernetes

I can get pod information using http://localhost:8001/api/v1/pods from inside my cluster.
Is there any way to get pod information using http://master-public-ip:8001/api/v1/pods?
By default, the master only exposes HTTPS to the public internet, not HTTP. You should be able to hit https://admin:password@master-public-ip/api/v1/pods/, where password is the generated password for the admin user. This can be found either in the .kube/config file on your machine, or in the /srv/kubernetes/known_tokens.csv file on the master.
E.g. on the master VM:
$ cat /srv/kubernetes/known_tokens.csv
mYpASSWORD,admin,admin
unused,kubelet,kubelet
...
Or on your machine:
$ cat ~/.kube/config
...
- name: my-cluster
  user:
    client-certificate-data: ...
    client-key-data: ...
    password: mYpASSWORD
    username: admin
...
$ curl --insecure https://admin:mYpASSWORD@master-public-ip/api/v1/pods/
...
To avoid using --insecure (i.e. actually verify the server certificate that your master is presenting), you can use the --cacert flag to specify the cluster certificate authority from your .kube/config file.
$ cat ~/.kube/config
...
- cluster:
    certificate-authority-data: bIgLoNgBaSe64eNcOdEdStRiNg
    server: https://master-public-ip
  name: my-cluster
...
$ echo bIgLoNgBaSe64eNcOdEdStRiNg | base64 -d > ca.crt
$ curl --cacert ca.crt https://admin:mYpASSWORD@master-public-ip/api/v1/pods/
...
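Since the first column of known_tokens.csv is a bearer token, the same request also works with an Authorization header instead of credentials in the URL (using the sample values from above):
$ curl --cacert ca.crt -H "Authorization: Bearer mYpASSWORD" https://master-public-ip/api/v1/pods/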