I have a Kubernetes SealedSecret with encrypted data in it. How can I edit the SealedSecret the way I would edit a deployment with "kubectl edit deployment"?
I know "kubectl edit secret" works on normal secrets, but not on sealed secrets.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: my-secret
  namespace: test-ns
spec:
  encryptedData:
    password: AgCEP9MHaeVGelL3jSbnfUF47m+eR0Vv4ufzPyPXBvEjYcJluI0peLgilSqWUMnDLjp00UJq186nRYufEf7Bi1Jxm39t3nDbW13+wTSbblVzb9A2iKJI2VbxgTG/IDodNFxFWdKefZdZgVct2hTsuwlivIxpdxDZtcND4h+Cx8YFQMUpT5oO26oqISzRTh5Ewa6ehWtv6krfeFmbMQCF70eg0yioe6Op0YaQKloiFVInc1JK5KTR5iQYCeKb2R0ovKVa/lipbqjHCYSRhLR/3q5wz/ZuWz7g7ng6G9Q7o1pVv3udQYUvp2B6XvK1Nezc85wbhGgmuz5kcZUa36uF+eKMes6UdPcD7q58ndaj/0KWozdTAuk1OblV7mrUaK8Q45GIf+JqaBfzVt52INMT07P4MId/KB31sZDeE+OwEXhCDVTBAlxSRM0U9NjxDDb+mwUzxHNZHL1sY8M1YCoX+rr6n1+yW1HG42VHLCRzeBa2V31OFuQTNjoNxDEfUq+CSTRNDCmt8UvercSkgyM3mBa6JpHdkySllpqyEJDYKM1YvVRrjVvg1qGTF5dOCx6x3ROXnZtA3NBIafTu0+pHovVo+X7nUkl7hyupd0KKzBG+afgNpYQOxeuei5A+o++o92G5lexxk2v4bQt6ANYBxMlvT0LdBUW9e/L2y+TuNAHL23Xa/aTq1lagNBi9JTowX0lx0br2CqDbKg==
    username: AgArwZm3qh83Fpzles1r/PjTDKQ2/SZ482IKC84T72/kI4M29aG2VT4SCXcqbmtVDYuVUN0wTbsFYsnwY1DSRrL4oup2xRg6N34IxHjj0ZtF1q0YtBKIM/DPcF2bBVAYc9/vOI0L3+VVSF9r93XYEMUWX6hY9eHa8VUHBM/Y65Sj3Il7Pmx/qoEcZ+e9UJhqWEJPotz6W5OMh/Al/QPJZknwUulM4coZ3C0J4TmrBVexPturcRCimDEQnd9UitotnGDoNAp2O28ovhXoImNsJBhNK5LykesRxEfIp4UJOb3I0CpLdoz9khEcb2r31j+KTtxifLez7Rg3Pg7BGpR3EKC3INZWrR8S/aUm5u/dP12ELgW3nq4WbafRitrZcHhLFZkHma/Er8miFbuTXvpFcXE1g+BnG2vIs4kHSl2QcP32HPGKHJJt0KEd1dUJrXXTjS9eXHJ2KsA5DZk4TcFA5dPAG76ZdKo0GCIQwvNeT0Ao4ntqmeOiijAQgmhXdCtD2WVavXi54h0f8F2ue6b0mBFCgTGKZyypjbXznzB/MPAZxgIu+UWQzV1CczwKlitPy638s/9iSan2/u2rhKu2SP0JFMZ6pPnfO51nMpDHtCDGFc1unjsjM4ZpnNXtaQJJmXo7Hw0L4dW2/N3uxCfxNtmYuBxE1t4GCefSUCTIleDgmAbB00nKkja+ml9bidcxawlIgHnoq/XNCqy2R3PkEw==
  template:
    data: null
    metadata:
      creationTimestamp: null
      name: my-secret
      namespace: test-ns
You can update the existing SealedSecret by using the --merge-into option of the kubeseal CLI. Seal the new key/value into JSON and merge it into the existing SealedSecret file, like this:
$ echo -n bar | kubectl create secret generic mysecret --dry-run=client --from-file=foo=/dev/stdin -o json \
| kubeseal > mysealedsecret.json
$ echo -n baz | kubectl create secret generic mysecret --dry-run=client --from-file=bar=/dev/stdin -o json \
| kubeseal --merge-into mysealedsecret.json
Note that a SealedSecret is decrypted into a Secret with the same name and namespace, so for it to be unsealed and work like a normal Kubernetes Secret, the SealedSecret must live in the same namespace as the resulting Secret.
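Applied to the SealedSecret above, updating the password key would look roughly like this (a sketch, assuming kubeseal can reach your cluster's sealed-secrets controller and the existing SealedSecret is saved locally as mysealedsecret.json, which is a hypothetical file name):
$ echo -n newPassword | kubectl create secret generic my-secret --namespace test-ns --dry-run=client --from-file=password=/dev/stdin -o json \
  | kubeseal --merge-into mysealedsecret.json
$ kubectl apply -f mysealedsecret.json
Because the ciphertext is tied to the Secret's name and namespace by default, keep my-secret and test-ns identical to the existing SealedSecret when re-sealing.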
For more detailed information, refer to the official sealed-secrets GitHub page.
To learn more about the usage of SealedSecret, refer to this document.
When using Kubernetes .yml files, I can do the following:
$ cat configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  foo: ${FOO}
  bar: ${BAR}
  static: doesNotChange
$ export FOO=myFooVal
$ export BAR=myBarVal
$ cat configmap.yml | envsubst | kubectl apply -f -
This would replace ${FOO} and ${BAR} in the configmap.yml file before actually applying the file to the cluster.
How could I achieve the very same behavior with a Kubernetes Secret, which has its data values base64-encoded?
I would need to read all the keys in the data: field, decode the values, apply the environment variables, and encode them again.
A tool to decode and encode the data: values in place would be much appreciated.
It is actually possible to store the secret.yml with stringData instead of data, which allows keeping the file in plain text (SOPS encryption is still possible and encouraged):
$ cat secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: default
type: Opaque
stringData:
  dotenv: |
    DATABASE_URL="postgresql://test:test@localhost:5432/test?schema=public"
    API_PORT=${PORT}
    FOO=${FOO}
    BAR=${BAR}
$ export PORT=80
$ export FOO=myFooValue
$ export BAR=myBarValue
$ cat secret.yml | envsubst | kubectl apply -f -
A definite plus is that this not only allows creating the secret, but also updating it.
Just for documentation, here is the full call with SOPS:
$ sops --decrypt secret.enc.yml | envsubst | kubectl apply -f -
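If you do need to transform base64-encoded data: values in place, as originally asked, here is a minimal sketch assuming yq v4 (mikefarah) is available; it relies on yq's @base64d/@base64 encoders and its envsubst operator to decode, substitute, and re-encode each value. The file name secret-with-data.yml is hypothetical and stands for a Secret manifest that uses data: rather than stringData:.
$ export FOO=myFooVal BAR=myBarVal
# decode every value under .data, run envsubst on it, re-encode, then apply
$ yq eval '.data |= with_entries(.value |= (@base64d | envsubst | @base64))' secret-with-data.yml | kubectl apply -f -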
After watching a few videos on RBAC (role-based access control) on Kubernetes (of which this one was the most transparent for me), I followed the steps, however on k3s, not k8s as all the sources imply. From what I can gather, it is not working: the problem isn't with the actual role-binding process, but rather with the x509 user cert, which isn't acknowledged by the API service:
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
This also isn't documented in Rancher's wiki on security for k3s (while it is documented for their k8s implementation, and described for Rancher 2.x itself), so I'm not sure whether it's a problem with my implementation or a k3s <-> k8s difference.
$ kubectl version --short
Client Version: v1.20.5+k3s1
Server Version: v1.20.5+k3s1
To reproduce the process, my steps were as follows:
Get k3s CA certs
These were described to be under /etc/kubernetes/pki (k8s); however, based on this, for k3s they seem to be at /var/lib/rancher/k3s/server/tls/ (server-ca.crt & server-ca.key).
Gen user certs from ca certs
# generate user key
$ openssl genrsa -out user.key 2048
# generate signing request from ca
$ openssl req -new -key user.key -out user.csr -subj "/CN=user/O=rbac"
# generate user.crt from this
$ openssl x509 -req -in user.csr -CA server-ca.crt -CAkey server-ca.key -CAcreateserial -out user.crt -days 365
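As a sanity check before wiring the cert into a kubeconfig, standard openssl commands show what the API server will see (subject CN/O and issuer) and whether the cert chains back to the CA it was signed with:
# inspect subject, issuer and validity of the signed cert
$ openssl x509 -in user.crt -noout -subject -issuer -dates
# verify the cert against the CA used for signing
$ openssl verify -CAfile server-ca.crt user.crt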
... all good:
Creating kubeConfig file for user, based on the certs:
# Take user.crt and base64 encode to get encoded crt
cat user.crt | base64 -w0
# Take user.key and base64 encode to get encoded key
cat user.key | base64 -w0
Created config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <server-ca.crt base64-encoded>
    server: https://<k3s masterIP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: user
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate-data: <user.crt base64-encoded>
    client-key-data: <user.key base64-encoded>
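Equivalently, the same kubeconfig can be generated with kubectl config commands instead of hand-editing YAML (a sketch; file and object names follow the steps above):
$ kubectl config set-cluster home-pi4 --server=https://<k3s masterIP>:6443 \
    --certificate-authority=server-ca.crt --embed-certs=true --kubeconfig=userkubeconfig
$ kubectl config set-credentials user --client-certificate=user.crt --client-key=user.key \
    --embed-certs=true --kubeconfig=userkubeconfig
$ kubectl config set-context user-homepi4 --cluster=home-pi4 --user=user --namespace=rbac \
    --kubeconfig=userkubeconfig
$ kubectl config use-context user-homepi4 --kubeconfig=userkubeconfig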
Set up Role & RoleBinding (within the specified namespace 'rbac')
role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
roleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user
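One way to sanity-check the binding itself, independent of the certificate, is impersonation from the admin kubeconfig (standard kubectl feature):
# should answer "yes" if the Role/RoleBinding are effective for user 'user'
$ kubectl auth can-i list pods --as=user -n rbac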
After all of this, I get fun times of...
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
Any suggestions please?
Apparently this Stack Overflow question presented a solution to the problem, but following the GitHub thread it came down to more or less the same approach followed here (unless I'm missing something).
As we can find in the Kubernetes Certificate Signing Requests documentation:
A few steps are required in order to get a normal user to be able to authenticate and invoke an API.
I will create an example to illustrate how you can get a normal user who is able to authenticate and invoke an API (I will use the user john as an example).
First, create PKI private key and CSR:
# openssl genrsa -out john.key 2048
NOTE: CN is the name of the user and O is the group that this user will belong to
# openssl req -new -key john.key -out john.csr -subj "/CN=john/O=group1"
# ls
john.csr john.key
Then create a CertificateSigningRequest and submit it to a Kubernetes Cluster via kubectl.
# cat <<EOF | kubectl apply -f -
> apiVersion: certificates.k8s.io/v1
> kind: CertificateSigningRequest
> metadata:
> name: john
> spec:
> groups:
> - system:authenticated
> request: $(cat john.csr | base64 | tr -d '\n')
> signerName: kubernetes.io/kube-apiserver-client
> usages:
> - client auth
> EOF
certificatesigningrequest.certificates.k8s.io/john created
# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
john 39s kubernetes.io/kube-apiserver-client system:admin Pending
# kubectl certificate approve john
certificatesigningrequest.certificates.k8s.io/john approved
# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
john 52s kubernetes.io/kube-apiserver-client system:admin Approved,Issued
Export the issued certificate from the CertificateSigningRequest:
# kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
# ls
john.crt john.csr john.key
With the certificate created, we can define a Role and RoleBinding for this user to access Kubernetes cluster resources. I will use a Role and RoleBinding similar to yours.
# cat role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: john-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
# kubectl apply -f role.yml
role.rbac.authorization.k8s.io/john-role created
# cat rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: john-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: john-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: john
# kubectl apply -f rolebinding.yml
rolebinding.rbac.authorization.k8s.io/john-binding created
The last step is to add this user into the kubeconfig file (see: Add to kubeconfig)
# kubectl config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs=true
User "john" set.
# kubectl config set-context john --cluster=default --user=john
Context "john" created.
Finally, we can change the context to john and check if it works as expected.
# kubectl config use-context john
Switched to context "john".
# kubectl config current-context
john
# kubectl get pods
NAME READY STATUS RESTARTS AGE
web 1/1 Running 0 30m
# kubectl run web-2 --image=nginx
Error from server (Forbidden): pods is forbidden: User "john" cannot create resource "pods" in API group "" in the namespace "default"
As you can see, it works as expected (user john only has get and list permissions).
Thank you matt_j for the example/answer provided to my question. I marked it as the answer, as it was a direct answer to my question regarding RBAC via certificates. In addition to that, I'd also like to provide an example of RBAC via service accounts, as a variation (for those who prefer it for their specific use case).
Service account creation
//kubectl create serviceaccount name -n namespace
$ kubectl create serviceaccount udef -n rbac
This creates the service account and, automatically, a corresponding token secret (udef-token-lhvm8), which you can see in the YAML output.
Get token from created secret:
// kubectl describe secret secretName
$ kubectl describe secret udef-token-lhvm8
The secret will contain three items: (1) ca.crt, (2) namespace, (3) token
# ... other secret context
Data
====
ca.crt: x bytes
namespace: x bytes
token: xxxx token xxxx
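Alternatively, the token can be extracted and decoded in a single step with a standard jsonpath query (using the secret name from above):
$ kubectl get secret udef-token-lhvm8 -n rbac -o jsonpath='{.data.token}' | base64 -d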
Put token into config file
You can start by taking your 'admin' config file and writing it out to a new file:
// location of **k3s** kubeconfig
$ sudo cat /etc/rancher/k3s/k3s.yaml > /home/{userHomeFolder}/userKubeConfig
Under the users section, you can replace the certificate data with the token:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx root ca cert content xxx
    server: https://<host IP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: nametype
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: nametype
  user:
    token: xxxx token xxxx
The Role and RoleBinding manifests can be created as required, as previously shown (n.b. within the same namespace), in this case linking to the service account:
# role manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
---
# rolebinding manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- kind: ServiceAccount
  name: udef
  namespace: rbac
With this done, you will be able to test remotely:
// show pods -> will be allowed
$ kubectl get pods --kubeconfig userKubeConfig
..... valid response provided
// get namespaces (or other types of commands) -> should not be allowed
$ kubectl get namespaces --kubeconfig userKubeConfig
Error from server (Forbidden): namespaces is forbidden: User bla-bla
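The same permissions can also be confirmed from the admin side by impersonating the service account (standard kubectl impersonation syntax):
# expected to be allowed by the RoleBinding above
$ kubectl auth can-i get pods --as=system:serviceaccount:rbac:udef -n rbac
# expected to be denied, since only get/list on pods was granted
$ kubectl auth can-i list namespaces --as=system:serviceaccount:rbac:udef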
I created a secret of type service-account using the manifest below. The secret got created, but when I run kubectl get secrets the service-account secret is not listed. Where am I going wrong?
apiVersion: v1
kind: Secret
metadata:
  name: secret-sa-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token
data:
  # You can include additional key value pairs as you do with Opaque Secrets
  extra: YmFyCg==
kubectl create -f sa-secret.yaml
secret/secret-sa-sample created
It might have been created in the default namespace.
Specify the namespace explicitly using the -n $NS argument to kubectl.
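For example, to locate where the secret actually landed and then create it in the intended namespace (assuming cluster-wide read access; <namespace> is a placeholder):
# find the secret across all namespaces
$ kubectl get secrets --all-namespaces | grep secret-sa-sample
# create and list it in the intended namespace explicitly
$ kubectl create -f sa-secret.yaml -n <namespace>
$ kubectl get secrets -n <namespace>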
I would like to avoid keeping secrets in Git, as a best practice, and store them in AWS SSM instead.
Is there any way to get the value from AWS Systems Manager and use it to create a Kubernetes Secret?
I managed to create the secret by fetching the value from the AWS Parameter Store using the following script.
cat <<EOF | ./kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: istio-system
type: Opaque
data:
  passphrase: $(echo -n "`aws ssm get-parameter --name /dev/${env_name}/kubernetes/kiali_password --with-decryption --region=eu-west-2 --output text --query Parameter.Value`" | base64 -w0)
  username: $(echo -n "admin" | base64 -w0)
EOF
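To confirm the value survived the base64 round trip, the secret can be read back and decoded with a standard jsonpath query:
$ kubectl get secret kiali -n istio-system -o jsonpath='{.data.passphrase}' | base64 -d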
For sure, the twelve-factor methodology requires externalizing configuration outside the codebase.
For your question, there is a project that integrates AWS Secrets Manager to be used as the single source of truth for Secrets.
You just need to deploy the controller:
helm repo add secret-inject https://aws-samples.github.io/aws-secret-sidecar-injector/
helm repo update
helm install secret-inject secret-inject/secret-inject
Then annotate your deployment template with 2 annotations:
template:
  metadata:
    annotations:
      secrets.k8s.aws/sidecarInjectorWebhook: enabled
      secrets.k8s.aws/secret-arn: arn:aws:secretsmanager:us-east-1:123456789012:secret:database-password-hlRvvF
The other steps are explained here.
But I think I have highlighted the most important steps, which should clarify the approach.
You can use GoDaddy external secrets. Installing it creates a controller, and the controller syncs the AWS secrets at specific intervals. After creating the secrets in AWS SSM and installing GoDaddy external secrets, you have to create an ExternalSecret resource as follows:
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: cats-and-dogs
secretDescriptor:
  backendType: secretsManager
  data:
  - key: cats-and-dogs/mysql-password
    name: password
This will create a Kubernetes Secret for you. That secret can be exposed to your service as an environment variable or through a volume mount.
Use Kubernetes External Secrets. The solution below uses AWS Secrets Manager (not SSM Parameter Store), but it serves the purpose.
Deploy using Helm
$ helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/
$ helm install kubernetes-external-secrets external-secrets/kubernetes-external-secrets
Create a new secret with the required parameters in AWS Secrets Manager:
For example - create a secret with secret name as "dev/db-cred" with below values.
{"username":"user01","password":"pwd#123"}
Secret.YAML:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: my-kube-secret
  namespace: my-namespace
spec:
  backendType: secretsManager
  region: us-east-1
  dataFrom:
  - dev/db-cred
Refer to it in the Helm values file (or pod spec) as below:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: my-kube-secret
      key: password
I'm trying to retrieve the value for the key clientSecret from my Kubernetes response, but I am failing to find the correct Go template syntax.
I have tried commands like:
kubectl get secret client-secret -o yaml --namespace magic-test -o go-template --template="{{range .items}}{{range .data}}{{.clientSecret}} {{end}}{{end}}"
And other variations
This is the yaml output of what I am trying to retrieve from
kubectl get secret client-secret -n magic-test -o yaml
apiVersion: v1
data:
  clientSecret: NmQQuCNFiOWItsdfOTAyMCb00MjEwLWFiNGQtNTI4NDdiNWM5ZjMx
kind: Secret
metadata:
  creationTimestamp: 2019-05-31T14:03:44Z
  name: client-secret
  namespace: magic-test
  resourceVersion: "11544532074"
  selfLink: /api/v1/namespaces/magic-test/secrets/client-secret
  uid: e72acdsfbcc-83fsdac-1sdf1e9-9sdffaf-0050dsf56b7c1fa
type: Opaque
How can I retrieve the value for clientSecret?
The output is not a list of items but a single object (a map), so you can't iterate over it with range; simply index it by the keys you're interested in.
So simply use the template {{.data.clientSecret}}:
kubectl get secret client-secret --namespace magic-test -o go-template --template="{{.data.clientSecret}}"