Kubernetes deployment using YAML file - kubernetes

Can I define the DeploymentConfig kind in a Kubernetes YAML file where the API version is of OpenStack?
If yes, how?

If I understand your question in terms of Red Hat OpenStack: you will need OpenShift installed on top of Red Hat OpenStack, and the API version will be:
"apiVersion": "apps.openshift.io/v1",
OpenShift API reference
https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps.openshift.io/v1.DeploymentConfig.html
HTTP REQUEST
POST /apis/apps.openshift.io/v1/deploymentconfigs HTTP/1.1
Authorization: Bearer $TOKEN
Accept: application/json
Connection: close
Content-Type: application/json

{
  "kind": "DeploymentConfig",
  "apiVersion": "apps.openshift.io/v1",
  ...
}
CURL REQUEST
$ curl -k \
    -X POST \
    -d @- \
    -H "Authorization: Bearer $TOKEN" \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    https://$ENDPOINT/apis/apps.openshift.io/v1/deploymentconfigs <<'EOF'
{
  "kind": "DeploymentConfig",
  "apiVersion": "apps.openshift.io/v1",
  ...
}
EOF
For reference, below is the Red Hat OpenShift documentation link for DeploymentConfig creation:
https://docs.openshift.com/container-platform/4.2/applications/deployments/what-deployments-are.html#deployments-and-deploymentconfigs_what-deployments-are
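For completeness, a minimal DeploymentConfig manifest for that API group looks roughly like this (a sketch based on the OpenShift docs linked above; the name, labels, and image are placeholders):

```yaml
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-app            # placeholder name
spec:
  replicas: 1
  selector:                    # DeploymentConfig uses a plain label map, not matchLabels
    app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:latest   # placeholder image
```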


Prometheus datasource: client_error: client error: 403

Hi, I am trying to add the built-in OpenShift (v4.8) Prometheus data source to a local Grafana server. I have configured basic auth with a username and password, and for now I have also enabled skip TLS verify. Still, I'm getting this error.
Prometheus URL = https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com
This is the Grafana log:
logger=tsdb.prometheus t=2022-04-12T17:35:23.47+0530 lvl=eror msg="Instant query failed" query=1+1 err="client_error: client error: 403"
logger=context t=2022-04-12T17:35:23.47+0530 lvl=info msg="Request Completed" method=POST path=/api/ds/query status=400 remote_addr=10.100.95.27 time_ms=36 size=65 referer=https://grafana.xxxx.xxxx.com/datasources/edit/6TjZwT87k
You cannot authenticate to the OpenShift Prometheus instance using basic authentication. You need to authenticate using a bearer token, e.g. one obtained from oc whoami -t:
curl -H "Authorization: Bearer $(oc whoami -t)" -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/
Or from a ServiceAccount with appropriate privileges:
secret=$(oc -n openshift-monitoring get sa prometheus-k8s -o jsonpath='{.secrets[1].name}')
token=$(oc -n openshift-monitoring get secret $secret -o jsonpath='{.data.token}' | base64 -d)
curl -H "Authorization: Bearer $token" -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/
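Once the token works with curl, Grafana can be told to send it on every request. A sketch of the data source provisioning YAML, assuming Grafana's header-injection provisioning fields (the data source name and token value are placeholders):

```yaml
apiVersion: 1
datasources:
  - name: OpenShift Prometheus          # placeholder name
    type: prometheus
    access: proxy
    url: https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com
    jsonData:
      tlsSkipVerify: true               # matches the "skip tls verify" setting above
      httpHeaderName1: Authorization
    secureJsonData:
      httpHeaderValue1: "Bearer <token from oc whoami -t>"
```

Note that a token from oc whoami -t expires, so for a permanent data source you would use a ServiceAccount token as shown above.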

Use the Kubernetes REST API without kubectl

You can simply interact with K8s using its REST API. For example, to get pods:
curl http://IPADDR/api/v1/pods
However, I can't find any example of authentication based only on curl or REST. All the examples show the usage of kubectl as a proxy or as a way to get credentials.
If I already own the .kubeconfig, and nothing else, is there any way to send the HTTP requests directly (e.g. with a token) without using kubectl?
The kubeconfig file you download when you first install the cluster includes a client certificate and key. For example:
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://api.cluster1.ocp.virt:6443
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: admin
  name: admin
current-context: admin
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: ...
    client-key-data: ...
If you extract the client-certificate-data and client-key-data to files, you can use them to authenticate with curl. To extract the data:
$ yq -r '.users[0].user."client-certificate-data"' kubeconfig | base64 -d > cert
$ yq -r '.users[0].user."client-key-data"' kubeconfig | base64 -d > key
And then using curl:
$ curl -k --cert cert --key key \
'https://api.cluster1.ocp.virt:6443/api/v1/namespaces/default/pods?limit=500'
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "22022"
  },
  "items": []
}
Alternately, if your .kubeconfig has tokens in it, like this:
[...]
users:
- name: your_username/api-clustername-domain:6443
  user:
    token: sha256~...
Then you can use that token as a bearer token:
$ curl -k https://api.mycluster.mydomain:6443/ -H 'Authorization: Bearer sha256~...'
...but note that those tokens typically expire after some time, while the client certificates keep working until the certificate itself expires or is revoked.
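As a sketch, such a token can also be pulled out of the kubeconfig without any YAML tooling (fragile if the file has several users; the heredoc below stands in for your real kubeconfig, and the token value is made up):

```shell
# fake kubeconfig fragment standing in for the real file
cat > kubeconfig.sample <<'EOF'
users:
- name: your_username/api-clustername-domain:6443
  user:
    token: sha256~FAKETOKEN
EOF

# grab the first token: field (good enough for single-user kubeconfigs)
token=$(sed -n 's/^ *token: //p' kubeconfig.sample | head -n1)
echo "$token"

# then, against a real cluster:
# curl -k https://api.mycluster.mydomain:6443/ -H "Authorization: Bearer $token"
```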

Accessing introspection endpoint failed: connection refused (Kong, Keycloak, OIDC)

I am getting a connection refused response from the server when I try to hit the endpoint via the proxy I have created on Kong.
The curl command I am using to hit the proxy:
curl --location --request GET 'http://localhost:8000/listProducts/' \
  --header 'Accept: application/json' \
  --header 'Authorization: Bearer token'
To get the token, I use the following curl:
curl --location --request POST 'http://localhost:8180/auth/realms/experimental/protocol/openid-connect/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'username=username' \
  --data-urlencode 'password=password' \
  --data-urlencode 'grant_type=password' \
  --data-urlencode 'client_id=myapp'
The client protocol is openid-connect and the access type is public.
The config I have done in the OIDC plugin:
consumer:
response type:
code:
introspection endpoint: http://{MyHostIP}:8180/auth/realms/experimental/protocol/openid-connect/token/introspect
filters:
bearer only: yes
ssl verify: no
session secret:
introspection endpoint auth method:
realm: experimental
redirect after logout uri: /
scope: openid
token endpoint auth method:
client_secret_post:
logout path: /logout
client id: kong
discovery: https://{MyHostIP}:8180/auth/realms/master/.well-known/openid-configuration
client secret: myClientSecret
recovery page path:
redirect uri path:
Thanks in advance
How did you deploy Keycloak? I see two points:
Your discovery endpoint is https
Your introspection endpoint is just http
Also, if you are using Docker to deploy Kong + Keycloak, go to your hosts file and add a new line mapping your local IP to a keycloak-host entry.
sudo nano /etc/hosts
Then add
your.ip.address keycloak-host
Update the docker-compose file
kong:
  build:
    context: kong/
    dockerfile: Dockerfile
  extra_hosts:
    - "keycloak-host:your.ip.address"
Now configure your introspection and discovery URLs using keycloak-host.
Ex: http://keycloak-host:8180/auth/realms/master/protocol/openid-connect/token/introspect
If you need a functional example of Kong + OpenID + Keycloak, check this repo.

Kubernetes API: list pods with a label

I have a namespace with a few deployments. One of the deployments has a specific label (my-label=yes). I want to get all the pods with this label.
This is how it's done with kubectl:
kdev get pods -l my-label=yes
It's working.
Now I want to do it with the Kubernetes API. This is the closest point I got:
curl https://kubernetes.default.svc/api/v1/namespaces/XXX/pods --silent --header "Authorization: Bearer $TOKEN" --insecure
This command gets all the pods in the namespace. I want to filter the results down to the pods with the requested label. How do I do that?
An even broader question: is it possible to "translate" a kubectl command into a REST API call?
Is it possible to "translate" a kubectl command into a REST API call?
When you execute any command using kubectl, it is internally translated into a REST call with a JSON payload before being sent to the Kubernetes API server. An easy way to inspect this is to run the command with increased verbosity:
kubectl get pods -n kube-system -l=tier=control-plane --v=8
I0730 15:21:01.907211 5320 loader.go:375] Config loaded from file: /Users/arghyasadhu/.kube/config
I0730 15:21:01.912119 5320 round_trippers.go:420] GET https://xx.xx.xxx.xxx:6443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane&limit=500
I0730 15:21:01.912135 5320 round_trippers.go:427] Request Headers:
I0730 15:21:01.912139 5320 round_trippers.go:431] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I0730 15:21:01.912143 5320 round_trippers.go:431] User-Agent: kubectl/v1.18.0 (darwin/amd64) kubernetes/9e99141
I0730 15:21:02.071778 5320 round_trippers.go:446] Response Status: 200 OK in 159 milliseconds
I0730 15:21:02.071842 5320 round_trippers.go:449] Response Headers:
I0730 15:21:02.071858 5320 round_trippers.go:452] Cache-Control: no-cache, private
I0730 15:21:02.071865 5320 round_trippers.go:452] Content-Type: application/json
I0730 15:21:02.071870 5320 round_trippers.go:452] Date: Thu, 30 Jul 2020 09:51:02 GMT
I0730 15:21:02.114281 5320 request.go:1068] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"selfLink":"/api/v1/namespaces/kube-system/pods","resourceVersion":"1150005"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"integer","format":"","description":"The number of times the containers in this pod have been restarted.","priority":0},{"name":"Age","type":"strin [truncated 16503 chars]
Found it.
curl https://kubernetes.default.svc/api/v1/namespaces/XXX/pods?labelSelector=my-label%3Dyes --silent --header "Authorization: Bearer $TOKEN" --insecure
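The query string above can also be assembled by hand from the kubectl selector. A small sketch (only '=' and ',' are escaped, which covers simple equality selectors; XXX is the namespace placeholder from the question):

```shell
# build the REST URL equivalent of `kubectl get pods -l my-label=yes`
selector='my-label=yes'
encoded=$(printf '%s' "$selector" | sed -e 's/=/%3D/g' -e 's/,/%2C/g')
url="https://kubernetes.default.svc/api/v1/namespaces/XXX/pods?labelSelector=${encoded}"
echo "$url"
```

Set-based selectors (e.g. "my-label in (yes,no)") contain spaces and parentheses too, so for anything beyond equality selectors it is safer to let curl encode the parameter with -G --data-urlencode.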

How to set secondary key for kubernetes ingress basic-auth

I want to have an ingress for all my services in k8s and give the ingress basic auth. But for auth rotation, I want to support a secondary credential per user, so the endpoint can still be reached while they regenerate the primary key.
I can currently follow this guide to set up an ingress with a single basic auth credential.
Adapting the guide, you can put multiple usernames and passwords in the auth file you're using to generate the basic-auth secret. Specifically, if you run the htpasswd command without the -c flag, e.g. htpasswd <filename> <username>, it will add an entry to the file rather than creating a new file from scratch:
$ htpasswd -c auth foo
New password: <bar>
Re-type new password: <bar>
Adding password for user foo
$ cat auth
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
$ htpasswd auth user2
New password: <pass2>
Re-type new password: <pass2>
Adding password for user user2
$ cat auth
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
user2:$apr1$.FsOzlqA$eFxym7flDnoDtymRLraA2/
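If the htpasswd binary isn't available, openssl can emit a compatible apr1 (MD5) entry to append to the same file. A sketch, with the salt fixed only to make the output reproducible (username and password are examples; normally you would let the salt be random):

```shell
# generate an htpasswd-style apr1 entry without htpasswd itself
user=user2
pass=pass2
hash=$(openssl passwd -apr1 -salt isCec65Z "$pass")
echo "${user}:${hash}" >> auth
cat auth
```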
If you've already created the secret in the first place via the given command:
$ kubectl create secret generic basic-auth --from-file=auth
You can then update the secret with this trick:
$ kubectl create secret generic basic-auth --from-file=auth \
    --dry-run -o yaml | kubectl apply -f -
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
secret/basic-auth configured
You can confirm setting the secret worked:
$ kubectl get secret basic-auth -ojsonpath={.data.auth} | base64 -D
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
user2:$apr1$.FsOzlqA$eFxym7flDnoDtymRLraA2/
Finally, you can test basic auth with both usernames and passwords is working:
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-s -w"%{http_code}" -o /dev/null
401
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'wronguser:wrongpass' \
-s -w"%{http_code}" -o /dev/null
401
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'foo:bar' \
-s -w"%{http_code}" -o /dev/null
200
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'user2:pass2' \
-s -w"%{http_code}" -o /dev/null
200
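For completeness, the secret above is attached to the ingress through annotations along these lines (the nginx-ingress basic-auth annotations described in the linked guide; the secret name matches the one created above, the realm text is arbitrary):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
```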