kube-apiserver not coming up after adding --admission-control-config-file flag - kubernetes

root@ubuntu151:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.10", GitCommit:"62876fc6d93e891aa7fbe19771e6a6c03773b0f7", GitTreeState:"clean", BuildDate:"2020-10-15T01:43:56Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
My admission webhooks require authentication, so I'm restarting the apiserver with the location of the admission control configuration file specified via the --admission-control-config-file flag, like so:
root@ubuntu151:~# vi /etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - --admission-control-config-file=/var/lib/kubernetes/kube-AdmissionConfiguration.yaml
...
root@ubuntu151:~# vi /var/lib/kubernetes/kube-AdmissionConfiguration.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: /var/lib/kubernetes/kube-config.yaml
- name: MutatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: /var/lib/kubernetes/kube-config.yaml
/var/lib/kubernetes/kube-config.yaml is a copy of my ~/.kube/config.
Now my kube-apiserver is not coming up.
Please help!
Thanks in advance!
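A note for anyone hitting the same symptom: kube-apiserver runs as a static pod, so the path passed to --admission-control-config-file (and the kubeConfigFile it references) must exist inside the container, not just on the host. A minimal hostPath mount sketch for the manifest above, assuming the /var/lib/kubernetes path taken from the flag:
spec:
  containers:
  - command:
    - kube-apiserver
    - --admission-control-config-file=/var/lib/kubernetes/kube-AdmissionConfiguration.yaml
    volumeMounts:
    - name: admission-config        # sketch only: volume name is illustrative
      mountPath: /var/lib/kubernetes
      readOnly: true
  volumes:
  - name: admission-config
    hostPath:
      path: /var/lib/kubernetes
      type: DirectoryOrCreate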

Related

no matches for kind "Profile" in version "kubeflow.org/v1beta1"

I installed Kubeflow and tried manual profile creation following the guide here, but got this output:
error: unable to recognize "profile.yaml": no matches for kind "Profile" in version "kubeflow.org/v1beta1"
How can I solve it?
Your valuable help is needed.
My resource is profile.yaml:
apiVersion: kubeflow.org/v1beta1
kind: Profile
metadata:
  name: tmp_namespace
spec:
  owner:
    kind: User
    name: example_01@gmail.com
  resourceQuotaSpec:
    hard:
      cpu: "2"
      memory: 2Gi
      requests.nvidia.com/gpu: "1"
      persistentvolumeclaims: "1"
      requests.storage: "5Gi"
User information in dex:
- email: exam_01@gmail.com
  hash: $2a$12$lRDeywzDl4ds0oRR.erqt.b5fmNpvJb0jdZXE0rMNYdmbfseTzxNW
  userID: "example"
  username: example
Of course, I did restart dex:
$ kubectl rollout restart deployment dex -n auth
$ kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:28:56Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:27:51Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
I've found a way.
If you see the message no matches for kind "Profile" in version "kubeflow.org/v1beta1", you may not have done the two necessary installs.
Go to kubeflow/manifests and follow the commands there to install Profiles + KFAM and User Namespace.
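For reference, a sketch of those two installs from the kubeflow/manifests README (exact paths and branch names vary between releases, so treat these as illustrative):
# Clone the manifests repo
git clone https://github.com/kubeflow/manifests.git
cd manifests
# Profiles + KFAM: this registers the Profile CRD that the error complains about
kustomize build apps/profiles/upstream/overlays/kubeflow | kubectl apply -f -
# User Namespace
kustomize build common/user-namespace/base | kubectl apply -f -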

How to update api versions list in Kubernetes

I am trying to use the "autoscaling/v2beta2" apiVersion in my configuration, following this tutorial. I am on Google Kubernetes Engine.
However I get this error:
error: unable to recognize "backend-hpa.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2beta2"
When I list the available api-versions:
$ kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
certmanager.k8s.io/v1alpha1
cloud.google.com/v1beta1
coordination.k8s.io/v1beta1
custom.metrics.k8s.io/v1beta1
extensions/v1beta1
external.metrics.k8s.io/v1beta1
internal.autoscaling.k8s.io/v1alpha1
metrics.k8s.io/v1beta1
networking.gke.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scalingpolicy.kope.io/v1alpha1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
So indeed I am missing autoscaling/v2beta2.
Then I check my kubernetes version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.6", GitCommit:"abdda3f9fefa29172298a2e42f5102e777a8ec25", GitTreeState:"clean", BuildDate:"2019-05-08T13:53:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.6-gke.13", GitCommit:"fcbc1d20b6bca1936c0317743055ac75aef608ce", GitTreeState:"clean", BuildDate:"2019-06-19T20:50:07Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
So it looks like I have version 1.13.6, and autoscaling/v2beta2 has supposedly been available since 1.12.
So why is it not available for me?
Unfortunately, the HPA autoscaling/v2beta2 API is not available on GKE yet. It can be freely used with kubeadm and Minikube clusters.
There is already an open issue on the Google issue tracker: https://issuetracker.google.com/135624588
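Until v2beta2 lands, the same HPA can usually be expressed with autoscaling/v2beta1, which does appear in the api-versions list above. A minimal CPU-based sketch (the deployment name is a placeholder):
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend                    # placeholder
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50   # v2beta1 spelling; v2beta2 uses target.averageUtilization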

{ambassador ingress} Not able to use canary and add_request_headers in the same Mapping

I want to pass a few custom headers to a canary service. When I add both mappings to the template, Ambassador disregards the weight, adds the header to 100% of the traffic, and routes it all to the canary service.
Below is my ambassador service config
getambassador.io/config: |
  ---
  apiVersion: ambassador/v1
  kind: Mapping
  name: flag_off_mapping
  prefix: /web-app/
  service: web-service-flag
  weight: 99
  ---
  apiVersion: ambassador/v1
  kind: Mapping
  name: flag_on_mapping
  prefix: /web-app/
  add_request_headers:
    x-halfbakedfeature: enabled
  service: web-service-flag
  weight: 1
I expect 99% of the traffic to hit the service without any additional headers and 1% of the traffic to hit the service with x-halfbakedfeature: enabled header added to the request object.
Ambassador: 0.50.3
Kubernetes environment [AWS L7 ELB]
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Apologies for X-posting in Github and SO.
Please take a look here.
As a workaround, could you consider:
"Make another service pointing to the same canary instances, with an Ambassador annotation containing the same prefix and the headers required."
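A sketch of that workaround, assuming the canary pods carry an app: web-app-canary label (both the label and the second service name are made up here):
kind: Service
apiVersion: v1
metadata:
  name: web-service-canary
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: flag_on_mapping
      prefix: /web-app/
      add_request_headers:
        x-halfbakedfeature: enabled
      service: web-service-canary
      weight: 1
spec:
  selector:
    app: web-app-canary
  ports:
  - protocol: TCP
    port: 80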

Kubernetes RunAsUser is forbidden

When I try to create a pod with a non-root fsGroup (here 2000):
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: true
I hit this error:
Error from server (Forbidden): error when creating "test.yml": pods "security-context-demo" is forbidden: pod.Spec.SecurityContext.RunAsUser is forbidden
Version
root@ubuntuguest:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Can anyone help me with how to set up a ClusterRoleBinding in the cluster?
If the issue is indeed because of RBAC permissions, then you can try creating a ClusterRoleBinding with a cluster role, as explained here.
Instead of the last step in that post (using the authentication token to log in to the dashboard), you'll have to use that token and the config in your kubectl client when creating the pod.
For more info on the use of contexts, clusters, and users, visit here.
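For illustration, a minimal ClusterRoleBinding of that shape (the binding and service-account names are hypothetical; prefer a narrower role than cluster-admin where possible):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-creator-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: default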
You need to disable the SecurityContextDeny admission plugin while setting up the Kube API server.
On the master node:
ps -ef | grep kube-apiserver
and check the enabled plugins:
--enable-admission-plugins=LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,DenyEscalatingExec
Ref: SecurityContextDeny
cd /etc/kubernetes
cp apiserver.conf apiserver.conf.bak
vim apiserver.conf
Find the SecurityContextDeny keyword, delete it, and save with :wq.
systemctl restart kube-apiserver
That fixed it.
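On kubeadm-based clusters there is no kube-apiserver systemd unit; the server runs as a static pod, so the equivalent edit (a sketch; the plugin list is illustrative) goes in the manifest, and the kubelet restarts the pod on save:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - kube-apiserver
    # remove SecurityContextDeny from this list if it appears
    - --enable-admission-plugins=NodeRestriction,ServiceAccount,LimitRanger
...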

Why won't kubernetes v1.5 recognize service.spec.loadBalancerIp?

I'm running Kubernetes v1.5 (API reference here). The field service.spec.loadBalancerIp should exist, but I keep getting the following error when I attempt to set it:
error: error validating ".../service.yaml": error validating data: found invalid field loadBalancerIp for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: some-service
spec:
  type: LoadBalancer
  loadBalancerIp: xx.xx.xx.xx
  selector:
    deployment: some-deployment
  ports:
  - protocol: TCP
    port: 80
kubectl version output:
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
I'm running my cluster on gke.
Any thoughts?
You have a typo in your spec. It should be loadBalancerIP, not loadBalancerIp. Note the uppercase "P".
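For completeness, the corrected lines:
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx   # uppercase "IP"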