How do you enable Feature Gates in K8s?

I need to enable a few Feature Gates on my bare-metal K8s cluster (v1.13). I've tried using the kubelet flag --config to enable them, as kubelet --feature-gates <feature gate> throws an error saying that the flag has been deprecated.
I've created a .yml file with the following configuration:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  VolumeSnapshotDataSource: true
and after running kubelet --config with that file, I got the following error:
I0119 21:59:52.987945 29087 server.go:417] Version: v1.14.2
I0119 21:59:52.988165 29087 plugins.go:103] No cloud provider specified.
W0119 21:59:52.988188 29087 server.go:556] standalone mode, no API client
F0119 21:59:52.988203 29087 server.go:265] failed to run Kubelet: no client provided, cannot use webhook authentication
Does anyone know what could be happening and how to fix this problem?

You don't apply --feature-gates to the kubelet for this; you apply it to the API server. Depending on how you installed Kubernetes on bare metal, you would either need to stop the API server, edit the command you start it with, and add the following parameter:
--feature-gates=VolumeSnapshotDataSource=true
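If the API server runs directly on the host as a systemd service, that means appending the flag to the unit's ExecStart and restarting it. A rough sketch, assuming a unit named kube-apiserver at a typical path (adjust for your install):
sudo vi /etc/systemd/system/kube-apiserver.service   # append --feature-gates=VolumeSnapshotDataSource=true to the ExecStart line
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver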
Or, if it runs as a pod, find the manifest, edit it, and re-deploy it (this should happen automatically once you finish editing). It should look like this:
...
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.132.0.48
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=VolumeSnapshotDataSource=true
    image: k8s.gcr.io/kube-apiserver:v1.16.4
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.132.0.48
        path: /healthz
        port: 6443
        scheme: HTTPS
...
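With a kubeadm-based install, that manifest normally lives at /etc/kubernetes/manifests/kube-apiserver.yaml, and the kubelet recreates the pod as soon as the file is saved. A quick way to confirm the flag took effect (assuming the kubeadm layout and the labels shown above):
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep feature-gates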

VolumeSnapshotDataSource is beta and enabled by default as of Kubernetes 1.17; it needs to be enabled on the API server if the Kubernetes version is older than 1.17.
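For context, this gate is what allows a PersistentVolumeClaim to use a VolumeSnapshot as its data source. A minimal sketch of such a claim, with hypothetical names:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc                  # hypothetical
spec:
  storageClassName: csi-storageclass  # hypothetical CSI storage class
  dataSource:
    name: my-snapshot                 # hypothetical existing VolumeSnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi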


'No healthy upstream' error when Envoy proxy is set up manually

I have a very simple environment with a client, a server and an Envoy proxy, each running in a separate Docker container, communicating over HTTP.
When I set it up using docker-compose, it works.
However, when I set up the containers and the network manually (with docker network create, setting the aliases, etc.), I get a "503 - no healthy upstream" message when the client tries to send requests to the server. A curl to the network alias works from the envoy container. Any idea what the difference is between using docker-compose and setting up the network and containers manually?
envoy.yaml:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config: {}
  clusters:
  - name: service
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: round_robin
    load_assignment:
      cluster_name: service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: server-stub
                port_value: 5000
admin:
  access_log_path: "/tmp/envoy.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
The docker-compose file that worked (but I don't want to use docker-compose, I am using scripts that set up each container separately):
version: "3.8"
services:
envoy:
image: envoyproxy/envoy:v1.16-latest
ports:
- "10000:10000"
- "9901:9901"
volumes:
- ./envoy.yaml:/etc/envoy/envoy.yaml
server-stub:
build:
context: .
dockerfile: Dockerfile
ports:
- "5000:5000"
I can't reproduce this. It works fine with your docker-compose file, and it works fine manually. Here are the manual steps I took:
$ docker network create test-net
$ docker container run --network test-net --name envoy -p 10000:10000 -p 9901:9901 --mount type=bind,src=/home/john/projects/tester/envoy.yaml,dst=/etc/envoy/envoy.yaml envoyproxy/envoy:v1.16-latest
$ docker run --network test-net --name server-stub johnharris85/simple-hostname-reporter:3
My sample app also listens on port 5000. I used your exact envoy config. Using Docker 20.10.8 if relevant.
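If the manual setup still returns 503, the Envoy admin interface is handy for checking whether the server-stub alias actually resolved into cluster endpoints. Ports and names are taken from the config above; the getent call assumes the usual shell tools are present in the envoy image:
curl -s http://localhost:9901/clusters | grep service
docker exec envoy getent hosts server-stub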

Unable to create namespace quota using helm

I am facing the following issue related to specifying a namespace quota:
the namespace quota specified is not getting created via Helm.
My file namespacequota.yaml is shown below:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespacequota
  namespace: {{ .Release.Namespace }}
spec:
  hard:
    requests.cpu: "3"
    requests.memory: 10Gi
    limits.cpu: "6"
    limits.memory: 12Gi
The command used for installation is given below:
helm install privachart3 . -n test-1
However the resourcequota is not getting created.
kubectl get resourcequota -n test-1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NAME CREATED AT
gke-resource-quotas 2021-01-20T06:14:16Z
I can create the resource quota using the kubectl command below:
kubectl apply -f namespacequota.yaml --namespace=test-1
The only change required in the file above is commenting out line 5, the one with {{ .Release.Namespace }}.
kubectl get resourcequota -n test-1
NAME CREATED AT
gke-resource-quotas 2021-01-20T06:14:16Z
namespacequota 2021-01-23T07:30:27Z
However, in this case, when I try to install the chart, the PVC is created, but the Pod is not.
Capacity is not an issue, as I am just trying to create a single MariaDB pod using a Deployment.
The command used for the install is given below:
helm install chart3 . -n test-1
The output observed is given below:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NAME: chart3
LAST DEPLOYED: Sat Jan 23 08:38:50 2021
NAMESPACE: test-1
STATUS: deployed
REVISION: 1
TEST SUITE: None
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
I got the answer from another Git forum.
When a namespace quota is set, we need to explicitly set the Pod's resource requests and limits.
In my case I just needed to specify the resources under the image:
- image: wordpress:4.8-apache
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"
After that I am able to observe the Pods as well:
[george#dis ]$ kubectl get resourcequota -n geo-test
NAME AGE REQUEST LIMIT
gke-resource-quotas 31h count/ingresses.extensions: 0/100, count/ingresses.networking.k8s.io: 0/100, count/jobs.batch: 0/5k, pods: 2/1500, services: 2/500
namespace-quota 7s requests.cpu: 500m/1, requests.memory: 128Mi/1Gi limits.cpu: 1/3, limits.memory: 256Mi/3Gi
[george#dis ]$
[george#dis ]$ kubectl get pod -n geo-test
NAME READY STATUS RESTARTS AGE
wordpress-7687695f98-w7m5b 1/1 Running 0 32s
wordpress-mysql-7ff55f869d-2w6zs 1/1 Running 0 32s
[george#dis ]$
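Worth noting as an alternative: instead of adding requests and limits to every Pod spec, a LimitRange in the namespace can inject defaults so Pods without explicit resources are still admitted under the quota. A minimal sketch, with a hypothetical name:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits          # hypothetical
  namespace: test-1
spec:
  limits:
  - type: Container
    defaultRequest:             # applied when a container sets no requests
      cpu: 250m
      memory: 64Mi
    default:                    # applied when a container sets no limits
      cpu: 500m
      memory: 128Mi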

How to enforce authentication and authorization modules on insecure kubernetes api server port

I have enabled the API server's insecure port on the private subnet with the following flags:
- --insecure-port=8080
- --insecure-bind-address=0.0.0.0
As a result, it bypasses the authentication and authorization modules, which is well documented in https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/
Adding the flag --anonymous-auth=false doesn't solve the problem either.
Here is the complete kube-apiserver command:
- kube-apiserver
- --advertise-address=192.0.3.6
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --anonymous-auth=false
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --insecure-port=8080
- --insecure-bind-address=0.0.0.0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
Security-wise, I know the insecure port shouldn't be used for communication, but this is a completely isolated network and I'm trying to enable the authentication and authorization modules on the insecure port.
By default, the insecure port bypasses the authentication and authorization modules, as its primary task is to bootstrap and test the server, not to act as the main port.
The authentication and authorization modules can only be enforced on the secure port.
Wrapping up, the port you want to secure is not meant to have these modules enabled.
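In practice that usually means disabling the insecure listener entirely and pointing clients at the secure port with real credentials. A rough sketch, assuming a kubeadm-style admin kubeconfig exists on the control-plane node:
- --insecure-port=0
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes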

MongoDB statefulset updating

I am in the process of deploying a mongodb ReplicaSet on GKE.
My deployment works; however, I would like to enable auth on Mongo.
I have connected to my pod
kubectl exec -it {pod_name} mongo admin
I created an admin user and also a user for my database. I was then thinking I could update mongo-statefulset.yaml with the --auth flag and apply the updated YAML.
Something like
.....
spec:
  terminationGracePeriodSeconds: 10
  containers:
  - name: mongod-container
    image: mongo:3.6
    command:
    - mongod
    - "--bind_ip"
    - "0.0.0.0"
    - "--replSet"
    - rs0
    - "--smallfiles"
    - "--noprealloc"
    - "--auth"
    ports:
    - containerPort: 27017
    volumeMounts:
    - name: mongo-persistent-storage
      mountPath: /data/db
.....
But running kubectl apply -f mongo-statefulset.yaml just produces
service/mongo-svc unchanged
statefulset.apps/mongo configured
Should I restart my pods for this to now take effect?
Try to do a rolling update:
The RollingUpdate update strategy will update all Pods in a StatefulSet, in reverse ordinal order, while respecting the StatefulSet guarantees.
Patch the StatefulSet to apply the RollingUpdate update strategy:
$ kubectl patch statefulset your_statefulset_name -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
Don't forget to add env variables with the credentials you created to your pod, like:
env:
- name: MONGODB_USERNAME
  value: admin
- name: MONGODB_PASSWORD
  value: password
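If you would rather not keep the credentials in plain text in the manifest, the same variables can come from a Secret instead. A sketch, assuming a Secret named mongo-credentials with username and password keys has been created separately:
env:
- name: MONGODB_USERNAME
  valueFrom:
    secretKeyRef:
      name: mongo-credentials   # hypothetical Secret, created separately
      key: username
- name: MONGODB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mongo-credentials
      key: password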
I hope it helps.
You could try
kubectl delete -f mongo-statefulset.yaml && kubectl apply -f mongo-statefulset.yaml
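If your kubectl is 1.15 or newer, a less disruptive alternative to delete/apply is to apply the changed manifest and then trigger a rolling restart of the StatefulSet (name taken from the kubectl output above):
kubectl apply -f mongo-statefulset.yaml
kubectl rollout restart statefulset mongo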

How to configure prometheus with alertmanager?

docker-compose.yml:
This is the docker-compose file that runs the Prometheus, node-exporter and Alertmanager services. All the services are running fine; even the health status in the Targets menu of Prometheus shows OK.
version: '2'
services:
  prometheus:
    image: prom/prometheus
    privileged: true
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./alertmanger/alert.rules:/alert.rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'
  node-exporter:
    image: prom/node-exporter
    ports:
      - '9100:9100'
  alertmanager:
    image: prom/alertmanager
    privileged: true
    volumes:
      - ./alertmanager/alertmanager.yml:/alertmanager.yml
    command:
      - '--config.file=/alertmanager.yml'
    ports:
      - '9093:9093'
prometheus.yml
This is the Prometheus config file with the scrape targets and the Alertmanager target set. The Alertmanager target URL is working fine.
global:
  scrape_interval: 5s
  external_labels:
    monitor: 'my-monitor'
# this is where I have simple alert rules
rule_files:
  - ./alertmanager/alert.rules
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['some-ip:9093']
alert.rules:
Just a simple alert rule to fire an alert when a service is down:
ALERT service_down
IF up == 0
alertmanager.yml
This is to send a message on Slack when an alert fires.
global:
  slack_api_url: 'https://api.slack.com/apps/A90S3Q753'
route:
  receiver: 'slack'
receivers:
  - name: 'slack'
    slack_configs:
      - send_resolved: true
        username: 'tara gurung'
        channel: '#general'
        api_url: 'https://hooks.slack.com/services/T52GRFN3F/B90NMV1U2/QKj1pZu3ZVY0QONyI5sfsdf'
Problems:
All the containers are working fine, but I am not able to figure out the exact problem. What am I really missing? Checking the Alerts page in Prometheus shows:
Alerts
No alerting rules defined
Your ./alertmanager/alert.rules file is not included in your docker config, so it is not available in the container. You need to add it to the prometheus service:
prometheus:
  image: prom/prometheus
  privileged: true
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
    - ./alertmanager/alert.rules:/alertmanager/alert.rules
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
  ports:
    - '9090:9090'
And probably give an absolute path inside prometheus.yml:
rule_files:
- "/alertmanager/alert.rules"
You also need to make sure your alerting rules are valid. Please see the Prometheus docs for details and examples. Your alert.rules file should look something like this:
groups:
- name: example
  rules:
  # Alert for any instance that is unreachable for >5 minutes.
  - alert: InstanceDown
    expr: up == 0
    for: 5m
Once you have multiple files, it may be better to add the entire directory as a volume rather than individual files.
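A quick way to confirm the rules parse is promtool, which ships inside the Prometheus image (the path matches the volume mount suggested above):
docker-compose exec prometheus promtool check rules /alertmanager/alert.rules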
If you need answers to this question, see the explanation in this link:
How to make alert rules visible on Prometheus User Interface?
Your alert rules inside prometheus.yml should look like this:
rule_files:
- "/etc/prometheus/alert.rules.yml"
You need to stop the Alertmanager and Prometheus containers and run this:
docker run -d --name prometheus_ops -p 9191:9090 -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml -v $(pwd)/alert.rules.yml:/etc/prometheus/alert.rules.yml prom/prometheus
To verify that the alert rules file is at the expected path, exec into the Prometheus container (by its container ID) and go to /etc/prometheus:
docker exec -it fa99f733f69b sh
cd /etc/prometheus
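Once the container is running, the loaded rules should also be visible through the Prometheus HTTP API (port as published in the docker run command above):
curl -s http://localhost:9191/api/v1/rules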