Cannot create privileged containers - Kubernetes

I am using the instructions from https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode.md to set up a multi-node Kubernetes cluster on VMware vCloud infrastructure.
I was able to get the cluster working, but when I tried the NFS example I was not able to create the NFS container. So I recreated all the VMs and rebuilt Kubernetes from source using:
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
sed -i 's/allow_privileged: .*/allow_privileged: true/g' cluster/saltbase/pillar/privilege.sls
./build/run.sh hack/build-cross.sh
cp _output/dockerized/bin/linux/$(dpkg --print-architecture)/kubectl /usr/local/bin
chmod +x /usr/local/bin/kubectl
and continued to set up the Kubernetes cluster. When I retried the NFS example, I got the following error:
kubectl create -f nfs-server-pod.yaml
The Pod "nfs-server" is invalid.
spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20931650c)true'
I tried with both master and the 1.0.3 release and had the same result.
Can you please tell me how to resolve this issue? Thanks for your support.

We thought that turning privileged containers off by default would be good for security. It turns out to just be a pain point for a lot of people, so we're working to turn it on by default in Kubernetes v1.1.
The --allow-privileged flag has to be set on both the kubelet and the apiserver, so please check both.
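For reference, a minimal sketch of what that looks like (the exact startup scripts or service files depend on how the cluster was installed):
# both daemons need the flag; restart them after changing it
kube-apiserver --allow-privileged=true ...
kubelet --allow-privileged=true ...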

Related

Can we run sonobuoy for k8s conformance on a Rancher cluster?

We set up a Rancher cluster with 3 nodes for testing, and I would like to apply for k8s conformance using this Rancher cluster. However, running sonobuoy returns this error:
ERRO[0000] could not create sonobuoy client: failed to get rest config: invalid configuration: no configuration has been provided
It seems like Rancher does not have any Kubernetes binaries built in (kubectl, kubeadm, etc.). May I know if it is possible to achieve k8s conformance on a Rancher cluster?
You need the Kubernetes cluster's kubeconfig locally on the machine where you are running sonobuoy.
From the Rancher documentation, How to Manage Kubernetes With Kubectl:
RKE:
When you create a Kubernetes cluster with RKE, RKE creates a
kube_config_rancher-cluster.yml file in the local directory that
contains credentials to connect to your new cluster with tools like
kubectl.
You can copy this file to $HOME/.kube/config or, if you are working
with multiple Kubernetes clusters, set the KUBECONFIG environment
variable to the path of the file.
Rancher-Managed Kubernetes Clusters:
Within Rancher, you can download a kubeconfig file through the web UI
and use it to connect to your Kubernetes environment with kubectl.
From the Rancher UI, click on the cluster you would like to connect to
via kubectl. On the top right-hand side of the page, click the
Kubeconfig File button: Click on the button for a detailed look at
your config file as well as directions to place in ~/.kube/config.
Upon copying your configuration to ~/.kube/config, you will be able to
run kubectl commands without having to specify the --kubeconfig file
location:
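A quick sanity check after the copy (assuming kubectl is installed on the same machine):
kubectl config current-context   # should print the Rancher cluster's context
kubectl get nodes                # should list the cluster's 3 nodes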
Check the question First launch with sonobuoy requests for a configuration - maybe it will be useful for you.
Also, take a look at Conformance tests for Rancher 2.x Kubernetes:
Run Conformance Test
Once your Rancher Kubernetes cluster is active, fetch its kubeconfig.yml file and save it locally.
Download a sonobuoy binary release of the CLI, or build it yourself by running:
$ go get -u -v github.com/heptio/sonobuoy
Configure your kubeconfig file by running:
$ export KUBECONFIG="/path/to/your/cluster/kubeconfig.yml"
Run sonobuoy:
$ sonobuoy run
Watch the logs:
$ sonobuoy logs
Check the status:
$ sonobuoy status
Once the status command shows the run as completed, you can download the results tar.gz file:
$ sonobuoy retrieve
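The retrieved tarball can then be unpacked to inspect the results (a sketch; the actual filename is timestamped):
$ results=$(sonobuoy retrieve)          # prints the name of the downloaded tar.gz
$ mkdir ./results && tar xzf $results -C ./results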

Is it possible to use the Cloud Code extension in VS Code to deploy Kubernetes pods on a non-GKE cluster?

This is my very first post here, and I am looking for some advice please.
I am learning Kubernetes and trying to get the Cloud Code extension to deploy Kubernetes manifests on a non-GKE cluster. The Guestbook app can be deployed using the Cloud Code extension to a local K8s cluster (such as Minikube or Docker for Desktop).
I have two other K8s clusters, listed below, and I cannot deploy manifests to them via Cloud Code. I am not entirely sure whether this is supposed to work, as I couldn't find any docs or posts on it. Once the GCP free trial is finished, I want to deploy my test apps on our local on-prem K8s clusters via Cloud Code.
3-node cluster running on CentOS VMs (built using kubeadm)
6-node cluster on GCP running on Ubuntu machines (free trial, built the Kelsey Hightower way)
Skaffold is installed locally on my Mac, and my local $HOME/.kube/config has contexts and users set to access all 3 clusters.
➜ guestbook-1 kubectl config get-contexts
CURRENT   NAME                          CLUSTER                   AUTHINFO           NAMESPACE
          docker-desktop                docker-desktop            docker-desktop
*         kubernetes-admin@kubernetes   kubernetes                kubernetes-admin
          kubernetes-the-hard-way       kubernetes-the-hard-way   admin
Error:
Running: skaffold dev -v info --port-forward --rpc-http-port 57337 --filename /Users/testuser/Desktop/Cloud-Code-Builds/guestbook-1/skaffold.yaml -p cloudbuild --default-repo gcr.io/gcptrial-project
starting gRPC server on port 50051
starting gRPC HTTP server on port 57337
Skaffold &{Version:v1.19.0 ConfigVersion:skaffold/v2beta11 GitVersion: GitCommit:63949e28f40deed44c8f3c793b332191f2ef94e4 GitTreeState:dirty BuildDate:2021-01-28T17:29:26Z GoVersion:go1.14.2 Compiler:gc Platform:darwin/amd64}
applying profile: cloudbuild
no values found in profile for field TagPolicy, using original config values
Using kubectl context: kubernetes-admin@kubernetes
Loaded Skaffold defaults from \"/Users/testuser/.skaffold/config\"
Listing files to watch...
- python-guestbook-backend
watching files for artifact "python-guestbook-backend": listing files: unable to evaluate build args: reading dockerfile: open /Users/adminuser/Desktop/Cloud-Code-Builds/src/backend/Dockerfile: no such file or directory
Exited with code 1.
skaffold config file skaffold.yaml not found - check your current working directory, or try running `skaffold init`
I have the Dockerfile and the skaffold file in the path as shown in the image, and I have authenticated the Google SDK in VS Code. Any help please?
I was able to get this working in the end. What helped in this particular case was removing skaffold.yaml and then running skaffold init, which generated a new skaffold.yaml. Cloud Code was then able to deploy pods on both remote clusters. Thanks for all your help.
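For anyone hitting the same problem, the fix boils down to regenerating the Skaffold config from the project root (a sketch; skaffold init interactively detects Dockerfiles and Kubernetes manifests):
cd /Users/testuser/Desktop/Cloud-Code-Builds/guestbook-1   # the project root from the question
rm skaffold.yaml                                           # drop the stale config
skaffold init                                              # regenerate skaffold.yaml from what it detects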

Error when installing Spinnaker on an on-prem Kubernetes cluster

I'm trying to install Spinnaker on an on-prem Kubernetes setup, following the instructions from https://www.spinnaker.io/setup/:
Installed Halyard to run as a Docker container on the Kubernetes master.
Ran everything as root.
Did mkdir ~/.hal on the Kubemaster and created the service account as instructed on the site.
Copied the kubeconfig file from ~/.kube/config into ~/.hal/kubeconfig, as the docker -v option didn't work (there was some permission issue), so I made it work this way.
Ran the docker run halyard command; everything up and running fine.
Ran bash and got inside the Halyard container.
Now, when I do these two things inside Halyard (roughly as sketched below):
Point kubectl to the kubeconfig with the export KUBECONFIG command
Enable the Kubernetes provider with "hal config provider kubernetes enable"
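(Roughly, assuming the kubeconfig copied into ~/.hal earlier:)
export KUBECONFIG=~/.hal/kubeconfig
hal config provider kubernetes enable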
The command sometimes executes successfully, and sometimes fails with this warning after a timeout error:
Getting object contents of versions.yml
Unexpected error comparing versions: com.netflix.spinnaker.halyard.core.error.v1.HalException: Could not load "versions.yml" from config bucket: www.googleapis.com.*
Even if it somehow manages to run successfully, when I then run these:
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-account --context $CONTEXT
It fails with the same error as above.
Totally weird stuff; it's intermittent. Does it have something to do with the kubeconfig file? Any pointers or help would be greatly appreciated.
Thanks.
As noted in the comments, this kind of error can occur when there is no network connectivity from inside the container.
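A quick way to confirm that from inside the Halyard container (a minimal check, assuming curl is available in the image):
curl -I https://www.googleapis.com   # the endpoint Halyard fails to reach; a hang or timeout here points at networking, not Halyard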
As Vikram mentioned in his comment:
Yes, that was the problem. Azure support recommended installing a CNI plugin and it resolved the issue. So it seems that inside an Azure VM without a public IP, the CNI plugin is needed for the VM to connect to the internet.
To configure the CNI plugin on the Azure platform, use this guide.
Hope it helps.

Waiting for pods: apiserver gets stuck

I am trying to implement an audit policy.
My YAML:
~/.minikube/addons$ cat audit-policy.yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
Pods get stuck when I run:
minikube start --extra-config=apiserver.Authorization.Mode=RBAC --extra-config=apiserver.Audit.LogOptions.Path=/var/logs/audit.log --extra-config=apiserver.Audit.PolicyFile=/etc/kubernetes/addons/audit-policy.yaml
😄 minikube v0.35.0 on linux (amd64)
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Restarting existing virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is 192.168.99.101
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
▪ apiserver.Authorization.Mode=RBAC
▪ apiserver.Audit.LogOptions.Path=/var/logs/audit.log
▪ apiserver.Audit.PolicyFile=/etc/kubernetes/addons/audit-policy.yaml
🚜 Pulling images required by Kubernetes v1.13.4 ...
🔄 Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver
Why?
I can do this:
minikube start
Then I go into the VM with minikube ssh:
$ sudo bash
$ cd /var/logs
bash: cd: /var/logs: No such file or directory
ls
cache empty lib lock log run spool tmp
How do I apply the extra-config correctly?
I don't have good news. Although you made some mistakes with /var/logs (the directory inside the VM is /var/log), it does not matter in this case, as there seems to be no way of implementing an audit policy in Minikube. (I mean, there are a few ways at least, but they all seem to fail.)
You can try a couple of the approaches presented in GitHub issues and the other links I will provide, but I have tried probably all of them and they do not work with the current Minikube version. You might be able to make this work with earlier versions, as it seems that at some point the way you used in your question was possible, but in the current version it is not. Anyway, I have spent some time trying the approaches from the links plus a couple of my own ideas, with no success; maybe you will be able to find the missing piece.
You can find more information in these documents:
Audit Logfile Not Created
Service Accounts and Auditing in Kubernetes
fails with -extra-config=apiserver.authorization-mode=RBAC and audit logging: timed out waiting for kube-proxy
How do I enable an audit log on minikube?
Enable Advanced Auditing Webhook Backend Configuration
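For reference, the approach that the linked threads converge on for newer Minikube versions looks roughly like this (a sketch, not verified on v0.35.0; it assumes Minikube copies files under ~/.minikube/files into the VM and that the apiserver accepts the lower-case flag names):
mkdir -p ~/.minikube/files/etc/ssl/certs
cp audit-policy.yaml ~/.minikube/files/etc/ssl/certs/audit-policy.yaml   # ends up at /etc/ssl/certs inside the VM
minikube start \
  --extra-config=apiserver.audit-policy-file=/etc/ssl/certs/audit-policy.yaml \
  --extra-config=apiserver.audit-log-path=-   # "-" sends audit entries to the apiserver's stdout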

Failed to create pod sandbox - Kubernetes cluster

I have the Weave network plugin.
Inside my folder /etc/cni/net.d there is a 10-weave.conf:
{
    "name": "weave",
    "type": "weave-net",
    "hairpinMode": true
}
My Weave pods are running, and the DNS pod is also running.
But when I want to run a pod, like a simple nginx which will pull an nginx image, the pod gets stuck at ContainerCreating, and describing the pod gives me the error: failed to create pod sandbox.
When I run journalctl -u kubelet I get this error:
cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Is my network plugin misconfigured?
I used this command to configure my Weave network:
kubectl apply -f https://git.io/weave-kube-1.6
After this didn't work, I also tried this command:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
I even tried Flannel, and that gives me the same error.
The system I am setting Kubernetes up on is a Raspberry Pi.
I am trying to build a Raspberry Pi cluster with 3 nodes and 1 master.
Does anyone have ideas on this?
Thank you all for responding to my question. I have solved my problem now. For anyone who comes to my question in the future, the solution was as follows.
I cloned my Raspberry Pi images because I wanted a basicConfig.img for when I needed to add a new node to my cluster, or for when one goes down.
Weave (the plugin I used) got confused because on every node and on the master the OS had the same machine-id. When I deleted the machine-id and created a new one (and rebooted the nodes), my error was fixed. The commands to do this were:
sudo rm /etc/machine-id
sudo rm /var/lib/dbus/machine-id
sudo dbus-uuidgen --ensure=/etc/machine-id
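After the reboot, each machine should report a unique ID; a quick check to run on every node and the master:
cat /etc/machine-id   # the printed IDs must all differ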
Once again my patience was tested, because my Kubernetes setup was normal and my Raspberry Pi OS was normal. I found this with the help of someone in the Kubernetes community, which again shows how important and great our IT community is. To the people of the future who come to this question: I hope this solution fixes your error and cuts down the time you spend searching for a stupidly small thing.
Looking at the pertinent code in Kubernetes and in CNI, the specific error you see seems to indicate that it cannot find any files ending in .json, .conf or .conflist in the given directory.
This makes me think it could be something like the conf file not being present on all the hosts, so I would verify that as a first step.
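A quick way to verify that on each host (the filename is the one from the question):
ls /etc/cni/net.d/   # expect 10-weave.conf (or another .json/.conf/.conflist file) on every node, master included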