Kubernetes: Run container as non-root if there is no user specified

How can I make every container run as non-root in Kubernetes?
Containers that do not specify a user, as in this example, and that also do not specify a securityContext in the corresponding deployment, should still be able to run in the cluster, but without running as root. What options are there?
FROM debian:jessie
RUN apt-get update && apt-get install -y \
    git \
    python \
    vim
CMD ["echo", "hello world"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: mynamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: hello-world
        name: hello-world

You can add a PodSecurityPolicy to your cluster; it has an option (below) that prevents any deployment from running unless it specifies a non-root user:
spec:
  runAsUser:
    rule: MustRunAsNonRoot
For more info about Pod Security Policy, see:
https://kubernetes.io/docs/concepts/security/pod-security-policy/
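For context, here is a minimal sketch of what a complete PodSecurityPolicy built around that rule could look like; the policy name restrict-root is illustrative, and the other rule fields appear because the PSP schema requires them. Note that a PodSecurityPolicy only takes effect if the PodSecurityPolicy admission controller is enabled and the policy is authorized for the deploying user or service account via RBAC. (Also note that PodSecurityPolicy was deprecated in Kubernetes v1.21 and removed in v1.25 in favor of Pod Security admission, so check your cluster version.)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-root   # illustrative name
spec:
  privileged: false
  # Reject any pod whose containers would run as UID 0
  runAsUser:
    rule: MustRunAsNonRoot
  # The remaining rule fields are required by the schema;
  # RunAsAny leaves them unrestricted
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'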

Related

What is the current equivalent of `kubectl run --generator=run/v1`

I'm working through Kubernetes in Action (copyright 2018), and at least one of the examples is out-of-date with respect to current versions of kubectl.
Currently I'm stuck in section 2.3 on just trying to demo a simple web-server docker container ("kubia"):
kubectl run kubia --image=Dave/kubia --port=8080 --generator=run/v1
The --generator option has been removed from current versions of kubectl. What command(s) achieve the same end with the current version of kubectl?
Note: I'm literally just 2 chapters into learning about Kubernetes, so I don't really know what a deployment (or anything else) is, and the official Kubernetes documentation doesn't help. I just need the simplest way to verify that I can, in fact, run this container in my minikube "cluster".
In short, you can use the following imperative commands to create pods and deployments; they are similar to the commands mentioned in that book.
To create a pod named kubia with image Dave/kubia:
kubectl run kubia --image=Dave/kubia --port=8080
To create a deployment named kubia with image Dave/kubia:
kubectl create deployment kubia --image=Dave/kubia --port=8080
You can just instantiate the pod directly, since --generator has been deprecated:
apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - name: kubia
    image: Dave/kubia
    ports:
    - containerPort: 8080
Alternatively, you can use a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia-deployment
  labels:
    app: kubia
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: Dave/kubia
        ports:
        - containerPort: 8080
Save either one to a file such as something.yaml and run:
kubectl create -f something.yaml
And to clean up
kubectl delete -f something.yaml
If someone reading the same book (Kubernetes in Action, copyright 2018) hits the same issue in the future: just run a pod instead of the replication controller, and expose the pod instead of the rc in the following chapter, as sketched below.
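For example, the book's commands translate roughly to the following (a sketch assuming the book's image and names; kubia-http is the service name the book uses, so adjust as needed):
kubectl run kubia --image=Dave/kubia --port=8080
kubectl expose pod kubia --type=LoadBalancer --name kubia-http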

How to trigger a Kubernetes/OpenShift job restart whenever a specific pod in the cluster restarts?

For example, I have a pod running a server in it, and I have a job in my cluster that is doing some YAML patching on the server deployment.
Is there a way to set up some kind of trigger that will rerun the job whenever the respective deployment changes?
You can add your job's spec to the deployment as an initContainer, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      initContainers:
      - name: init
        image: centos:7
        command:
        - "/bin/bash"
        - "-c"
        - "do something useful"
      containers:
      - name: nginx
        image: nginx
In this case, every time you roll out the deployment, the job defined in initContainers will run.
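If you need to re-run the job without otherwise changing the spec, one option (assuming kubectl v1.15 or newer) is to force a rollout, which recreates the pods and therefore re-runs the init container:
kubectl rollout restart deployment/example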

How to pass the number of pods by command line

I am using EKS to deploy pods to my node groups. This is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: molding-app
  namespace: new-simulator
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: eks-pods
        image: 088562811725.dkr.ecr.ap-south-1.amazonaws.com/eks_pods:latest
        ports:
        - containerPort: 8080
        - containerPort: 10010
I just wanted to know if there is any way I could pass the number of replicas through the command line instead of writing it in the deployment file?
You can do it by adding the --replicas parameter:
$ kubectl create deployment molding-app --image=088562811725.dkr.ecr.ap-south-1.amazonaws.com/eks_pods:latest --replicas=3 -n <namespace>
deployment.apps/molding-app created
Later you can change it using scale:
$ kubectl scale deployment molding-app --replicas=10 -n <namespace>
deployment.extensions/molding-app scaled
More details can be found in the Kubernetes documentation about scaling a deployment.
You can also scale it from the command line by using:
kubectl scale deployment molding-app --replicas=3 -n namespace
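Another option, if the deployment already exists and you want to set the replica count non-interactively (a sketch using the question's namespace new-simulator), is kubectl patch:
kubectl patch deployment molding-app -n new-simulator -p '{"spec":{"replicas":5}}'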

Cannot log in to Superset after installing it on AWS Kubernetes

I installed amancevice/superset on AWS Kubernetes.
When I open the load balancer DNS, I can see the Superset login page, but the default admin/admin login is not working.
Is there anything I missed?
Here's the YAML file I used to install Superset:
Superset.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: superset-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: superset
  template:
    metadata:
      labels:
        app: superset
    spec:
      containers:
      - name: superset
        image: amancevice/superset:latest
        ports:
        - containerPort: 8088
kubectl create -f superset.yaml
To initialize the database with an admin user, run the superset-init helper script:
kubectl exec -it superset -- superset-init
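If you suspect the load balancer rather than Superset itself, a quick check (assuming the deployment above) is to port-forward straight to the pod and try the login locally:
kubectl port-forward deployment/superset-deployment 8088:8088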

Error deploying ASP.NET Core Web API to minikube

When I try to execute the command kubectl apply -f mydeployment.yaml I receive an error: error: SchemaError(io.k8s.api.core.v1.ContainerState): invalid object doesn't have additional properties. What can I do to deploy my ASP.NET Core Web API successfully to my local Kubernetes cluster?
I've already tried to upgrade minikube by running the command choco upgrade minikube. It says I already have the latest version: minikube v1.0.0 is the latest version available based on your source(s).
The deployment.yaml I've created looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        name: myfirstdockerapi
        image: myfirstdockerapi
        ports:
        - containerPort: 80
Clean up everything before you start:
rm -rf ~/.minikube
As per documentation:
You must use a kubectl version that is within one minor version difference of
your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3
master. Using the latest version of kubectl helps avoid unforeseen issues.
Minikube resources on GitHub:
To avoid interaction issues - Update default Kubernetes version to v1.14.0 #3967
NOTE: we also recommend updating kubectl to a recent release (v1.13+)
For the latest version of minikube, please follow the official documentation.
In the attached deployment there was an indentation problem (corrected below), so please try again:
spec:
  containers:
  - name: myfirstdockerapi
    image: myfirstdockerapi
    ports:
    - containerPort: 80
The containers element expects a list, so you need to prefix each entry with a dash.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: myfirstdockerapi
        image: myfirstdockerapi
        ports:
        - containerPort: 80
If you are unsure, you can always use kubectl to validate your file without creating it:
kubectl apply -f sample.yaml --validate --dry-run
Just in case, make sure that your kubectl version matches the version of your Kubernetes cluster.
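To compare the two quickly (the --short flag exists on kubectl releases from this era; newer releases print the short form by default):
kubectl version --short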