Kubernetes - How to change tokens for hyperkube apiserver

We are using hyperkube's apiserver and configuring it via a manifest file:
"containers":[
{
"name": "apiserver",
"image": "gcr.io/google_containers/hyperkube-amd64:v1.2.1",
"command": [
"/hyperkube",
"apiserver",
"--service-cluster-ip-range=192.168.0.0/23",
"--service-node-port-range=9000-9999",
"--bind-address=127.0.0.1",
"--etcd-servers=http://127.0.0.1:4001",
"--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota",
"--client-ca-file=/srv/kubernetes/ca.crt",
"--basic-auth-file=/srv/kubernetes/basic_auth.csv",
"--min-request-timeout=300",
"--tls-cert-file=/srv/kubernetes/server.cert",
"--tls-private-key-file=/srv/kubernetes/server.key",
"--token-auth-file=/srv/kubernetes/known_tokens.csv",
"--allow-privileged=true",
"--v=4"
],
"volumeMounts": [
{
"name": "data",
"mountPath": "/srv/kubernetes"
}
]
}
I'm trying to figure out how to supply a different set of tokens than those in /srv/kubernetes/known_tokens.csv, so that the users are "superuser" and "reader" instead of admin, kubelet, and kube_proxy. How can I do this?

Your manifest is already using the exposed volume path /srv/kubernetes, so you should be able to map that to another persistent volume (http://kubernetes.io/docs/user-guide/volumes/) and set up the new files there.
You can do that by specifying a volume:
"volumes": [
{
"name": "data",
"hostPath": {
"path": "/foo"
}
}
]
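The token file referenced by --token-auth-file is a plain CSV with one token,user,uid entry per line (an optional fourth column lists group names). As a rough sketch, with the hostPath above you would place a file such as /foo/known_tokens.csv on the host so it appears as /srv/kubernetes/known_tokens.csv inside the container; the token values below are placeholders, not real secrets:
superuser-token-placeholder,superuser,1
reader-token-placeholder,reader,2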

Related

Pod is being terminated and created again due to scale-up, and it's running twice

I have an application that runs some code and, at the end, sends an email with a report of the data. When I deploy pods on GKE, certain pods get terminated and a new pod is created due to autoscaling, but the problem is that the termination happens after my code has finished, so the email is sent twice for the same data.
Here is the JSON for the Job:
{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": {
    "name": "$name",
    "namespace": "$namespace"
  },
  "spec": {
    "template": {
      "metadata": {
        "name": "********"
      },
      "spec": {
        "priorityClassName": "high-priority",
        "containers": [
          {
            "name": "******",
            "image": "$dockerScancatalogueImageRepo",
            "imagePullPolicy": "IfNotPresent",
            "env": $env,
            "resources": {
              "requests": {
                "memory": "2000Mi",
                "cpu": "2000m"
              },
              "limits": {
                "memory": "2650Mi",
                "cpu": "2650m"
              }
            }
          }
        ],
        "imagePullSecrets": [
          {
            "name": "docker-secret"
          }
        ],
        "restartPolicy": "Never"
      }
    }
  }
}
and here is a screenshot of the pod events:
Any idea how to fix that?
Thank you in advance.
"Perhaps you are affected by this "Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = "Never", the same program may sometimes be started twice." from doc. What happens if you increase terminationgraceperiodseconds in your yaml file? – "
#danyL
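As a rough illustration of where those fields sit in a Job like the one above (the grace-period value is only a placeholder):
{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "spec": {
    "parallelism": 1,
    "completions": 1,
    "template": {
      "spec": {
        "terminationGracePeriodSeconds": 60,
        "restartPolicy": "Never"
      }
    }
  }
}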
My problem was that I had other jobs deploying higher-priority pods on my nodes, so Kubernetes was trying to terminate my running pods even though the job was already done and the email had already been sent. I fixed the problem by tuning the resource requests and limits in all my JSON files. I don't know if it's the perfect solution, but for now it solved my problem.
Thank you all for your help.

How to mount volume inside pod using "kubectl" CLI

I want to create a pod using the kubectl CLI that mounts the hostPath /etc/os-release inside the pod's container and displays the content of the /etc/os-release file.
Is it possible to do this with a one-liner kubectl command?
kubectl run -i --rm busybox --image=busybox --overrides='{
  "apiVersion": "v1",
  "spec": {
    "containers": [
      {
        "image": "busybox",
        "name": "busybox",
        "command": ["cat", "/etc/os-release"],
        "resources": {},
        "volumeMounts": [
          {
            "mountPath": "/etc/os-release",
            "name": "release"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "release",
        "hostPath": {
          "path": "/etc/os-release",
          "type": "File"
        }
      }
    ],
    "dnsPolicy": "ClusterFirst",
    "restartPolicy": "Never"
  },
  "status": {}
}'
The pod prints the host's /etc/os-release and is then cleaned up:
NAME=Buildroot
VERSION=2019.02.10
ID=buildroot
VERSION_ID=2019.02.10
PRETTY_NAME="Buildroot 2019.02.10"
pod "busybox" deleted

Error when setting up glusterfs on Kubernetes: volume create: heketidbstorage: failed: Host not connected

I'm following these instructions to set up glusterfs on my Kubernetes cluster. At the heketi-client/bin/heketi-cli setup-openshift-heketi-storage step, heketi-cli tells me:
Error: volume create: heketidbstorage: failed: Host 192.168.99.25 not connected
or sometimes:
Error: volume create: heketidbstorage: failed: Staging failed on 192.168.99.26. Error: Host 192.168.99.25 not connected
heketi.json is:
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "7319"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "7319"
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "kubernetes",
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "kubeexec": {
      "rebalance_on_expansion": true
    },
    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "fstab": "/etc/fstab",
      "port": "22",
      "user": "root",
      "sudo": false
    }
  },
  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false
}
topology-sample.json is:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "redis-test25"
              ],
              "storage": [
                "192.168.99.25"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sda7",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "redis-test26"
              ],
              "storage": [
                "192.168.99.26"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sda7",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "redis-test01"
              ],
              "storage": [
                "192.168.99.113"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sda7",
              "destroydata": true
            }
          ]
        }
      ]
    }
  ]
}
heketi-cli is v8.0.0 and Kubernetes is v1.12.3.
How do I fix this problem?
Update: I just found that I had missed the iptables part, but now the message becomes:
Error: volume create: heketidbstorage: failed: Host 192.168.99.25 is not in 'Peer in Cluster' state
It seems that one of the glusterfs pods cannot connect to the others. I tried kubectl exec -i glusterfs-59ftx -- gluster peer status:
Number of Peers: 2
Hostname: 192.168.99.26
Uuid: 6950db9a-3d60-4625-b642-da5882396bee
State: Peer Rejected (Disconnected)
Hostname: 192.168.99.113
Uuid: 78983466-4499-48d2-8411-2c3e8c70f89f
State: Peer Rejected (Disconnected)
while the other one said:
Number of Peers: 1
Hostname: 192.168.99.26
Uuid: 23a0114d-65b8-42d6-8067-7efa014af68d
State: Peer in Cluster (Connected)
I solved these problems myself.
For the first part, the reason was that I hadn't set up iptables on every node according to the Infrastructure Requirements.
For the second part, per this article, delete every file in /var/lib/glusterd except glusterd.info and then start over from the Kubernetes deploy step.
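As a rough sketch, the two fixes on each storage node might look like this (the ports follow the GlusterFS defaults listed in the gluster-kubernetes infrastructure requirements; check that document before touching your firewall):
# Allow GlusterFS traffic between nodes: 2222 (pod sshd), 24007-24008
# (glusterd and management), 49152-49251 (brick ports) -- illustrative rules only
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT
# Reset glusterd state while keeping the node identity, then redeploy from scratch
find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete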

Kubernetes : hostPath storage permissions

Problem: not able to write to the directory inside the container.
I am using hostPath storage for the persistent storage requirements. I am not using a PV and PVC for the hostPath; instead, I'm using its volume plugin directly. For example:
{
  "apiVersion": "v1",
  "id": "local-nginx",
  "kind": "Pod",
  "metadata": {
    "name": "local-nginx"
  },
  "spec": {
    "containers": [
      {
        "name": "local-nginx",
        "image": "fedora/nginx",
        "volumeMounts": [
          {
            "mountPath": "/usr/share/nginx/html/test",
            "name": "localvol"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "localvol",
        "hostPath": {
          "path": "/logs/nginx-logs"
        }
      }
    ]
  }
}
Note: the nginx pod is just an example.
My directory on the host is getting created as "drwxr-xr-x. 2 root root 6 Apr 23 18:42 /logs/nginx-logs",
and the same permissions are reflected inside the pod. Since it's 755, another user, i.e. the user inside the pod, is not able to write or create files inside the mounted directory.
Questions:
Is there any way to avoid the problem described above?
Is there any way to specify the directory permissions in the case of hostPath storage?
Is there any field I can set in the following definition to grant the required permission?
"volumes":{
"name": "vol",
"hostPath": {
"path": "/any/path/it/will/be/replaced"}}
I think the problem you are encountering is not related to the user or group (your pod definition does not have a runAsUser in its securityContext, so by default it runs as root), but rather to the SELinux policy. In order to mount a host directory into the pod with read-write permissions, it should have the label svirt_sandbox_file_t. You can check the current SELinux label with ls -laZ <your host directory> and change it with chcon -Rt svirt_sandbox_file_t <your host directory>.
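Applied to the directory from the question, those two commands would look roughly like this on the host node:
# Show the current SELinux contexts on the mounted host directory
ls -laZ /logs/nginx-logs
# Relabel it recursively so containers are allowed to read and write it
chcon -Rt svirt_sandbox_file_t /logs/nginx-logs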

Specify ECR image instead of S3 file in CloudFormation Elastic Beanstalk template

I'd like to reference an EC2 Container Registry (ECR) image in the Elastic Beanstalk section of my CloudFormation template. The sample file references an S3 bucket for the source bundle:
"applicationVersion": {
"Type": "AWS::ElasticBeanstalk::ApplicationVersion",
"Properties": {
"ApplicationName": { "Ref": "application" },
"SourceBundle": {
"S3Bucket": { "Fn::Join": [ "-", [ "elasticbeanstalk-samples", { "Ref": "AWS::Region" } ] ] },
"S3Key": "php-sample.zip"
}
}
}
Is there any way to reference an EC2 Container Registry image instead? Something like what is available in the EC2 Container Service TaskDefinition?
Upload a Dockerrun file (Dockerrun.aws.json) to S3 in order to do this. Here's an example Dockerrun:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-bucket",
    "Key": "mydockercfg"
  },
  "Image": {
    "Name": "quay.io/johndoe/private-image",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080:80"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/mydb",
      "ContainerDirectory": "/etc/mysql"
    }
  ],
  "Logging": "/var/log/nginx"
}
Use this file as the S3 key. More info is available here.
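Tying this back to the question's template, the SourceBundle would then point at the uploaded Dockerrun file rather than a zip; the bucket and key names below are placeholders, and for ECR the Image.Name inside the Dockerrun would be your ECR repository URI:
"applicationVersion": {
  "Type": "AWS::ElasticBeanstalk::ApplicationVersion",
  "Properties": {
    "ApplicationName": { "Ref": "application" },
    "SourceBundle": {
      "S3Bucket": "my-bucket",
      "S3Key": "Dockerrun.aws.json"
    }
  }
}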