
Create a one-liner (imperative) command in Kubernetes
kubectl run test --image=ubuntu:latest --limits="cpu=200m,memory=512Mi" --requests="cpu=200m,memory=512Mi" --privileged=false
I also need to set a securityContext in the same one-liner. Is that possible? Basically, I need to run the container as a non-root account via securityContext/runAsUser.
Yes, the declarative way works, but I'm looking for an imperative way.

Posting this answer as a community wiki to highlight the fact that the solution was posted in the comments (a link to another answer):
Hi, check this answer: stackoverflow.com/a/37621761/5747959 you can solve this with --overrides – CLNRMN 2 days ago
Feel free to edit/expand.
Citing $ kubectl run --help:
--overrides='': An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
Following is an --overrides example with the additional fields included, made more specific to this particular question (securityContext-wise):
kubectl run -it ubuntu --rm --overrides='
{
  "apiVersion": "v1",
  "spec": {
    "securityContext": {
      "runAsNonRoot": true,
      "runAsUser": 1000,
      "runAsGroup": 1000,
      "fsGroup": 1000
    },
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu",
        "stdin": true,
        "stdinOnce": true,
        "tty": true,
        "securityContext": {
          "allowPrivilegeEscalation": false
        }
      }
    ]
  }
}
' --image=ubuntu --restart=Never -- bash
With the above override in place, the Pod runs under a securityContext that constrains your workload.
Side notes!
The example above is specific to running a Pod that you will exec into (bash); a quick way to confirm the securityContext took effect is shown below.
Parameters supplied inside --overrides take precedence over the corresponding flags specified outside of it (for example: image).
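To double-check the override, you can inspect the effective user from the shell the command drops you into (a quick sanity check, assuming the image ships the standard id utility):
# run inside the Pod's bash session started by the command above
id
# expected output, roughly: uid=1000 gid=1000 groups=1000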
Additional resources:
Kubernetes.io: Docs: Tasks: Configure pod container: Security context
Kubernetes.io: Docs: Concepts: Security: Pod security standards

Related

Unable to retrieve custom metrics from prometheus-adapter

I am trying to experiment with scaling one of my application pods running on my Raspberry Pi Kubernetes cluster using HPA + custom metrics, but I ran into several issues. Despite reading the documentation at https://github.com/DirectXMan12/k8s-prometheus-adapter and troubleshooting for the past 2 days, I am still having difficulty grasping why some problems are happening.
Firstly, I built an ARM-compatible image of k8s-prometheus-adapter and installed it using Helm. I can confirm it's running properly by checking the pod logs.
I have also set up a script which sends the Raspberry Pis' temperatures to Pushgateway, and I can query them via the Prometheus query node_temp, which returns the following series:
node_temp{job="kube4"} 42
node_temp{job="kube1"} 44
node_temp{job="kube2"} 39
node_temp{job="kube3"} 40
Now I want to be able to scale one of my application pods using the above temperature values, as an experiment to understand better how it works.
Below is my k8s-prometheus-adapter Helm values.yml file:
image:
  repository: jaanhio/k8s-prometheus-adapter-arm
  tag: latest
logLevel: 7
prometheus:
  url: http://10.17.0.12
rules:
  default: false
  custom:
  - seriesQuery: 'etcd_object_counts'
    resources:
      template: <<.Resource>>
    name:
      as: "etcd_object"
    metricsQuery: count(etcd_object_counts)
  - seriesQuery: 'node_temp'
    resources:
      template: <<.Resource>>
    name:
      as: "node_temp"
    metricsQuery: count(node_temp)
After installing via Helm, I ran kubectl get apiservices and can see v1beta1.custom.metrics.k8s.io listed.
I then ran kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq and got the following:
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "jobs.batch/node_temp",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "jobs.batch/etcd_object",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
I then tried to query the value of the registered node_temp metric using kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/jobs/*/node_temp but got the following response:
Error from server (InternalError): Internal error occurred: unable to list matching resources
Questions:
Why is the node_temp metric associated with the jobs.batch resource type?
Why am I not able to retrieve the value of the metric via kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/jobs/*/node_temp?
What is a definitive way of figuring out the path of the query? e.g. /apis/custom.metrics.k8s.io/v1beta1/jobs/*/node_temp; I kind of trial-and-errored until I got to see somewhat of a response. I also see some other paths with namespaces in the query, e.g. /apis/custom.metrics.k8s.io/v1beta1/namespaces/*/metrics/foo_metrics
Any help and advice will be greatly appreciated!
Why is the node_temp metric associated with the jobs.batch resource type?
The adapter picks up the labels attached to the Prometheus metrics and tries to interpret them; in this case you clearly have job="kube4".
Why am I not able to retrieve the value of the metric via kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/jobs/*/node_temp?
Metrics are namespaced (note the "namespaced": true in the output above), so you'll need /apis/custom.metrics.k8s.io/v1beta1/namespaces/*/jobs/*/node_temp
What is a definitive way of figuring out the path of the query? e.g. /apis/custom.metrics.k8s.io/v1beta1/jobs/*/node_temp; I kind of trial-and-errored until I got to see somewhat of a response. I also see some other paths with namespaces in the query, e.g. /apis/custom.metrics.k8s.io/v1beta1/namespaces/*/metrics/foo_metrics
Check https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md#api-paths
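For reference, a namespaced custom-metric query follows the pattern /apis/custom.metrics.k8s.io/v1beta1/namespaces/<namespace>/<resource-plural>/<name>/<metric>. A sketch against the series from the question (the default namespace and the kube4 job name are assumptions taken from the labels shown above):
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/jobs/kube4/node_temp" | jq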

Unable to access vault ui while running vault in docker: 404 page not found

I am running Vault in Docker like this:
$ docker run -it --rm -p 8200:8200 vault:0.9.1
I have unsealed the vault:
$ VAULT_ADDR=http://localhost:8200 VAULT_SKIP_VERIFY="true" vault operator unseal L6M8O7Xg7c8vBe3g35s25OWeruNDfaQzQ5g9UZ2bvGM=
Key Value
--- -----
Seal Type shamir
Initialized false
Sealed false
Total Shares 1
Threshold 1
Version 0.9.1
Cluster Name vault-cluster-52a8c4b5
Cluster ID 96ba7037-3c99-5b6e-272e-7bcd6e5cc45c
HA Enabled false
However, I can't access the UI at http://localhost:8200/ui in Firefox. The error is:
404 page not found
Do you know what I am doing wrong? Does the Vault Docker image on Docker Hub have the UI compiled into it?
The Web UI was open-sourced in v0.10.0, so v0.9.1 doesn't have it. Here is the blog post announcing the release and the CHANGELOG for v0.10.0; take a look at the FEATURES subsection.
To see the Web UI in a web browser, try running this command:
$ docker run -it --rm -p 8200:8200 vault:0.10.0
However, I would suggest using a more recent Vault version, as there have been many improvements and bug fixes in the meantime. Features have also been added to the Web UI, so if you follow the latest documentation, some of the things described there might not be available in older versions.
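For example, any release from 0.10.0 onward serves the UI in dev mode, so pinning a newer image works the same way (the 1.13.3 tag below is only an illustration; newer releases are published under the hashicorp/vault repository on Docker Hub):
$ docker run -it --rm -p 8200:8200 hashicorp/vault:1.13.3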
I observed this behavior with Vault 0.10.3
(https://releases.hashicorp.com/vault/0.10.3/vault_0.10.3_linux_amd64.zip)
when I put the adjustment that enables the UI at the very bottom of the Vault configuration file (e.g. config.json).
A config that returns a 404 error looks like the one below:
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "api_addr": "http://172.16.94.10:8200",
  "storage": {
    "consul": {
      "address": "127.0.0.1:8500",
      "path": "vault"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h",
  "ui": "true"
}
and one that works with Vault 0.10.3 has ui at the very top of its configuration file:
{
  "ui": "true",
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "api_addr": "http://172.16.94.10:8200",
  "storage": {
    "consul": {
      "address": "127.0.0.1:8500",
      "path": "vault"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
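A quick way to tell which of the two behaviors you are getting after a restart (a simple check; 127.0.0.1:8200 matches the 0.0.0.0:8200 listener above):
$ curl -I http://127.0.0.1:8200/ui/
With the UI enabled this returns an HTTP 200 (or a redirect into the UI); with it disabled you get the 404 from the question.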

Amazon EKS: generate/update kubeconfig via python script

When using Amazon's K8s offering, the EKS service, at some point you need to connect the Kubernetes API and configuration to the infrastructure established within AWS. In particular, we need a kubeconfig with proper credentials and URLs to connect to the k8s control plane provided by EKS.
The Amazon command-line tool aws provides a routine for this task:
aws eks update-kubeconfig --kubeconfig /path/to/kubecfg.yaml --name <EKS-cluster-name>
Question: how to do the same through Python/boto3?
When looking at the Boto API documentation, I seem to be unable to spot the equivalent of the above-mentioned aws routine. Maybe I am looking in the wrong place.
Is there a ready-made function in boto to achieve this?
Otherwise, how would you approach this directly within Python (other than calling out to aws in a subprocess)?
There isn't a ready-made function to do this, but you can build the configuration file yourself, like this:
import boto3
import yaml

# assumed to be defined elsewhere: region, cluster_name and the output path config_file

# Set up the client
s = boto3.Session(region_name=region)
eks = s.client("eks")

# get cluster details
cluster = eks.describe_cluster(name=cluster_name)
cluster_cert = cluster["cluster"]["certificateAuthority"]["data"]
cluster_ep = cluster["cluster"]["endpoint"]

# build the cluster config hash
cluster_config = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [
        {
            "cluster": {
                "server": str(cluster_ep),
                "certificate-authority-data": str(cluster_cert)
            },
            "name": "kubernetes"
        }
    ],
    "contexts": [
        {
            "context": {
                "cluster": "kubernetes",
                "user": "aws"
            },
            "name": "aws"
        }
    ],
    "current-context": "aws",
    "preferences": {},
    "users": [
        {
            "name": "aws",
            "user": {
                "exec": {
                    "apiVersion": "client.authentication.k8s.io/v1alpha1",
                    "command": "heptio-authenticator-aws",
                    "args": [
                        "token", "-i", cluster_name
                    ]
                }
            }
        }
    ]
}

# Write in YAML.
config_text = yaml.dump(cluster_config, default_flow_style=False)
open(config_file, "w").write(config_text)
This is explained in the Create kubeconfig manually section of https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html, which is in fact referenced from the boto3 EKS docs. The manual method there is very similar to jaxxstorm's answer above, except that it doesn't show the Python code you would need; it also does not assume the heptio authenticator (it shows both the token and IAM authenticator approaches).
I faced the same problem and decided to implement it as a Python package.
It can be installed via
pip install eks-token
and then simply do
from eks_token import get_token
response = get_token(cluster_name='<value>')
More details and examples here
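If the goal is to talk to the cluster from Python directly, the token can then be plugged into the kubernetes Python client. A rough sketch, assuming get_token returns the same ExecCredential-style document as aws eks get-token; the endpoint and CA file are placeholders you would normally pull from eks.describe_cluster, as in the earlier answer:
from kubernetes import client
from eks_token import get_token

# placeholders: in practice take these from eks.describe_cluster()
cluster_endpoint = "https://EXAMPLE1234567890.gr7.someregion.eks.amazonaws.com"
ca_cert_path = "/path/to/cluster-ca.pem"

token = get_token(cluster_name="mycluster")["status"]["token"]

configuration = client.Configuration()
configuration.host = cluster_endpoint
configuration.ssl_ca_cert = ca_cert_path
configuration.api_key = {"authorization": "Bearer " + token}

# list namespaces as a connectivity check
v1 = client.CoreV1Api(client.ApiClient(configuration))
for ns in v1.list_namespace().items:
    print(ns.metadata.name)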
Amazon's aws tool is included in the Python package awscli, so one option is to add awscli as a Python dependency and just call it from Python. The code below assumes that kubectl is installed (but you can remove the test if you want).
kubeconfig depends on ~/.aws/credentials
One challenge here is that the kubeconfig file generated by aws has a users section like this:
users:
- name: arn:aws:eks:someregion:1234:cluster/somecluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - someregion
      - eks
      - get-token
      - --cluster-name
      - somecluster
      command: aws
So if you mount it into a container or move it to a different machine, you'll get this error when you try to use it:
Unable to locate credentials. You can configure credentials by running "aws configure".
Based on that user section, kubectl is running aws eks get-token and it's failing because the ~/.aws dir doesn't have the credentials that it had when the kubeconfig file was generated.
You could get around this by also staging the ~/.aws dir everywhere you want to use the kubeconfig file, but I have automation that takes a lone kubeconfig file as a parameter, so I'll be modifying the user section to include the necessary secrets as env vars.
Be aware that this makes it possible for whoever gets that kubeconfig file to use the secrets we've included for other things. Whether this is a problem will depend on how much power your aws user has.
Assume Role
If your cluster uses RBAC, you might need to specify which role you want for your kubeconfig file. The code below does this by first generating a separate set of creds and then using them to generate the kubeconfig file.
Role assumption has a timeout (I'm using 12 hours below), so you'll need to call the script again if you can't manage your mischief before the token times out.
The Code
You can generate the file like:
pip install awscli boto3 pyyaml sh
python mkkube.py > kubeconfig
...if you put the following in mkkube.py
from pathlib import Path
from tempfile import TemporaryDirectory
from time import time

import boto3
import yaml
from sh import aws, sh

aws_access_key_id = "AKREDACTEDAT"
aws_secret_access_key = "ubREDACTEDaE"
role_arn = "arn:aws:iam::1234:role/some-role"
cluster_name = "mycluster"
region_name = "someregion"

# assume a role that has access
sts = boto3.client(
    "sts",
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
)
assumed = sts.assume_role(
    RoleArn=role_arn,
    RoleSessionName="mysession-" + str(int(time())),
    DurationSeconds=(12 * 60 * 60),  # 12 hrs
)

# these will be different than the ones you started with
credentials = assumed["Credentials"]
access_key_id = credentials["AccessKeyId"]
secret_access_key = credentials["SecretAccessKey"]
session_token = credentials["SessionToken"]

# make sure our cluster actually exists
eks = boto3.client(
    "eks",
    aws_session_token=session_token,
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
    region_name=region_name,
)
clusters = eks.list_clusters()["clusters"]
if cluster_name not in clusters:
    raise RuntimeError(f"configured cluster: {cluster_name} not found among {clusters}")

with TemporaryDirectory() as kube:
    kubeconfig_path = Path(kube) / "config"

    # let awscli generate the kubeconfig
    result = aws(
        "eks",
        "update-kubeconfig",
        "--name",
        cluster_name,
        _env={
            "AWS_ACCESS_KEY_ID": access_key_id,
            "AWS_SECRET_ACCESS_KEY": secret_access_key,
            "AWS_SESSION_TOKEN": session_token,
            "AWS_DEFAULT_REGION": region_name,
            "KUBECONFIG": str(kubeconfig_path),
        },
    )

    # read the generated file
    with open(kubeconfig_path, "r") as f:
        kubeconfig_str = f.read()
    kubeconfig = yaml.load(kubeconfig_str, Loader=yaml.SafeLoader)

    # the generated kubeconfig assumes that upon use it will have access to
    # `~/.aws/credentials`, but maybe this filesystem is ephemeral,
    # so add the creds as env vars on the aws command in the kubeconfig
    # so that even if the kubeconfig is separated from ~/.aws it is still
    # useful
    users = kubeconfig["users"]
    for i in range(len(users)):
        kubeconfig["users"][i]["user"]["exec"]["env"] = [
            {"name": "AWS_ACCESS_KEY_ID", "value": access_key_id},
            {"name": "AWS_SECRET_ACCESS_KEY", "value": secret_access_key},
            {"name": "AWS_SESSION_TOKEN", "value": session_token},
        ]

    # write the updates to disk
    with open(kubeconfig_path, "w") as f:
        f.write(yaml.dump(kubeconfig))

    awsclipath = str(Path(sh("-c", "which aws").stdout.decode()).parent)
    kubectlpath = str(Path(sh("-c", "which kubectl").stdout.decode()).parent)
    pathval = f"{awsclipath}:{kubectlpath}"

    # test the modified file without a ~/.aws/ dir
    # this will throw an exception if we can't talk to the cluster
    sh(
        "-c",
        "kubectl cluster-info",
        _env={
            "KUBECONFIG": str(kubeconfig_path),
            "PATH": pathval,
            "HOME": "/no/such/path",
        },
    )

print(yaml.dump(kubeconfig))

Is it possible, and how, to limit a Kubernetes Job to create a maximum number of Pods if it always fails?

As a QA in our company I am a daily user of Kubernetes, and we use Kubernetes Jobs to create performance-test pods. One advantage of a Job, according to the docs, is
to create one Job object in order to reliably run one Pod to completion
But in our tests this feature will create an endless stream of pods if the previous ones fail, which occupies the resources of our team's shared cluster, and deleting such pods takes a lot of time.
Currently the job manifest is like this:
{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": {
    "name": "upgradeperf",
    "namespace": "ntg6-grpc26-tts"
  },
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "upgradeperfjob",
            "image": "mycompany.com:5000/ncs-cd-qa/upgradeperf:0.1.1",
            "command": [
              "python",
              "/jmeterwork/jmeter.py",
              "-gu",
              "git#gitlab-pri-eastus2.dev.mycompany.net:mobility-ncs-tools/tts-cdqa-tool.git",
              "-gb",
              "upgradeperf",
              "-t",
              "JMeter/testcases/ttssvc/JMeterTestPlan_ttssvc_cmpsize.jmx",
              "-JtestDataFile",
              "JMeter/testcases/ttssvc/testData/avaml_opus.csv",
              "-JthreadNum",
              "3",
              "-JthreadLoopCount",
              "1500",
              "-JresultsFile",
              "results_upgradeperf_cavaml_opus_t3_l1500.csv",
              "-Jhost",
              "mtl-blade32-03.mycompany.com",
              "-Jport",
              "28416"
            ]
          }
        ],
        "restartPolicy": "Never",
        "imagePullSecrets": [
          {
            "name": "docker-registry-secret"
          }
        ]
      }
    }
  }
}
In some cases, such as a misconfigured IP/port, 'reliably run one Pod to completion' is impossible, and recreating Pods is a waste of time and resources.
So is it possible, and how, to limit a Kubernetes Job to create a maximum number (say 3) of Pods if it always fails?
Depending on your Kubernetes version, you can resolve this problem with these methods:
Set the option restartPolicy: OnFailure; the failed container will then be restarted in the same Pod, so you will not get lots of failed Pods, but rather a Pod with lots of restarts.
From Kubernetes 1.8 on, there is a parameter backoffLimit to control the retry behavior of a failed Job. This parameter defines the number of retries before the Job is treated as failed, 6 by default. For this parameter to work you must set restartPolicy: Never (see the sketch below).
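A minimal sketch of the relevant part of the question's manifest with such a cap in place (the value 3 is just an example; metadata and the containers section stay as shown in the question):
{
  "spec": {
    "backoffLimit": 3,
    "template": {
      "spec": {
        "restartPolicy": "Never"
      }
    }
  }
}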
You probably didn't set restartPolicy: Never in your Pod spec; add that and I would expect it to match your expected behavior better.

k8s - Kubernetes - Service Update - Error

I'm trying to update a service using:
kubectl update service my-service \
--patch='{ "apiVersion":"v1", "spec": { "selector": { "build":"2"} } }'
I receive the following error:
Error from server: service "\"apiVersion\":\"v1\"," not found
I have tried the following :
moving the service name to the end
Removing the apiVersion
Maybe kubectl update is not available for services?
For now I have been making my updates by simply stopping and restarting my service. But sometimes the corresponding forwarding port changes, so it does not seem to be a good choice...
PS:
v0.19
api_v1
I am not sure if patch is 100% working yet, but if you are going to do this, you at least need to put apiVersion inside metadata, like so:
--patch='{ "metadata": { "apiVersion": "v1" }, "spec": { "selector": { "build": "2" } } }'
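On current clusters the same imperative change is usually done with kubectl patch (kubectl update was removed long ago); a sketch, not something the v0.19-era client from the question supports:
kubectl patch service my-service -p '{"spec": {"selector": {"build": "2"}}}'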