When using Amazon's Kubernetes offering, the EKS service, at some point you need to connect the Kubernetes API and configuration to the infrastructure established within AWS. In particular, we need a kubeconfig with the proper credentials and URLs to connect to the k8s control plane provided by EKS.
The Amazon command-line tool aws provides a routine for this task:
aws eks update-kubeconfig --kubeconfig /path/to/kubecfg.yaml --name <EKS-cluster-name>
Question: how do I do the same through Python/boto3?
When looking at the Boto API documentation, I seem unable to spot the equivalent of the above-mentioned aws routine. Maybe I am looking in the wrong place.
Is there a ready-made function in boto to achieve this?
Otherwise, how would you approach this directly within Python (other than calling out to aws in a subprocess)?
There isn't a ready-made function in boto3 to do this, but you can build the configuration file yourself like this:
import boto3
import yaml

# Inputs assumed by the original snippet (set these to your own values):
region = "eu-west-1"                   # AWS region of the cluster
cluster_name = "my-cluster"            # EKS cluster name
config_file = "/path/to/kubecfg.yaml"  # where to write the kubeconfig

# Set up the client
s = boto3.Session(region_name=region)
eks = s.client("eks")

# Get cluster details
cluster = eks.describe_cluster(name=cluster_name)
cluster_cert = cluster["cluster"]["certificateAuthority"]["data"]
cluster_ep = cluster["cluster"]["endpoint"]

# Build the cluster config dict
cluster_config = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [
        {
            "cluster": {
                "server": str(cluster_ep),
                "certificate-authority-data": str(cluster_cert),
            },
            "name": "kubernetes",
        }
    ],
    "contexts": [
        {
            "context": {
                "cluster": "kubernetes",
                "user": "aws",
            },
            "name": "aws",
        }
    ],
    "current-context": "aws",
    "preferences": {},
    "users": [
        {
            "name": "aws",
            "user": {
                "exec": {
                    "apiVersion": "client.authentication.k8s.io/v1alpha1",
                    "command": "heptio-authenticator-aws",
                    "args": ["token", "-i", cluster_name],
                }
            },
        }
    ],
}

# Write in YAML.
config_text = yaml.dump(cluster_config, default_flow_style=False)
with open(config_file, "w") as f:
    f.write(config_text)
This is explained in the Create kubeconfig manually section of https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html, which is in fact referenced from the boto3 EKS docs. The manual method there is very similar to jaxxstorm's answer, except that it doesn't show the Python code you would need; it also does not assume the Heptio authenticator (it shows the token and IAM authenticator approaches).
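Note that heptio-authenticator-aws has since been renamed to aws-iam-authenticator, and recent AWS CLI versions can produce the token themselves. As a minimal sketch, assuming AWS CLI v1.16.156 or later is on the PATH, the exec block above can be swapped for the same shape that aws eks update-kubeconfig generates (see the users section quoted later in this thread):

# Alternative exec block using `aws eks get-token` instead of the
# deprecated heptio-authenticator-aws (assumes AWS CLI >= 1.16.156).
cluster_config["users"][0]["user"]["exec"] = {
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "command": "aws",
    "args": ["--region", region, "eks", "get-token", "--cluster-name", cluster_name],
}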
I faced the same problem and decided to implement it as a Python package.
it can be installed via
pip install eks-token
and then simply do
from eks_token import get_token
response = get_token(cluster_name='<value>')
More details and examples here
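For the curious, here is a rough sketch of how such a token can be produced with plain boto3: EKS tokens are a presigned STS GetCallerIdentity URL carrying an x-k8s-aws-id header, base64-encoded with a "k8s-aws-v1." prefix. This illustrates the publicly documented scheme and is not necessarily the package's actual implementation:

import base64

import boto3
from botocore.signers import RequestSigner

def get_eks_token(cluster_name: str, region: str) -> str:
    """Build a bearer token for EKS by presigning STS GetCallerIdentity."""
    session = boto3.session.Session(region_name=region)
    sts = session.client("sts", region_name=region)
    signer = RequestSigner(
        sts.meta.service_model.service_id,
        region,
        "sts",
        "v4",
        session.get_credentials(),
        session.events,
    )
    params = {
        "method": "GET",
        "url": f"https://sts.{region}.amazonaws.com/"
               "?Action=GetCallerIdentity&Version=2011-06-15",
        "body": {},
        "headers": {"x-k8s-aws-id": cluster_name},  # ties the token to the cluster
        "context": {},
    }
    url = signer.generate_presigned_url(
        params, region_name=region, expires_in=60, operation_name=""
    )
    return "k8s-aws-v1." + base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")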
Amazon's aws tool is included in the Python package awscli, so one option is to add awscli as a Python dependency and call it from Python. The code below assumes that kubectl is installed (but you can remove the test if you want).
kubeconfig depends on ~/.aws/credentials
One challenge here is that the kubeconfig file generated by aws has a users section like this:
users:
- name: arn:aws:eks:someregion:1234:cluster/somecluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - someregion
      - eks
      - get-token
      - --cluster-name
      - somecluster
      command: aws
So if you mount it into a container or move it to a different machine, you'll get this error when you try to use it:
Unable to locate credentials. You can configure credentials by running "aws configure".
Based on that user section, kubectl is running aws eks get-token, and it fails because the ~/.aws dir no longer has the credentials that were present when the kubeconfig file was generated.
You could get around this by also staging the ~/.aws dir everywhere you want to use the kubeconfig file, but I have automation that takes a lone kubeconfig file as a parameter, so I'll be modifying the user section to include the necessary secrets as env vars.
Be aware that this makes it possible for whoever gets that kubeconfig file to use the secrets we've included for other things. Whether this is a problem will depend on how much power your aws user has.
Assume Role
If your cluster uses RBAC, you might need to specify which role you want for your kubeconfig file. The code below does this by first generating a separate set of creds and then using them to generate the kubeconfig file.
Role assumption has a timeout (I'm using 12 hours below), so you'll need to call the script again if you can't manage your mischief before the token times out.
The Code
You can generate the file like:
pip install awscli boto3 pyyaml sh
python mkkube.py > kubeconfig
...if you put the following in mkkube.py
from pathlib import Path
from tempfile import TemporaryDirectory
from time import time

import boto3
import yaml
from sh import aws, sh

aws_access_key_id = "AKREDACTEDAT"
aws_secret_access_key = "ubREDACTEDaE"
role_arn = "arn:aws:iam::1234:role/some-role"
cluster_name = "mycluster"
region_name = "someregion"

# assume a role that has access
sts = boto3.client(
    "sts",
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
)
assumed = sts.assume_role(
    RoleArn=role_arn,
    RoleSessionName="mysession-" + str(int(time())),
    DurationSeconds=(12 * 60 * 60),  # 12 hrs
)

# these will be different than the ones you started with
credentials = assumed["Credentials"]
access_key_id = credentials["AccessKeyId"]
secret_access_key = credentials["SecretAccessKey"]
session_token = credentials["SessionToken"]

# make sure our cluster actually exists
eks = boto3.client(
    "eks",
    aws_session_token=session_token,
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
    region_name=region_name,
)
clusters = eks.list_clusters()["clusters"]
if cluster_name not in clusters:
    raise RuntimeError(f"configured cluster: {cluster_name} not found among {clusters}")

with TemporaryDirectory() as kube:
    kubeconfig_path = Path(kube) / "config"

    # let awscli generate the kubeconfig
    aws(
        "eks",
        "update-kubeconfig",
        "--name",
        cluster_name,
        _env={
            "AWS_ACCESS_KEY_ID": access_key_id,
            "AWS_SECRET_ACCESS_KEY": secret_access_key,
            "AWS_SESSION_TOKEN": session_token,
            "AWS_DEFAULT_REGION": region_name,
            "KUBECONFIG": str(kubeconfig_path),
        },
    )

    # read the generated file
    with open(kubeconfig_path, "r") as f:
        kubeconfig = yaml.load(f.read(), Loader=yaml.SafeLoader)

    # the generated kubeconfig assumes that upon use it will have access to
    # `~/.aws/credentials`, but maybe this filesystem is ephemeral, so add
    # the creds as env vars on the aws command in the kubeconfig so that
    # even if the kubeconfig is separated from ~/.aws it is still useful
    for user in kubeconfig["users"]:
        user["user"]["exec"]["env"] = [
            {"name": "AWS_ACCESS_KEY_ID", "value": access_key_id},
            {"name": "AWS_SECRET_ACCESS_KEY", "value": secret_access_key},
            {"name": "AWS_SESSION_TOKEN", "value": session_token},
        ]

    # write the updates to disk
    with open(kubeconfig_path, "w") as f:
        f.write(yaml.dump(kubeconfig))

    awsclipath = str(Path(sh("-c", "which aws").stdout.decode()).parent)
    kubectlpath = str(Path(sh("-c", "which kubectl").stdout.decode()).parent)
    pathval = f"{awsclipath}:{kubectlpath}"

    # test the modified file without a ~/.aws/ dir
    # this will throw an exception if we can't talk to the cluster
    sh(
        "-c",
        "kubectl cluster-info",
        _env={
            "KUBECONFIG": str(kubeconfig_path),
            "PATH": pathval,
            "HOME": "/no/such/path",
        },
    )

print(yaml.dump(kubeconfig))
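As a follow-up (not part of the original script), the resulting file can also be verified from Python with the official kubernetes client instead of shelling out to kubectl:

# Hypothetical check using the kubernetes Python client (pip install kubernetes);
# "kubeconfig" is the file written by mkkube.py above.
from kubernetes import client, config

config.load_kube_config(config_file="kubeconfig")
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    print(node.metadata.name)  # any successful call proves the creds work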
Goal
I am trying to dynamically create state machines locally from generated Cloud Formation (CFN) templates. I need to be able to do so without deploying to an AWS account or creating the definition strings manually.
Question
How do I "build" a CFN template into a definition string that can be used locally?
Is it possible to achieve my original goal? If not, how are others successfully testing SFN locally?
Setup
I am using Cloud Development Kit (CDK) to write my state machine definitions and generating CFN json templates using cdk synth. I have followed the instructions from AWS here to create a local Docker container to host Step Functions (SFN). I am able to use the AWS CLI to create, run, etc. state machines successfully on my local SFN Docker instance. I am also hosting a DynamoDB Docker instance and using sam local start-lambda to host my lambdas. This all works as expected.
To make local testing easier, I have written a series of bash scripts to dynamically parse the CFN templates and create json input files by calling the AWS CLI. This works successfully when writing simple state machines with no references (no lambdas, resources from other stacks, etc.). The issue arises when I want to create and test a more complicated state machine. A state machine DefinitionString in my generated CFN templates looks something like:
{'Fn::Join': ['', ['{
  "StartAt": "Step1",
  "States": {
    {
      "StartAt": "Step1",
      "States": {
        "Step1": {
          "Next": "Step2",
          "Retry": [
            {
              "ErrorEquals": [
                "Lambda.ServiceException",
                "Lambda.AWSLambdaException",
                "Lambda.SdkClientException"
              ],
              "IntervalSeconds": 2,
              "MaxAttempts": 6,
              "BackoffRate": 2
            }
          ],
          "Type": "Task",
          "Resource": "arn:', {'Ref': 'AWS::Partition'}, ':states:::lambda:invoke",
          "Parameters": {
            "FunctionName": "', {'Fn::ImportValue': 'OtherStackE9E150CFArn77689D69'}, '",
            "Payload.$": "$"
          }
        },
        "Step2": {
          "Next": "Step3",
          "Retry": [
            {
              "ErrorEquals": [
                "Lambda.ServiceException",
                "Lambda.AWSLambdaException",
                "Lambda.SdkClientException"
              ],
              "IntervalSeconds": 2,
              "MaxAttempts": 6,
              "BackoffRate": 2
            }
          ],
          "Type": "Task",
          "Resource": "arn:', {'Ref': 'AWS::Partition'}, ':states:::lambda:invoke",
          "Parameters": {
            "FunctionName": "', {'Fn::ImportValue': 'OtherStackE9E150CFArn77689D69'}, '",
            "Payload.$": "$"
          }
        }
      }
    }
  ]
  },
  "TimeoutSeconds": 10800
}']]}
Problem
The AWS CLI does not accept raw JSON objects as a definition string, CFN intrinsic functions like 'Fn::Join' are not supported, and no references ({'Ref': 'AWS::Partition'}) are allowed in the definition string.
There is not going to be any magic here to get this done. The CDK renders CloudFormation and that CloudFormation is not truly ASL, as it contains references to other resources, as you pointed out.
One direction you could go would be to deploy the SFN to a sandbox stack, allow CFN to dereference all the values and produce the SFN ASL in the service, then re-extract that ASL for local testing.
It's hacky, but I don't know any other way to do it, unless you want to start writing parsers that turn all those JSON intrinsics (like Fn::Join) into static strings, as sketched below.
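A minimal sketch of that parser direction, for illustration only: the resolve function and the bindings mapping are hypothetical, and real templates use more intrinsics (Fn::Sub, Fn::GetAtt, ...) than are handled here.

from typing import Any, Dict

def resolve(node: Any, bindings: Dict[str, str]) -> Any:
    """Recursively replace a few CFN intrinsics with static strings."""
    if isinstance(node, dict):
        if "Fn::Join" in node:
            sep, parts = node["Fn::Join"]
            return sep.join(str(resolve(p, bindings)) for p in parts)
        if "Ref" in node:
            return bindings[node["Ref"]]
        if "Fn::ImportValue" in node:
            return bindings[node["Fn::ImportValue"]]
        return {k: resolve(v, bindings) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve(item, bindings) for item in node]
    return node

# Hypothetical usage against the DefinitionString excerpt above:
# bindings = {
#     "AWS::Partition": "aws",
#     "OtherStackE9E150CFArn77689D69": "arn:aws:lambda:...:function:Step1",
# }
# asl = resolve(definition_string_node, bindings)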
Create a one-liner (imperative) command in Kubernetes
kubectl run test --image=ubuntu:latest --limits="cpu=200m,memory=512Mi" --requests="cpu=200m,memory=512Mi" --privileged=false
I also need to set a securityContext in the one-liner; is that possible? Basically, I need to run the container via securityContext/runAsUser as a non-root account.
Yes, the declarative way works, but I'm looking for an imperative way.
Posting this answer as a community wiki to highlight the fact that the solution was posted in the comments (a link to another answer):
Hi, check this answer: stackoverflow.com/a/37621761/5747959 you can solve this with --overrides – CLNRMN 2 days ago
Feel free to edit/expand.
Citing $ kubectl run --help:
--overrides='': An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
Following up on the --overrides example, here is a version with additional fields included, tailored to this particular question (securityContext-wise):
kubectl run -it ubuntu --rm --overrides='
{
  "apiVersion": "v1",
  "spec": {
    "securityContext": {
      "runAsNonRoot": true,
      "runAsUser": 1000,
      "runAsGroup": 1000,
      "fsGroup": 1000
    },
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu",
        "stdin": true,
        "stdinOnce": true,
        "tty": true,
        "securityContext": {
          "allowPrivilegeEscalation": false
        }
      }
    ]
  }
}
' --image=ubuntu --restart=Never -- bash
With the override above, you will use a securityContext to constrain your workload.
Side notes!
The example above is specific to running a Pod that you will exec into (bash)
The --overrides will override the other specified parameters outside of it (for example: image)
Additional resources:
Kubernetes.io: Docs: Tasks: Configure pod container: Security context
Kubernetes.io: Docs: Concepts: Security: Pod security standards
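If you are scripting this rather than typing it interactively, here is a hedged sketch of building the same override in Python and shelling out to kubectl (assumes kubectl is on the PATH; it runs a one-shot non-interactive command instead of an interactive bash):

import json
import subprocess

override = {
    "apiVersion": "v1",
    "spec": {
        "securityContext": {"runAsNonRoot": True, "runAsUser": 1000,
                            "runAsGroup": 1000, "fsGroup": 1000},
        "containers": [{
            "name": "ubuntu",
            "image": "ubuntu",
            "command": ["id"],  # print uid/gid to confirm the non-root user
            "securityContext": {"allowPrivilegeEscalation": False},
        }],
    },
}
subprocess.run(
    ["kubectl", "run", "ubuntu", "--rm", "-i",
     "--overrides=" + json.dumps(override),
     "--image=ubuntu", "--restart=Never"],
    check=True,
)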
I'm trying to upload files into my IBM Cloud object store using the CLI. The command is the following:
:~ ibmcloud cos object-put --bucket Backup --body Downloads/DRIVING_MIVUE/Normal/F/FILE201217-151749F.MP4
FAILED
Mandatory Flag '--key' is missing
NAME:
ibmcloud cos object-put - Upload an object to a bucket.
USAGE:
ibmcloud cos object-put --bucket BUCKET_NAME --key KEY [--body FILE_PATH] [--cache-control CACHING_DIRECTIVES] [--content-disposition DIRECTIVES] [--content-encoding CONTENT_ENCODING] [--content-language LANGUAGE] [--content-length SIZE] [--content-md5 MD5] [--content-type MIME] [--metadata STRUCTURE] [--region REGION] [--output FORMAT] [--json]
OPTIONS:
--bucket BUCKET_NAME The name (BUCKET_NAME) of the bucket.
--key KEY The KEY of the object.
...
What does KEY mean here?
I tried to provide a string, like below, but I got an error.
ibmcloud cos object-put --bucket Backup --body Downloads/DRIVING_MIVUE/Normal/F/FILE201217-151749F.MP4 --key FILE201217-151749F
FAILED
The specified key does not exist.
The object key (or key name) uniquely identifies the object in a bucket. The following are examples of valid object key names:
4my-organization
my.great_photos-2014/jan/myvacation.jpg
videos/2014/birthday/video1.wmv
For example, when I run the below command
ibmcloud cos object-put --bucket vmac-code-engine-bucket --region us-geo --key test/package.json --body package.json
The file package.json on my machine will be uploaded to the test folder (directory) of the COS bucket vmac-code-engine-bucket.
Optionally, you can also pass a map of metadata to store:
{
  "file_name": "file_20xxxxxxxxxxxx45.zip",
  "label": "texas",
  "state": "Texas",
  "Date_to": "2019-11-09T16:00:00.000Z",
  "Sha256sum": "9e39dxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8ce6b68ede3a47",
  "Timestamp": "Thu, 17 Oct 2019 09:22:13 GMT"
}
For the other parameters, refer to the command documentation here.
For more information, refer to the documentation here.
Based on what I have observed:
Key should be the name of the object.
Body should be the file path of the object that needs to be uploaded.
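For completeness, the same upload can be done from Python with the ibm-cos-sdk package. A hedged sketch, not from the original answer; the endpoint URL and the credential placeholders are assumptions you must replace with your instance's values:

import ibm_boto3
from ibm_botocore.client import Config

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="<API_KEY>",                        # placeholder
    ibm_service_instance_id="<SERVICE_INSTANCE_CRN>",  # placeholder
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us.cloud-object-storage.appdomain.cloud",
)

# Key = object name in the bucket, Body = local file content
with open("package.json", "rb") as body:
    cos.put_object(Bucket="vmac-code-engine-bucket",
                   Key="test/package.json",
                   Body=body)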
I have installed the metrics server on Kubernetes, but it's not working, and it logs:
unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:xxx: unable to fetch metrics from Kubelet ... (X.X): Get https:....: x509: cannot validate certificate for 1x.x.
x509: certificate signed by unknown authority
I was able to get metrics after modifying the deployment yaml and adding:
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
This now collects metrics, and kubectl top node returns results...
but the logs still show:
E1120 11:58:45.624974 1 reststorage.go:144] unable to fetch pod metrics for pod dev/pod-6bffbb9769-6z6qz: no metrics known for pod
E1120 11:58:45.625289 1 reststorage.go:144] unable to fetch pod metrics for pod dev/pod-6bffbb9769-rzvfj: no metrics known for pod
E1120 12:00:06.462505 1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-1x.x.x.eu-west-1.compute.internal: unable to get CPU for container ...discarding data: missing cpu usage metric, unable to fully scrape metrics from source
So, my questions:
1) All this works on minikube, but not on my dev cluster. Why would that be?
2) In production I don't want to use insecure-tls, so can someone please explain why this issue arises, or point me to some resource?
Kubeadm generates the kubelet certificate at /var/lib/kubelet/pki, and those certificates (kubelet.crt and kubelet.key) are signed by a different CA from the one used to generate all the other certificates at /etc/kubernetes/pki.
You need to regenerate the kubelet certificates signed by your root CA (/etc/kubernetes/pki/ca.crt).
You can use openssl or cfssl to generate the new certificates (I am using cfssl):
$ mkdir certs; cd certs
$ cp /etc/kubernetes/pki/ca.crt ca.pem
$ cp /etc/kubernetes/pki/ca.key ca-key.pem
Create a file kubelet-csr.json:
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "<node_name>",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "US",
    "ST": "NY",
    "L": "City",
    "O": "Org",
    "OU": "Unit"
  }]
}
Create a ca-config.json file:
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
Now generate the new certificates using the above files:
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
--config=ca-config.json -profile=kubernetes \
kubelet-csr.json | cfssljson -bare kubelet
Replace the old certificates with the newly generated ones:
$ scp kubelet.pem <nodeip>:/var/lib/kubelet/pki/kubelet.crt
$ scp kubelet-key.pem <nodeip>:/var/lib/kubelet/pki/kubelet.key
Now restart the kubelet so that the new certificates take effect on your node.
$ systemctl restart kubelet
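If you would rather not install cfssl, roughly the same certificate can be issued from Python with the cryptography package. This is a hedged sketch of the same idea, not the answer's method; <node_name> stays a placeholder and the one-year validity mirrors the "8760h" expiry above:

import datetime
import ipaddress

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

# Load the kubeadm root CA (the same files copied to ca.pem/ca-key.pem above).
with open("/etc/kubernetes/pki/ca.crt", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("/etc/kubernetes/pki/ca.key", "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=None)

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "kubernetes")]))
    .issuer_name(ca_cert.subject)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(hours=8760))  # matches "8760h"
    .add_extension(x509.SubjectAlternativeName([
        x509.IPAddress(ipaddress.ip_address("127.0.0.1")),
        x509.DNSName("<node_name>"),  # placeholder, as in kubelet-csr.json
        x509.DNSName("kubernetes"),
        x509.DNSName("kubernetes.default"),
        x509.DNSName("kubernetes.default.svc"),
        x509.DNSName("kubernetes.default.svc.cluster"),
        x509.DNSName("kubernetes.default.svc.cluster.local"),
    ]), critical=False)
    .add_extension(x509.ExtendedKeyUsage(
        [ExtendedKeyUsageOID.SERVER_AUTH, ExtendedKeyUsageOID.CLIENT_AUTH]
    ), critical=False)
    .sign(ca_key, hashes.SHA256())
)

# Write out kubelet.crt/kubelet.key, then copy them to /var/lib/kubelet/pki/.
with open("kubelet.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("kubelet.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))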
Look at the following ticket to get more context on the issue:
https://github.com/kubernetes-incubator/metrics-server/issues/146
Hope this helps.
I currently have something like this in my clusterConfig.json file:
"ClientIdentities": [
{
"Identity": "{My Domain}\\{My Security Group}",
"IsAdmin": true
}
]
My questions are:
My cluster is stood up and running. Can I add a second security group to the cluster while it is running? I've searched through the PowerShell commands and didn't see one that matched this, but I may have missed it.
If I can't do this while the cluster is running, do I need to delete the cluster and recreate it? If I do need to recreate it, I'm zeroing in on the word ClientIdentities. I'm assuming I can have multiple identities and my config should look something like:
ClientIdentities": [{
"Identity": "{My Domain}\\{My Security Group}",
"IsAdmin": true
},
{
"Identity": "{My Domain}\\{My Second Security Group}",
"IsAdmin": false
}
]
Thanks,
Greg
Yes, it is possible to update ClientIdentities once the cluster is up, using a configuration upgrade:
Create a new JSON file with the added client identities.
Modify the clusterConfigurationVersion in the JSON config.
Run Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "Path to new JSON"
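As an illustration of steps 1 and 2, here is a hedged Python sketch for preparing the new JSON file; the field names mirror the question's excerpt (in a real clusterConfig.json, ClientIdentities may be nested under a security section), and the version string is made up:

import json

with open("clusterConfig.json") as f:
    cfg = json.load(f)

# Step 1: add the second security group (values from the question)
cfg["ClientIdentities"].append({
    "Identity": "{My Domain}\\{My Second Security Group}",
    "IsAdmin": False,
})

# Step 2: bump the version so Service Fabric treats this as a config upgrade
cfg["clusterConfigurationVersion"] = "2.0.0"  # made-up version string

with open("clusterConfigNew.json", "w") as f:
    json.dump(cfg, f, indent=2)

# Step 3 (PowerShell):
# Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "clusterConfigNew.json"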