How to access the kubernetes API from (non-master) nodes or a remote machine

I'm having trouble setting up access to a Kubernetes cluster from the outside. This is what I'm trying to achieve:
- Have the ability to access the kube cluster from the outside (from nodes that are not the "master", and even from any remote machine), to be able to do kube actions only on a specific namespace.
My logic was to do the following:
- Create a new namespace (let's call it testns)
- Create a service account (testns-account)
- Create a role which gives access for creating any type of kube resource inside the testns namespace
- Create a role binding which binds the service account with the role
- Generate a token from the service account
Now, my logic was that I need the token + API server URL to access the kube cluster with limited permissions, but that doesn't seem to be enough.
What would be the easiest way to achieve this? To start, I could use kubectl just to verify that the limited permissions on the namespace work, but eventually I would have some client-side code doing the access and creating kube resources with these limited permissions.
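For reference, a minimal sketch of those setup steps with kubectl (the wildcard Role is illustrative; scope it down to what the client actually needs):
kubectl create namespace testns
kubectl create serviceaccount testns-account -n testns
# Role granting all verbs on all resources, but only inside testns
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: testns-full-access
  namespace: testns
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
EOF
kubectl create rolebinding testns-account-binding -n testns \
  --role=testns-full-access --serviceaccount=testns:testns-account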

You need to generate a kubeconfig from the token. There are scripts to handle this. Here it is for posterity:
#!/usr/bin/env bash
# Copyright 2017, Z Lab Corporation. All rights reserved.
# Copyright 2017, Kubernetes scripts contributors
#
# For the full copyright and license information, please view the LICENSE
# file that was distributed with this source code.
set -e
if [[ $# == 0 ]]; then
echo "Usage: $0 SERVICEACCOUNT [kubectl options]" >&2
echo "" >&2
echo "This script creates a kubeconfig to access the apiserver with the specified serviceaccount and outputs it to stdout." >&2
exit 1
fi
function _kubectl() {
kubectl "$@" $kubectl_options
}
serviceaccount="$1"
kubectl_options="${@:2}"
if ! secret="$(_kubectl get serviceaccount "$serviceaccount" -o 'jsonpath={.secrets[0].name}' 2>/dev/null)"; then
echo "serviceaccounts \"$serviceaccount\" not found." >&2
exit 2
fi
if [[ -z "$secret" ]]; then
echo "serviceaccounts \"$serviceaccount\" doesn't have a serviceaccount token." >&2
exit 2
fi
# context
context="$(_kubectl config current-context)"
# cluster
cluster="$(_kubectl config view -o "jsonpath={.contexts[?(@.name==\"$context\")].context.cluster}")"
server="$(_kubectl config view -o "jsonpath={.clusters[?(@.name==\"$cluster\")].cluster.server}")"
# token
ca_crt_data="$(_kubectl get secret "$secret" -o "jsonpath={.data.ca\.crt}" | openssl enc -d -base64 -A)"
namespace="$(_kubectl get secret "$secret" -o "jsonpath={.data.namespace}" | openssl enc -d -base64 -A)"
token="$(_kubectl get secret "$secret" -o "jsonpath={.data.token}" | openssl enc -d -base64 -A)"
export KUBECONFIG="$(mktemp)"
kubectl config set-credentials "$serviceaccount" --token="$token" >/dev/null
ca_crt="$(mktemp)"; echo "$ca_crt_data" > $ca_crt
kubectl config set-cluster "$cluster" --server="$server" --certificate-authority="$ca_crt" --embed-certs >/dev/null
kubectl config set-context "$context" --cluster="$cluster" --namespace="$namespace" --user="$serviceaccount" >/dev/null
kubectl config use-context "$context" >/dev/null
cat "$KUBECONFIG"
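Assuming you saved the script as create-kubeconfig.sh, usage looks something like this (file names are illustrative):
./create-kubeconfig.sh testns-account -n testns > testns-kubeconfig
kubectl --kubeconfig=testns-kubeconfig get pods -n testns
Note that since Kubernetes 1.24, token Secrets are no longer auto-created for service accounts, so the secret lookup above may come back empty; on newer clusters you can mint a token directly with kubectl create token testns-account -n testns.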

Related

How to retrieve the pod/container in which run a given process

Using crictl and containerd, is there an easy way to find which pod/container a given process belongs to, using its PID on the host machine?
For example, how can I retrieve the name of the pod which runs the process below (1747):
root@k8s-worker-node:/# ps -ef | grep mysql
1000 1747 1723 0 08:58 ? 00:00:01 mysqld
Assuming that you're looking at the primary process in a pod, you could do something like this:
crictl ps -q | while read -r cid; do
if crictl inspect -o go-template --template '{{ .info.pid }}' "$cid" | grep -qx "$target_pid"; then
echo "$cid"
fi
done
This walks through all the crictl-managed containers and checks each container's primary PID against the value of $target_pid (which you have set beforehand to the host PID you are interested in).
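Once you have the matching container ID, you can map it to its pod through the standard CRI pod-name label (the same trick the script in option 3 below relies on):
# print the pod name of the matched container
crictl inspect -o go-template \
--template '{{ index .info.config.labels "io.kubernetes.pod.name" }}' "$cid"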
1. Using pid2pod
pid2pod is a dedicated tool: https://github.com/k8s-school/pid2pod
Example:
# Install
$ curl -Lo ./pid2pod https://github.com/k8s-school/pid2pod/releases/download/v0.0.1/pid2pod-linux-amd64
$ chmod +x ./pid2pod
$ mv ./pid2pod /some-dir-in-your-PATH/pid2pod
# Run
$ ./pid2pod 1525
NAMESPACE POD CONTAINER PRIMARY PID
kube-system calico-node-6kt29 calico-node 1284
2. Using sysdig OSS
Install sysdig and run:
sudo csysdig -pc
You'll get an htop-style interactive view.
3. Custom script
Using @larsks' answer, I propose a solution based on the command below, which provides the pod name for a given PID:
$ pid=1254
$ nsenter -t $pid -u hostname
coredns-558bd4d5db-rxz9r
This solution displays the namespace, pod, container, and the container's primary PID. You can copy/paste the script below into a file named get_pid.sh and then run, for example, ./get_pid.sh 2345.
#!/bin/bash
# Display pod information about a process, using its host PID as input
set -euo pipefail
usage() {
cat << EOD
Usage: `basename $0` PID
Available options:
-h this message
Display pod information about a process, using its host PID as input:
- display namespace, pod, container, and primary process pid for this container if the process is running in a pod
- else exit with code 1
EOD
}
if [ $# -ne 1 ] ; then
usage
exit 2
fi
pid=$1
is_running_in_pod=false
# with set -e, test the command directly rather than checking $? afterwards
if ! pod=$(nsenter -t "$pid" -u hostname 2>&1)
then
printf "%s %s:\n %s\n" "nsenter command failed for pid" "$pid" "$pod"
exit 1
fi
cids=$(crictl ps -q)
for cid in $cids
do
current_pod=$(crictl inspect -o go-template --template '{{ index .info.config.labels "io.kubernetes.pod.name"}}' "$cid")
if [ "$pod" == "$current_pod" ]
then
tmpl='NS:{{ index .info.config.labels "io.kubernetes.pod.namespace"}} POD:{{ index .info.config.labels "io.kubernetes.pod.name"}} CONTAINER:{{ index .info.config.labels "io.kubernetes.container.name"}} PRIMARY PID:{{.info.pid}}'
crictl inspect --output go-template --template "$tmpl" "$cid"
is_running_in_pod=true
break
fi
done
if [ "$is_running_in_pod" = false ]
then
echo "Process $pid is not running in a pod."
exit 1
fi
WARNING: this solution does not work if two pods have the same name (even in different namespaces)

How to copy one Kubernetes secret's values to another secret within the same namespace

I am using Kubernetes and its resources like secrets. During deployment, one secret was created (say test-secret) with some values inside it.
Now I need to rename this secret (to dev-secret) within the same namespace.
How can I rename the secret, or copy test-secret's values to dev-secret?
Please let me know the correct approach for this.
There is no specific way to do this; the Kubernetes API does not have "rename" as an operation. In this particular case you would kubectl get secret test-secret -o yaml, clean up the metadata: fields that no longer apply, edit the name, and kubectl apply it again.
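A minimal sketch of that copy-and-rename flow, using the names from the question:
kubectl get secret test-secret -o yaml > dev-secret.yaml
# edit dev-secret.yaml: set metadata.name to dev-secret and delete
# metadata.uid, metadata.resourceVersion and metadata.creationTimestamp
kubectl apply -f dev-secret.yaml
# once dev-secret is verified, optionally remove the old one
kubectl delete secret test-secret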
Extending @coderanger's answer:
If you still have secret config yaml file you can do
kubectl delete -f </path/to/secret-config-yaml>
change the metadata.name field and issue
kubectl apply -f </path/to/secret-config-yaml>
I needed to do something similar: rename K8s secrets.
I searched everywhere, but could not find a good way to do it.
So I wrote a bash script for copying secrets into new secrets with a new name.
In my case, I also wanted to do this in batch, as I had many secrets with the same prefix that I needed to change.
I don't work with bash all the time, so there might be better ways... but it did the trick for me.
I hope it helps!
#!/bin/bash
# Copies K8s secrets with names containing the NAME_PART into new
# secrets where the NAME_PART was replaced with NEW_NAME_PART.
# i.e. if NAME_PART is "test-abc" and NEW_NAME_PART is "test-xyz", a secret named test-abc-123
# will be copied into a new secret named test-xyz-123
#
# Pre-requisites:
# - have kubectl installed and pointing to the cluster you want to alter
#
# NOTE: tested with kubectl v1.18.0 and K8s v1.21.5-eks-bc4871b
# configure the NAME_PARTs here
NAME_PART=test-abc
NEW_NAME_PART=test-xyz
WORK_DIR=work_secret_copy
mkdir -p $WORK_DIR
echo "Getting secrets from K8s..."
allSecrets=`kubectl get secrets | tail -n +2 | cut -d " " -f1`
matchingSecrets=`echo $allSecrets | tr ' ' '\n' | grep $NAME_PART`
#printf "All secrets:\n $allSecrets \n"
#printf "Secrets:\n $secrets \n"
for secret in $matchingSecrets; do
newSecret=${secret/$NAME_PART/$NEW_NAME_PART}
echo "Copying secret $secret to $newSecret"
# skip this secret if one with the new name already exists
if [[ $(echo $allSecrets | tr ' ' '\n' | grep -e "^$newSecret\$") ]]; then
echo "Secret $newSecret already exists, skipping..."
continue
fi
kubectl get secret $secret -o yaml \
| grep -v uid: \
| grep -v time: \
| grep -v creationTimestamp: \
| sed "s/$secret/$newSecret/g" \
> $WORK_DIR/$newSecret.yml
kubectl apply -f $WORK_DIR/$newSecret.yml
done
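A quick way to verify the result afterwards (names follow the example NAME_PARTs above):
# the copies should appear under the new prefix...
kubectl get secrets | grep test-xyz
# ...and carry the same data as the originals
diff <(kubectl get secret test-abc-123 -o jsonpath='{.data}') \
<(kubectl get secret test-xyz-123 -o jsonpath='{.data}')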

MongoDB Atlas: is it safe to whitelist all IPs, given that someone attempting to access the database still needs a password?

I have a Google App Engine app running my Express server. I also have my DB in MongoDB Atlas. I currently have MongoDB Atlas whitelisting all IPs. The connection string is in the code for my Express server running on Google Cloud. Presumably any attacker trying to get into the database would still need a username and password for the connection string.
Is it safe to do this?
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
Is it safe to do this?
"Safe" is a relative term. It is safer than having an unauthed database open to the internet, but the weakest link is now your password.
A whitelist is an additional layer of security, so that if someone knows or can guess your password, they can't just connect from anywhere: they must connect from a set of known IP addresses. This makes the attack surface smaller, so the database is less likely to be broken into by a random person on the internet.
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
You would need to determine the IP ranges of your application and plug that range into the whitelist.
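Google publishes its cloud IP ranges in a machine-readable feed, which can be a starting point; note that App Engine does not give you a small static egress range by default, so the resulting list is broad (the feed URL here is an assumption on my part, verify it before relying on it):
# list Google Cloud IPv4 prefixes from the published ranges feed
curl -s https://www.gstatic.com/ipranges/cloud.json \
| jq -r '.prefixes[].ipv4Prefix // empty'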
Here is an answer I left elsewhere; hope it helps someone who comes across this.
This script will be kept up to date on my gist.
why
mongo atlas provides reasonably priced access to a managed mongo DB. CSPs where containers are hosted charge too much for their managed mongo DBs. they all suggest setting an insecure CIDR (0.0.0.0/0) to allow the container to access the cluster. this is obviously ridiculous.
this entrypoint script is surgical to maintain least privileged access. only the current hosted IP address of the service is whitelisted.
usage
set as the entrypoint for the Dockerfile
run in cloud init / VM startup if not using a container (and delete the last line exec "$@", since that is just for containers)
behavior
uses the mongo atlas project IP access list endpoints
will detect the hosted IP address of the container and whitelist it with the cluster using the mongo atlas API
if the service has no whitelist entry it is created
if the service has an existing whitelist entry that matches current IP no change
if the service IP has changed the old entry is deleted and new one is created
when a whitelist entry is created the service sleeps for 60s to wait for atlas to propagate access to the cluster
env
setup
create API key for org
add API key to project
copy the public key (MONGO_ATLAS_API_PK) and secret key (MONGO_ATLAS_API_SK)
go to project settings page and copy the project ID (MONGO_ATLAS_API_PROJECT_ID)
provide the following values in the env of the container service
SERVICE_NAME: unique name used for creating / updating (deleting old) whitelist entry
MONGO_ATLAS_API_PK: step 3
MONGO_ATLAS_API_SK: step 3
MONGO_ATLAS_API_PROJECT_ID: step 4
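for example, the env wiring might look like this (all values are placeholders):
export SERVICE_NAME='my-service'
export MONGO_ATLAS_API_PK='public-key-from-step-3'
export MONGO_ATLAS_API_SK='secret-key-from-step-3'
export MONGO_ATLAS_API_PROJECT_ID='project-id-from-step-4'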
deps
bash
curl
jq CLI JSON parser
# alpine / apk
apk update \
&& apk add --no-cache \
bash \
curl \
jq
# ubuntu / apt
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install \
bash \
curl \
jq
script
#!/usr/bin/env bash
# -- ENV -- #
# these must be available to the container service at runtime
#
# SERVICE_NAME
#
# MONGO_ATLAS_API_PK
# MONGO_ATLAS_API_SK
# MONGO_ATLAS_API_PROJECT_ID
#
# -- ENV -- #
set -e
mongo_api_base_url='https://cloud.mongodb.com/api/atlas/v1.0'
check_for_deps() {
deps=(
bash
curl
jq
)
for dep in "${deps[@]}"; do
if [ ! "$(command -v $dep)" ]
then
echo "dependency [$dep] not found. exiting"
exit 1
fi
done
}
make_mongo_api_request() {
local request_method="$1"
local request_url="$2"
local data="$3"
curl -s \
--user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--request "$request_method" "$request_url" \
--data "$data"
}
get_access_list_endpoint() {
echo -n "$mongo_api_base_url/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList"
}
get_service_ip() {
echo -n "$(curl https://ipinfo.io/ip -s)"
}
get_previous_service_ip() {
local access_list_endpoint=`get_access_list_endpoint`
local previous_ip=`make_mongo_api_request 'GET' "$access_list_endpoint" \
| jq --arg SERVICE_NAME "$SERVICE_NAME" -r \
'.results[]? as $results | $results.comment | if test("\\[\($SERVICE_NAME)\\]") then $results.ipAddress else empty end'`
echo "$previous_ip"
}
whitelist_service_ip() {
local current_service_ip="$1"
local comment="Hosted IP of [$SERVICE_NAME] [set@$(date +%s)]"
if (( "${#comment}" > 80 )); then
echo "comment field value will be above 80 char limit: \"$comment\""
echo "comment would be too long due to length of service name [$SERVICE_NAME] [${#SERVICE_NAME}]"
echo "change comment format or service name then retry. exiting to avoid mongo API failure"
exit 1
fi
echo "whitelisting service IP [$current_service_ip] with comment value: \"$comment\""
response=`make_mongo_api_request \
'POST' \
"$(get_access_list_endpoint)?pretty=true" \
"[
{
\"comment\" : \"$comment\",
\"ipAddress\": \"$current_service_ip\"
}
]" \
| jq -r 'if .error then . else empty end'`
if [[ -n "$response" ]];
then
echo 'API error whitelisting service'
echo "$response"
exit 1
else
echo "whitelist request successful"
echo "waiting 60s for whitelist to propagate to cluster"
sleep 60s
fi
}
delete_previous_service_ip() {
local previous_service_ip="$1"
echo "deleting previous service IP address of [$SERVICE_NAME]"
make_mongo_api_request \
'DELETE' \
"$(get_access_list_endpoint)/$previous_service_ip"
}
set_mongo_whitelist_for_service_ip() {
local current_service_ip=`get_service_ip`
local previous_service_ip=`get_previous_service_ip`
if [[ -z "$previous_service_ip" ]]; then
echo "service [$SERVICE_NAME] has not yet been whitelisted"
whitelist_service_ip "$current_service_ip"
elif [[ "$current_service_ip" == "$previous_service_ip" ]]; then
echo "service [$SERVICE_NAME] IP has not changed"
else
echo "service [$SERVICE_NAME] IP has changed from [$previous_service_ip] to [$current_service_ip]"
delete_previous_service_ip "$previous_service_ip"
whitelist_service_ip "$current_service_ip"
fi
}
check_for_deps
set_mongo_whitelist_for_service_ip
# run CMD
exec "$@"
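with real values in the env, you can smoke-test the script outside a container by passing a trivial command in place of the real CMD (the script name is illustrative):
SERVICE_NAME='my-service' \
MONGO_ATLAS_API_PK='pk' MONGO_ATLAS_API_SK='sk' \
MONGO_ATLAS_API_PROJECT_ID='project-id' \
./entrypoint.sh echo 'whitelist updated'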

Running a command on all kubernetes pods of a service

Hey I'm running a kubernetes cluster and I want to run a command on all pods that belong to a specific service.
As far as I know kubectl exec can only run on a pod and tracking all my pods is a ridiculous amount of work (which is one of the benefits of services).
Is there any way or tool that gives you the ability to "broadcast" to all pods in a service?
Thanks in advance for your help!
Here's a simple example piping kubectl to xargs, printing the env of each pod:
kubectl get pod \
-l {your label selectors} \
--field-selector=status.phase=Running \
-o custom-columns=name:metadata.name --no-headers \
| xargs -I{} kubectl exec {} -- env
As Bal Chua wrote, kubectl has no way to do this, but you can use a bash script:
#!/usr/bin/env bash
PROGNAME=$(basename $0)
function usage {
echo "usage: $PROGNAME [-n NAMESPACE] [-m MAX-PODS] -s SERVICE -- COMMAND"
echo " -s SERVICE K8s service, i.e. a pod selector (required)"
echo " COMMAND Command to execute on the pods"
echo " -n NAMESPACE K8s namespace (optional)"
echo " -m MAX-PODS Max number of pods to run on (optional; default=all)"
echo " -q Quiet mode"
echo " -d Dry run (don't actually exec)"
}
function header {
if [ -z $QUIET ]; then
>&2 echo "###"
>&2 echo "### $PROGNAME $*"
>&2 echo "###"
fi
}
while getopts :n:s:m:qd opt; do
case $opt in
d)
DRYRUN=true
;;
q)
QUIET=true
;;
m)
MAX_PODS=$OPTARG
;;
n)
NAMESPACE="-n $OPTARG"
;;
s)
SERVICE=$OPTARG
;;
\?)
usage
exit 0
;;
esac
done
if [ -z $SERVICE ]; then
usage
exit 1
fi
shift $(expr $OPTIND - 1)
while test "$#" -gt 0; do
if [ "$REST" == "" ]; then
REST="$1"
else
REST="$REST $1"
fi
shift
done
if [ "$REST" == "" ]; then
usage
exit 1
fi
PODS=()
for pod in $(kubectl $NAMESPACE get pods --output=jsonpath={.items..metadata.name}); do
echo $pod | grep -qe "^$SERVICE" >/dev/null 2>&1
if [ $? -eq 0 ]; then
PODS+=($pod)
fi
done
if [ ${#PODS[@]} -eq 0 ]; then
echo "service not found in ${NAMESPACE:-default}: $SERVICE"
exit 1
fi
if [ ! -z $MAX_PODS ]; then
PODS=("${PODS[@]:0:$MAX_PODS}")
fi
header "{pods: ${#PODS[@]}, command: \"$REST\"}"
for i in "${!PODS[@]}"; do
pod=${PODS[$i]}
header "{pod: \"$(($i + 1))/${#PODS[@]}\", name: \"$pod\"}"
if [ "$DRYRUN" != "true" ]; then
kubectl $NAMESPACE exec $pod -- $REST
fi
done
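Assuming you save the script above as kexec-service.sh (the name is illustrative), usage looks like:
# dry-run against up to 3 pods of service my-api in namespace prod
./kexec-service.sh -n prod -m 3 -d -s my-api -- echo hello
# actually run the command on all matching pods
./kexec-service.sh -n prod -s my-api -- uptime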
Here's a one-liner that runs a command on every pod matching a label and appends the output to a local file:
kubectl -n alex get pods -l app=alex-admin-api -o name | xargs -I{} kubectl -n alex exec {} -- cat alexAdminApi.log >> alex-admin-api_pods.logs
I have written a simple kubectl plugin that "broadcasts" commands to all pods, using tmux. Assuming that all the pods in your service share the same labels in their spec, app=foobar for instance, you can use the command below:
kubectl tmux-exec -l app=foobar bash
The plugin is available on Github: predatorray/kubectl-tmux-exec. Hope it will help you!
There is also a kubectl plugin called kubectl-exec-all (https://github.com/jpdasma/kubectl-exec-all) that does a good job.
It can't be installed via the krew index, but you can save the content of https://raw.githubusercontent.com/jpdasma/kubectl-exec-all/master/unix/exec-all.sh to /usr/local/bin/kubectl-exec_all, and don't forget to make it executable by running
chmod +x /usr/local/bin/kubectl-exec_all
Then it works like this:
kubectl exec-all daemonset docker-daemon -- docker system prune -a -f

How to configure kubectl with cluster information from a .conf file?

I have an admin.conf file containing info about a cluster, so that the following command works fine:
kubectl --kubeconfig ./admin.conf get nodes
How can I configure kubectl to use the cluster, user, and authentication from this file as the default in one command? I only see separate set-cluster, set-credentials, set-context, use-context, etc. I want to get the same output when I simply run:
kubectl get nodes
Here is the official documentation for how to configure kubectl:
http://kubernetes.io/docs/user-guide/kubeconfig-file/
You have a few options. Specific to this question, you can just copy your admin.conf to ~/.kube/config.
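For example, backing up any existing config first:
cp ~/.kube/config ~/.kube/config.backup 2>/dev/null || true
cp ./admin.conf ~/.kube/config
kubectl get nodes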
The best way I've found was to use an environment variable:
export KUBECONFIG=/path/to/admin.conf
I just alias the kubectl command into separate ones for my dev and production environments via .bashrc
alias k8='kubectl'
alias k8prd='kubectl --kubeconfig ~/.kube/config_prd.conf'
I prefer this method as it requires me to define the environment for each command, whereas using an environment variable could potentially lead to running a command against the wrong environment.
The previous answers are solid and informative, but I will try to add my 2 cents here.
Configure kubeconfig file knowing its precedence
If you’re using kubectl, here’s the preference that takes effect while determining which kubeconfig file is used.
use --kubeconfig flag, if specified
use KUBECONFIG environment variable, if specified
use $HOME/.kube/config file
With this, you can easily override kubeconfig file you use per the kubectl command:
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
NOTE: The --flatten flag keeps the credentials unredacted in the merged output; the --minify flag can additionally be used to extract only the info for a single context.
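For instance, to pull a single context out of your merged config as a standalone kubeconfig:
# extract only cluster-1's context, with credentials embedded
kubectl config view --minify --flatten --context=cluster-1 > cluster-1.config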
For your example
kubectl get pods --kubeconfig=/path/to/admin.conf
#
# or:
#
KUBECONFIG=/path/to/admin.conf kubectl get pods
#
# or:
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:/path/to/admin.conf kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Although this precedence list is not officially specified in the documentation, it is codified here. If you're developing client tools for Kubernetes, you should consider using the cli-runtime library, which will bring the standard --kubeconfig flag and $KUBECONFIG detection to your program.
ref article: https://ahmet.im/blog/mastering-kubeconfig/
I name all cluster configs as .kubeconfig and this lives in project directory.
Then in .bashrc or .bash_profile I have the following export:
export KUBECONFIG=.kubeconfig:$HOME/.kube/config
This way when I'm in the project directory kubectl will load local .kubeconfig.
Hope that helps
kubectl uses ~/.kube/config as the default configuration file. So you could just copy your admin.conf over it.
Because there is no built-in kubectl config merge command at the moment (follow this), you can add this function to your .bashrc (or .zshrc):
function kmerge() {
if [ $# -eq 0 ]
then
echo "Please pass the location of the kubeconfig you wish to merge"
return 1
fi
KUBECONFIG=~/.kube/config:$1 kubectl config view --flatten > ~/.kube/mergedkub && mv ~/.kube/mergedkub ~/.kube/config
}
Then you can just run from the terminal:
kmerge /path/to/admin.conf
and the config file will be merged to ~/.kube/config.
You can now switch to the new context with:
kubectl config use-context <new-context-name>
Or if you're using kubectx (recommended) you can run: kubectx <new-context-name>.
(The kmerge function is based on @MichaelSp's answer at this post).
kubectl keeps the list of paths to search for config files in $KUBECONFIG.
If you want to add one more config path on top of the existing KUBECONFIG without overriding it (and keep ~/.kube/config as the default path to search), just run the following each time you want to add a conf file to the KUBECONFIG path:
export KUBECONFIG=${KUBECONFIG:-~/.kube/config}:/path/to/admin.conf
You can check it worked by listing the available contexts
kubectl config get-contexts
Then select the one you want to use
kubectl config use-context <context-name>
Manage your config files properly: place the snippet below in your profile file, then source your .profile / .bash_profile.
for kconfig in $HOME/.kube/config $(find $HOME/.kube/ -iname "*.config")
do
if [ -f "$kconfig" ];then
export KUBECONFIG=$KUBECONFIG:$kconfig
fi
done
Then switch contexts from kubectl.
When you type kubectl, I guess you prefer to know which cluster you are pointing at. Maybe it's worth creating an alias for that?
alias kube-mycluster='kubectl --kubeconfig ~/.kube/mycluster.conf'
This is possible:
export KUBECONFIG=~/.kube/config:~/.kube/cluster0:~/.kube/cluster1:~/.kube/cluster3
and:
kubectl config use-context cluster0