Makefile target to add k8s cluster config - kubernetes

I want one command that, given args, configures the kubeconfig so that it is able to connect to a k8s cluster.
I tried the following which does not work.
cfg:
	mkdir ~/.kube

kube: cfg
	touch config $(ARGS)
In the args, the user should pass the cluster's config file content (the kubeconfig).
If there is a shorter way please let me know.
Update
I've used the following (from the answer), which partially solves the issue.
kube: cfg
	case "$(ARGS)" in \
	("") printf "Please provide ARGS=/some/path"; exit 1;; \
	(*) cp "$(ARGS)" /some/where/else;; \
	esac
The problem is that cfg creates the directory even when the user doesn't provide the args, so on the second run, when the path is provided, the directory already exists and you get an error. Is there a way to avoid this? Something like: if the arg is not provided, don't run cfg.

I assume the user input is the pathname of a file. The make utility can take variable assignments as arguments, in the form of make NAME=VALUE. You refer to these in your Makefile as usual, with $(NAME). So something like
kube: cfg
	case "$(ARGS)" in \
	("") printf "Please provide ARGS=/some/path"; exit 1;; \
	(*) cp "$(ARGS)" /some/where/else;; \
	esac
called with
make ARGS=/some/path/file kube
would then execute cp /some/path/file /some/where/else. If that is not what you were asking, please rephrase the question, providing exact details of what you want to do.
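As for the update's mkdir error: a minimal sketch of one way around it, assuming the intent is to copy the file to ~/.kube/config. mkdir -p never fails on an existing directory, so cfg becomes safe to run unconditionally:

cfg:
	mkdir -p ~/.kube

kube: cfg
	case "$(ARGS)" in \
	("") printf "Please provide ARGS=/some/path\n"; exit 1;; \
	(*) cp "$(ARGS)" ~/.kube/config;; \
	esac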

Related

What parameter(s) do I have to pass `gsutil` to access a Google Cloud local storage? (storage-testbench)

For test purposes, I want to run the storage-testbench simulator. It allows me to send REST commands to a local server which is supposed to work like a Google Cloud Storage facility.
In my tests, I want to copy 3 files from my local hard drive to that local GCS-like storage facility using gsutil cp .... I found out that in order to connect to that specific server, I need additional options on the command line as follows:
gsutil \
  -o "Credentials:gs_json_host=127.0.0.1" \
  -o "Credentials:gs_json_port=9000" \
  -o "Boto:https_validate_certificates=False" \
  cp -p test my-file.ext gs://bucket-name/my-file.ext
See .boto for details on defining the credentials.
Unfortunately, I get this error:
CommandException: No URLs matched: test
The name at the end (test) is the project identifier (-p test). There is an example in the README.md of the storage-testbench project, although it's just a variable in a URI.
How do I make the cp command work?
Note:
The gunicorn process shows that the first GET from the cp command works as expected. It returns a 200. So the issue seems to be inside gsutil. Also, I'm able to create the bucket just fine:
gsutil \
  -o "Credentials:gs_json_host=127.0.0.1" \
  -o "Credentials:gs_json_port=9000" \
  -o "Boto:https_validate_certificates=False" \
  mb -p test gs://bucket-name
Trying the mb a second time gives me a 409 as expected.
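An aside on a likely cause, based on gsutil's documented flags rather than anything testbench-specific: for gsutil cp, -p means "preserve ACLs" and is not a project option (only mb takes -p PROJECT_ID), so test is parsed as an extra source URL, which matches the "No URLs matched: test" error. The first thing to try would be dropping -p test:

gsutil \
  -o "Credentials:gs_json_host=127.0.0.1" \
  -o "Credentials:gs_json_port=9000" \
  -o "Boto:https_validate_certificates=False" \
  cp my-file.ext gs://bucket-name/my-file.ext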
More links:
gsutil global options
gsutil cp ...

Pass Mongodb Atlas Operator env vars from travis to kubernetes deploy.sh

I am trying to adapt the quickstart guide for the Mongo Atlas Operator (here: Atlas Operator Quickstart) to use secure env variables set in TravisCI.
I want to put the quickstart scripts into my deploy.sh, which is triggered from my travis.yaml file.
My travis.yaml already sets one global variable like this:
env:
  global:
    - SHA=$(git rev-parse HEAD)
Which is consumed by the deploy.sh file like this:
docker build -t mydocker/k8s-client:latest -t mydocker/k8s-client:$SHA -f ./client/Dockerfile ./client
but I'm not sure how to pass variables set under Environment Variables in the Travis settings through to deploy.sh.
This is the section of script I want to pass variables to:
kubectl create secret generic mongodb-atlas-operator-api-key \
--from-literal="orgId=$MY_ORG_ID" \
--from-literal="publicApiKey=$MY_PUBLIC_API_KEY" \
--from-literal="privateApiKey=$MY_PRIVATE_API_KEY" \
-n mongodb-atlas-system
I'm assuming the --from-literal syntax will just put in the literal string "orgId=$MY_ORG_ID", for example, and that I need to use pipe syntax instead. But can I do something along the lines of this?
echo "$MY_ORG_ID" | kubectl create secret generic mongodb-atlas-operator-api-key --orgId-stdin
Or do I need to put something in my travis.yaml before_install script?
Looks like the echo approach is fine; I've found a similar use case to yours, have a look here.
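For reference, a minimal sketch of the relevant part of deploy.sh under that assumption: variables defined under Environment Variables in the Travis settings are exported into the build environment, so the shell expands the double-quoted $MY_ORG_ID before kubectl ever sees it, and --from-literal receives the already-expanded value rather than the literal string.

#!/bin/bash
# deploy.sh -- MY_ORG_ID, MY_PUBLIC_API_KEY and MY_PRIVATE_API_KEY are
# assumed to be defined in the Travis repository settings; they arrive
# here as ordinary exported environment variables.
kubectl create secret generic mongodb-atlas-operator-api-key \
  --from-literal="orgId=$MY_ORG_ID" \
  --from-literal="publicApiKey=$MY_PUBLIC_API_KEY" \
  --from-literal="privateApiKey=$MY_PRIVATE_API_KEY" \
  -n mongodb-atlas-system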

Kubernetes kubectl copy command failing

I have a pod running a Python image as user 199. My code app.py is placed in the /tmp/ directory. Now when I run the copy command to replace the running app.py, the command simply fails with a "file exists" error.
Please try the --no-preserve=true flag with the kubectl cp command. It will pass the --no-same-owner and --no-same-permissions flags to the tar utility while extracting the copied file in the container.
The GNU tar manual suggests using the --skip-old-files or --overwrite flag with the tar --extract command to avoid the error message you encountered, but to my knowledge there is no way to pass this optional argument through kubectl cp.
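For example (pod name, namespace, and paths are placeholders):

kubectl cp --no-preserve=true app.py my-namespace/my-pod:/tmp/app.py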

Copy a file into kubernetes pod without using kubectl cp

I have a use case where my pod runs as a non-root user and it's running a Python app.
Now I want to copy a file from the master node to the running pod. But when I try to run
kubectl cp app.py 103000-pras-dev/simplehttp-777fd86759-w79pn:/tmp
this command hangs, but when I run the pod as the root user, the same command executes successfully. I was going through the code of kubectl cp, which internally uses the tar command.
The tar command has multiple flags like --overwrite, --no-same-owner, --no-preserve and a few others, but from kubectl cp we can't pass those flags through to tar. Is there any way I can copy the file using a kubectl exec command, or any other way?
kubectl exec simplehttp-777fd86759-w79pn -- cp app.py /tmp/ **flags**
If the source file is a simple text file, here's my trick:
#!/usr/bin/env bash

function copy_text_to_pod() {
  namespace=$1
  pod_name=$2
  src_filename=$3
  dest_filename=$4
  base64_text=`cat $src_filename | base64`
  kubectl exec -n $namespace $pod_name -- bash -c "echo \"$base64_text\" | base64 -d > $dest_filename"
}

copy_text_to_pod my-namespace my-pod-name /path/of/source/file /path/of/target/file
Maybe base64 is not necessary. I put it here in case there is some special character in the source file.
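A variant of the same idea that streams through stdin instead of inlining the content into the command string (a sketch using the same placeholder names as above; it avoids hitting argument-length limits on larger files):

cat /path/of/source/file | base64 | \
  kubectl exec -i -n my-namespace my-pod-name -- bash -c 'base64 -d > /path/of/target/file'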
Meanwhile I found a hack. Disclaimer: this is not the exact kubectl cp, just a workaround.
I have written a Go program where I created a goroutine to read the file and attach it to stdin, and then ran the kubectl exec tar command with the proper flags. Here is what I did:
reader, writer := io.Pipe()
copy := exec.CommandContext(ctx, "kubectl", "exec", pod.Name, "--namespace", pod.Namespace, "-c", container.Name, "-i",
	"--", "tar", "xmf", "-", "-C", "/", "--no-same-owner") // pass all the flags you want to
copy.Stdin = reader
go func() {
	defer writer.Close()
	if err := util.CreateMappedTar(writer, "/", files); err != nil {
		logrus.Errorln("Error creating tar archive:", err)
	}
}()
Helper function definition
func CreateMappedTar(w io.Writer, root string, pathMap map[string]string) error {
	tw := tar.NewWriter(w)
	defer tw.Close()
	for src, dst := range pathMap {
		if err := addFileToTar(root, src, dst, tw); err != nil {
			return err
		}
	}
	return nil
}
Obviously, this thing doesn't work because of the permission issue, but I was able to pass the tar flags.
If it is only a text file, it can also be "copied" via netcat.
1) You have to be logged in on both pods
$ kubectl exec -ti <pod_name> bash
2) Make sure you have netcat; if not, install it
$ apt-get update
$ apt-get install netcat-openbsd
3) Go to a folder where you have write permissions, e.g.
/tmp
4) Inside the container where you have the Python file, run
$ cat app.py | nc -l <random_port>
Example
$ cat app.py | nc -l 1234
It will start listening on the provided port.
5) Inside the container where you want to have the file
$ nc <PodIP_where_you_have_py_file> <port> > app.py
Example
$ nc 10.36.18.9 1234 > app.py
It must be the pod IP; it will not recognize the pod name. To get the IP, use kubectl get pods -o wide.
It will copy the content of the app.py file to the file in the other container. Unfortunately, you will need to add the permissions manually, or you can use a script like this (the sleep is required due to the speed of the "copying"):
#!/bin/sh
nc 10.36.18.9 1234 > app.py && sleep 2 && chmod 770 app.py
kubectl cp is a bit of a pain to work with. For example:
- installing kubectl and configuring it (you might need it on multiple machines); in our company, most people only have restricted kubectl access through the Rancher web GUI, and no CLI access is provided for most people
- network restrictions in enterprises
- large file downloads/uploads may stop or freeze, probably because the traffic goes through the k8s API server
- weird tar-related errors keep popping up, etc.
One of the reasons for the lack of support for copying files from a pod (or the other way around) is that k8s pods were never meant to be used like VMs; they are meant to be ephemeral. So the expectation is not to store/create any files on the pod/container disk.
But sometimes we are forced to do this, especially while debugging issues or using external volumes.
Below is the solution we found effective. This might not be right for you/your team.
We now instead use Azure blob storage as a mediator to exchange files between a Kubernetes pod and any other location. The container image is modified to include the azcopy utility (the Dockerfile RUN instruction below installs azcopy in your container).
RUN /bin/bash -c 'wget https://azcopyvnext.azureedge.net/release20220511/azcopy_linux_amd64_10.15.0.tar.gz && \
tar -xvzf azcopy_linux_amd64_10.15.0.tar.gz && \
cp ./azcopy_linux_amd64_*/azcopy /usr/bin/ && \
chmod 775 /usr/bin/azcopy && \
rm azcopy_linux_amd64_10.15.0.tar.gz && \
rm -rf azcopy_linux_amd64_*'
Check out this SO question for more on azcopy installation.
When we need to download a file, we simply use azcopy to copy it from within the pod to Azure blob storage. This can be done either programmatically or manually.
Then we download the file to the local machine from the Azure blob storage explorer, or some job/script picks the file up from the blob container.
A similar thing is done for uploads: the file is first placed in the blob storage container, either manually using the blob storage explorer or programmatically. Next, from within the pod, azcopy pulls the file from blob storage and places it inside the pod.
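For illustration, a hypothetical pair of transfers (account, container, and SAS token are placeholders):

# inside the pod: push a file up to blob storage
azcopy copy /tmp/app.log "https://<account>.blob.core.windows.net/<container>/app.log?<sas-token>"
# later, from within the pod: pull a file down from blob storage
azcopy copy "https://<account>.blob.core.windows.net/<container>/app.py?<sas-token>" /tmp/app.py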
The same can be done with AWS (S3) or GCP or using any other cloud provider.
Probably even SCP, SFTP, RSYNC can be used.

How to send data to command line after calling .sh file?

I want to install Anaconda through EasyBuild. EasyBuild is a tool for managing software installations on clusters. Anaconda can be installed with sh Anaconda.sh.
However, after running it I have to accept the license agreement and give the installation location on the command line by entering <Enter>, yes <Enter>, path/where/to/install/ <Enter>.
Because this has to be installed automatically, I want to accept the terms and give the install location in one line. I tried to do it like this:
sh Anaconda.sh < <(echo) >/dev/null < <(echo yes) >/dev/null \
< <(echo /apps/software/Anaconda/1.8.0-Linux-x86_64/) > test.txt
From the test.txt I can read that the first echo works as <Enter>, but I can't figure out how to accept the license agreement; the script behaves as if the yes was never sent:
Do you approve the license terms? [yes|no]
[no] >>> The license agreement wasn't approved, aborting installation.
How can I send the yes correctly to the script's input?
Edit: Sorry, I missed the part about having to enter more than one thing. You can take a look at writing expect scripts: thegeekstuff.com/2010/10/expect-examples. You may need to install it, however.
You could try piping with the following command: yes yes | sh Anaconda.sh. Read the man page for more information: man yes.
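If the installer reads its answers from plain stdin (the same assumption the answers-file approach below relies on), a one-line sketch with printf would send all three answers in order:

printf '\nyes\n/apps/software/Anaconda/1.8.0-Linux-x86_64/\n' | sh Anaconda.sh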
Expect is a great way to go and probably the most error-proof way. But if you know all the questions, I think you could do this by just writing a file with the answers in the correct order, one per line, and piping it in.
That install script is huge, so as long as you can verify you know all the questions, you could give this a try.
In my simple tests it works.
I have a test script that looks like this:
#!/bin/sh
echo -n "Do you accept "
read ANS
echo $ANS
echo -n "Install path: "
read ANS
echo $ANS
and an answers file that looks like this:
Y
/usr
Running it like so works... perhaps it will work for your monster install file as well.
cat answers | ./test.sh
Do you accept Y
Install path: /usr
If that doesn't work, then the script is likely flushing its input and you will have to use expect or pexpect.
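For the expect route, a minimal sketch; the first and third prompt strings are guesses and would need to match the installer's actual output (the second is taken from the error shown in the question):

#!/usr/bin/expect -f
spawn sh Anaconda.sh
expect "ENTER"          { send "\r" }
expect "license terms"  { send "yes\r" }
expect ">>>"            { send "/apps/software/Anaconda/1.8.0-Linux-x86_64/\r" }
expect eof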
Good luck!
Actually, I downloaded and looked at the anaconda install script. Looks like it takes command line arguments.
/bin/bash Anaconda-2.2.0-Linux-x86_64.sh -h
usage: Anaconda-2.2.0-Linux-x86_64.sh [options]

Installs Anaconda 2.2.0

  -b           run install in batch mode (without manual intervention),
               it is expected the license terms are agreed upon
  -f           no error if install prefix already exists
  -h           print this help message and exit
  -p PREFIX    install prefix, defaults to /home/cody.stevens/anaconda
Use the -b and -p options...
so use it like so:
/bin/bash Anaconda-2.2.0-Linux-x86_64.sh -b -p /usr
Also of note: that script explicitly says not to run it with '.' or 'sh' but with 'bash', so they must have some dependency on a bash feature.
--
Cody