kubectl run - How to pass some commands to be executed before reaching the interactive terminal? - kubernetes

When using kubectl run -ti with an interactive terminal, I would like to pass a few commands in the kubectl run command to be run before the interactive terminal comes up, for example apt install zip. That way I do not need to wait for the interactive terminal to come up and then run those common commands myself. Is there a way to do this?
Thanks

You can use the shell's exec to hand control over from your initial "outer" bash, which is responsible for the initialization steps you want, to a fresh shell (fresh in the sense that it is not running with -c and can optionally be a login shell) once your pre-steps have run:
kubectl run sample -it --image=ubuntu:20.04 -- \
bash -c "apt update; apt install -y zip; exec bash -il"

Related

Bash script from a BAT file not running after connecting to a kubectl pod in Google Cloud Shell editor

For my project, I have to connect to a postgres Database in Google Cloud Shell using a series of commands:
gcloud config set project <project-name>
gcloud auth activate-service-account <keyname>@<project-name>.iam.gserviceaccount.com --key-file=<filename>.json
gcloud container clusters get-credentials banting --region <region> --project <project>
kubectl get pods -n <node>
kubectl exec -it <pod-name> -n <node> bash
apt-get update
apt install postgresql postgresql-contrib
psql -h <hostname> -p <port> -d <database> -U <userId>
I am a beginner to this and just running the scripts provided to me by copy pasting till now.
But to make things easier, I have created a .bat file in the Shell editor with all the above commands and tried to run it using bash <filename>
But once the kubectl exec -it <pod-name> -n <node> bash command runs and the Pod's shell prompt appears like below, the rest of the commands do not run.
Defaulted container "<container>" out of: <node>, istio-proxy, istio-init (init)
root@<pod-name>:/#
So how can I make the shell run the rest of these scripts from the .bat file:
apt-get update
apt install postgresql postgresql-contrib
psql -h <hostname> -p <port> -d <database> -U <userId>
Cloud Shell is a Linux instance and defaults to the Bash shell.
BAT commonly refers to Windows/DOS batch files.
On Linux, shell scripts are generally .sh.
Your script needs to be revised in order to pass the commands intended for the kubectl exec command to the Pod and not to the current script.
You can try (!) the following. It creates a Bash (sub)shell on the Pod and runs the commands listed after -c in it:
gcloud config set project <project-name>
gcloud auth activate-service-account <keyname>@<project-name>.iam.gserviceaccount.com \
--key-file=<filename>.json
gcloud container clusters get-credentials banting \
--region <region> \
--project <project>
kubectl get pods -n <node>
kubectl exec -it <pod-name> -n <node> -- bash -c "apt-get update && apt install -y postgresql postgresql-contrib && psql -h <hostname> -p <port> -d <database> -U <userId>"
However, I have some feedback and recommendations:
It's unclear whether even this approach will work, because you're running psql but doing nothing with it. In theory, I think you could pass a script to the psql command too, but then your script becomes very janky (see the sketch after this list).
It is not considered good practice to install software in containers as you're doing. The recommendation is to build the image that you want to run beforehand and use that; containers should be treated as immutable.
I encourage you to use long flags when you write scripts as short flags (-n) can be confusing whereas --namespace= is more clear (IMO). Yes, these take longer to type but your script is clearer as a result. When you're hacking on the command-line, short flags are fine.
I encourage you to not use gcloud config set e.g. gcloud config set project ${PROJECT}. This sets global values. And its use is confusing because subsequent commands use the values implicitly. Interestingly, you provide a good example of why this can be challenging. Your subsequent command gcloud container clusters get-credentials --project=${PROJECT} explicitly uses the --project flag (this is good) even though you've already implicitly set the value for project using gcloud config set project.
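If you do want to drive psql from the script, one possible follow-up (an untested sketch; the queries.sql filename and the <namespace> placeholder are assumptions) is to install the client first and then pipe a local SQL file through kubectl exec -i:
kubectl exec <pod-name> --namespace=<namespace> -- bash -c "apt-get update && apt install -y postgresql postgresql-contrib"
kubectl exec -i <pod-name> --namespace=<namespace> -- psql -h <hostname> -p <port> -d <database> -U <userId> < queries.sql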

Why does docker run exit my terminal session?

I am running Docker Desktop 3.5.1 on macOS Big Sur and I am totally confused about the following behaviour:
If I run docker run -it --rm postgres psql --help I get the psql usage information (all as expected) and I can continue to run commands in my terminal. Edit to clarify: the docker container exits and terminates as expected, but my zsh session remains active (also as expected).
However, if I run psql with an invalid flag, say, docker run -it --rm postgres psql -m, then I get
/usr/lib/postgresql/13/bin/psql: invalid option -- 'm'
Try "psql --help" for more information.
[Process completed]
and my terminal session exits. Edit to clarify: the docker container exits as expected, but it takes the host zsh session with it (unexpected).
What I'm trying to work out is why does my terminal session exit and how can I avoid this happening?
To keep a session open you can execute bash like this:
docker run --rm -it postgres /bin/bash
Then you can run as many psql commands as you like and it won't exit unless bash exits.
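For example, inside that shell a failing psql invocation just prints its error and drops you back at the container prompt (transcript sketched for illustration; the prompt shows your container ID):
root@<container-id>:/# psql -m
/usr/lib/postgresql/13/bin/psql: invalid option -- 'm'
Try "psql --help" for more information.
root@<container-id>:/#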
Edit:
It seems the terminal-closing behaviour can be configured in the OS:
https://stackoverflow.com/a/17910412/657477
Very weird behaviour, but @ErangaHeshan's comments pointed me to some nonsense inside my .zprofile file. As soon as that was commented out, psql in docker stopped taking down my host zsh session on exit.

kubectl exec fails with the error "Unable to use a TTY - input is not a terminal or the right kind of file"

I am running a jenkins pipeline with the following command:
kubectl exec -it kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s@1585031458
which is running fine on the terminal of the machine the pipeline is running on, but on the actual pipeline I get the following error: "Unable to use a TTY - input is not a terminal or the right kind of file"
Any tips on how to go about resolving this?
When the flags -it are used with kubectl exec, it enables the TTY interactive mode. Given the error that you mentioned, it seems that Jenkins doesn't allocate a TTY.
Since you are running the command in a Jenkins job, I would assume that your command is not necessarily interactive. A possible solution for the problem would be to simply remove the -t flag and try to execute the following instead:
kubectl exec -i kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s@1585031458
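If nothing is being piped into the command, the -i flag can usually be dropped as well, and for a CI job you may want kafkacat to exit once it reaches the end of the partition instead of waiting for new messages (a sketch; -e is kafkacat's exit-at-end flag):
kubectl exec kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s@1585031458 -e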
For Windows Git Bash:
alias kubectl='winpty kubectl'
$ kubectl exec -it <container> -- bash
Or just use winpty before the desired command.
For Windows Git Bash users: use PowerShell and NOT Git Bash
Remove the -t option. That requests a TTY, which as you noted does not exist in Jenkins.
Just a hint for anyone who gets stuck like I did with kafkacat suddenly returning no data after removing the -t:
It turns out that if there's no TTY, kafkacat defaults to producer mode. I never used the -C flag because consumer is the default, but in this case it's required.

rkt/image building: acbuild run instructions "ignored"

I'm experiencing unexpected behavior using acbuild run. To get used to rkt, the idea was to start with a CentOS 7 based container running an SSH server. The bare CentOS 7 container referenced below as centos7.aci was created on an up-to-date CentOS 7 install using the instructions given here.
The script used to build the SSHd ACI is
#! /bin/bash
acbuild begin ./centos7.aci
acbuild run -- yum install -y openssh-server
acbuild run -- mkdir /var/run/sshd
acbuild run -- sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
acbuild run -- sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
acbuild run -- ssh-keygen -A -C "" -N "" -q
acbuild run -- echo 'root:screencast' | chpasswd
acbuild set-name centos7-sshd
acbuild set-exec -- /usr/sbin/sshd -D
acbuild port add ssh tcp 22
acbuild write --overwrite centos7-sshd.aci
acbuild end
When it's spun up using rkt run --insecure-options=image ./centos7-sshd.aci
the server runs but connection attempts fail because the password is not accepted. If I use rkt enter to get into the running container and re-run echo 'root:screencast' | chpasswd inside, I can log in. So that acbuild run instruction has just not worked for some reason... To test a bit more, I replaced it with
acbuild run -- mkdir ~/.ssh
acbuild run -- echo "<rkt host SSH public key>“ >> ~/.ssh/authorized_keys
to enable key-based instead of password login. It doesn't work: the key is refused. The reason is obvious once you look into the container: there's no authorized_keys file in ~/.ssh/. If I add a
acbuild run -- touch ~/.ssh/authorized_keys instruction before the key appending attempt, the file is created but it's still empty. So again an acbuild run instruction didn't work, without any error notice. Could it be related to the fact that both "ignored" instructions use operators like >> and | ? All the commands shown in the examples I've seen don't use any such operators, yet the docs don't mention anything and a Google search doesn't help either. In Dockerfile RUN instructions such operators work fine... what is going wrong here?
P.S.: I tried to use the chroot engine instead of the default systemd-nspawn engine for the "ignored" acbuild run instructions => same results
P.P.S.: there's no acbuild tag yet on StackOverflow so I had to tag this as rkt - could somebody with enough reputation create one please? Thx
Ok, I understood what happens using the acbuild run --debug option.
When
acbuild run -- echo 'root:screencast' | chpasswd
gets executed, it returns Running: [echo root:screencast]; the pipe is executed on the host machine, not inside the container. To get the intended result it should be
acbuild run -- /bin/sh -c "echo 'root:screencast' | chpasswd"
or in generic form
acbuild run -- /bin/sh -c "<cmd with pipes>"
as explained here
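Applied to the original build script, the two "ignored" lines then become (the same fix, sketched; mkdir -p ~/.ssh is added so the redirect has a directory to write into):
acbuild run -- /bin/sh -c "echo 'root:screencast' | chpasswd"
acbuild run -- /bin/sh -c "mkdir -p ~/.ssh && echo '<rkt host SSH public key>' >> ~/.ssh/authorized_keys"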

How can the terminal in Jupyter automatically run bash instead of sh

I love the terminal feature and it works very well for our use case, where I would like students to do some work directly from a terminal so they experience that environment. The shell that launches automatically is sh and does not pick up all of my bash defaults. I can type "bash" and everything works perfectly. How can I make "bash" the default?
Jupyter uses the environment variable $SHELL to decide which shell to launch. If you are running jupyter using init then this will be set to dash on Ubuntu systems. My solution is to export SHELL=/bin/bash in the script that launches jupyter.
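For example, the launch script could look roughly like this (a minimal sketch; the port is a placeholder):
#!/bin/sh
export SHELL=/bin/bash
exec jupyter notebook --port=8888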
I have tried the ultimate way of switching the system-wide SHELL environment variable, by adding the following line to the file /etc/environment:
SHELL=/bin/bash
This works in an Ubuntu environment. From then on, the SHELL variable always points to /bin/bash instead of /bin/sh in the Terminal after a reboot.
Also, setting up a CRON job to launch jupyter notebook at system startup triggered the same issue in jupyter notebook's Terminal.
It turns out that I need to include the variable setting and a sourcing statement for the Bash init file (~/.bashrc) in the CRON job entry, added as follows via the command crontab -e:
@reboot source /home/USERNAME/.bashrc && \
export SHELL=/bin/bash && \
/SOMEWHERE/jupyter notebook --port=8888
In this way, I can log in to the Ubuntu server via a remote web browser (http://server-ip-address:8888/) and open jupyter notebook's Terminal with Bash as the default shell, the same as in my local environment.
You can add this to your jupyter_notebook_config.py
c.NotebookApp.terminado_settings = {'shell_command': ['/bin/bash']}
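If that file doesn't exist yet, it can be generated first; by default it is created under ~/.jupyter/:
jupyter notebook --generate-config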
With Jupyter running on Ubuntu 15.10, the Jupyter shell will default to /bin/sh, which is a symlink to /bin/dash.
rm /bin/sh
ln -s /bin/bash /bin/sh
That fix got Jupyter terminal booting into bash for me.
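On Debian/Ubuntu the same re-pointing can also be done through the package system instead of replacing the symlink by hand (an alternative to the commands above, not what this answer used); answering "No" at the prompt makes /bin/sh point to bash:
sudo dpkg-reconfigure dash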