kubectl exec fails with the error "Unable to use a TTY - input is not a terminal or the right kind of file" - kubernetes

I am running a jenkins pipeline with the following command:
kubectl exec -it kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s@1585031458
This runs fine in a terminal on the machine the pipeline runs on, but in the actual pipeline I get the following error: "Unable to use a TTY - input is not a terminal or the right kind of file"
Any tips on how to go about resolving this?

When the -it flags are used with kubectl exec, they request an interactive session with a TTY allocated. Given the error you mention, it seems that Jenkins does not allocate a TTY.
Since you are running the command in a Jenkins job, I would assume that your command does not actually need to be interactive. A possible solution is to simply remove the -t flag and execute the following instead:
kubectl exec -i kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s@1585031458
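In a CI job you usually also want the consumer to exit on its own instead of streaming forever. A rough sketch, assuming kafkacat's -e flag (exit once the end of the topic is reached) is acceptable for your use case, with output captured to a file:
kubectl exec -i kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s@1585031458 -e > events.log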

For Windows Git Bash:
alias kubectl='winpty kubectl'
$ kubectl exec -it <container>
Or just use winpty before the desired command.

For Windows Git Bash users: use PowerShell instead of Git Bash.

Remove the -t option. That requests a TTY, which as you noted does not exist in Jenkins.

Just a hint for anyone who gets stuck like I did with kafkacat suddenly returning no data after removing the -t.
It turns out that if there is no TTY, kafkacat defaults to producer mode. I had never used the -C flag because consumer is the default, but in this case it is required.
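To make the pitfall concrete, here is a rough sketch of the two invocations (pod, broker and topic names taken from the question):
# without -C and without a TTY, kafkacat assumes producer mode and waits on stdin
kubectl exec -i kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -t BBSim-OLT-0-Events
# -C forces consumer mode regardless of whether a TTY is present
kubectl exec -i kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events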

Related

kubectl run - How to pass some commands to be executed before reaching the interactive terminal?

When using kubectl run -ti with an interactive terminal, I would like to be able to pass a few commands in the kubectl run command to be run before the interactive terminal comes up, commands like apt install zip for example. This way, I would not need to wait for the interactive terminal to come up and then run those common commands. Is there a way to do this?
Thanks
You can use the shell's exec to hand control from your initial "outer" bash, which performs the initialization steps you want, to a fresh one (fresh in the sense that it does not have -c and can optionally be a login shell) that runs after your pre-steps:
kubectl run sample -it --image=ubuntu:20.04 -- \
bash -c "apt update; apt install -y zip; exec bash -il"

How do I send a command to a remote system via ssh with concourse

I need to start a Java REST server, which lives on an Ubuntu 18.04 machine, from Concourse. The version of Concourse my company uses is 5.5.11. The server code is written in Java, so a simple java -jar <uber.jar> suffices from the command line (see below). In production I will not have this simple luxury, hence my question.
I have an scp command working that copies the .jar from concourse to the target Ubuntu machine:
scp -i /tmp/key.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ./${NEW_DIR}/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST}:/var/www
Note that my private key is passed with -i and I can confirm that is working.
I followed this other SO Q&A that seemed promising: Getting ssh to execute a command in the background on target machine, but after trying a few permutations of the suggested solution and other answers, I still don't have my REST service starting.
I've tried a few permutations of this line in my concourse script:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'nohup java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/opt/testcerts/clientkeystore\" -w \"password\" > /dev/null 2>&1 &'"
I've tried with and without the -f and -t switches in ssh, with and without the file stream redirection, with and without nohup and the Linux background operator ('&'), and various ways of escaping the quotes.
At the bash prompt, this line successfully starts my server. The two switches are needed to point to the certificate and provide the password:
java -jar rest-service.jar -c "/opt/certificates/clientkeystore" -w "password"
I really think this is possible to do in Concourse, but I'm stuck at this point.
After a lot of trial and error, it seems I needed to do this:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'sudo java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/path/to/my/certificate\" -w \"password\" > /var/www/log.txt 2>&1 &'"
The key was that I was missing the sudo portion of the command. Using nohup, as opposed to just putting the command in the background with '&', seems to give me an error in the pipeline. This works for me, but others are welcome to post responses with better answers or methods that might be better practice.
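If you ever revisit the nohup route, the usual trick to stop ssh from hanging onto the remote process is to detach stdin as well by redirecting it from /dev/null. A rough sketch under that assumption, reusing the placeholders from above:
ssh -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "nohup sudo java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c '/path/to/my/certificate' -w 'password' > /var/www/log.txt 2>&1 < /dev/null &"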

One command to restore local PostgreSQL dump into kubectl pod?

I'm just seeing if there is one command to restore a local backup to a pod running postgres. I've been unable to get it working with:
kubectl exec -it <pod> -- <various commands>
So I've just resorted to:
kubectl cp <my-file> <my-pod>:<my-file>
Then restoring it.
Thinking there is likely a better way, so thought I'd ask.
You can call the pg_restore command directly in the pod, redirecting your local file into it as the dump source (connection options may vary depending on the image you're using), e.g.:
kubectl exec -i POD_NAME -- pg_restore -U USERNAME -C -d DATABASE < dump.sql
If the file were in S3 or another location available to the pod, you could always have a script inside the container that downloads the file and performs the restore, all in a single bash file.
That should allow you to perform the restore in a single command.
cat mybackup.dmp | kubectl exec -i ... -- pg_restore ...
Or something like that.
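For completeness, the right client depends on the dump format; a rough sketch with placeholder pod, user and database names:
# plain-text dump produced by pg_dump (SQL statements): pipe into psql
cat dump.sql | kubectl exec -i <my-pod> -- psql -U <user> -d <database>
# custom-format dump produced by pg_dump -Fc: pipe into pg_restore
cat dump.dump | kubectl exec -i <my-pod> -- pg_restore -U <user> -d <database>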

rkt/image building: acbuild run instructions "ignored"

I'm experiencing unexpected behavior using acbuild run. To get used to rkt, the idea was to start with a CentOS 7 based container running an SSH daemon. The bare CentOS 7 container referenced below as centos7.aci was created on an up-to-date CentOS 7 install using the instructions given here.
The script used to build the SSHd ACI is
#! /bin/bash
acbuild begin ./centos7.aci
acbuild run -- yum install -y openssh-server
acbuild run -- mkdir /var/run/sshd
acbuild run -- sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
acbuild run -- sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
acbuild run -- ssh-keygen -A -C "" -N "" -q
acbuild run -- echo 'root:screencast' | chpasswd
acbuild set-name centos7-sshd
acbuild set-exec -- /usr/sbin/sshd -D
acbuild port add ssh tcp 22
acbuild write --overwrite centos7-sshd.aci
acbuild end
When it's spun up using rkt run --insecure-options=image ./centos7-sshd.aci
the server runs but connection attempts fail because the password is not accepted. If I use rkt enter to get into the running container and re-run echo 'root:screencast' | chpasswd inside, I can log in. So that acbuild run instruction has simply not worked for some reason... To test a bit more, I replaced it with
acbuild run -- mkdir ~/.ssh
acbuild run -- echo "<rkt host SSH public key>“ >> ~/.ssh/authorized_keys
to enable key-based instead of password login. It doesn't work: the key is refused. The reason is obvious once you look into the container: there's no authorized_keys file in ~/.ssh/. If I add an
acbuild run -- touch ~/.ssh/authorized_keys instruction before the key appending attempt, the file is created but it's still empty. So again an acbuild run instruction didn't work, without any error notice. Could it be related to the fact that both "ignored" instructions use operators like >> and |? None of the commands shown in the examples I've seen use such operators, yet the docs don't mention anything and a Google search doesn't help either. In Dockerfile RUN instructions they work fine... what is going wrong here?
P.S.: I tried using the chroot engine instead of the default systemd-nspawn engine for the "ignored" acbuild run instructions => same results
P.P.S.: there's no acbuild tag yet on StackOverflow so I had to tag this as rkt - could somebody with enough reputation create one please? Thx
Ok, I understood what happens using the acbuild run --debug option.
When
acbuild run -- echo 'root:screencast' | chpasswd
gets executed, it reports Running: [echo root:screencast]; the pipe is interpreted by the shell on the host machine, not inside the container. To get the intended result it should be
acbuild run -- /bin/sh -c "echo 'root:screencast' | chpasswd"
or in generic form
acbuild run -- /bin/sh -c "<cmd with pipes>"
as explained here
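Applied to the other "ignored" instructions from the question, that would look something like this (the public-key placeholder is left as in the question):
acbuild run -- /bin/sh -c "echo 'root:screencast' | chpasswd"
acbuild run -- /bin/sh -c "mkdir -p ~/.ssh && echo '<rkt host SSH public key>' >> ~/.ssh/authorized_keys"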

docker exec -it returns "cannot enable tty mode on non tty input"

The docker exec -it command returns the following error "cannot enable tty mode on non tty input"
level="fatal" msg="cannot enable tty mode on non tty input"
I am running Docker 1.4.1 on a CentOS 6.6 box.
I am trying to execute the following command
docker exec -it containerName /bin/bash
but I am getting the following error
level="fatal" msg="cannot enable tty mode on non tty input"
Running docker exec -i instead of docker exec -it fixed my issue. Indeed, my script was launched by crontab, which doesn't run in a terminal.
As a reminder:
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
-i, --interactive=false Keep STDIN open even if not attached
-t, --tty=false Allocate a pseudo-TTY
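For the crontab case mentioned above, a rough sketch of a non-interactive entry (container name, script and log path are made-up examples):
# run a maintenance script inside the container every night at 02:00, no TTY needed
0 2 * * * docker exec -i mycontainer /usr/local/bin/maintenance.sh >> /var/log/maintenance.log 2>&1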
If you're getting this error in the Windows Docker client, then you may need to use the run command as below:
$ winpty docker run -it ubuntu /bin/bash
just use "-i"
docker exec -i [your-ps] [command]
If you're on Windows using docker-machine and you're using Git Bash or Cygwin, to "get inside" a running container you'll need to do the following:
docker-machine ssh default to ssh into the virtual machine (most likely VirtualBox)
docker exec -it <container> bash to get into the container.
EDIT:
I've recently discovered that if you use Windows PowerShell you can docker exec directly into the container; with Cygwin or Git Bash you can use winpty docker exec -it <container> bash and skip the docker-machine ssh step above.
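If you end up typing the winpty prefix a lot, the same alias trick shown earlier for kubectl presumably works for docker too, e.g. in ~/.bashrc:
alias docker='winpty docker'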
I get "cannot enable tty mode on non tty input" for the following command on windows with boot2docker
docker exec -it <containerIdOrName> bash
The command below fixed the problem:
winpty docker exec -it <containerIdOrName> bash
docker exec runs a new command in an already-running container. It is not the way to start a new container -- use docker run for that.
That may be the cause of the "non tty input" error. Or it could be where you are running docker. Is it a true terminal? That is, is a full tty session available? You might want to check whether you are in an interactive session with
[[ $- == *i* ]] && echo 'Interactive' || echo 'Not interactive'
from https://unix.stackexchange.com/questions/26676/how-to-check-if-a-shell-is-login-interactive-batch
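Closer to this particular error, you can also check whether stdin itself is a terminal, which is what the -t flag ultimately needs; a small sketch:
[ -t 0 ] && echo 'stdin is a terminal' || echo 'stdin is not a terminal'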
I encountered this same error message on Windows 7 64-bit using the Mintty terminal shipped with Git for Windows.
$ docker run -i -t ubuntu /bin/bash
cannot enable tty mode on non tty input
I tried to prefix the above command with winpty as other answers suggested but running it showed me another error message below:
$ winpty docker run -i -t ubuntu /bin/bash
exec: "D:\\Git\\usr\\bin\\bash": executable file not found in $PATH
docker: Error response from daemon: Container command not found or does not exist..
Then I happened to run the following command which gave me what I want:
$ winpty docker run -i -t ubuntu bash
root@512997713d49:/# ls
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
root@512997713d49:/#
I'm running docker exec -it under Jenkins jobs and getting the error 'cannot enable tty mode on non tty input'. No output from the docker exec command is returned. My job login sequence was:
jenkins shell -> ssh user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -it <container>
I made a change to use the -T flag in the initial ssh from Jenkins ("-T - Disable pseudo-terminal allocation") and to use the -i flag with docker exec instead of -it ("-i - interactive, -t - allocate pseudo-tty"). This seems to have solved my problem.
jenkins shell -> ssh -T user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -i <container>
The behaviour kind of matches this docker exec tty bug: https://github.com/docker/docker/issues/8755. A workaround in that bug discussion suggests using this:
docker exec -it <CONTAINER> script -qc <COMMAND>
Using that workaround didn't solve my problem, but it is interesting. Try these with different flags and under different ssh invocations; you can see 'not a tty' even when using -t with docker exec:
$ docker exec -it <CONTAINER> script -qc 'tty'
/dev/pts/0
$ docker exec -it <CONTAINER> 'tty'
not a tty
$ docker exec -it <CONTAINER> bash -c 'tty'
not a tty