Using Putty to SSH ignoring all warnings in Perl [duplicate] - perl

This question already has answers here:
Putty won't cache the keys to access a server when run script in hudson
(11 answers)
Closed 3 years ago.
I'm writing a Perl script to SSH into remote Linux and Mac machines from a Windows machine. For that I'm running the plink (PuTTY Link) command using qx. The problem is that when I try to run the plink command, it gives a prompt:
The server's host key is not cached in the registry. You
have no guarantee that the server is the computer you
think it is. ...... If you do not trust this host, press Return to abandon the
connection. Store key in cache? (y/n)
I have to automate the process of running a command remotely. So, I somehow want to bypass this warning.
I can think of two ways of doing this but don't know how to accomplish either:
Somehow bypass the warning from PuTTY itself, through command-line options or other commands
Some Perl way of passing input to plink when prompted
Can anyone suggest how to do this, either in one of the above ways or with some other solution?

I solved it using a pipe to pass Y to plink when prompted: echo Y | plink -ssh <user>@<host> -pw <password> <command>.
For more details refer to this answer. Also note the answer by @clay, where he says:
For internal servers, the blind echo y | ... trick is probably adequate (and super simple). However, for external servers accessed over the internet, it is much more secure to accept the server host key once rather than blindly accepting every time.
This was the case with me - I was using plink to ssh to internal servers.
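For completeness, here is a minimal Perl sketch of that workaround, assuming plink is on the PATH; the user, host, password and command values are placeholders, and the blind echo trick should only be used for trusted internal servers:
# Pipe "y" into plink so the host-key prompt is answered automatically;
# only reasonable for trusted internal servers.
my ($user, $host, $pw) = ('myuser', 'myhost', 'mypassword');   # placeholders
my $cmd    = 'uname -a';                                       # remote command to run
my $output = qx(echo y | plink -ssh $user\@$host -pw $pw "$cmd");
die "plink exited with status $?" if $?;
print $output;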

Related

Unable to ssh to a remote machine through shell script while accessing it from UI

I have a Linux machine where I have created a cgi script (JarPatch.cgi), the code of which looks like this:
#!/usr/bin/perl
use warnings;
print "Content-type: text/html\n\n";
system ("sh JarPatch.sh");
The code of JarPatch.sh looks like this:
#!/bin/bash
echo "Inside jar patching tool";
PJS_DEV=app4915@slcai833.us.oracle.com;
ssh -f $PJS_DEV "cd /slot/ems4915/appmgr/tmp; echo stopping server ; ./find_stop_servers.sh;"
echo "Exit jar patching tool";
This script will basically shut down a server running on the remote machine
Problem statement is this:
When I execute this CGI script from a Linux terminal, I can see that the ssh commands are executed and the server is shut down.
When I access the CGI script from a browser on a Windows machine, the shell script is invoked but ssh does not seem to work.
Can anyone give me a pointer to resolve this issue, please?
I am new to Perl/shell integration, so I might be missing something small.
Thanks
When you access the CGI script from the Windows browser, the ssh connection is made as the web-server user, which is not authorized to ssh into the remote machine. When you ssh from a Linux terminal, on the other hand, you connect as a user who is authorized to do so, because that Linux user has its ssh key on the remote server.
You can also look into ProxyCommand, which might come to the rescue, but I have no idea how it will work with Windows.
Another approach is to create ssh keys for the web user and put them on the remote server, which will be a security risk.
When you run it as yourself, ssh offers your keys to authenticate you. When you run it through the webserver, the webserver user is trying to run the ssh command and does not have your ssh keys to offer, so it is probably being prompted for a password and not successfully logging in.
You could fix this by generating ssh keys for the webserver user and sharing that key with the target system as well, which has some security implications, to say the least.
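For illustration, a rough sketch of that approach, assuming a key has been generated for the web-server user and its public half installed on the target host; the key path, user and host below are placeholders, and BatchMode just makes ssh fail instead of hanging on a password prompt:
#!/usr/bin/perl
use warnings;
# Run the remote command straight from the CGI as the web-server user,
# pointing ssh at the key created for that user.
my $key  = '/home/webuser/.ssh/id_rsa';   # placeholder path to the web user's key
my $dest = 'app4915@slcai833.us.oracle.com';
my $out  = qx(ssh -i $key -o BatchMode=yes $dest "cd /slot/ems4915/appmgr/tmp && ./find_stop_servers.sh" 2>&1);
print "Content-type: text/plain\n\n";
print $out;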

Running emacs in server mode so it's possible to connect from remote locations [duplicate]

This question already has answers here:
Using Emacs server and emacsclient on other machines as other users
(4 answers)
remote emacs client connects, but doesn't create new frame in terminal
(1 answer)
Closed 8 years ago.
Is it possible to run Emacs in server mode so remote clients can connect from remote locations over the network? I'm looking for a way to run Emacs on a powerful remote server and edit buffers locally using emacsclient while running the compile command remotely. This looks like a much better approach than using an ssh session, since it should not depend on network latency.
I think the best approach is: http://www.emacswiki.org/emacs/TrampMode
Based on my comment above, I'd recommend the following workflow:
Retrieve the sources you work on into a local directory (via scp or git, whatever)
Introduce the required changes to the code
To compile the code on the remote server, specify a custom compile-command which will:
Push the changed files back to the remote server, e.g. scp -r my-sources/ user-name@example.net:my-sources or via git push remote my-dev-branch
Run the compilation command through ssh and show the output, e.g. ssh user-name@example.net -C "cd ~/my-sources; make && ./bin/compiled-app"
Note: for smooth command execution through ssh, key-based authentication should be configured.
A significant drawback here is that it might not run X11 applications correctly (or at all).

Unable to continue perform perl script after ssh to another unix domain

I've encountered a question here where I need some help from you guys.
I am writing a Perl script that will be executed on a UNIX machine. In that script, I perform an 'ssh' to port over to another Unix domain (from A ssh to B). The problem is that after I port over to domain B, I still need to perform some operations from the Perl script (for example: echo Hello World!). The issue is that the part of the script that follows the 'ssh' cannot be performed on the new domain, because the script is still running on the "old" domain. Is there any way to solve this issue, or a better way to achieve the same objective?
You can use the Expect module to open an SSH connection and execute commands over it via Perl.
If you need help beyond that, you'll have to explain more specifically what you are trying to do. It is possible that you have the wrong design for solving your task.
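For illustration, a rough sketch with the Expect module; the host, password handling and prompt patterns below are only assumptions, not a drop-in solution:
use Expect;
# Keep one interactive ssh session open and drive it, so later commands
# really run on machine B rather than on the local machine.
my $exp = Expect->spawn('ssh', 'userB@hostB') or die "Cannot spawn ssh: $!";
$exp->expect(30,
    [ qr/password:/i => sub { my $e = shift; $e->send("secret\n"); exp_continue; } ],
    [ qr/[\$%>] ?$/  => sub { } ],       # stop once a shell prompt appears
);
$exp->send("echo Hello World!\n");       # this runs on machine B
$exp->expect(10, [ qr/[\$%>] ?$/ => sub { } ]);
$exp->send("exit\n");
$exp->soft_close();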
Try doing it like this:
ssh <servername> "echo 'hello world'";
Also check ssh with the -t option. To check whether the echo command is running on the server or on localhost, try some other command like ls.
Note: the ssh connection will be closed when the script terminates.
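If you don't need an interactive session, the same idea works directly from the Perl script by handing the remote commands to ssh as a single argument (a sketch; the server name and commands are placeholders):
# Everything inside the quotes runs on the remote host; everything after
# the qx() call runs locally again.
my $remote_out = qx(ssh servername "cd /some/dir && echo 'Hello World!'");
print "Remote output:\n$remote_out";
print "Back on the local machine\n";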

Emacs-Tramp: Not working properly

I'm trying to use TRAMP with Emacs 23 on Ubuntu 12.04 in order to edit files on remote hosts. My remote host has two-step authentication (RSA + password). I use multiplexing through .ssh/config to ensure that TRAMP can connect directly to the remote shell without having to provide passwords.
My problem, however, is that I have 3 different remote hosts. When I try to connect to a remote host through TRAMP without first establishing the multiplexed connection (through a terminal), TRAMP hangs with the message "Tramp: Waiting for prompts from remote shell". I used the settings below in .ssh/config to ensure the connection is dropped after a specified interval when there is no response.
Host *
ServerAliveCountMax=30
ServerAliveInterval=5
However, this doesn't seem to have any effect on the TRAMP connection. It would be a great help if someone could help me fix this issue.
Sorry that your question has been left hanging so long.
I can offer a couple of things to try. First, use the TRAMP protocol sshx instead of ssh; it seems to cope better with most non-vanilla ssh connections.
e.g.
/sshx:user@host:path/filename
The other thing to try is adding your ssh key passphrase to the session at startup: run an ssh-agent on the machine, connect to it at startup, then run ssh-add to enter the passphrase once.
As a side note, upgrade your Emacs to 24.3; there's a lot of new/great stuff in there since 23.x.

ssh-agent across ssh sessions on shared host [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I ssh into a shared host (WebFaction) and then use ssh-agent to establish a connection to a mercurial repository (BitBucket). I call the agent like so:
eval `ssh-agent`
This then spews out the pid of the agent and sets its relevant environment variables. I then use ssh-add as follows to add my identity (after typing my passphrase):
ssh-add /path/to/a/key
My ssh connection eventually times out and I'm disconnected from the server. When I log back in, I can no longer connect to the Hg server and so I do this:
ps aux | grep 1234.*ssh-agent
kill -SIGHUP 43210
And then repeat the two commands at the top of the post (i.e. invoke the agent using eval and call ssh-add).
I'm sure that there's a well established idiom for avoiding this process and maintaining a "reference" to the agent that was spawned initially. I've tried redirecting I/O of the first command to a file (in the hope of sourcing it in my .bashrc), but I only get the agent's pid.
How can I avoid having to go through this process each time I ssh into the host?
My *NIX skills are weak, so constructive criticism on any aspect of the post is welcome, not just my use of ssh-agent.
Short answer:
With ssh-agent running locally and identities added, ssh -A user@host.webfaction.com provides the secure shell on the remote host with the local agent's identities.
Long answer:
As Charles suggested, agent forwarding is the solution.
At first, I thought that I could just issue an ssh user@host.webfaction.com and then, from within the secure session on the remote host, connect to the BitBucket repository using hg+ssh. But that failed, and so I investigated the ForwardAgent and AgentForwardingEnabled flags.
Thinking that I'd have to settle for a workaround in .bashrc that involved keeping my private key on the remote host, I went looking for a shell-script solution but was spared from this kludge by this answer in SuperUser, which is perfect and works without any client configuration (I'm not sure how the sshd server is configured on WebFaction).
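For reference, agent forwarding can also be enabled per host in the local ~/.ssh/config instead of passing -A every time (a sketch; the host name is a placeholder, and forwarding should only be enabled for hosts you trust):
Host host.webfaction.com
    ForwardAgent yes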
Aside: in my question, I posted the following:
ps aux | grep 1234.*ssh-agent
kill -SIGHUP 43210
but this is actually inefficient and requires the user to know his/her uid (available via /etc/passwd). pgrep is much easier:
pgrep -u username process-name