SSH versus FTP: Push local files to a server via Terminal - command-line

I am a junior front-end developer and I am working on improving my workflow using the command line. I would like to push my local changes to my server without having to use an FTP client like FileZilla. I am manually dragging and dropping files using the client and would like to learn how developers perform this process. I am building a static site using Siteleaf on a Mac. Thanks in advance for help with this workflow.

If your target server has SSH installed, you can use scp:
$ scp -r /local/dir/to/transfer your_remote_user@remote_address:/path/to/save/dir
This can also be used to transfer single files: just remove the -r (recursive) option and specify file paths instead of directories.
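For example, to push a single file (index.html here is just a placeholder file name; the user and paths come from the command above):
$ scp index.html your_remote_user@remote_address:/path/to/save/dir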

Related

Is it possible to use the deploy connection to move files or perform other actions?

I have a pipeline to publish my code to my own server and it works well, but now I would like to do more actions, like moving or deleting various files, using the deployment group. Is that possible?
I may be wrong, but I don't want to open a new connection to the server through SSH.
Thanks for everything!
I suppose you have an SSH service connection configured to deploy your code to your server.
For more actions, like moving or deleting files, you can try the SSH task and select the SSH service connection you deploy with:
Then delete the file with a shell command, like:
file="file_you_want_to_delete"
if [ -f $file ] ; then
rm $file
fi
Code from: shell script to remove a file if it already exists.
Alternatively, you can use the Remote delete task to delete files, for which you just need to provide:
Enter Remote Computer IP address
Enter User name and password for remote machine
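If you need to move files rather than delete them, the same SSH task approach works; here is a minimal sketch along the same lines (the file and destination names are hypothetical placeholders):
src="file_you_want_to_move"
dst="/new/location/"
# move the file only if it exists; quote variables to handle spaces
if [ -f "$src" ]; then
    mv "$src" "$dst"
fi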
Hope this helps.

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it creates so that we can start editing it locally and pushing changes?
I tried gcloud init my-project; however, all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, set up a repo locally for it, and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well, I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
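For example, to use local port 8080 instead, while the app still listens on 3000 remotely (the project and instance names are placeholders, as above):
gcloud compute ssh --ssh-flag=-L8080:localhost:3000 --project=project-id --zone us-central1-f instance-name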
Once you SSH in, you will see the target directory, named whatever you told mean-io to use as the name of the application when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then I removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info.
This copied all the files from the remote instance to a folder called flow-bck at the defined local path. I renamed this folder to match its name on the remote, flow, and then ran:
cd flow && npm install
This installed all the modules MEAN.io needs. Now, the important part: you have to kill your remote SSH connection so that you can start running the local version of the app, because the SSH tunnel will already be using that same port (3000), unless you changed it when you tunneled in.
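If you're not sure whether the tunnel is still holding the port, a quick way to check on a Mac (or Linux) is:
lsof -i :3000
If the gcloud/ssh process shows up in the output, the tunnel is still open and the local app won't be able to bind to port 3000.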
Then, in my local app directory flow, I ran gulp to start the local version of the app on port 3000. It loads up and runs just fine. I needed to create a new user, as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the mongodb process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
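So, as a reminder, the local startup order is roughly this (assuming mongod is on your PATH and uses its default data directory):
mongod            # start MongoDB first, in its own terminal or backgrounded
cd flow && gulp   # then start the app on port 3000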
Now, the two things I haven't done yet are editing and deploying a new version based on this... and there's the nagging question of whether this is all even necessary. It would be great to find that this can all be done with a few simple commands.

What is the best practice to move my dockerfile and related files to the server? (private files)

I have a Dockerfile, a .sh file, an nginx config file, and the private keys. But on a clean server, how do I add those files for the first time (before building the Docker image)?
Should I FTP and put those files there?
Should I git pull my project? // but I still need the keys, or I can use a password
What do you do?
I'm not using DigitalOcean, and I would not like to have a paid private Docker repo like https://registry.hub.docker.com/plans/
Are you using a physical server or a VPS?
If you can SSH to your server, there are many ways to add files.
1. The easiest way is to use SFTP (you can find an SFTP client to do this, or use the sftp command-line tool); it only needs your SSH login permissions, and you can upload these files to your user home directory (see the sketch after this list).
2. The other way is to use scp, with a command like:
scp YOUR_FILE username@ipaddressORhostname:/home/username/
This also only needs your SSH login permissions.
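For the first option, a minimal interactive sftp session might look like this (the host and file names are placeholders standing in for your own):
sftp username@ipaddressORhostname
put Dockerfile
put deploy.sh
exit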
Git or FTP is not a good way to push files to a server for the first time.
Most Git remote repos (GitHub, GitLab, etc.) will support an HTTPS access mechanism; the data is encrypted in transit much like with SFTP and SSH. You'll get a password challenge. No keys...
git remote add myHttpsRemote https://my/foo/bar/project.git
git pull myHttpsRemote [branch]

SFTP inline put without interaction

I am trying to automate an application deployment, and as part of this I need to upload a file to a server. I have created a minimal user and configured chroot for the SFTP server, but I can't work out how to upload a file non-interactively.
At present I am doing scp myfile buildUser@myserver.com:newBuilds/
I tried sftp buildUser@myserver.com myfile (newBuilds is the chroot dir), but while it did connect, it didn't upload anything.
The reason for favouring this approach and NOT using scp is that it's a lot more difficult to restrict scp access (from the information I have learned).
If you are using OpenSSH server, chrooting works for both SCP and SFTP.
For instructions see:
https://www.techrepublic.com/article/chroot-users-with-openssh-an-easier-way-to-confine-users-to-their-home-directories/
So I believe your question is irrelevant.
Anyway, sftp (assuming OpenSSH) is not really designed for command-line-only upload. You typically use the -b switch to specify a batch file containing put commands:
sftp -b batchfile buildUser@myserver.com
With batchfile containing:
put /local/path /remote/path
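A batch file of '-' tells sftp to read commands from standard input instead, so you can skip creating the file altogether (same placeholder paths and host as above):
echo 'put /local/path /remote/path' | sftp -b - buildUser@myserver.com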
If you really need command-line-only upload, see:
Single line sftp from terminal or
Using sftp like scp
So basically, you can use various forms of input redirection like:
sftp buildUser@myserver.com <<< 'put /local/path /remote/path'
Or simply use scp instead of sftp. Most servers support both. And actually, OpenSSH scp has supported the SFTP protocol since 8.7.
Since OpenSSH 9.0, it even uses SFTP by default. In 8.7 through 8.9, SFTP has to be selected via the -s parameter. See my answer to the already mentioned Single line sftp from terminal.
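For example, with an OpenSSH 8.7-8.9 client (reusing the host and path from your question):
scp -s myfile buildUser@myserver.com:newBuilds/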
You can pass inline commands to SFTP like this:
sftp -o PasswordAuthentication=no user@host <<END
lcd /path/to/local/dir
cd /path/to/remote/dir
put file
END
I resolved this issue by approaching it from a different side. I tried configuring chroot for SFTP but could not get it to work. My solution was to use rssh and only allow scp. This works for me because the user I am trying to restrict is a known and authenticated user.

CruiseControl.net connecting to BitBucket using SSH and running as a service

Here's my situation.
I'm running Cruise Control as a Windows Service and trying to get it to connect to a Mercurial Repository on BitBucket over SSH.
I'm pretty sure that everything's configured OK (PuTTYgen, Pageant, etc.). I'm remoting onto the server using the same account that I am using to run the service, and if I issue hg pull -b ssh://hg@bitbucket.org// from a command line, everything works. I added -v to the ssh configuration in mercurial.ini and I can see all of the steps that are taken.
If I run CC.NET from a command prompt then it builds fine. In the console window I can see the same logging from the SSH operation.
However, if I run CC.NET as a service (using the same user account that I'm logged in with), the call to BitBucket times out, and I can find no way to work out why. The build log doesn't help, and neither do ccnet.log or ccnet.trace in the temp directory. I was expecting one of them to contain the logging from the SSH operation, but they don't.
Can anyone help? Is it that running as a service prevents it from connecting to Pageant (I've started Pageant by adding it to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run)? When I did the pull from the command line I had to OK a dialog, but only once. Is it waiting on the same dialog now that it's running as a service?
Getting close to my wit's end here.
Thanks
I did get it working in the end. The trick was to create the key pair without a passphrase. When running as a service, the solution has to be completely non-interactive, and the passphrase option with pageant.exe just isn't.
Here are the steps:
Use PuTTYgen to generate a secure key WITHOUT a passphrase. If you really do need one, you can add it to the mercurial.ini file, but that defeats the point for me, as it's in plain sight anyway.
Copy a mercurial.ini to two locations: C:\Windows\System32\config\systemprofile and C:\Windows\SysWOW64\config\systemprofile. Probably only one of these was really necessary, but I didn't have time to experiment. The first is the home directory for the system user when running 64-bit apps, the SysWOW64 location for 32-bit. If you do the same as me, make sure to keep both files in sync, or go one further and work out which is the correct location.
Add something like this line under the [ui] key in both files:
ssh = "D:\Program Files\TortoiseHg\TortoisePlink.exe" -ssh -2 -C -batch -v -i "[Path to your ppk file]"
Add the passphrase to the end of the command if one was created in step 1.
Make sure that TortoisePlink.exe is specified, not Plink.exe. They should both be in the same directory.
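Put together, the [ui] section of each mercurial.ini might look something like this (the .ppk path is a placeholder for your own):
[ui]
ssh = "D:\Program Files\TortoiseHg\TortoisePlink.exe" -ssh -2 -C -batch -v -i "C:\keys\build.ppk"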
Download psexec from http://technet.microsoft.com/en-gb/sysinternals/bb842062.aspx
Run d:\PSTools\PsExec.exe -s -i cmd.exe. This will open a command line as the system account in interactive mode.
Now do an hg pull, or hg clone or whatever.
A dialog should pop up with a confirmation message. This is a one time thing and the reason that you have to do the PsExec step. OK the dialog.
Now CC.NET should be able to run as a service under the local system account using SSH!