Working with an SSH connection and a GitHub workflow - github

I'm working on an experiment with an ML technique that requires a better machine for computational purposes, so they gave me an SSH connection to that machine. The data are also stored on that server.
My workflow was this:
(I'm working on a headless server)
I connect from my local machine via SSH and run the script for the experiments...
On that machine I can only use vim, without all my usual setup.
If I want to change something, I have to change it locally and then push the changes.
I then pull the changes on the remote server and try a new experiment.
Occasionally I have to push the results (plots and more) from the remote server, pull them locally to work on them, and eventually push again.
I think there is a flaw in this, and there's a better way to manage all of these things.
Do you have some ideas?
What I need is a clever way to avoid having to push every single change I make.

Another alternative is to use an IDE like VSCode with the Remote - SSH extension, following this tutorial.
That way, VSCode on your local machine directly displays and edits files on the remote machine, without you having to pull/push them.
Depending on that extension, you might still need a separate SSH session in order to git add/commit those modified files.
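If the extension doesn't cover the Git side for you, that commit step can also be done from the local machine in a single non-interactive SSH call once the files have been edited. This is only a sketch; the host alias, repository path and commit message are placeholders:

ssh user@remote-server "cd ~/experiments && git add -A && git commit -m 'update experiment' && git push"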

Related

GitLab - Cannot push or pull. It seems to be a permission issue

Hope someone will be able to help: I've installed GitLab and for a few days it seemed to work ok (I could push and pull only from a client, but not from the machine that runs GitLab itself). However, that's no longer the case. I have been working on the server (it's my own server that I've set up for development/learning/personal stuff), but I don't believe I've changed anything that could affect GitLab, so I don't know what to do.
At the moment I can't push or pull from either my local machine (OS X 10.8.3) or my server (Ubuntu 12.04). I've run the test several times and all is green. When I do git config user.name or git config user.email it comes back with my name and email respectively. I've also searched online but couldn't find anyone in exactly the same situation; however, I did try many of the approaches suggested: I've deleted and generated more SSH keys, and changed the config in /home/git/gitlab/config.yml to reflect my setup (I'm running apache). My GitLab is 5.2 and I've followed the instructions on GitLab's homepage. In order to make it work with apache instead of nginx I've followed the instructions here:. This question seems the closest to describing my problem, however the solution is not clearly described, so I couldn't follow it. The web interface works fine and I can commit both from my local machine (using sshfs) and from my server. I just can't push or pull. The error I get is:
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
I'd appreciate any help. I've been struggling with this for days now and I'm on the brink of giving GitLab up...
Many thanks
EDIT: On my server I've got three accounts: user1 (the main, first user, root), user2, a sudoer that also has admin privileges, and git, which is also a sudoer. After more investigating, I'm pretty sure this is a problem of me messing up the permissions and the SSH key. Can someone clarify: when I generate the SSH key, which user should I be logged in as? On which computer should I generate this key, my server or my Mac? Also, when I tried to push from my server directly (I was physically logged in to the server rather than sshed in via my Mac), GitLab asked for git's password. I then generated a key while logged in as git on the server and added it to GitLab through the web interface, but the same error appeared again. Still not fixed.
The problem in my case was that I had changed the git credentials on my local machine (when you create a new repo, you set the user name and email to Git and git@localhost respectively) and didn't realise it. That's why every time I tried to push or pull I got the error. Once they were changed back to the correct settings, GitLab started working again. Leaving this here as it might be helpful to someone.
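As a sketch of that check-and-restore step (the name and email values below are placeholders for whatever your correct settings are):

git config user.name                  # inspect the current repo-local identity
git config user.email
git config user.name "Your Name"      # restore the correct values
git config user.email "you@example.com"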

How to let Emacs jump to the remote Linux machine?

The way I work on a remote Linux machine is a little complex right now. First, I use PuTTY to log in to a jump machine (also a Linux system) in my office; then I use a command to jump to the remote machine outside of my office. There is also a key file I need when using PuTTY. The Linux jump command is like this:
ssh 119.11.11.11 -p 22
The IP changes according to which remote machine I need to reach.
My usual way of developing is to use Emacs Tramp to edit files remotely.
I don't want to copy my Emacs config files to the remote machine, because it is a bit hard to keep the config files in sync between machines. I also don't want to download the files to my local machine, because that isn't convenient for debugging.
In this situation, how can I use Emacs to jump to the remote machine? Is it possible to do the jump using Cygwin, PuTTY or something else?
My desktop is Windows 7, and my Emacs is 24.2
Assuming you can't SSH directly to the destination server, it sounds like you could resolve this by configuring a multi-hop proxy for tramp.
I've only tried that once, but it was for a slightly different situation, and I had problems getting it working; so I'll just point you at the documentation, and leave it to someone more knowledgeable to provide other details if need be.
C-hig (tramp) Multi-hops RET
I would strongly recommend using either scpc or rsyncc as the method for the second hop, if possible, as that will automatically utilise SSH ControlMaster to keep the connection open, which dramatically improves Tramp performance.
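If your local machine has an OpenSSH client available (rather than PuTTY), roughly the same effect can be had at the SSH level instead of inside Tramp. This is only a sketch, with the user names and jump host made up and the IP taken from the question; -J needs OpenSSH 7.3 or later:

# hop through the jump machine in one step and keep the connection open for reuse
ssh -o ControlMaster=auto -o ControlPath=~/.ssh/cm-%r@%h:%p -o ControlPersist=10m \
    -J user@jump-machine -p 22 user@119.11.11.11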
I'm not sure whether or not there's an equivalent to that for PuTTY/plink? I do know that Cygwin isn't able to support ControlMaster for some technical reasons (or at least this was the case a few years ago), so using that probably wouldn't help.
Another alternative to Cygwin's SSH and PuTTY is to host a Linux(*) VM on your Windows box and run Emacs inside that (which means you can use Linux's SSH and ControlMaster). Cygwin can provide an X display in that instance. That's complicating matters, of course, so I would certainly try out the simpler options first; but if performance is lacking and your local PC is reasonably powerful, the VM approach might surprise you.
* or similar
Ignoring Tramp entirely, sshfs is often used to mount a remote filesystem locally, in which case Emacs doesn't even know that it's talking to a remote server. I've never used it myself, and certainly not on Windows, but it could be worth a look as well.
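For what it's worth, a typical sshfs invocation looks something like the following (a sketch only; the mount point and remote path are made up, and this assumes a Linux client rather than Windows):

mkdir -p ~/remote-src
sshfs user@119.11.11.11:/home/user/project ~/remote-src
# ... edit the files with a plain local Emacs ...
fusermount -u ~/remote-src    # unmount when finished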

Sensible deployment using EC2

We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
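In other words, the deploy script does roughly the following (a sketch only; the repository URL and timestamp format are assumptions):

TS=$(date +%Y%m%d%H%M%S)
git clone --depth 1 git@github.com:example/app.git /var/www/releases/$TS
ln -sfn /var/www/releases/$TS /var/www/current    # atomically repoint the document root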
We're looking to get a better deployment strategy, such as something where we SSH into one of the servers on the private network, and run a command-line script to deploy what we want to deploy.
However, this means that this one server has to have its public key in the authorized_keys of all of the servers we want to deploy to. Is this safe? Wouldn't that create a single server from which all the other servers could be accessed?
What's the best way to approach this?
Thanks!
We use a similar strategy to deploy, though we're not with Rightscale anymore.
I think that approach is generally fine, and I'd be interested to learn what you think is not sensible about it.
If you want to do your SSH thing, then I'd go about it the following way (a sketch of the key setup follows below):
Lock down SSH using security groups, e.g. open SSH up only to a specific IP, or to servers with a deploy security group, or similar. The disadvantage here is that you might lock yourself out when the other servers are down, etc.
I'd put public keys on each instance to allow password-less login. If you're security conscious, rotate those keys on a monthly basis or, for example, when employees leave.
Use Fabric or Capistrano to log into your servers (from the deploy master) using SSH and do your deployment.
Again, I think Rightscale's approach is not unique to them. A lot of services do it like that. The reason is that e.g. when you symlink and keep the previous version around, it's easier to rollback and so on.
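For the password-less login step above, the key setup on the deploy master would look something like this (a sketch; user names, host names and the deploy script path are placeholders). Fabric or Capistrano would then wrap the last step for you:

ssh-keygen -t ed25519 -f ~/.ssh/deploy_key -C "deploy-master"
ssh-copy-id -i ~/.ssh/deploy_key.pub deploy@app-server-1      # repeat for each target server
ssh -i ~/.ssh/deploy_key deploy@app-server-1 'bash /var/www/deploy.sh'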

Emacs-client - what's the minimal installation?

Let's say I have an Emacs server running on some remote server, with all the libraries and software necessary for running my application.
Then I want several clients to connect to that remote machine, using Emacs-client. Does each client need a full Emacs installation, or is there a minimal installation that is just enough to communicate with the remote server, where all the action is?
Could this (Emacs-)client installation be so minimal, that almost all software-updates can be done on the server, without affecting the Emacs-clients?
Is there a reason not to run the clients remotely as well, and simply use a local display? That way, pretty much all you need on the local machines is the ssh client and the X Window server.
ssh -X user@server "emacsclient -c"
Edits for the comments:
This command starts a new client to connect to an existing Emacs server (which it assumes is already running). You can use "emacsclient -a '' -c" to automatically start emacs --daemon if there is no existing server, but I don't know whether you want the connecting user to be starting the server.
In fact, I'm pretty unsure about the whole multi-user side of this to be honest, as I've never done that before. Authentication for the above is handled by ssh, but there may well be subsequent permission issues to deal with, or similar, when the server and the clients are started by different users.
This approach should be possible with Windows/Cygwin as client and/or server, as Cygwin provides Emacs, OpenSSH, and X.org packages. (I regularly use Windows/Cygwin as a local display for Emacs running on Linux.) It may be harder to set up, though, and any permissions issues are probably different when you're using Cygwin.
I'm less sure how this would work without Cygwin. NTEmacs certainly won't talk to X.org, so I imagine you'd be terminal based in that instance. (There are probably other options, but Cygwin sounds to me like the best-integrated approach to using all of Emacs, SSH, and X on Windows).
Lastly, I imagine you're probably getting your "Connection refused" error because localhost is not running an sshd daemon. I would say that configuring ssh is outside the scope of this question, but there are lots of resources online for that.
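Putting the pieces above together, the whole flow is roughly as follows (user and server are placeholders, and this assumes the daemon and the clients run as the same user):

emacs --daemon                              # on the remote machine: start one long-lived Emacs server
ssh -X user@server "emacsclient -c"         # from a local machine with an X server: open a graphical frame
ssh -t user@server "emacsclient -a '' -t"   # or a terminal frame, starting the daemon if it isn't running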
Depending on what you're trying to achieve, you may be able to use a combination of Emacs and Screen. By starting up Emacs from Screen on the remote machine and detaching from it, you can subsequently re-attach from a different machine that doesn't have Emacs. Again, whether this will work for you or not depends on what you're trying to do; however, for many Emacs use-cases, this can be very effective. If you're not familiar with using Screen in this manner, here is some reading material:
screen - The Terminal Multiplexer
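A minimal sketch of that Screen workflow (the user, server and session name are placeholders):

screen -S editing emacs -nw     # on the remote machine: run Emacs inside a named screen session
# detach with C-a d, log out, then later, from any machine that can ssh in:
ssh user@server
screen -r editing               # re-attach to the same running Emacs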
I am not sure that would be possible. emacsclient uses tramp to connect to a remote server, and just by looking at the number of requires in the tramp elisp files (41) it seems very unlikely. You can try it yourself with the following:
zgrep -oE "\(require '[a-z-]+\)" *el.gz | sed -e 's%[a-z0-9-]\+\.el\.gz:%%g' | sort | uniq -cu | wc -l
I'm not an expert in emacsclient, but I don't think it was designed to do what you're looking for. I think the general use case is that emacsclient allows you to redirect new requests to open a file with Emacs to a persistent Emacs process, to avoid what may be a bit of an overhead in startup time. You seem to be looking for more of a true client/server relationship.
I think to meet the goal you're aiming at, you'll probably need to look a little outside Emacs, probably at a project unto itself - 'emacsRemoteClient'. It boils down to one of two models: one, the file you want to edit would need to have its path sent over to the server machine so that Emacs could do some sort of remote Tramp access and then spawn the X window locally (using the local X env, or requiring an X server on Windows)... or two, transferring the file to some temp location on the server box and again spawning the remote X window locally (followed by syncing the changes between the temp and local file).
It would be cool to have something like that... but I suspect it'll involve a bit of work. Maybe we just need a version of Emacs written in JavaScript so it can live in the cloud or in your browser... oh, to have Emacs keybindings in the browser ;-)
-Steve

How can I have the main repository stored on a local network?

I'm wondering how I can have the main Mercurial repository stored on the local network, such that every developer connected to it can do the usual things (just like with Bitbucket online).
Thanks.
Check https://www.mercurial-scm.org/wiki/PublishingRepositories
If it's only on a local network, hg serve is the easiest/fastest option, though it has no authentication support.
Of course, you can always use SSH, without the need to set up a server:
hg clone ssh://your-main-server/path/to/repo
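A quick sketch of the hg serve option (the port and paths are arbitrary); note that pushing over hg serve additionally needs allow_push (and usually push_ssl = false) set in the served repository's .hg/hgrc:

cd /path/to/repo
hg serve --port 8000
# on a developer's machine:
hg clone http://your-main-server:8000/ repo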
Set up a machine running the Apache HTTP server (or another web server that can host hgwebdir).
Install the hgwebdir CGI script.
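If you go the hgwebdir route, the CGI script is pointed at your repositories through a small config file; a sketch, with the paths entirely made up:

cat > /var/hg/hgweb.config <<'EOF'
[paths]
/ = /var/hg/repos/*
EOF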