How do I clone Github data to Amazon EC2? - github

I am new to both Git and Amazon EC2.
I want to clone my Github code to Amazon EC2 directly.
For that I have referred to the following URLs:
http://thelucid.com/2008/12/02/git-setting-up-a-remote-repository-and-doing-an-initial-push/
http://deductiveblog.in/2013/05/19/deploy-to-amazon-ec2-using-git/
How to push to git on EC2
I've performed the changes suggested in the above URLs, but I am still not able to get my data onto Amazon EC2.
By following the suggested steps I can see one directory, but it does not contain the data that I have in Git and in my local copy.
So what should I do to clone all the data onto Amazon EC2?
I also want to know one other thing: is it possible to put my files directly in the /var/www directory without creating .git?
Now I am getting an error: Permission denied (publickey). fatal: The remote end hung up unexpectedly.
I have checked my SSH keys and added one to GitHub. I have the key in an authorized_keys file and I added the same key to GitHub, but it still gives me a Permission denied error.
Can anyone suggest how to resolve this?

You can just use the git clone command on your EC2 instance, from the directory you want the repo cloned into:
git clone https://github.com/ryanb/railscasts-episodes.git
Of course you need to change the URL to your own repo; note that GitHub no longer serves the old unauthenticated git:// protocol, so use an https:// (or SSH) URL. This will create a folder named after the repo, with the files inside, in the current directory. If you want to clone the files inside the repo into the current directory itself, do:
git clone https://github.com/ryanb/railscasts-episodes.git .
The full stop (.), or dot, indicates the current directory in Unix.
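Since the question also runs into Permission denied (publickey) with SSH URLs, here is a minimal sketch of setting up a key for GitHub on the EC2 instance. The file names and key comment are illustrative, not from the thread, and the key is generated into a throwaway directory so the commands can be run safely anywhere:

```shell
# Generate a fresh key pair (a temp dir stands in for ~/.ssh here)
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -f "$keydir/github_ec2" -N "" -C "ec2-deploy" -q
# This public key is what you paste into GitHub -> Settings -> SSH and GPG keys
cat "$keydir/github_ec2.pub"
# On the real instance you would then verify auth and clone over SSH:
#   ssh -i ~/.ssh/github_ec2 -T git@github.com   # should greet you by username
#   git clone git@github.com:your-user/your-repo.git
```

Note that authorized_keys is for logging *into* the EC2 box; GitHub needs the public key added under your GitHub account settings, which is the likely source of the Permission denied error in the question.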

Related

How to connect to an organization's GitHub repository using an SSH key in Visual Studio 2019?

I prefer using SSH keys, and there is always a lot for me to learn!
My current challenge is:
I am using Visual Studio 2019, and my organization's private repositories are on GitHub under the organization's name.
When I try to connect to one of the repositories under my organization's name, I get this error:
Permission denied (publickey)
I have generated SSH keys as follows:
id_rsa_personal@gmail
id_rsa_work_alias@organization
I also have a config file in ~/.ssh/config with contents as below:
#github personal
Host personal
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa_personal@gmail
#github work
Host work
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa_work_alias@organization
In my Visual Studio 2019, under Menu Bar -> Git -> Manage Remotes, it is set up to use SSH for fetch and push.
Here is what my GitHub dashboard looks like.
As you can see, it has two entries: one for my work alias and one for my organization.
I mostly work on repositories under the organization.
But while adding the SSH key contents to GitHub, I did so under my work alias account.
Now when I try to do a fetch from Visual Studio 2019, I get the error:
Permission denied (publickey)
Examining the Your Organizations menu option, I do not see a settings option.
So, in my case, should I configure Visual Studio 2019 to use https:// to perform Git operations? Please clarify.
To add to sborsky's comment, assuming everything is running with the current user (the one with ~/.ssh/config), the actual URL to use would be:
work:Organization/api-1.git
No need to repeat the user, which is already specified in ~/.ssh/config
Check first if it works with:
ssh -Tv work
You should see a welcome message, unless:
your account is not part of the organization
and/or you have not properly added the public key to your GitHub account.
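Concretely, assuming the ~/.ssh/config from the question, pointing an existing remote at the "work" alias would look like this (the Organization/api-1.git path is the example one from the answer; a throwaway repo is used so the commands run anywhere):

```shell
# Set up a scratch repo to demonstrate on
repo=$(mktemp -d) && cd "$repo" && git init -q
git remote add origin git@github.com:Organization/api-1.git
# Repoint the remote at the "work" Host alias; ssh then picks the
# IdentityFile from ~/.ssh/config instead of the default key
git remote set-url origin work:Organization/api-1.git
git remote -v
```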

Managing multiple Git SSH keys

I've been having some issues maintaining too many SSH keys on the same computer lately.
I have created two SSH keys on my computer, for UserA (company) and UserB (personal). The two keys were created with different email IDs.
I am able to pull and push code changes as UserA.
But with UserB I have trouble pushing my code.
I am able to pull code as UserB (where the repo is different from UserA's).
While pushing the code I get the following error:
ERROR: Permission to UserB/xxxxxx.git denied to UserA.
Please make sure you have the correct access rights
and the repository exists.
This feels a bit strange to me. Can someone help me with this?
Starting from Git 2.3.0, you can use the command below:
GIT_SSH_COMMAND='ssh -i private_key_file' git clone user@host:repo.git
Solved!
I created a Git config for personal and work use by following this link:
https://medium.com/@trionkidnapper/ssh-keys-with-multiple-github-accounts-c67db56f191e
Sometimes the problem is too many keys stored in the ssh-agent.
The server then refuses the connection after the agent offers too many keys.
This can be solved by forcing ssh to use only one specific key:
GIT_SSH_COMMAND='ssh -o IdentityAgent=none -i private_key_file' git <cmd>
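If you would rather not prefix every command, the key choice can be stored per repository with core.sshCommand, which Git honors since 2.10. A sketch, using a throwaway repo and an illustrative key path:

```shell
# Scratch repo so nothing global is touched
repo=$(mktemp -d) && cd "$repo" && git init -q
# IdentitiesOnly=yes stops the agent from offering its other keys first
git config core.sshCommand "ssh -o IdentitiesOnly=yes -i ~/.ssh/id_rsa_personal"
# Read it back; Git now uses this ssh invocation for fetch/push in this repo
git config core.sshCommand
```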

How have I saved my GitLab username and password in Visual Studio Code on Windows?

This is a strange question, but I cannot find out how I did this:
I have three computers, all of them with Visual Studio Code and an account on GitLab. On two of them, my procedure has been:
git clone ___.git
cd folder
git init
git remote add origin ___.git
And then, every time I push, I have to enter my ID and password.
But on my first computer I did something so that I don't need to enter the ID and password anymore; it just pushes without hassle.
So I thought I must have done something like this:
https://git-scm.com/docs/git-credential-store
But when I look for .gitconfig, which I find in the /Users folder, it only has my user.name (which is not my GitLab one) and my email (this one is the GitLab email), but no password entry. And I cannot find any other .gitconfig files anywhere. I am sure I didn't set up any SSH key.
So here is the question: what did I do?
Make sure to use the GCMfW: Git-Credential-Manager-for-Windows:
git config --global credential.helper manager
Then try again: that should cache your username / password, provided you are using an HTTPS (https://...) URL, not an SSH one.
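To confirm which helper is actually in effect, you can read the setting back. A sketch scoped to a throwaway repo so nothing global is modified ("manager" is the helper name from the answer above):

```shell
# Scratch repo; use --global on a real machine if you want it everywhere
repo=$(mktemp -d) && cd "$repo" && git init -q
git config --local credential.helper manager
# An empty result here would mean no helper is configured and Git
# will keep prompting for the username/password on every push
git config credential.helper
```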

Make a server backup and keep owners with rsync

I recently configured a little server to test some services. Now, before upgrading or installing new software, I want to make an exact copy of my files, with owners, groups and permissions, and also the symlinks.
I tried rsync to keep the owner and group, but on the machine that receives the copy I lose them:
rsync -azp -H /directorySource/ myUser@192.168.0.30:/home/myUser/myBackupDirectory
My intention is to do this with the / folder, to keep all my configuration just in case; I have 3 services that have their own users and may make modifications in folders outside their home directories.
In the destination the files appear owned by my destination user, whether I run the copy from the server or from the destination machine; it doesn't keep the users and groups! I created the same user, tried with sudo, and a friend even tried with a 777 folder :)
cp theoretically does the same but doesn't work over ssh; anyway, I tried it on the server and got many errors. As I recall, tar also keeps permissions and owners, but I got errors because the server is in use, and restoring would not be fast. I also remember the magic dd command, but I made a big partition. rsync looked like the best option, and it keeps the backup synchronized. I have read that rsync handles owners well in newer versions, and my package is already upgraded.
Does anybody have an idea how to do this, or what the normal process is to keep my server properly backed up, so I can restore just by making the partition again?
The services are Taiga (a project management platform), a Git repository, a code reviewer, and so on; all are working well with nginx on Ubuntu Server. I haven't looked at other backup methods because I thought rsync with a cron job would do the job.
Your command would be fine, but you need to run as root user on the remote end (only root has permission to set file owners):
rsync -az -H /directorySource/ root@192.168.0.30:/home/myUser/myBackupDirectory
You also need to ensure that you use rsync's -o option to preserve owners, and -g to preserve groups, but as these are implied by -a your command is OK. I removed -p because that's also implied by -a.
You'll also need root access, on the local end, to do the reverse transfer (if you want to restore your files).
If that doesn't work for you (no root access), then you might consider doing this using tar. A proper archive is probably the correct tool for the job, and will contain all the correct user data. Again, root access will be needed to write that back to the file-system.
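A minimal tar sketch along those lines; a throwaway directory stands in for the real source so the archive step can be run as-is, and the networked transfer and restore are shown as comments:

```shell
# Throwaway source directory standing in for /directorySource
src=$(mktemp -d); echo demo > "$src/config.txt"
# -p keeps permission bits; owners/groups are recorded in the archive anyway
tar -czpf "$src.tar.gz" -C "$src" .
tar -tzf "$src.tar.gz"
# On the real server (as root, so owners and groups survive the restore):
#   tar -czpf - -C /directorySource . | ssh root@192.168.0.30 \
#       'cat > /home/myUser/myBackupDirectory/backup.tar.gz'
#   tar -xzpf backup.tar.gz -C /restoreTarget --numeric-owner
```

--numeric-owner restores raw UIDs/GIDs rather than matching names, which avoids surprises when the backup host has different users defined.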

Mercurial Keyring Prompts for Password Every time

I am using the mercurial key-ring extension to store the password to my remote repository on BitBucket, so I don't have to enter it every time I push to the remote repository. Ironically, it asks me for the password to unlock the key-ring every time I need to access it; thereby completely mitigating its purpose to me. What am I doing wrong?
In my global mercurial config (~/.hgrc) I have the following lines:
[extensions]
hgext.mercurial_keyring = /etc/mercurial/mercurial_keyring.py
In my repo mercurial config (.hg/hgrc), I have:
[paths]
default = https://username@bitbucket.org/username/repo
Example:
> hg out
> comparing with https://username@bitbucket.org/username/repo
> Please enter password for encrypted keyring:
I have tried uninstalling the keyring and trying again. I've also played about with configuration settings I've found online to no avail. I also couldn't find anything on encrypted keyring and non-encrypted keyring in regards to mercurial.
How can I get it so that I don't have to enter a password at all when I perform actions to the remote repo?
I don't know if this was already the case at the time the question was asked, but now the solution is directly explained in the keyring extension wiki link in your question.
Just enabling the keyring extension is not enough, you also need to tell Mercurial the remote repo and the username in the config file.
Quote from the link:
3.2. Repository configuration (HTTP)
Edit repository-local .hg/hgrc and save there the remote repository
path and the username, but do not save the password. For example:
[paths]
myremote = https://my.server.com/hgrepo/someproject
[auth]
myremote.schemes = http https
myremote.prefix = my.server.com/hgrepo
myremote.username = mekk
Simpler form with url-embedded name can also be used:
[paths]
bitbucket = https://User@bitbucket.org/User/project_name/
Note: if both the username and password are given in .hg/hgrc, the
extension will use them without using the password database. If the
username is not given, extension will prompt for credentials every
time, also without saving the password. So, in both cases, it is
effectively reverting to the default behaviour.
Note that you don't need to specify all the information shown in those examples.
On my machine (Mercurial 5.0.2 on Windows), I'm using a simpler form which also works for multiple repos.
This is a 1:1 copy from my actual config file:
[extensions]
mercurial_keyring =
[auth]
bb.prefix = https://bitbucket.org/
bb.username = christianspecht
This uses the keyring extension to save the password for the user christianspecht, for all remote repos whose URL starts with https://bitbucket.org/.
The prefix bb can be freely picked, so you can use this to save multiple URLs/usernames at once.
This works perfectly well (at least until Bitbucket drops Mercurial support in a few weeks...) - it asks for the password once, then it's automatically saved and it never asks again.
it asks me for the password to unlock the key-ring. What am I doing wrong?
Nothing. Read the keyring docs: the password for accessing the keyring must be provided once per session.