Why is exposing known_hosts dangerous - GitHub

I have been looking into automating builds using Git and Docker. One of the tools I find useful is ssh-keyscan, which adds the scanned host keys to known_hosts and lets you bypass the 'fingerprint' prompt when cloning a repository for the first time.
I read a comment which pretty much says that exposing this file is dangerous. I thought keyscan just adds a bunch of public keys to your known_hosts file. Why is it dangerous if anyone sees this - can't they get the exact same public keys using the same tool?
I would have thought that, in the link, adding a private SSH key to the Docker container would be the dangerous part, since that is the part you aren't meant to share.
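To make the setup concrete, the pattern is roughly the following (the repository URL is only a placeholder, not my real one):
mkdir -p ~/.ssh
ssh-keyscan github.com >> ~/.ssh/known_hosts    # appends GitHub's public host keys
git clone git@github.com:example/repo.git       # no interactive fingerprint prompt now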

Related

How to start from scratch and configure GitLab and GitHub accounts on the same PC?

Through my university I have a GitLab account, but later I created my own account on GitHub for my personal use, and both are "linked" on my laptop. In my mind it's quite a mess: they are linked differently, and I lose track of which is which (one uses an SSH key, the other one is... I'm not sure).
So, I want to unlink my two accounts entirely, delete the SSH keys, and remove everything else that is needed to start from scratch. What do I need to do? I don't want to break anything. And then, how can I use both accounts on my laptop (using SSH keys)?
I hope I was clear.
To effectively 'start from scratch' you will need to:
1. Remove your local configurations
   a. Remove your SSH configuration/keys
   b. Remove your (global) git configuration
2. (Optionally) Revoke your SSH keys from your GitHub and GitLab accounts online
Removing your local configurations
Your SSH configuration is, by default, stored in the .ssh directory in your user profile. This is probably also where your key(s) are stored. To effectively 'start over' you can move this directory (recommended, in case you want to restore it later) or delete it entirely.
The same goes for your git configuration, except that it lives in the .gitconfig file rather than the .ssh directory.
On Windows, in Command Prompt/PowerShell, you can move them like so (assuming you're in your user home directory):
move .\.ssh .\.ssh.old
move .\.gitconfig .\.gitconfig.old
On macOS or Linux systems, you can do the following:
mv ~/.ssh ~/.ssh.old
mv ~/.gitconfig ~/.gitconfig.old
Removing your SSH keys from GitLab/GitHub
GitLab
For GitLab, navigate to https://gitlab.com/-/profile/keys (replace the URL if you're using a self-hosted GitLab server) -- here you can see a list of the public SSH keys you have added to GitLab and remove them by clicking the trashcan icon.
GitHub
For GitHub, navigate to https://github.com/settings/keys -- you will see a list of keys you have added and you can remove them by clicking the "delete" button.
With all of these actions done, you will effectively have "started from scratch" with respect to your computer's git configurations.
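As a rough sketch of how the two accounts could then coexist afterwards (the key file names below are assumptions; replace gitlab.com with your university's server if it is self-hosted), you can give each host its own key in ~/.ssh/config:
# ~/.ssh/config
Host github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_github

Host gitlab.com
    HostName gitlab.com
    User git
    IdentityFile ~/.ssh/id_ed25519_gitlab
With that in place, the usual git@github.com:... and git@gitlab.com:... clone URLs each pick up their own key.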

GitHub still asking for credentials despite successful creation of deploy key?

I have created a deploy key according to the Windows instructions here.
But instead of using the deploy key that has just been set up, git push asks for credentials, first with a pop-up, then with an SSH pop-up, then in the Git Bash command line itself! This is quite shocking, because the whole purpose of a deploy key is to avoid having to provide access to an entire GitHub account.
Given I have followed github's own instructions precisely and this isn't working, I am lost as to what to do next.
Notes
Some time ago, somehow, I set up a deploy key successfully on the same (Windows) server. So perhaps having more than one key on the machine is confusing some part of the process. I am not sure this has anything to do with it, though.
I can see here that GitHub expects keys to be named id_rsa and id_rsa.pub, but given this is my second deploy key on this particular server, I named the second set differently so as to avoid overwriting the original set (the original set is still there; there are just two more files in C:\Users\[YOUR-USER-NAME]\.ssh\).
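For reference, a sketch of how a non-default key name is usually wired up (the file name here is illustrative): a Host entry in ~/.ssh/config tells SSH which key to offer to GitHub:
Host github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_deploy2
    IdentitiesOnly yes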

Docker and sensitive information used at run-time

We are dockerizing an application (written in Node.js) that will need to access some sensitive data at run-time (API tokens for different services) and I can't find any recommended approach to deal with that.
Some information:
The sensitive information is not in our codebase, but it's kept on another repository in encrypted format.
On our current deployment, without Docker, we update the codebase with git, and then we manually copy the sensitive information via SSH.
The Docker images will be stored in a private, self-hosted registry.
I can think of some different approaches, but all of them have some drawbacks:
1. Include the sensitive information in the Docker images at build time. This is certainly the easiest one; however, it makes them available to anyone with access to the image (I don't know if we should trust the registry that much).
2. Like 1, but having the credentials in a data-only image.
3. Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now. This is very convenient too, but then we can't spin up new servers easily (maybe we could use something like etcd to synchronize them?)
4. Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
PS: I've done some research but couldn't find anything similar to my problem. Other questions (like this one) were about sensitive information needed at build-time; in our case, we need the information at run-time.
I've used your options 3 and 4 to solve this in the past. To rephrase/elaborate:
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now.
I use config management (Chef or Ansible) to set up the credentials on the host. If the app takes a config file needing API tokens or database credentials, I use config management to create that file from a template. Chef can read the credentials from encrypted data bag or attributes, set up the files on the host, then start the container with a volume just like you describe.
Note that in the container you may need a wrapper to run the app. The wrapper copies the config file from wherever the volume is mounted to the location the application expects, then starts the app.
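A minimal sketch of such a wrapper, assuming a Node.js app and illustrative paths (the mount point and entry point are not from the question):
#!/bin/sh
# entrypoint.sh - copy the config from the mounted volume to where the app expects it
set -e
cp /config-volume/credentials.json /app/config/credentials.json
exec node /app/server.js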
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
Yes, it's cumbersome to pass a bunch of env variables using -e key=value syntax, but this is how I prefer to do it. Remember the variables are still exposed to anyone with access to the Docker daemon. If your docker run command is composed programmatically it's easier.
If not, use the --env-file flag as discussed here in the Docker docs. You create a file with key=value pairs, then run a container using that file.
$ cat >> myenv << END
FOO=BAR
BAR=BAZ
END
$ docker run --env-file myenv <image>
That myenv file can be created using chef/config management as described above.
If you're hosting on AWS you can leverage KMS here. Keep either the env file or the config file (the one passed to the container in a volume) encrypted via KMS. In the container, use a wrapper script to call out to KMS, decrypt the file, move it into place, and start the app. This way the config data is not exposed on disk.
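A sketch of that wrapper using the AWS CLI (file names and the app entry point are assumptions; the instance or task role needs kms:Decrypt on the key):
#!/bin/sh
set -e
# Decrypt the KMS-encrypted config that was mounted into the container
aws kms decrypt --ciphertext-blob fileb:///secrets/config.enc \
    --output text --query Plaintext | base64 -d > /app/config/config.json
exec node /app/server.js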

Heroku and Facebook - cannot even clone my application to start working

Recently I've started working on a Facebook app using Heroku, and their tutorial is really extensive on the matter. However, when trying to clone my application to my machine, I get the following error:
Permission denied (publickey)
fatal: The remote end hung up unexpectedly
I tried every solution I could find (resetting my key, uploading a new one, editing the key archive), but none of them seemed to solve my problem.
Does anyone have a different alternative to this?
Thanks
I'm running Windows 7 Enterprise and my application is set to run on PHP.
Wherever you hold your keys, try:
heroku keys:add
Without an argument, it will look for the key in the default place (~/.ssh/id_rsa.pub or ~/.ssh/id_dsa.pub). If you wish to use an alternate key file, specify it as an argument. Be certain you specify the public part of the key (the file ending in .pub). The private part of the key should never be transmitted to any third party, ever.
https://devcenter.heroku.com/articles/keys
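For example, to point it at a specific public key file (the path here is only an illustration):
heroku keys:add ~/.ssh/my_other_key.pub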
Just wondering if you are starting Command Prompt with Ruby? I had some problems with this as well; double-check your SSH setup.

Sensible deployment using EC2

We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
We're looking to get a better deployment strategy, such as something where we SSH into one of the servers on the private network, and run a command-line script to deploy what we want to deploy.
However, this means that this one server has to have its public key in the authorized_keys of all of the servers we want to deploy to. Is this safe? Wouldn't this create a single server from which all the other servers could be accessed?
What's the best way to approach this?
Thanks!
We use a similar strategy to deploy, though we're not with Rightscale anymore.
I think generally that approach is fine, and I'd be interested to learn what you think is not sensible about it.
If you want to go the SSH route, I'd go about it the following way:
1. Lock down SSH using security groups, e.g. open SSH only to a specific IP or to servers in a 'deploy' security group, or similar. The disadvantage here is that you might lock yourself out when the other servers are down, etc.
2. Put public keys on each instance to allow password-less login. If you're security conscious, rotate those keys on a monthly basis or, for example, when employees leave.
3. Use Fabric or Capistrano to log into your servers (from the deploy master) over SSH and do your deployment.
Again, I think RightScale's approach is not unique to them. A lot of services do it like that. The reason is that, e.g., when you symlink and keep the previous version around, it's easier to roll back and so on.
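As a sketch of that release-and-symlink flow on a target server (paths mirror the question; the repository URL is a placeholder):
TS=$(date +%Y%m%d%H%M%S)
git clone git@github.com:example/app.git /var/www/releases/$TS
ln -sfn /var/www/releases/$TS /var/www/current   # switch the document root
# Rolling back is just re-pointing the symlink at a previous release directory.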