I've got a Puppet master and a couple of slave servers, and I'd like to change the hostname of the master system from a Puppet manifest. To do this I change the necessary files on the system (RedHat: /etc/sysconfig/network (master), /etc/sysconfig/puppet (all), /etc/puppet/puppet.conf (master)), and I delete the certificates from all the machines. If the manifest is executed on the master first, everything is fine, but if it's executed on a slave first, the slave gets stuck: it already has a new certificate, but the master doesn't know about it yet.
Is there any way to create a dependency between modules running on different machines? Or any way to overwrite a certificate on the master from the slave machine (and is that safe at all)?
At the moment I have this code (though it doesn't include the deletion of the certificates on the slaves):
http://pastebin.com/gMeWPpcn
Any other recommendations on how to solve this master hostname change and certificate problem are also welcome (I read about some MAC-address-based certificates, but in this case that's a no-go).
I'm pretty sure there's no way around this. After you change the hostname on the master and delete all the client certificates, your clients will no longer be able to connect to the master.
In situations like this, it's probably best to do it manually. For example:
Change the hostname on the master
Delete all the client certs (on the master)
Connect to the master from each of your clients and re-add them as you did initially
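On a Puppet 3-era install, those steps might look something like the following (the exact subcommands, SSL paths, and service name vary by version, and the new hostname here is a placeholder, so treat this as a sketch):

```shell
# On the master, after updating /etc/sysconfig/network and puppet.conf:
puppet cert clean --all          # revoke and remove all signed client certs
service puppetmaster restart     # regenerate the master cert for the new hostname

# On each slave: wipe the local SSL state and request a fresh certificate
rm -rf /var/lib/puppet/ssl
puppet agent --test --server puppet-new.example.com --waitforcert 60

# Back on the master: sign the pending requests
puppet cert list                 # show outstanding CSRs
puppet cert sign --all
```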
I am the only (full-stack) developer in my company, and right now I have too much other work to automate the deployments. In the future, we may hire a DevOps person for this.
Problem: We have 3 servers behind a load balancer. I don't want to block the 2nd and 3rd servers until the 1st server is updated and then repeat the same with the 2nd and 3rd, because there might be huge traffic on a single server initially and it may fail at some specific time before the other servers go live.
                               Server 1
Users ----> Load Balancer ---> Server 2 -----> Database
                               Server 3
Personal opinion: Is there a way we can pull the code by writing a script on the load balancer? I could replace the traditional DigitalOcean load balancer with an Nginx server, making it a reverse proxy.
NOTE: I know there are plenty of other questions on Stack Overflow about this, but none of them solves my queries.
Solutions I know
Git hooks - I know a bit about Git hooks, but I don't want to use them, because if I commit to the master branch by mistake, it must not get synced to production and create havoc on the live server for live users.
Open multiple tabs of servers and do it manually (current scenario). Believe me, it's a pain in the ass :)
Any suggestions or redirects to the solutions will be really helpful for me. Thanks in advance.
One solution is to write an Ansible playbook for this. With Ansible, you can run it against one host at a time, and as the last step you can include a verification check that confirms your application responds with HTTP 200, or that queries some endpoint reporting the status of your application. If the check fails, Ansible stops the execution. For example, in your case, if server 1 deploys fine but the deploy fails on server 2, the playbook will stop and you will still have servers 1 and 3 running.
I have done this myself. It works fine in environments without continuous deployment.
Here is one example
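A minimal sketch of such a playbook, assuming the inventory group is called webservers and the app exposes a /health endpoint (both, plus the repo URL and paths, are placeholders):

```yaml
# rolling_deploy.yml -- update one host at a time, stop on first failure
- hosts: webservers
  serial: 1                  # take servers through the deploy one by one
  tasks:
    - name: Pull the new release
      git:
        repo: "https://github.com/example/app.git"   # placeholder repo
        dest: /var/www/app
        version: master

    - name: Check the app answers with HTTP 200 before moving on
      uri:
        url: "http://localhost/health"
        status_code: 200
      register: health
      retries: 5
      delay: 10
      until: health.status == 200
```

With serial: 1, a failure on server 2 halts the run and leaves servers 1 and 3 serving traffic, as described above.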
Folks, how do I make sure all files of an RPM (CentOS) were removed? The problem is that I installed a piece of software called ShinyProxy (https://www.shinyproxy.io/), and after 3 days running as a test server, we received a message called "NetScan Detected" from Germany. Now we want to clean everything up by removing the RPM, but it seems it's not that easy, as something else is left on the system that continues to send and receive lots of packets (40kps). I really apologize to the ShinyProxy folks if this is not caused by their software; so far this is the last system under investigation.
Your Docker API is bound to your public IP and is therefore directly reachable from an external network. You should not do this, as it allows anybody to run arbitrary Docker instances and even commands on your Docker host.
You should secure your docker install:
- bind it to the 127.0.0.1 (lo) interface and adapt the shinyproxy yml file accordingly
- set up TLS mutual auth (client certificates) on the Docker API (it is supported by shinyproxy)
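Combining both points, the daemon configuration might look something like this daemon.json (the certificate paths are illustrative, and the exact option set depends on your Docker version and init system, so double-check against the dockerd documentation):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem"
}
```

After restarting the daemon, point the shinyproxy yml at tcp://127.0.0.1:2376 instead of the public address.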
I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run. So how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires thinking of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model. For example, at boot, pull your code from a Git/Mercurial repository and then execute scripts to set up your instance. The scripts set up all the monitoring required to determine whether your server and application are running appropriately. You would still need an SSH client on the instance if you want to pull your code over SSH (although you could also do it over HTTPS).
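As a sketch, a bootstrap script along these lines could run at first boot (for example as EC2 user data); the repository URL and script paths are placeholders:

```shell
#!/bin/sh
# First-boot bootstrap: pull the application and hand off to its own scripts.
set -e

REPO="https://github.com/example/myapp.git"   # placeholder repository
APP_DIR=/opt/myapp

# Clone over HTTPS so no SSH key has to live on the instance
git clone --depth 1 "$REPO" "$APP_DIR"

cd "$APP_DIR"
./scripts/setup.sh    # install dependencies, register monitoring
./scripts/start.sh    # start the application
```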
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially, your node/server pulls all your application and server configuration from the Puppet master or the Chef server. The Puppet agent or Chef client then performs all the configuration/deployment/monitoring changes needed for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times if there's something wrong with one of your servers, and in the event something goes wrong, discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.
Hope someone will be able to help: I've installed GitLab and for a few days it seemed to work OK (I could push and pull, but only from a client, not from the machine that runs GitLab itself); however, that's no longer the case. I have been working on the server (it's my own server that I've set up for development/learning/personal stuff), but I don't believe I've changed anything that could affect GitLab, so I don't know what to do.
At the moment I can't push or pull from either my local machine (OS X 10.8.3) or my server (Ubuntu 12.04). I've run the self-tests several times and everything is green. When I do git config user.name or git config user.email, they come back with my name and email respectively. I've also searched online but couldn't find anyone in exactly the same situation; however, I did try many of the suggested approaches: I've deleted and generated new SSH keys and changed the config in /home/git/gitlab/config.yml to reflect my setup (I'm running Apache). My GitLab is 5.2 and I've followed the instructions on GitLab's homepage. To make it work with Apache instead of nginx I followed the instructions here:. This question seems the closest to describing my problem, but the solution is not clearly described, so I couldn't follow it. The web interface works fine and I can commit from either my local machine (using sshfs) or my server. I just can't push or pull. The error I get is:
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
I'd appreciate any help. I've been struggling with this for days now and I'm on the brink of giving up on GitLab...
Many thanks
EDIT: On my server I've got three accounts: user1 (the main, first user, with root), user2 (a sudoer that also has admin privileges), and git (which is also a sudoer). After more investigation, I'm pretty sure this is a problem of me messing up the permissions and the SSH keys. Can someone point out: when I generate the SSH key, which user should I be logged in as? On which computer should I generate this key, my server or my Mac? Also, when I tried to push from my server directly (I was physically logged into the server rather than sshed in via my Mac), GitLab asked for git's password. I then generated a key logged in as git on the server and added it to GitLab through the web interface, and the error appeared again (the same as before). Still not fixed.
The problem in my case was that I had changed the Git credentials on my local machine (when you create a new repo, you set the user name and email, Git and git#localhost respectively) and didn't realise it. That's why every time I tried to push or pull I got the error. Once they were changed back to the correct settings, GitLab started working again. Leaving this here as it might be helpful to someone.
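For anyone checking the same thing, the identity settings can be inspected and reset like this (substitute your own name and email for the placeholders):

```shell
# Set the global identity back to what your GitLab account expects
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Confirm what Git will now use
git config user.name
git config user.email
```

Note that a repository-local setting (git config without --global, run inside the repo) overrides the global one, which is how a stray value can sneak in when creating a new repo.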
We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
We're looking to get a better deployment strategy, such as something where we SSH into one of the servers on the private network, and run a command-line script to deploy what we want to deploy.
However, this means that this one server has to have its public key in the authorized_keys of all of the servers we want to deploy to. Is this safe? Wouldn't this single server then allow access to all the other servers?
What's the best way to approach this?
Thanks!
We use a similar strategy to deploy, though we're not with RightScale anymore.
I think that approach is generally fine, and I'd be interested to learn what you find problematic about it.
If you want to do your SSH thing, then I'd go about it as follows:
Lock down SSH using security groups, e.g. open SSH only to specific IPs or to servers in a deploy security group, or similar. The disadvantage here is that you might lock yourself out when the other servers are down, etc.
I'd put public keys on each instance to allow password-less login. If you're security conscious, rotate those keys on a monthly basis, or for example when employees leave, etc.
Use Fabric or Capistrano to log into your servers (from the deploy master) using SSH and do your deployment.
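If you'd rather not pull in Fabric or Capistrano yet, the same release-and-symlink pattern from the question is only a few lines of shell run from the deploy master (hostnames, the repo URL, and paths are placeholders):

```shell
#!/bin/sh
# Push a timestamped release to each app server and flip the symlink,
# mirroring the /var/www/releases + /var/www/current layout.
set -e

SERVERS="app1.internal app2.internal"        # placeholder hostnames
RELEASE=$(date +%Y%m%d%H%M%S)

for host in $SERVERS; do
  ssh "deploy@$host" "
    git clone --depth 1 https://github.com/example/app.git /var/www/releases/$RELEASE &&
    ln -sfn /var/www/releases/$RELEASE /var/www/current
  "
done
```

Keeping the previous release directories around makes rollback a matter of re-pointing the symlink.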
Again, I think RightScale's approach is not unique to them; a lot of services do it like that. The reason is that, for example, when you symlink and keep the previous version around, it's easier to roll back, and so on.