How to prevent "vagrant halt" for production machine? - deployment

On my team, we're using Vagrant for both development (dev) and production (web).
It works great, but sometimes someone on the team will run vagrant halt to stop their dev server and accidentally bring down the production (web) server as well. Yikes!
Is there any way for me to prevent them from halting the production server, like disabling the command or password protecting it?

The vagrant-managed-servers plugin might help when using Vagrant with production servers.
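The plugin lets you register an already-running server with Vagrant instead of a local VM, so there is no VirtualBox machine for vagrant halt to power off. A rough sketch of a Vagrantfile entry for that setup; the box name, hostname, and SSH details below are placeholders, and the provider options should be checked against the plugin's README for the version you install:
Vagrant.configure("2") do |config|
  config.vm.define "web" do |web|
    # dummy box commonly used with the managed provider
    web.vm.box = "tknerr/managed-server-dummy"
    web.vm.provider :managed do |managed, override|
      managed.server = "web.example.com"            # hypothetical production host
      override.ssh.username = "deploy"              # hypothetical deploy user
      override.ssh.private_key_path = "~/.ssh/id_rsa"
    end
  end
end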

Try adding the following to the top of your Vagrantfile:
Vagrant.configure("2") do |config|
if ARGV[0] == 'halt'
if ARGV[1] != 'dev'
puts "Sorry! No way I am letting you do that!"
ARGV.clear
end
end
end
This looks at the command-line arguments supplied to Vagrant: if you run a bare vagrant halt, it refuses to let it happen. If you run vagrant halt dev, however, the command is allowed through and only the box called dev is halted.
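For the dev/web naming to work, both machines need to be defined as separate multi-machine entries in the same Vagrantfile. A minimal sketch (the box names are placeholders):
Vagrant.configure("2") do |config|
  config.vm.define "dev" do |dev|
    dev.vm.box = "precise32"   # placeholder development box
  end
  config.vm.define "web" do |web|
    web.vm.box = "precise32"   # placeholder production box
  end
end
With that layout, vagrant halt dev stops only the dev machine, while a bare vagrant halt (which would otherwise stop every machine) is blocked by the guard above.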


Vagrant is attempting to interface with the UI in a way that requires a TTY

Problem: vagrant up fails with the error below. I am running Vagrant on Windows 7 and the base box is Ubuntu ( files.vagrantup.com/precise32.box ).
How can it be fixed?
vagrant.bat up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Clearing any previously set forwarded ports...
[default] Clearing any previously set network interfaces...
[default] Available bridged network interfaces:
1) Intel(R) PRO/1000 EB Network Connection with I/O Acceleration
2) Intel(R) PRO/1000 PL Network Connection
Vagrant is attempting to interface with the UI in a way that requires
a TTY. Most actions in Vagrant that require a TTY have configuration
switches to disable this requirement. Please do that or run Vagrant
with TTY.
Process finished with exit code 1
thanks
This worked for me on Cygwin: set VAGRANT_DETECTED_OS=cygwin in the shell before running vagrant up.
Or add this to ~/.bashrc:
export VAGRANT_DETECTED_OS=cygwin
Then I hit the "Vagrant needs to run some internal upgrades..." message described below.
Edit - Oops! Spoke too soon. During its updates, I got Warning: Authentication failure. Retrying... until timeout :P
Edit 2 - I was able to fix it by setting config.ssh.private_key_path to the .vagrant.d/insecure_private_key in my Windows user's home directory.
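In Vagrantfile terms that is roughly the following; the Windows username in the path is a placeholder:
Vagrant.configure("2") do |config|
  # use the key Vagrant generated under the Windows user's home directory
  config.ssh.private_key_path = "C:/Users/yourname/.vagrant.d/insecure_private_key"
end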
I had the same error while destroying a Vagrant Box. I simply added -f and it did the job.
vagrant destroy m001 -f
This happens because when a script attempts vagrant destroy, Vagrant asks for a [Yes/No] confirmation, which needs a TTY. Adding -f skips that prompt.
I got the same error after upgrading Vagrant from 1.4 to 1.6.3 (Windows 7).
Running VAGRANT_HOME\bin\vagrant.exe manually resolved this issue for me:
Execute VAGRANT_HOME\bin\vagrant.exe
Vagrant displays a message that it needs to run some internal upgrades
"Press any key to continue"...
Once the process finished (it took several minutes), I was able to launch Vagrant instances as usual.
This is caused by Vagrant finding multiple Ethernet interfaces that can be used as the public network; Vagrant cannot decide which one to use.
There are 3 options:
Deactivate one of the 2 adapters, so that Vagrant can use the other
Specify the Ethernet adapter you would like Vagrant to use in the Vagrantfile (a fuller sketch follows this list), like this:
app.vm.network "public_network", bridge: "Intel(R) PRO/1000 PL Network Connection"
Run the vagrant executable manually, as already described in Al Belsky's answer.
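Here is roughly how option 2 looks in context; the machine name "app" and the adapter string are examples, so use whatever name VirtualBox reports for your NIC:
Vagrant.configure("2") do |config|
  config.vm.define "app" do |app|
    app.vm.box = "precise32"   # placeholder box
    # bridge to a specific host adapter so Vagrant does not have to ask
    app.vm.network "public_network", bridge: "Intel(R) PRO/1000 PL Network Connection"
  end
end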
If you are on Windows and are starting Vagrant through MinGW (Git Bash for example) and get this message, try running it once through Windows' default cmd.exe. You are then able to answer the question about your network adapters.
I'm using Vagrant 1.7.4
Execute the below code before running vagrant up:
export VAGRANT_DETECTED_OS=cygwin
That prevents Vagrant from exiting and lets you choose the network interface.
This may also be caused by not having Hardware Virtualization enabled in BIOS.
I also encountered this on Windows 10, when Vagrant could not properly detect the OS.
It can also happen if you have both VMware and VirtualBox installed and you try to use MinGW.

What is veewee waiting for when it's waiting for ssh login?

When veewee displays the message "Waiting for ssh login on 127.0.0.1 with user veewee to sshd on port => 7222 to work, timeout=10000 sec", what exactly is it waiting on?
As far as I can tell, there is an SSH server on port 7222 on the host that veewee has put up, and it's waiting on that. This would mean that something in the guest is going to connect back to it. However, I can't figure out what that something might be, and thus I can't debug further.
Further details
I'm trying to build a VirtualBox image for Vagrant with the CentOS-6.3-x86_64-minimal template. My steps:
bundle exec veewee vbox define 'ejs-centos6.3-1' 'CentOS-6.3-x86_64-minimal'
wget http://mirror.symnds.com/distributions/CentOS-vault/6.3/isos/x86_64/CentOS-6.3-x86_64-minimal.iso
bundle exec veewee vbox build 'ejs-centos6.3-1'
The CentOS install appeared to run without error but it's stuck waiting for the ssh login.
You're right, there's an SSH server listening on port 7222, but it's on the guest (VM), not the host.
The host (Veewee) is waiting to connect to it. This SSH service is supposed to become available when the VM install process finishes; it's one of the checks Veewee uses to decide that the setup went fine and that the VM is ready.
If Veewee blocks and never gets this SSH connection, I think there could be multiple reasons:
The VM setup went wrong and something prevents it from finishing successfully. Check the Veewee output and the VirtualBox VM graphical console that should have opened when launching veewee vbox build.
Something is preventing your host computer from connecting to the VM at the network level.
The VM image doesn't have sshd installed, and/or the veewee box configuration files (in veewee/definitions/ejs-centos6.3-1/) are missing the instructions to install the ssh package.
You should try to log in to the VM using the VirtualBox console window and check whether the ssh package is installed (rpm -qa | grep openssh-server) and a process named sshd is running.
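For reference, the user, port, and timeout in that message come from the box definition (definitions/ejs-centos6.3-1/definition.rb in this setup). A trimmed, illustrative sketch of what the standard templates declare; the exact keys can differ between template versions:
# definitions/ejs-centos6.3-1/definition.rb (excerpt, illustrative)
Veewee::Definition.declare({
  :ssh_login_timeout => "10000",  # the 10000 sec timeout in the message
  :ssh_user          => "veewee", # the user Veewee tries to log in as
  :ssh_password      => "veewee",
  :ssh_host_port     => "7222",   # host port forwarded to the guest's sshd
  :ssh_guest_port    => "22"
  # other keys (iso file, kickstart file, postinstall scripts) omitted
})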
I ran Veewee against CentOS 7 with the GUI on, and it got stuck on Anaconda asking for the package source. I checked the ks.cfg and it was pointing to a dead resource (404). After pointing it to a valid URL, it went through.

Vagrant & Chef - Postgresql not starting on reboot

I decided to create my own Chef recipe to install Postgres. The installation works perfectly fine, but Postgres doesn't start on boot when I run vagrant reload.
Here's my recipes/default.rb:
include_recipe "apt"
apt_repository 'apt.postgresql.org' do
  uri 'http://apt.postgresql.org/pub/repos/apt'
  distribution node["lsb"]["codename"] + '-pgdg'
  components ['main', node["postgres"]["version"]]
  key 'http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc'
  action :add
end
package 'postgresql-' + node["postgres"]["version"] do
  action :install
end
file "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
  action :delete
end
link "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
  to node["postgres"]["conf_path"]
  action :create
  notifies :reload, "service[postgresql]", :delayed
end
service "postgresql" do
  action [:enable, :start]
  supports :status => true, :restart => true, :start => true, :stop => true, :reload => true
end
And here's my attributes/default.rb:
default["postgres"]["version"] = "9.3"
default["postgres"]["conf_path"] = "/home/vagrant/postgres/postgresql.conf"
Any help would be greatly appreciated!!
============ EDIT 1 ============
Here is the output when running vagrant up for the first time with chef.log_level = :debug: http://pastebin.com/w8Lp8gzv
Here is /etc/init.d/postgresql: http://pastebin.com/dQ5Zb1yj
Here is /var/log/postgresql/postgresql-9.3-main.log: http://pastebin.com/0Y2RhWvL
============ EDIT 2 ============
I'm now fairly confident that it's my postgresql.conf file, which looks like: http://pastebin.com/rjX89iU0
shared_buffers might be too high...
When you run vagrant reload, is the Chef Client running? I suspect not. Mitchell changed the behavior in a recent version of vagrant to only provision if the machine hasn't already been provisioned. This information is stored in the .vagrant directory in your working directory. In short, since you already provisioned your machine with vagrant up, it is not provisioned when you run vagrant reload.
You run vagrant up - this actually runs vagrant up --provision, which executes the Chef Client provisioner on the node, running your Chef recipe.
You run vagrant reload - this actually runs vagrant up --no-provision, because the .vagrant directory indicates the machine has already been provisioned. So your machine is rebooted, but the Chef Client provisioner is not executed.
Solution
Run vagrant reload with the --provision flag
vagrant reload --provision
Notes
This still doesn't explain why upstart (or whatever you're using to ensure the postgres service is running at boot) isn't starting the server for you automatically. In order to answer that question, I'll need to see more information. Can you set chef.log_level = :debug in your Vagrantfile and update your question with the output? It would also be helpful to see the init.d script this postgres installer creates, and any log output from /var/log related to postgres.
Alright, it looks like Postgresql doesn't play nice with postgresql.conf being a symbolic link. Copying the file instead did the trick.
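In recipe terms, that means replacing the link resource with something that copies the file onto the node. A sketch, assuming the same node attributes as in the question (remote_file accepts a file:// source for local paths; the owner/group are assumptions):
# copy the shared-folder config into place instead of symlinking it
remote_file "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
  source "file://#{node['postgres']['conf_path']}"
  owner "postgres"
  group "postgres"
  mode "0644"
  notifies :reload, "service[postgresql]", :delayed
end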
Turns out PostgreSQL was starting before the postgresql.conf file was mounted.
If you're starting services with Upstart that depend on something in Vagrant's shared folders, have your upstart conf file listen for the vagrant-mounted event.
# /etc/init/start-postgresql.conf
start on vagrant-mounted
script
# commands to start postgresql...
end script
The vagrant-mounted event is emitted after Vagrant is done setting up shared folders; this way you can restart dependent services after vagrant reload without having to run your provisioners again.

Cannot SSH into new computer running CentOS 6.3 from Fedora 16

I just installed CentOS 6.3 on a new computer and am unable to SSH to it from our computer running Fedora 16. They are both on the same network.
Some facts:
- I can ping it from the Fedora machine.
- I can SSH from the CentOS computer to itself.
- I have looked into hosts.allow and hosts.deny, I have set SELinux to permissive, and I tried with iptables disabled on the Fedora computer.
I am fresh out of ideas...
Thanks
Do you have fail2ban running?
Do you have denyhosts running?
Do you have iptables allowing TCP 22?
Do you have a line in your sshd_config that refers to "AllowUsers"? (Most don't, but some do, and if yours does, you need your account listed on that line.)
Can you run tail -f /var/log/secure on that machine while trying to log in from the second machine and spot the issue? If not, paste the output from that log here for me to comment on.
A long shot, but you might try service sshd restart and try again to see if that helps. Go ahead and run tail /var/log/messages while restarting that daemon to see if you spot anything unusual. If you spot the issue, great; if you don't, post the output here for me to comment on.
Last, run cp /etc/ssh/sshd_config /etc/ssh/sshd_config.back, then take a known good sshd_config from another machine, place it over the top of yours, restart the daemon, and try again.
My money is on seeing something that helps us in /var/log/secure.

Vagrant Box Breaking After Knife Cookbook Installs

I'm a Vagrant n00b who's having issues getting Vagrant and Chef's knife command to play nice together as I'm setting up a pretty simple CentOS LAMP box using chef-solo.
Here's a quick rundown of the issue:
I've created a basic Vagrantfile using the CentOS 6.3 w/ Chef base box on vagrantbox.es. You can see the basics in this gist.
I've downloaded all the cookbooks via knife cookbook site install nameofcookbook using a configuration that puts them in ./chef/cookbooks.
I've successfully run vagrant up.
I've tested apache, php, etc. All good.
Now comes the catch: with the VM running, I run knife to add another cookbook (in this case i3).
From here on, Vagrant fails to perform various tasks in the VM:
When I run vagrant provision I get an error like this:
The chef binary (either `chef-solo` or `chef-client`) was not found on
the VM and is required for chef provisioning. Please verify that chef
is installed and that the binary is available on the PATH.
When I run vagrant halt I get an error that the ssh command exited with a non-zero error code.
I am able to run vagrant ssh, however, and confirm that (a) chef-solo does, in fact, exist in the box and (b) I can shut down via the command line in the box.
When I run vagrant up I get an error like this:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p /vagrant
I'm stumped. I've had this happen on two boxes already, and I know that Knife and Vagrant should be able to play well together.
What am I doing wrong?
Any help much appreciated; I'm very excited about digging into Vagrant!
chef.add_recipe "sudo"
This nuked your sudoers file after the first run.
Add the appropriate JSON to your Vagrantfile for your vagrant user.
Something like:
config.vm.provision :chef_solo do |chef|
  # add your recipes
  # chef.add_recipe "foo"
  # chef.add_role "bar"
  chef.json = {
    "authorization" => {
      "sudo" => {
        "users" => [ "vagrant" ],
        "passwordless" => true
      }
    }
  }
end