I'm a Vagrant n00b who's having issues getting Vagrant and Chef's knife command to play nice together as I'm setting up a pretty simple CentOS LAMP box using chef-solo.
Here's a quick rundown of the issue:
I've created a basic Vagrantfile using the CentOS 6.3 w/ Chef base box on vagrantbox.es. You can see the basics in this gist.
I've downloaded all the cookbooks via knife cookbook site install nameofcookbook using a configuration that puts them in ./chef/cookbooks.
I've successfully run vagrant up and provisioned the box.
I've tested apache, php, etc. All good.
Now comes the trick: with the VM running, I run knife to add another package (in this case i3).
From here on, Vagrant fails to perform various tasks in the VM:
When I run vagrant provision I get an error like this
The chef binary (either `chef-solo` or `chef-client`) was not found on
the VM and is required for chef provisioning. Please verify that chef
is installed and that the binary is available on the PATH.
When I run vagrant halt I get an error that the ssh command exited with a non-zero error code.
I am able to run vagrant ssh however, and confirm that (a) chef-solo does, in fact, exist in the box and (b) I can shutdown via the commandline in the box.
When I run vagrant up I get an error like this:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p /vagrant
I'm stumped. I've had this happen on two boxes already, and I know that Knife and Vagrant should be able to play well together.
What am I doing wrong?
Any help much appreciated, I'm very excited about digging into Vagrant!
chef.add_recipe "sudo"
This nuked your sudoers file after the first run: the sudo cookbook rewrites /etc/sudoers, and unless the vagrant user is whitelisted there, it loses the passwordless sudo that Vagrant relies on for provisioning, mounting /vagrant, and shutting down.
Add the appropriate JSON to your Vagrantfile for your vagrant user.
Something like:
config.vm.provision :chef_solo do |chef|
  # add your recipes
  # chef.add_recipe "foo"
  # chef.add_role "bar"

  chef.json = {
    "authorization" => {
      "sudo" => {
        "users" => ["vagrant"],
        "passwordless" => true
      }
    }
  }
end
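Note that provisioning again won't fix a box whose sudoers file has already been clobbered; the simplest recovery (destructive, and this assumes the box holds nothing you need to keep) is to rebuild it:
vagrant destroy -f
vagrant up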
I spent several hours looking at this Lens error for K8s. I installed Python and OCI-CLI for Windows 10 (I downloaded the oci-cli offline installation and ran python install.py) and configured cluster access. Using CMD works OK:
the kubectl command works fine; even the get pods command works.
But using Lens it gives me the error when connecting
Error getting Credentials: exec: executable oci not found
What am I missing?
I finally found the solution: it was to download kubectl.exe from
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/windows/amd64/kubectl.exe
I put it in a folder on the disk, for example c:\kubernetes, and added that folder to the PATH environment variable.
Restart the PC. Without a reboot it didn't work.
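For reference, a quick way to script those steps from an elevated command prompt (the folder name is just the example above; note that setx truncates very long PATH values, so the Environment Variables dialog is the safer route):
mkdir C:\kubernetes
copy kubectl.exe C:\kubernetes\
setx PATH "%PATH%;C:\kubernetes"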
I have a small simple setup consisting of Jenkins & Ansible 2.7.8 running on Ubuntu 18.04.2 LTS (192.168.0.202):
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/jon/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Jenkins ver. 2.150.3
I then have a Windows VM (192.168.0.203) that has a Powershell script stored on it and an Ansible playbook configured to connect to the Windows VM and run the Powershell script.
When I run the Ansible-Playbook directly from the command line it works fine, connects to the Windows machine and runs the script. All good.
I am having real trouble, though, getting Jenkins to run the playbook via Ansible. When I run the playbook through Jenkins, I get the following error:
Building in workspace /var/lib/jenkins/workspace/Ansible-RunPS-1.0
[Ansible-RunPS-1.0] $ /usr/bin/ansible-playbook //etc/ansible/runPS.yml -f 5
PLAY [Runs remote PS script] ***************************************************
TASK [Gathering Facts] *********************************************************
fatal: [192.168.0.203]: FAILED! => {"msg": "winrm or requests is not installed: No module named winrm"}
To me, it seems to be running the playbook but then failing because it cannot find the winrm module. Could it be that the account Jenkins runs under somehow can't find the winrm module, even though the same command run under my own account finds it fine?
Happy to post other configs etc. if that would help but thought I'd try and keep it as simple as possible to begin with.
As is the way with these things, you spend days trying to fix it, then as soon as you ask for help you manage to fix it.
I'd love to be able to say it was just a case of installing this or running that, but in truth I really don't know what got it going. It suddenly burst into life after a reboot. I can't believe that's all it was, but you never know!
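For anyone who hits the same error without the magic reboot: the usual cause is that the pywinrm package isn't installed for the Python interpreter that Jenkins' copy of ansible-playbook uses. A sketch of the fix (the package name is real; the jenkins username is an assumption about your install):
# Install pywinrm for the system Python 2.7 that /usr/bin/ansible uses
sudo pip install pywinrm
# Verify it is importable as the user Jenkins runs under
sudo -u jenkins python -c "import winrm"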
I have a few cookbooks that have to be verified only on Windows, basically by running the recipes and seeing that they work.
I am using Opscode's Hosted Chef.
To do this, which combination should I use:
a) windows workstation(for uploading recipes to server), ubuntu chef client node.
b) windows workstation , windows chef client node.
I am actually very new to Chef, so any suggestions are appreciated.
In your case for simply verifying Windows recipes/cookbooks work, I highly recommend using chef-solo. You don't need to set up an entire chef server to prove everything works (unless that requirement is being forced on you):
http://docs.opscode.com/chef_solo.html
Running it is as simple as:
chef-solo -c C:/chef/solo.rb -j C:/chef/solo.json
Where solo.rb is your configuration file and solo.json contains your run list and attributes.
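For completeness, a minimal sketch of those two files (the paths and the cookbook name are assumptions, not requirements). solo.rb:
# Where chef-solo looks for cookbooks and caches downloads
cookbook_path "C:/chef/cookbooks"
file_cache_path "C:/chef/cache"
And solo.json:
{ "run_list": ["recipe[my_windows_cookbook]"] }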
If you do need a Chef server... it doesn't really matter whether you run it on Windows or Linux; just choose whatever is going to be easiest for you. Regardless, the client will need to be installed on your Windows workstation.
http://docs.opscode.com/install_workstation.html
http://docs.opscode.com/install_windows.html
I decided to create my own Chef script to install Postgres. The installation works perfectly fine, but Postgres doesn't start on boot when I run vagrant reload.
Here's my recipes/default.rb:
include_recipe "apt"

apt_repository "apt.postgresql.org" do
  uri "http://apt.postgresql.org/pub/repos/apt"
  distribution node["lsb"]["codename"] + "-pgdg"
  components ["main", node["postgres"]["version"]]
  key "http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc"
  action :add
end

package "postgresql-" + node["postgres"]["version"] do
  action :install
end

file "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
  action :delete
end

link "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
  to node["postgres"]["conf_path"]
  action :create
  notifies :reload, "service[postgresql]", :delayed
end

service "postgresql" do
  action [:enable, :start]
  supports :status => true, :restart => true, :start => true, :stop => true, :reload => true
end
And here's my attributes/default.rb:
default["postgres"]["version"] = "9.3"
default["postgres"]["conf_path"] = "/home/vagrant/postgres/postgresql.conf"
Any help would be greatly appreciated!!
============ EDIT 1 ============
Here is the output when running vagrant up for the first time with chef.log_level = :debug: http://pastebin.com/w8Lp8gzv
Here is /etc/init.d/postgresql: http://pastebin.com/dQ5Zb1yj
Here is /var/log/postgresql/postgresql-9.3-main.log: http://pastebin.com/0Y2RhWvL
============ EDIT 2 ============
I'm now fairly confident that it's my postgresql.conf file, which looks like: http://pastebin.com/rjX89iU0
shared_buffers might be too high...
When you run vagrant reload, is the Chef Client running? I suspect not. Mitchell changed the behavior in a recent version of vagrant to only provision if the machine hasn't already been provisioned. This information is stored in the .vagrant directory in your working directory. In short, since you already provisioned your machine with vagrant up, it is not provisioned when you run vagrant reload.
You run vagrant up - this is actually going to run vagrant up --provision, which executes the Chef Client provisioner on the node, executing your Chef Recipe.
You run vagrant reload - this actually runs vagrant up --no-provision, because the .vagrant directory indicates the machine has already been provisioned. So your machine is rebooted, but the Chef Client provisioner is not executed.
Solution
Run vagrant reload with the --provision flag
vagrant reload --provision
Notes
This still doesn't explain why upstart (or whatever you're using to ensure the postgres service is running at boot) isn't starting the server for you automatically. In order to answer that question, I'll need to see more information. Can you set chef.log_level = :debug in your Vagrantfile and update your question with the output? It would also be helpful to see the init.d script this postgres installer creates, and any log output from /var/log related to postgres.
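For reference, that debug setting goes inside the provisioner block of the Vagrantfile, something like:
config.vm.provision :chef_solo do |chef|
  chef.log_level = :debug
  # ...recipes and json as before
end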
Alright, it looks like PostgreSQL doesn't play nicely with postgresql.conf being a symbolic link. Copying the file instead did the trick.
Turns out PostgreSQL was starting before the postgresql.conf file was mounted.
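A sketch of the copy-instead-of-symlink change in the recipe (an illustration using the same attributes as above, not the exact diff):
# Copy the shared-folder config into place instead of symlinking it
execute "copy postgresql.conf" do
  command "cp #{node['postgres']['conf_path']} /etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf"
  notifies :reload, "service[postgresql]", :delayed
end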
If you're starting services with Upstart that depend on something in Vagrant's shared folders, have your upstart conf file listen for the vagrant-mounted event.
# /etc/init/start-postgresql.conf
start on vagrant-mounted
script
# commands to start postgresql...
end script
The vagrant-mounted event is emitted after Vagrant is done setting up shared folders, this way you can restart dependent services after vagrant reload without having to run your provisioners again.
I am taking my first steps with ipython notebook and I installed it successfully on a remote server of mine (over SSH) and I started it using the following command:
ipython notebook --ip='*' --pylab=inline --port=7777
I then checked http://myserver.sth:7777/ and the notebook was running just fine. I then wanted to close the SSH connection to the server and keep ipython running in the background, but when I did, I couldn't connect to myserver.sth:7777 anymore. Once I reconnected to the remote server over SSH, I could reach the notebook again.
I then tried to use screen to start ipython: I created a new screen with screen -S ipy, started ipython notebook as above, and used Ctrl+A,D to detach the screen and exit to the TTY. I could still connect remotely to the notebook. But after I closed the SSH connection, I got a 404 NOT FOUND error when I tried to access my previously stored notebook, and it no longer appeared in the notebook list at http://myserver.sth:7777/. I tried to create a new notebook, but got a 500 Internal Server Error.
I also tried running ipython notebook with and without using sudo.
Any ideas?
Rather than use screen, perhaps you could switch to an init script or supervisord to keep IPython notebook up and running.
Let's assume you go the supervisord route:
Install supervisord
Install supervisord using your package manager. On Ubuntu it's named supervisor.
apt-get install supervisor
If you decide to install supervisor through pip, you'll have to set up its init.d script yourself.
Write a supervisor configuration file for IPython
The configuration file tells supervisor what to run and how.
After you install supervisor, it should have created /etc/supervisor/supervisord.conf. These lines should exist in the file:
[include]
files = /etc/supervisor/conf.d/*.conf
If it contains these lines, you're in good shape; I only show them to demonstrate where supervisor expects new configuration files. Your configuration file can go there, named something like /etc/supervisor/conf.d/ipynb.conf.
Here's a sample configuration that was generated by Chef from an ipython-notebook cookbook that runs the notebook in a virtualenv:
[program:ipynb]
command=/home/ipynb/.ipyvirt/bin/ipython notebook --profile=cooked
process_name=%(program_name)s
numprocs=1
numprocs_start=0
autostart=true
autorestart=true
startsecs=1
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=ipynb
redirect_stderr=false
stdout_logfile=AUTO
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
stdout_capture_maxbytes=0
stdout_events_enabled=false
stderr_logfile=AUTO
stderr_logfile_maxbytes=50MB
stderr_logfile_backups=10
stderr_capture_maxbytes=0
stderr_events_enabled=false
environment=HOME="/home/ipynb",SHELL="/bin/bash",USER="ipynb",PATH="/home/ipynb/.ipyvirt/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games",VIRTUAL_ENV="/home/ipynb/.ipyvirt"
directory=/home/ipynb
serverurl=AUTO
The above supervisor config also relies on an IPython notebook configuration (located at /home/ipynb/.ipython/profile_cooked/ipython_notebook_config.py). This makes configuration much easier, as you can also set up your password hash and many other configurables:
c = get_config()
# Kernel config
# Make matplotlib plots inline
c.IPKernelApp.pylab = 'inline'
# The IP address the notebook server will listen on.
# If set to '*', will listen on all interfaces.
# c.NotebookApp.ip= '127.0.0.1'
c.NotebookApp.ip='*'
# Port to host on (e.g. 8888, the default)
c.NotebookApp.port = 8888 # If you want it on 80, I recommend iptables rules
# Open browser (probably want False)
c.NotebookApp.open_browser = False
Re-read and update, now that you have the configuration file
supervisorctl reread
supervisorctl update
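If everything is wired up, supervisor should report the notebook process as running (the program name matches the config above; the pid and uptime shown are illustrative):
supervisorctl status ipynb
# ipynb    RUNNING   pid 1234, uptime 0:01:02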
Reality
In reality, I used to use a Chef cookbook to do the entire installation and configuration. However, using configuration management for tiny stuff like this is a bit of overkill (unless you're orchestrating these in automation).
Nowadays I use Docker images for IPython notebook, orchestrating via JupyterHub or tmpnb.