Can I add (create) shared directories from the host to a running Vagrant box on the fly?

I have a situation where I would like to add shared folders to a running Vagrant box (Ubuntu guest) provided by VirtualBox.
According to the docs, these "Synced Folders" should be defined in the configuration file like so:
config.vm.synced_folder "/mnt/plugdrive15", "/synced/plugdrive15"
But what if I need to share directories that are new on my host? This happens often and I cannot predict their names, so I cannot write them into the configuration file beforehand. I want to automate this.
Using VBoxManage sharedfolder add doesn't work, because as soon as you do vagrant up, only the shares specified in the configuration file (and the default vagrant one) are left.
If all else fails (no better solution), can the Vagrantfile parse a configuration file in which I specify the shares right before vagrant up?

You can, using the VirtualBox interface: there is a Shared Folders option in the VM settings, and you can add a new folder while the VM is running.
[edit]
there is of course an associated command-line tool (VBoxManage): http://www.virtualbox.org/manual/ch04.html#sharedfolders
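For example, a sketch assuming the VM is named barhost (see below) and the guest has the VirtualBox Guest Additions installed:
# On the host: attach a transient shared folder to the running VM
VBoxManage sharedfolder add "barhost" --name plugdrive15 --hostpath /mnt/plugdrive15 --transient
# In the guest: mount it via the vboxsf filesystem
sudo mkdir -p /synced/plugdrive15
sudo mount -t vboxsf plugdrive15 /synced/plugdrive15
A transient share is not saved in the VM configuration, so it disappears at power-off and won't clash with the Vagrantfile-managed shares on the next vagrant up.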
[edit 2]
Setting the VM name is a provider-specific operation, so the VBoxManage tool will see the same name.
config.vm.provider :virtualbox do |vb|
  vb.name = "barhost"
end
If you don't specify a name, Vagrant will generate one; see How to change Vagrant 'default' machine name? for details.
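As for the fallback idea in the question: a Vagrantfile is plain Ruby, so it can read share definitions from a side file right before vagrant up. A minimal sketch, assuming a hypothetical shares.txt next to the Vagrantfile with one host-path:guest-path pair per line:
# Vagrantfile (excerpt); shares.txt is a hypothetical side file,
# e.g. a line reading "/mnt/plugdrive15:/synced/plugdrive15"
Vagrant.configure("2") do |config|
  shares_file = File.expand_path("shares.txt", File.dirname(__FILE__))
  if File.exist?(shares_file)
    File.readlines(shares_file).each do |line|
      host_path, guest_path = line.strip.split(":", 2)
      config.vm.synced_folder host_path, guest_path if host_path && guest_path
    end
  end
end
This only covers shares known at vagrant up or vagrant reload time; for folders that appear while the box is running, the transient VBoxManage approach above is still needed.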

Related

devcontainers without VS Code?

Let's say that for some reason I don't want to launch VSC to get a devcontainer shell running, but I still want all of that devcontainer goodness without rewriting all of the configuration files. There's a devcontainer CLI, but at the moment, the only options available are open (VSC, connected to the container) and build (which builds the image, in the use case that many people are sharing the same devcontainer environment).
Ideally, there'd be a third option devcontainer shell which does all the build, spin-up and connection work that is done inside VSC, but then just execs into the running container.
The .devcontainer folder contains a devcontainer.json file. In it, if you're using docker-compose, there will be a dockerComposeFile key with an array of docker-compose files, loaded in order. You can do the same with a command such as docker-compose -f first-compose-file.yml -f second-compose-file.yml.
That same folder usually has its own docker-compose.yml file. You will notice it declares your main service and usually sets up a volume to share between the host and the container (useful to work inside the container).
There are other interesting keys in devcontainer.json such as forwardPorts, remoteUser or postCreateCommand. You should be able to set up most of them in your docker-compose file (dev stuff should go into the .devcontainer/ one). The post-create command can be run with docker compose exec SERVICENAME COMMAND.
I don't know if there's a command to detect .devcontainer files and pick up the right settings, but it should not be hard to write one.
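A rough sketch of such a helper, assuming a docker-compose based devcontainer.json with a service key and jq installed (caveat: devcontainer.json may contain comments, which plain jq will reject):
#!/bin/sh
# Hypothetical "devcontainer shell": derive the compose invocation from
# .devcontainer/devcontainer.json, bring the services up, exec a shell.
CFG=.devcontainer/devcontainer.json
SERVICE=$(jq -r '.service' "$CFG")
# dockerComposeFile may be a string or an array; its paths are relative
# to the .devcontainer directory.
FILES=$(jq -r 'if (.dockerComposeFile | type) == "array" then .dockerComposeFile[] else .dockerComposeFile end' "$CFG" | sed 's|^|-f .devcontainer/|' | tr '\n' ' ')
docker-compose $FILES up -d
docker-compose $FILES exec "$SERVICE" /bin/bash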

vagrant up fails with: cannot translate name # rb_sysopen when trying to run homestead

When I run vagrant up I get the following error:
Vagrant/embedded/gems/2.2.14/gems/vagrant-2.2.14/plugins/hosts/suse/host.rb:20:in `initialize': Cannot translate name. # rb_sysopen - /etc/os-release (Errno::ELOOP)
I have installed Vagrant for Windows and I'm trying to launch Laravel's Homestead, which I cloned inside WSL2, by cd'ing from PowerShell into the Z: drive that WSL2 provides (so that I can use the Vagrant that's installed on Windows).
cd Z:\home\coder\projects\homestead
It seems that Vagrant tries to recognize the OS from the filesystem, if I'm understanding correctly: it finds /etc/os-release through the share and loads its Linux (SUSE) host plugin. So if you run the Windows build of Vagrant across a network share that is Unix/WSL/Linux, it will try to run as if it were on Unix and fail.
Solution
I was able to copy the homestead directory from the network share into my Windows filesystem, then navigate to that directory and run vagrant up successfully from PowerShell.
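For example, from PowerShell (the destination path is illustrative):
# Copy the project off the WSL2 share, then run Vagrant from the local copy
Copy-Item -Recurse Z:\home\coder\projects\homestead C:\Users\coder\projects\homestead
cd C:\Users\coder\projects\homestead
vagrant up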
Another Option
It sounds like you should also be able to install Vagrant within WSL2 and use it from inside WSL2 instead of from PowerShell.
Another possibility to note: you can invoke Windows executables from within WSL2, but it sounds like running the Windows build of Vagrant from within WSL2 will not work properly.
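A rough sketch of that setup, based on the Vagrant WSL documentation linked below (the distro and paths are assumptions):
# Inside WSL2 (e.g. Ubuntu), after installing the Linux build of Vagrant:
# let it drive the Windows-side VirtualBox installation.
export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1"
export PATH="$PATH:/mnt/c/Program Files/Oracle/VirtualBox"
cd ~/projects/homestead   # keep the project on the Linux filesystem
vagrant up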
Research
https://github.com/roots/trellis/issues/1083
https://www.vagrantup.com/docs/other/wsl.html
https://discourse.roots.io/t/command-vagrant-up-in-wsl-is-failed/16528

How to set a remote connection to a Vagrant container using "Visual Studio Code Remote - SSH"?

I'm exploring the new set of extensions called VSCode Remote Pack and I want to connect to a Vagrant box using the Remote - Containers extension. Using a Windows 10 OS, how could I do that?
I tried the extension, but it requires me to have Docker installed, so I suppose it only works with Docker containers. But I wonder if somebody has already managed to connect to a Vagrant box.
These are the docs for the extension: https://code.visualstudio.com/docs/remote/containers
VS Code Remote containers currently only support Docker (its implementation executes docker commands). Please open a feature request if you would like to see other tools supported.
As an alternative, you could try using Remote - SSH to connect to the Vagrant box. That should work, but will require some extra setup.
Sorry for updating this so late.
The solution was pretty simple. As #MnZrk commented, what needs to be done to set up the connection is the following:
Run vagrant ssh-config > some-file.txt. This will generate a file with the configuration to run using SSH. Here an example of that file:
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Users/User/project/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
  ForwardAgent yes
  ForwardX11 yes
Notice that the host name is default; you can rename it to whatever you want so you can identify it more easily.
Copy the content of some-file.txt into your SSH configuration file. This file can be edited directly from VS Code by pressing F1 and running Remote-SSH: Open Configuration File..., then selecting the file you use for your SSH configuration. After that file opens, just copy the content of some-file.txt there.
Finally, press F1 again and type Remote-SSH: Connect to Host..., choose the connection with the host name default (or the one you wrote in the first step), and that's all.
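If you prefer to do it in one step from the command line, vagrant ssh-config can rename the host entry and append it straight to your SSH configuration file (assuming the usual ~/.ssh/config location; the host name my-vagrant-box is just an example):
# Renames the "default" entry and appends it to your SSH config
vagrant ssh-config --host my-vagrant-box >> ~/.ssh/config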

How do I set a default umask in JupyterHub

How do I set a default umask (other than the standard 0022) for individual notebooks/users in JupyterHub?
Use case: I'm using the SystemUserSpawner, which spawns a Docker container for a user, but hooked into the underlying (virtual machine) system users: their notebook home directory matches that of the underlying OS. I've added a small option so that not just users, but also groups match.
I've mounted the base home directory (/home/ usually) read-only into a separate _users folder in the notebook, so that users can browse each other's home directories for sharing scripts. By default, however, I'd like permissions to be granted per group rather than world-readable (obviously, users can change that if they'd like), so that members of a group can read and share within their group, but not automatically with everyone.
A umask of 0027 seems practical for this, but I can't seem to set it system-wide: none of the standard OS practices (in the OS of the Docker container, Ubuntu 18.04) appear to work.
How do I set a default umask of 0027 for each notebook user?
The single-user notebooks take their configuration from /etc/jupyter/jupyter_notebook_config.py (note: this file lives in the Docker container, not on the host OS).
The last lines of that configuration file are the following:
# Change default umask for all subprocesses of the notebook server if set in
# the environment
if 'NB_UMASK' in os.environ:
    os.umask(int(os.environ['NB_UMASK'], 8))
Thus, we can set the environment variable NB_UMASK to set the default umask for the notebook user.
We can set this in the Jupyter Hub configuration file on the host OS. In /etc/jupyterhub/jupyterhub_config.py, add or adjust the c.SystemUserSpawner.environment (or perhaps just c.Spawner.environment, but I'm using a variant of the SystemUserSpawner) setting to include:
c.SystemUserSpawner.environment = {'NB_UMASK': '0027'}
That single line should be all that's needed to set the umask throughout the notebook.
For the record, my full spawner environment is as follows:
c.SystemUserSpawner.environment = {'JUPYTER_ENABLE_LAB': '1', 'GRANT_SUDO': '1', 'NB_UMASK': '0027'}
So that I have a Jupyter Lab environment, and users are able to install further software inside their container as needed (sudo apt install <something>).
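To check that the umask actually took effect, here is a quick sanity check you can run from a notebook cell (not part of the configuration):
import os
# os.umask() sets a new mask and returns the previous one,
# so set a dummy value and immediately restore the original.
current = os.umask(0)
os.umask(current)
print(oct(current))  # expect 0o27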

Vagrant & Chef - Postgresql not starting on reboot

I decided to create my own Chef recipe to install PostgreSQL. The installation works perfectly fine, but PostgreSQL doesn't start on boot when I vagrant reload.
Here's my recipes/default.rb:
include_recipe "apt"
apt_repository 'apt.postgresql.org' do
uri 'http://apt.postgresql.org/pub/repos/apt'
distribution node["lsb"]["codename"] + '-pgdg'
components ['main', node["postgres"]["version"]]
key 'http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc'
action :add
end
package 'postgresql-' + node["postgres"]["version"] do
action :install
end
file "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
action :delete
end
link "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
to node["postgres"]["conf_path"]
action :create
notifies :reload, "service[postgresql]", :delayed
end
service "postgresql" do
action [:enable, :start]
supports :status=>true, :restart=>true, :start => true, :stop => true, :reload=>true
end
And here's my attributes/default.rb:
default["postgres"]["version"] = "9.3"
default["postgres"]["conf_path"] = "/home/vagrant/postgres/postgresql.conf"
Any help would be greatly appreciated!!
============ EDIT 1 ============
Here is the output when running vagrant up for the first time with chef.log_level = :debug: http://pastebin.com/w8Lp8gzv
Here is /etc/init.d/postgresql: http://pastebin.com/dQ5Zb1yj
Here is /var/log/postgresql/postgresql-9.3-main.log: http://pastebin.com/0Y2RhWvL
============ EDIT 2 ============
I'm now fairly confident that it's my postgresql.conf file, which looks like: http://pastebin.com/rjX89iU0
shared_buffers might be too high...
When you run vagrant reload, is the Chef Client running? I suspect not. Mitchell changed the behavior in a recent version of vagrant to only provision if the machine hasn't already been provisioned. This information is stored in the .vagrant directory in your working directory. In short, since you already provisioned your machine with vagrant up, it is not provisioned when you run vagrant reload.
You run vagrant up - this actually runs vagrant up --provision, which executes the Chef Client provisioner on the node, running your Chef recipe.
You run vagrant reload - this actually runs vagrant up --no-provision, because the .vagrant directory indicates the machine has already been provisioned. So your machine is rebooted, but the Chef Client provisioner is not executed.
Solution
Run vagrant reload with the --provision flag
vagrant reload --provision
Notes
This still doesn't explain why upstart (or whatever you're using to ensure the postgres service is running at boot) isn't starting the server for you automatically. To answer that question, I'll need more information. Can you set chef.log_level = :debug in your Vagrantfile and update your question with the output? It would also be helpful to see the init.d script this postgres installer creates, and any log output from /var/log related to postgres.
Alright, it looks like PostgreSQL doesn't play nice with postgresql.conf being a symbolic link. Copying the file instead did the trick.
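A minimal sketch of that change in the recipe above: drop the file-delete and link resources and write the config with a plain file resource instead (the owner, group and mode values here are assumptions):
file "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
  # Copy the shared-folder config instead of symlinking it; lazy defers
  # the read to converge time, after the share is available.
  content lazy { IO.read(node["postgres"]["conf_path"]) }
  owner "postgres"
  group "postgres"
  mode "0644"
  notifies :reload, "service[postgresql]", :delayed
end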
Turns out PostgreSQL was starting before the postgresql.conf file was mounted.
If you're starting services with Upstart that depend on something in Vagrant's shared folders, have your upstart conf file listen for the vagrant-mounted event.
# /etc/init/start-postgresql.conf
start on vagrant-mounted

script
  # commands to start postgresql...
end script
The vagrant-mounted event is emitted after Vagrant is done setting up shared folders; this way you can restart dependent services after vagrant reload without having to run your provisioners again.