Vagrant provision order - Kubernetes

I am trying to install Kubernetes with Istio. Here is a snippet from my Vagrantfile (copied from GitHub):
config.vm.define "k8s-master" do |master|
master.vm.box = IMAGE_NAME
master.vm.network "private_network", ip: "192.168.50.10"
master.vm.hostname = "k8s-master"
master.vm.provision "ansible" do |ansible|
ansible.playbook = "kubernetes-setup/master-playbook.yml"
end
end
(1..N).each do |i|
config.vm.define "node-#{i}" do |node|
node.vm.box = IMAGE_NAME
node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
node.vm.hostname = "node-#{i}"
node.vm.provision "ansible" do |ansible|
ansible.playbook = "kubernetes-setup/node-playbook.yml"
end
end
end
If I put the Istio installation into the k8s-master playbook, it fails. I assume this happens because the worker nodes are not provisioned yet, so Istio can't deploy its infrastructure. I need to run a second playbook on k8s-master after the workers have been provisioned. What is the solution?
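For clarity, what I'm after is something along these lines (a sketch only; it assumes Vagrant's run: "never" option for named provisioners works with the Ansible provisioner, and the istio-playbook.yml name is made up):

master.vm.provision "istio", type: "ansible", run: "never" do |ansible|
  ansible.playbook = "kubernetes-setup/istio-playbook.yml"
end

so that after vagrant up has brought up the master and all workers I could run:

vagrant provision k8s-master --provision-with istio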

Related

Why Does Podman Report "Not enough IDs available in namespace" with different UIDs?

Facts:
Rootless podman works perfectly for uid 1480
Rootless podman fails for uid 2088
CentOS 7
Kernel 3.10.0-1062.1.2.el7.x86_64
podman version 1.4.4
Almost the entire environment has been eliminated as a difference between the two (the test script below runs with env -i)
The filesystem for /tmp is xfs
The capsh output of the two users is identical but for uid / username
Both UIDs have identical entries in /etc/sub{u,g}id files
The $HOME/.config/containers/storage.conf is the default and is identical between the two with the exception of the uids. The storage.conf is below for reference.
I wrote the following shell script to demonstrate just how similar an environment the two are operating in:
#!/bin/sh
for i in 1480 2088; do
sudo chroot --userspec "$i":10 / env -i /bin/sh <<EOF
echo -------------- $i ----------------
/usr/sbin/capsh --print
grep "$i" /etc/subuid /etc/subgid
mkdir /tmp/"$i"
HOME=/tmp/"$i"
export HOME
podman --root=/tmp/"$i" info > /tmp/podman."$i"
podman run --rm --root=/tmp/"$i" docker.io/library/busybox printf "\tCOMPLETE\n"
echo -----------END $i END-------------
EOF
sudo rm -rf /tmp/"$i"
done
Here's the output of the script:
$ sh /tmp/podman-fail.sh
[sudo] password for functional:
-------------- 1480 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=1480(functional)
gid=10(wheel)
groups=0(root)
/etc/subuid:1480:100000:65536
/etc/subgid:1480:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] could not find slirp4netns, the network namespace won't be configured: exec: "slirp4netns": executable file not found in $PATH
COMPLETE
-----------END 1480 END-------------
-------------- 2088 ----------------
Current: =
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=2088(broken)
gid=10(wheel)
groups=0(root)
/etc/subuid:2088:100000:65536
/etc/subgid:2088:100000:65536
Trying to pull docker.io/library/busybox...Getting image source signatures
Copying blob 7c9d20b9b6cd done
Copying config 19485c79a9 done
Writing manifest to image destination
Storing signatures
ERRO[0003] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
ERRO[0003] Error pulling image ref //busybox:latest: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Failed
Error: unable to pull docker.io/library/busybox: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:7c9d20b9b6cda1c58bc4f9d6c401386786f584437abbe87e58910f8a9a15386b": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Here's the storage.conf for the 1480 uid. It's identical except s/1480/2088/:
[storage]
driver = "vfs"
runroot = "/run/user/1480"
graphroot = "/tmp/1480/.local/share/containers/storage"
[storage.options]
size = ""
remap-uids = ""
remap-gids = ""
remap-user = ""
remap-group = ""
ostree_repo = ""
skip_mount_home = ""
mount_program = ""
mountopt = ""
[storage.options.thinpool]
autoextend_percent = ""
autoextend_threshold = ""
basesize = ""
blocksize = ""
directlvm_device = ""
directlvm_device_force = ""
fs = ""
log_level = ""
min_free_space = ""
mkfsarg = ""
mountopt = ""
use_deferred_deletion = ""
use_deferred_removal = ""
xfs_nospace_max_retries = ""
You can see there's basically no difference between the two podman info outputs for the users:
$ diff -u /tmp/podman.1480 /tmp/podman.2088
--- /tmp/podman.1480 2019-10-17 22:41:21.991573733 -0400
+++ /tmp/podman.2088 2019-10-17 22:41:26.182584536 -0400
@@ -7,7 +7,7 @@
Distribution:
distribution: '"centos"'
version: "7"
- MemFree: 45654056960
+ MemFree: 45652697088
MemTotal: 67306323968
OCIRuntime:
package: containerd.io-1.2.6-3.3.el7.x86_64
@@ -24,7 +24,7 @@
kernel: 3.10.0-1062.1.2.el7.x86_64
os: linux
rootless: true
- uptime: 30h 17m 50.23s (Approximately 1.25 days)
+ uptime: 30h 17m 54.42s (Approximately 1.25 days)
registries:
blocked: null
insecure: null
@@ -35,14 +35,14 @@
- quay.io
- registry.centos.org
store:
- ConfigFile: /tmp/1480/.config/containers/storage.conf
+ ConfigFile: /tmp/2088/.config/containers/storage.conf
ContainerStore:
number: 0
GraphDriverName: vfs
GraphOptions: null
- GraphRoot: /tmp/1480
+ GraphRoot: /tmp/2088
GraphStatus: {}
ImageStore:
number: 0
- RunRoot: /run/user/1480
- VolumePath: /tmp/1480/volumes
+ RunRoot: /run/user/2088
+ VolumePath: /tmp/2088/volumes
I refuse to believe there's an if (2088 == uid) { abort(); } or similar nonsense somewhere in podman's source code. What am I missing?
Does podman system migrate fix the "there might not be enough IDs available in the namespace" error for you?
It did for me and others:
https://github.com/containers/libpod/issues/3421
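For reference, trying this is a one-liner (run as the affected user; the test image here just mirrors the script above):

podman system migrate
podman run --rm docker.io/library/busybox printf '\tCOMPLETE\n'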
AFAICT, sub-UID and GID ranges should not overlap between users. For reference, here is what the useradd manpage has to say about the matter:
SUB_GID_MIN (number), SUB_GID_MAX (number), SUB_GID_COUNT (number)
    If /etc/subuid exists, the commands useradd and newusers (unless the user already have subordinate group IDs) allocate SUB_GID_COUNT unused group IDs from the range SUB_GID_MIN to SUB_GID_MAX for each new user.
    The default values for SUB_GID_MIN, SUB_GID_MAX, SUB_GID_COUNT are respectively 100000, 600100000 and 65536.

SUB_UID_MIN (number), SUB_UID_MAX (number), SUB_UID_COUNT (number)
    If /etc/subuid exists, the commands useradd and newusers (unless the user already have subordinate user IDs) allocate SUB_UID_COUNT unused user IDs from the range SUB_UID_MIN to SUB_UID_MAX for each new user.
    The default values for SUB_UID_MIN, SUB_UID_MAX, SUB_UID_COUNT are respectively 100000, 600100000 and 65536.
The key word is unused.
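In your output both users are mapped to 100000:65536, i.e. they share the same subordinate range. A sketch of non-overlapping assignments (the exact numbers are illustrative; the only requirement is that the ranges do not intersect):

# /etc/subuid and /etc/subgid
1480:100000:65536
2088:165536:65536

After changing the ranges, run podman system migrate as the affected user (as suggested above) so that the existing local storage picks up the new mapping.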
CentOS 7.6 does not support rootless buildah by default - see https://github.com/containers/buildah/pull/1166 and https://www.redhat.com/en/blog/preview-running-containers-without-root-rhel-76

Celery: Routing tasks issue - only one worker consumes all tasks from all queues

I have some tasks with manually configured routes and 3 workers, each configured to consume tasks from a specific queue. But only one worker is consuming all of the tasks, and I have no idea how to fix this issue.
My celeryconfig.py:
class CeleryConfig:
    enable_utc = True
    timezone = 'UTC'
    imports = ('events.tasks')
    broker_url = Config.BROKER_URL
    broker_transport_options = {'visibility_timeout': 10800}  # 3H
    worker_hijack_root_logger = False
    task_protocol = 2
    task_ignore_result = True
    task_publish_retry_policy = {'max_retries': 3, 'interval_start': 0, 'interval_step': 0.2, 'interval_max': 0.2}
    task_time_limit = 30  # sec
    task_soft_time_limit = 15  # sec
    task_default_queue = 'low'
    task_default_exchange = 'low'
    task_default_routing_key = 'low'
    task_queues = (
        Queue('daily', Exchange('daily'), routing_key='daily'),
        Queue('high', Exchange('high'), routing_key='high'),
        Queue('normal', Exchange('normal'), routing_key='normal'),
        Queue('low', Exchange('low'), routing_key='low'),
        Queue('service', Exchange('service'), routing_key='service'),
        Queue('award', Exchange('award'), routing_key='award'),
    )
    task_route = {
        # -- SCHEDULE QUEUE --
        base_path.format(task='refresh_rank'): {'queue': 'daily'},
        # -- HIGH QUEUE --
        base_path.format(task='execute_order'): {'queue': 'high'},
        # -- NORMAL QUEUE --
        base_path.format(task='calculate_cost'): {'queue': 'normal'},
        # -- SERVICE QUEUE --
        base_path.format(task='send_pin'): {'queue': 'service'},
        # -- LOW QUEUE --
        base_path.format(task='invite_to_tournament'): {'queue': 'low'},
        # -- AWARD QUEUE --
        base_path.format(task='get_lesson_award'): {'queue': 'award'},
        # -- TEST TASK --
    }
    worker_concurrency = multiprocessing.cpu_count() * 2 + 1
    worker_prefetch_multiplier = 1
    worker_max_tasks_per_child = 1
    worker_max_memory_per_child = 90000  # 90MB
    beat_max_loop_interval = 60 * 5  # 5 min
I run the workers in Docker; here is part of my stack.yml:
version: "3.7"
services:
worker_high:
command: celery worker -l debug -A runcelery.celery -Q high -n worker.high#%h
worker_normal:
command: celery worker -l debug -A runcelery.celery -Q normal,award,service,low -n worker.normal#%h
worker_schedule:
command: celery worker -l debug -A runcelery.celery -Q daily -n worker.schedule#%h
beat:
command: celery beat -l debug -A runcelery.celery
flower:
command: flower -l debug -A runcelery.celery --port=5555
broker:
image: redis:5.0-alpine
I thought my config was right and the run commands were correct too, but the Docker logs and Flower show that only worker.normal consumes all the tasks.
Update
Here is part of task.py:
def refresh_rank_in_tournaments():
    logger.debug(f'Start task refresh_rank_in_tournaments')
    return AnalyticBackgroundManager.refresh_tournaments_rank()
base_path is a shortcut for the full task path:
base_path = 'events.tasks.{task}'
execute_order task code:
@celery.task(bind=True, default_retry_delay=5)
def execute_order(self, private_id, **kwargs):
    try:
        return OrderBackgroundManager.execute_order(private_id, **kwargs)
    except IEXException as exc:
        raise self.retry(exc=exc)
This task is called in a view as tasks.execute_order.delay(id).
Your worker.normal is subscribed to the normal, award, service, and low queues. Furthermore, the low queue is the default one, so every task that does not have an explicitly set queue will be executed by worker.normal.
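To see the effect, a quick sketch using the task from the question (an explicit queue= overrides routing, while .delay() relies on the routing table):

# goes to the 'high' queue, so only worker.high should pick it up
tasks.execute_order.apply_async(args=[private_id], queue='high')

# uses the routing table; anything that does not match a route ends up in
# task_default_queue ('low') and is therefore consumed by worker.normal
tasks.execute_order.delay(private_id)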

Audit daemon does not take rules from audit.rules

I am unable to add rules to the audit daemon using /etc/audit/audit.rules.
Every time I add rules using auditctl, they get removed on reboot or when the audit daemon restarts. I have attached /etc/audit/audit.rules and /etc/audit/auditd.conf below.
$ cat /etc/audit/auditd.conf
#
# This file controls the configuration of the audit daemon
#
local_events = yes
write_logs = yes
log_file = /NU_Application/audit.log
log_group = root
log_format = RAW
flush = INCREMENTAL_ASYNC
freq = 50
max_log_file = 8
num_logs = 5
priority_boost = 4
disp_qos = lossy
dispatcher = /sbin/audispd
name_format = NONE
##name = mydomain
max_log_file_action = ROTATE
space_left = 75
space_left_action = SYSLOG
verify_email = yes
action_mail_acct = root
admin_space_left = 50
admin_space_left_action = SUSPEND
disk_full_action = SUSPEND
disk_error_action = SUSPEND
use_libwrap = yes
##tcp_listen_port = 22
tcp_listen_queue = 5
tcp_max_per_addr = 1
##tcp_client_ports = 1024-65535
tcp_client_max_idle = 0
enable_krb5 = no
krb5_principal = auditd
##krb5_key_file = /etc/audit/audit.key
distribute_network = no
$ cat /etc/audit/audit.rules
## First rule - delete all
## Increase the buffers to survive stress events.
## Make this bigger for busy systems
-b 8192
## This determine how long to wait in burst of events
--backlog_wait_time 0
## Set failure mode to syslog
-f 1
-w /var/log/lastlog -p wa
When I restart the audit daemon (i.e. /etc/init.d/auditd restart) and try to list the rules, I get the message "No rules".
$ /etc/init.d/auditd restart
Restarting audit daemon auditd
type=1305 audit(1558188111.980:3): audit_pid=0 old=1148 auid=4294967295 ses=4294967295
res=1
type=1305 audit(1558188112.010:4): audit_enabled=1 old=1 auid=4294967295 ses=4294967295
res=1
type=1305 audit(1558188112.020:5): audit_pid=30342 old=0 auid=4294967295 ses=4294967295
res=1
$ auditctl -l
No rules
OS INFO
$ uname -a
Linux iWave-G22M 3.10.31-ltsi-svn743 #5 SMP PREEMPT Mon May 27 18:28:01 IST 2019 armv7l GNU/Linux
The audit_2.8.4.bb recipe was used to install the auditd daemon via Yocto.
Path of audit_2.8.4.bb: http://git.yoctoproject.org/cgit/cgit.cgi/meta-selinux/tree/recipes-security/audit/audit_2.8.4.bb?h=master
Audit rules added via /etc/audit/audit.rules or the auditctl command are not permanent. To make them persist across reboots, you have to add them to the /etc/audit/rules.d/audit.rules file.
After adding the rule, restart the auditd service and run auditctl -l; it will list all the rules, and they will also be reflected in the /etc/audit/audit.rules file.
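A minimal sketch of that procedure, using the watch rule from the question:

echo '-w /var/log/lastlog -p wa' >> /etc/audit/rules.d/audit.rules
/etc/init.d/auditd restart
auditctl -l    # the rule should now be listed and mirrored into /etc/audit/audit.rules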

Is it possible to get inside-out ordering of provisioners in a Vagrant multi-machine setup?

Is it possible to reverse the order of provisioners from innermost to outermost when using a multi-machine setup? I want a small shell provisioner to create some facts in /etc/facter/facts.d/ before provisioning with Puppet, to mimic our current setup as much as possible. (I have inherited a large Puppet repo and am trying to create a Vagrant testbed for it before I start making changes.)
The Puppet settings are the same for every box, but they require the shell provisioner to run first. Here's an example Vagrantfile to show what I want to do (some names changed to protect the innocent):
$facts =<<FACTS
set -x
mkdir -p /etc/facter/facts.d
echo role=$1 > /etc/facter/facts.d/role.txt
echo location=$2 > /etc/facter/facts.d/location.txt
echo environment=$3 > /etc/facter/facts.d/environment.txt
FACTS

Vagrant.configure(2) do |config|
  config.vm.box = "centos-6.6"
  config.vm.synced_folder "hiera", "/etc/puppet/hiera"

  config.vm.provision :puppet do |puppet|
    puppet.manifest_file = "site.pp"
    puppet.module_path = ["modules", "internal"]
    puppet.hiera_config_path = "hiera.yaml"
    puppet.options = "--test"
  end

  config.vm.define :foo1 do |c|
    c.vm.hostname = "foo-1.vagrant"
    c.vm.provision :shell, inline: $facts, args: "foo testing stage"
  end

  config.vm.define :bar do |c|
    c.vm.hostname = "bar-1.vagrant"
    c.vm.provision :shell, inline: $facts, args: "bar testing stage"
  end

  # ... more machines omitted ...
end
Answering my own question as I found an acceptable workaround: I moved the puppet provisioning into the inner block. Here's what my current code looks like:
$facts =<<SET_FACTS
set -x
mkdir -p /etc/facter/facts.d
echo role=$1 > /etc/facter/facts.d/role.txt
echo location=$2 > /etc/facter/facts.d/location.txt
echo environment=$3 > /etc/facter/facts.d/environment.txt
SET_FACTS

module Vagrant
  module Config
    module V2
      class Root
        def provision(role, location, environment)
          vm.provision "set-facts",
            type: :shell,
            inline: $facts,
            args: [role, location, environment].map { |x| x.to_s }
          vm.provision :puppet do |puppet|
            puppet.manifest_file = "site.pp"
            puppet.module_path = ["modules", "internal"]
            puppet.hiera_config_path = "hiera.yaml"
          end
        end
      end
    end
  end
end

Vagrant.configure(2) do |config|
  config.vm.box = "centos-6.6"
  config.vm.synced_folder "hiera", "/etc/puppet/hiera"

  config.vm.define :foo1 do |c|
    c.vm.hostname = "foo-1.vagrant"
    c.provision(:foo, :testing, :stage)
  end

  config.vm.define :bar1 do |c|
    c.vm.hostname = "bar-1.vagrant"
    c.provision(:bar, :testing, :stage)
  end
end

Vagrant machine disappears after rebooting the host machine

Well, the title explains the issue pretty well. I install my Vagrant machine through vagrant up from the NetBeans plugin, configure it, and use it properly. Then I halt the machine and shut down my host machine, and when I start the host again, the Vagrant machine has disappeared: no config files, no machine itself. I didn't find any documentation on this issue, and I need the Vagrant machine, so I install a new one every day in order to debug my work project. I don't know what to do or try, because I've only been using Vagrant for a few weeks at a new job, but I'm wasting a lot of time on this daily installation and I need to solve it. I appreciate any help.
Any ideas?
Vagrantfile:
require 'yaml'
dir = File.dirname(File.expand_path(__FILE__))
configValues = YAML.load_file("#{dir}/puphpet/config.yaml")
data = configValues['vagrantfile-local']
Vagrant.require_version '>= 1.6.0'
Vagrant.configure('2') do |config|
config.vm.box = "#{data['vm']['box']}"
config.vm.box_url = "#{data['vm']['box_url']}"
if data['vm']['hostname'].to_s.strip.length != 0
config.vm.hostname = "#{data['vm']['hostname']}"
end
if data['vm']['network']['private_network'].to_s != ''
config.vm.network 'private_network', ip: "#{data['vm']['network']['private_network']}"
end
data['vm']['network']['forwarded_port'].each do |i, port|
if port['guest'] != '' && port['host'] != ''
config.vm.network :forwarded_port, guest: port['guest'].to_i, host: port['host'].to_i
end
end
if !data['vm']['post_up_message'].nil?
config.vm.post_up_message = "#{data['vm']['post_up_message']}"
end
if Vagrant.has_plugin?('vagrant-hostmanager')
hosts = Array.new()
if !configValues['apache']['install'].nil? &&
configValues['apache']['install'].to_i == 1 &&
configValues['apache']['vhosts'].is_a?(Hash)
configValues['apache']['vhosts'].each do |i, vhost|
hosts.push(vhost['servername'])
if vhost['serveraliases'].is_a?(Array)
vhost['serveraliases'].each do |vhost_alias|
hosts.push(vhost_alias)
end
end
end
elsif !configValues['nginx']['install'].nil? &&
configValues['nginx']['install'].to_i == 1 &&
configValues['nginx']['vhosts'].is_a?(Hash)
configValues['nginx']['vhosts'].each do |i, vhost|
hosts.push(vhost['server_name'])
if vhost['server_aliases'].is_a?(Array)
vhost['server_aliases'].each do |x, vhost_alias|
hosts.push(vhost_alias)
end
end
end
end
if hosts.any?
contents = File.open("#{dir}/puphpet/shell/ascii-art/hostmanager-notice.txt", 'r'){ |file| file.read }
puts "\n\033[32m#{contents}\033[0m\n"
if config.vm.hostname.to_s.strip.length == 0
config.vm.hostname = 'puphpet-dev-machine'
end
config.hostmanager.enabled = true
config.hostmanager.manage_host = true
config.hostmanager.ignore_private_ip = false
config.hostmanager.include_offline = false
config.hostmanager.aliases = hosts
end
end
if Vagrant.has_plugin?('vagrant-cachier')
config.cache.scope = :box
end
data['vm']['synced_folder'].each do |i, folder|
if folder['source'] != '' && folder['target'] != ''
if folder['sync_type'] == 'nfs'
config.vm.synced_folder "#{folder['source']}", "#{folder['target']}", id: "#{i}", type: 'nfs'
config.vm.network "private_network", type: "dhcp"
elsif folder['sync_type'] == 'smb'
config.vm.synced_folder "#{folder['source']}", "#{folder['target']}", id: "#{i}", type: 'smb'
elsif folder['sync_type'] == 'rsync'
rsync_args = !folder['rsync']['args'].nil? ? folder['rsync']['args'] : ['--verbose', '--archive', '-z']
rsync_auto = !folder['rsync']['auto'].nil? ? folder['rsync']['auto'] : true
rsync_exclude = !folder['rsync']['exclude'].nil? ? folder['rsync']['exclude'] : ['.vagrant/']
config.vm.synced_folder "#{folder['source']}", "#{folder['target']}", id: "#{i}",
rsync__args: rsync_args, rsync__exclude: rsync_exclude, rsync__auto: rsync_auto, type: 'rsync'
else
config.vm.synced_folder "#{folder['source']}", "#{folder['target']}", id: "#{i}",
group: 'www-data', owner: 'www-data', mount_options: ['dmode=777', 'fmode=777']
end
end
end
config.vm.usable_port_range = (data['vm']['usable_port_range']['start'].to_i..data['vm']['usable_port_range']['stop'].to_i)
if data['vm']['chosen_provider'].empty? || data['vm']['chosen_provider'] == 'virtualbox'
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'virtualbox'
config.vm.provider :virtualbox do |virtualbox|
data['vm']['provider']['virtualbox']['modifyvm'].each do |key, value|
if key == 'memory'
next
end
if key == 'cpus'
next
end
if key == 'natdnshostresolver1'
value = value ? 'on' : 'off'
end
virtualbox.customize ['modifyvm', :id, "--#{key}", "#{value}"]
end
virtualbox.customize ['modifyvm', :id, '--memory', "#{data['vm']['memory']}"]
virtualbox.customize ['modifyvm', :id, '--cpus', "#{data['vm']['cpus']}"]
if data['vm']['hostname'].to_s.strip.length != 0
virtualbox.customize ['modifyvm', :id, '--name', config.vm.hostname]
end
end
end
if data['vm']['chosen_provider'] == 'vmware_fusion' || data['vm']['chosen_provider'] == 'vmware_workstation'
ENV['VAGRANT_DEFAULT_PROVIDER'] = (data['vm']['chosen_provider'] == 'vmware_fusion') ? 'vmware_fusion' : 'vmware_workstation'
config.vm.provider 'vmware_fusion' do |v|
data['vm']['provider']['vmware'].each do |key, value|
if key == 'memsize'
next
end
if key == 'cpus'
next
end
v.vmx["#{key}"] = "#{value}"
end
v.vmx['memsize'] = "#{data['vm']['memory']}"
v.vmx['numvcpus'] = "#{data['vm']['cpus']}"
if data['vm']['hostname'].to_s.strip.length != 0
v.vmx['displayName'] = config.vm.hostname
end
end
end
if data['vm']['chosen_provider'] == 'parallels'
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'parallels'
config.vm.provider 'parallels' do |v|
data['vm']['provider']['parallels'].each do |key, value|
if key == 'memsize'
next
end
if key == 'cpus'
next
end
v.customize ['set', :id, "--#{key}", "#{value}"]
end
v.memory = "#{data['vm']['memory']}"
v.cpus = "#{data['vm']['cpus']}"
if data['vm']['hostname'].to_s.strip.length != 0
v.name = config.vm.hostname
end
end
end
ssh_username = !data['ssh']['username'].nil? ? data['ssh']['username'] : 'vagrant'
config.vm.provision 'shell' do |s|
s.path = 'puphpet/shell/initial-setup.sh'
s.args = '/vagrant/puphpet'
end
config.vm.provision 'shell' do |kg|
kg.path = 'puphpet/shell/ssh-keygen.sh'
kg.args = "#{ssh_username}"
end
config.vm.provision :shell, :path => 'puphpet/shell/install-ruby.sh'
config.vm.provision :shell, :path => 'puphpet/shell/install-puppet.sh'
config.vm.provision :puppet do |puppet|
puppet.facter = {
'ssh_username' => "#{ssh_username}",
'provisioner_type' => ENV['VAGRANT_DEFAULT_PROVIDER'],
'vm_target_key' => 'vagrantfile-local',
}
puppet.manifests_path = "#{data['vm']['provision']['puppet']['manifests_path']}"
puppet.manifest_file = "#{data['vm']['provision']['puppet']['manifest_file']}"
puppet.module_path = "#{data['vm']['provision']['puppet']['module_path']}"
if !data['vm']['provision']['puppet']['options'].empty?
puppet.options = data['vm']['provision']['puppet']['options']
end
end
config.vm.provision :shell do |s|
s.path = 'puphpet/shell/execute-files.sh'
s.args = ['exec-once', 'exec-always']
end
config.vm.provision :shell, run: 'always' do |s|
s.path = 'puphpet/shell/execute-files.sh'
s.args = ['startup-once', 'startup-always']
end
config.vm.provision :shell, :path => 'puphpet/shell/important-notices.sh'
if File.file?("#{dir}/puphpet/files/dot/ssh/id_rsa")
config.ssh.private_key_path = [
"#{dir}/puphpet/files/dot/ssh/id_rsa",
"#{dir}/puphpet/files/dot/ssh/insecure_private_key"
]
end
if !data['ssh']['host'].nil?
config.ssh.host = "#{data['ssh']['host']}"
end
if !data['ssh']['port'].nil?
config.ssh.port = "#{data['ssh']['port']}"
end
if !data['ssh']['username'].nil?
config.ssh.username = "#{data['ssh']['username']}"
end
if !data['ssh']['guest_port'].nil?
config.ssh.guest_port = data['ssh']['guest_port']
end
if !data['ssh']['shell'].nil?
config.ssh.shell = "#{data['ssh']['shell']}"
end
if !data['ssh']['keep_alive'].nil?
config.ssh.keep_alive = data['ssh']['keep_alive']
end
if !data['ssh']['forward_agent'].nil?
config.ssh.forward_agent = data['ssh']['forward_agent']
end
if !data['ssh']['forward_x11'].nil?
config.ssh.forward_x11 = data['ssh']['forward_x11']
end
if !data['vagrant']['host'].nil?
config.vagrant.host = data['vagrant']['host'].gsub(':', '').intern
end
end
As a provisional solution, I am now saving all the VM files into another folder so I only have to restore them instead of installing again, but this is a lame workaround and I wish to do this properly.
This question is of low quality and I'd recommend closing it if we can't fully describe the issue you were having and the solution to it. Keep in mind, you're able to submit answers to your own question; that way, other people who come across this issue will understand how you fixed the problem. Typically, you can discover a lot by searching "debugging X" (e.g. Google -> "debugging vagrant").
Short of that, something that might be useful to you is looking at the machine console (GUI) while your VM is booting. Since you're using VirtualBox, this is quite easy. In the section of your Vagrantfile that begins with config.vm.provider :virtualbox do |virtualbox|, add the following:
virtualbox.gui = true
This is a common approach, described in the documentation: http://docs.vagrantup.com/v2/virtualbox/configuration.html
To fix this, I just stopped launching Vagrant from NetBeans and started using it from Cygwin. This way it doesn't disappear. I still don't know why the machine gets removed when I launch it from NetBeans, but I need to use it at work, so I have to do it this way and move on. Thanks everyone for the answers and the time spent.