Folders are not getting synced.
Q1: Where should I clone my project, on the host or the guest machine, so that sync works?
Q2: vagrant up doesn't show the shared folder mounted.
Vagrantfile:
config.vm.synced_folder "ionic-projects/", "/home/vagrant/ionic-projects"
vagrant up
==> default: Attempting graceful shutdown of VM...
default: Guest communication could not be established! This is usually because
default: SSH is not running, the authentication information was changed,
default: or some other networking issue. Vagrant will force halt, if
default: capable.
==> default: Forcing shutdown of VM...
==> default: Checking if box 'drifty/ionic-android' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 8100 (guest) => 8100 (host) (adapter 1)
default: 35729 (guest) => 35729 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
EDIT: I had a private key issue; now the shared folders are mounted,
but sync still fails.
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: Mounting shared folders...
default: /vagrant => /home/nithin/Documents/Kappian/app
==> default: Machine already provisioned. Run vagrant provision or use the --provision
==> default: flag to force provisioning. Provisioners marked to run always will still run.
The main issue was with the private key.
1. Setting these in the Vagrantfile will resolve the auto-mount issue of shared folders (see the sketch after this list):
username: vagrant
password: vagrant
The auth method will be changed to password.
2. Make sure proper permissions are set on the shared folder.
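A minimal sketch of what those settings can look like in the Vagrantfile, assuming the box ships with the default vagrant/vagrant account (the synced_folder line is the one from the question):
config.ssh.username = "vagrant"
config.ssh.password = "vagrant"
config.vm.synced_folder "ionic-projects/", "/home/vagrant/ionic-projects"
With a password set, Vagrant authenticates over SSH with password auth instead of the broken private key, which lets the shared-folder mount go through.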
Related information:
hostnames:
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.49.41 ceph-gw-one
172.16.49.42 ceph-gw-two
shell: ceph orch host add 172.16.49.42
Error EINVAL: New host 172.16.49.42 (172.16.49.42) failed check: ['INFO:cephadm:podman|docker (/bin/docker) is present', 'INFO:cephadm:systemctl is present', 'INFO:cephadm:lvcreate is present', 'INFO:cephadm:Unit chronyd.service is enabled and running', 'INFO:cephadm:Hostname "172.16.49.42" matches what is expected.', 'ERROR: hostname "ceph-gw-two" does not match expected hostname "172.16.49.42"']
shell: orch host add ceph-gw-two
Error EINVAL: Failed to connect to ceph-gw-two (ceph-gw-two).
Check that the host is reachable and accepts connections using the cephadm SSH key
you may want to run:
ceph cephadm get-ssh-config > ssh_config
ceph config-key get mgr/cephadm/ssh_identity_key > key
ssh -F ssh_config -i key root@ceph-gw-two
I have checked that SSH login succeeds both by IP and by hostname.
I read the cephadm orchestrator source code:
out, err, code = self._run_cephadm(spec.hostname, cephadmNoImage, 'check-host',
['--expect-hostname', spec.hostname],
addr=spec.addr,
error_ok=True, no_fsid=True)
if code:
raise OrchestratorError('New host %s (%s) failed check: %s' % (
spec.hostname, spec.addr, err))
So I changed the command to:
ceph orch host add ceph-gw-two 172.16.49.42
Done, it works well.
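As an optional verification step (not part of the fix itself), you can list the hosts the orchestrator knows about and check that the hostname/address pair is what you expect:
ceph orch host ls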
While trying to run a puppet update from a node:
sudo /opt/puppetlabs/bin/puppet agent -t
I get an error:
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Connection refused - connect(2) for "puppet" port 8140
Answers elsewhere indicate this is likely a problem with the puppetserver service and suggest rebooting the server. Restarting didn't help, and when I try to restart the service I get a failure:
~$ sudo service puppetserver restart
Job for puppetserver.service failed because the control process exited with error code. See "systemctl status puppetserver.service" and "journalctl -xe" for details.
I've looked at these logs, and as a puppet/linux noob, I'm not sure what to do next.
systemctl status puppetserver.service
● puppetserver.service - puppetserver Service
Loaded: loaded (/lib/systemd/system/puppetserver.service; enabled; vendor preset: enabled)
Active: activating (start-post) since Fri 2016-09-02 15:54:26 PDT; 2s ago
Process: 22301 ExecStartPre=/usr/bin/install --directory --owner=puppet --group=puppet --mode=775 /var/run/puppetlabs/puppetserver (code=exited
Main PID: 22306 (java); : 22307 (bash)
Tasks: 17
Memory: 335.7M
CPU: 5.535s
CGroup: /system.slice/puppetserver.service
├─22306 /usr/bin/java -Xms6g -Xmx6g -XX:MaxPermSize=256m -XX:OnOutOfMemoryError=kill -9 %p -Djava.security.egd=/dev/urandom -cp /opt/p
└─control
├─22307 /bin/bash /opt/puppetlabs/server/apps/puppetserver/ezbake-functions.sh wait_for_app
└─22331 sleep 1
Sep 02 15:54:26 puppet systemd[1]: Starting puppetserver Service...
Sep 02 15:54:26 puppet java[22306]: OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
puppet version 4.6.1
The puppet master communicates with the other node using port number 8140.
I don't think a restart will help, since this looks like a connection issue between the server and the node.
Please try the following:
First, make sure that the puppet master is actually listening on port 8140. Run the following command on the puppet master:
netstat -ntlp | grep 8140
This command should return something like this:
tcp 0 0 0.0.0.0:8140 0.0.0.0:* LISTEN 1783/puppetmaster
If you don't get similar output, your puppet master is not listening and therefore cannot compile catalogs for the node. Try checking the puppet master log at /var/log/puppetmaster.log.
Next, check that the node can communicate with the puppet master on the relevant port. You can check this quickly with the telnet command. Run this on your node:
telnet <puppetmaster-ip-or-dns-name> 8140
You should get something like:
Connected to <puppet-master-IP/DNS-name>
Escape character is '^]'.
If you don't get this output, something is blocking you from reaching the puppet master. Try opening port 8140 in your firewall (see the example below).
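As a sketch of opening that port, assuming an Ubuntu/Debian master protected by ufw (adjust to firewalld or iptables if that is what your distribution uses):
sudo ufw allow 8140/tcp
sudo ufw status | grep 8140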
If you're still stuck, try using the --debug flag for verbose output and edit your question with the results.
It could be one of two things: (1) in puppet.conf you have configured more memory than you have on your machine, or (2) you installed both apt-get install puppetserver and apt-get install puppet.
If you get a "Failed to start puppet.service: Unit not found" error on the slave machine while connecting to puppet, close PuTTY, then open it again and reconnect. The issue won't appear when starting PuTTY on the slave again.
The error occurs because there is not enough RAM. To fix it, open the Puppet server configuration file:
sudo nano /etc/sysconfig/puppetserver
And reduce the amount of RAM allocated to the Puppet server (for example, I specified 512m instead of 2g):
JAVA_ARGS="-Xms512m -Xmx512m"
Now let’s start the Puppet server:
sudo systemctl start puppetserver
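After lowering the heap size, a few optional checks (not part of the original answer, just a sketch) can confirm that the new value fits in the available memory and that the service is listening again:
free -h
sudo systemctl status puppetserver
sudo ss -ntlp | grep 8140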
I tried to spin up a CentOS 7 VM. Below are my settings.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.define "zabbix1" do |zabbix1|
zabbix1.vm.box = "centos/7"
zabbix1.vm.hostname = "zabbix1"
zabbix1.ssh.insert_key = false
zabbix1.vm.network :private_network, ip: "10.11.12.55"
zabbix1.ssh.private_key_path = "~/.ssh/id_rsa"
zabbix1.ssh.forward_agent = true
end
end
Result
vagrant reload
==> zabbix1: Attempting graceful shutdown of VM...
zabbix1: Guest communication could not be established! This is usually because
zabbix1: SSH is not running, the authentication information was changed,
zabbix1: or some other networking issue. Vagrant will force halt, if
zabbix1: capable.
==> zabbix1: Forcing shutdown of VM...
==> zabbix1: Checking if box 'centos/7' is up to date...
==> zabbix1: Clearing any previously set forwarded ports...
==> zabbix1: Fixed port collision for 22 => 2222. Now on port 2204.
==> zabbix1: Clearing any previously set network interfaces...
==> zabbix1: Preparing network interfaces based on configuration...
zabbix1: Adapter 1: nat
zabbix1: Adapter 2: hostonly
==> zabbix1: Forwarding ports...
zabbix1: 22 (guest) => 2204 (host) (adapter 1)
==> zabbix1: Booting VM...
==> zabbix1: Waiting for machine to boot. This may take a few minutes...
zabbix1: SSH address: 127.0.0.1:2204
zabbix1: SSH username: vagrant
zabbix1: SSH auth method: private key
zabbix1: Warning: Remote connection disconnect. Retrying...
zabbix1: Warning: Remote connection disconnect. Retrying...
zabbix1: Warning: Remote connection disconnect. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
zabbix1: Warning: Authentication failure. Retrying...
vagrant ssh-config
Host zabbix1
HostName 127.0.0.1
User vagrant
Port 2204
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /Users/bheng/.ssh/id_rsa
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
What did I do wrong? What did I miss?
I had the same issue with the same box, and the way I fixed it was to log into the VM from VirtualBox (vagrant/vagrant as username/password) and change the permissions of .ssh/authorized_keys:
chmod 0600 .ssh/authorized_keys
Do that after you run vagrant up (while the error is repeating) and the VM is up; vagrant up will then complete successfully and you will be able to SSH into the VM with vagrant ssh.
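If the mode change alone doesn't help, it can also be worth checking the directory itself, since SSH refuses keys when ownership or permissions along the path are wrong (an optional extra check, not part of the original answer):
chmod 700 ~/.ssh
chown -R vagrant:vagrant ~/.ssh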
Private networks can be configured either manually or with the VirtualBox built-in DHCP server. The following works for me:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.define "zabbix1" do |zabbix1|
zabbix1.vm.box = "centos/7"
zabbix1.vm.hostname = "zabbix1"
zabbix1.ssh.insert_key = false
zabbix1.vm.network :private_network, type: "dhcp"
end
end
Next you have to run vagrant destroy and then vagrant up.
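For example (the -f flag only skips the confirmation prompt):
vagrant destroy -f zabbix1
vagrant up zabbix1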
I am doing this as part of the Ambari setup, following the steps for Quick Start with Ambari and Vagrant.
I am using this CentOS 6.4 image:
https://github.com/u39kun/ambari-vagrant/blob/master/centos6.4/Vagrantfile
I did this on Google Cloud from a RHEL 7.2 host with VirtualBox 5, but went to install, as suggested, CentOS 6.4 guests.
I successfully installed and configured the prerequisites (with the tweaking required to make VirtualBox 5 work on RHEL 7.2).
When I try to bring up 6 hosts, I see timeouts and the machines do not come up.
The host machine I am running on is fast: 32 cores, 64 GB RAM, 500 GB SSD.
Does anyone know what might be the issue?
Is there some firewall I need to turn off, etc.?
[<myuser>@ambari-host-rhel7 centos6.4]$ ./up.sh 6
Bringing machine 'c6401' up with 'virtualbox' provider...
==> c6401: Box 'centos6.4' could not be found. Attempting to find and install...
c6401: Box Provider: virtualbox
c6401: Box Version: >= 0
==> c6401: Box file was not detected as metadata. Adding it directly...
==> c6401: Adding box 'centos6.4' (v0) for provider: virtualbox
c6401: Downloading: http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130427.box
==> c6401: Box download is resuming from prior download progress
==> c6401: Successfully added box 'centos6.4' (v0) for 'virtualbox'!
==> c6401: Importing base box 'centos6.4'...
==> c6401: Matching MAC address for NAT networking...
==> c6401: Setting the name of the VM: centos64_c6401_1456171923223_2329
==> c6401: Clearing any previously set network interfaces...
==> c6401: Preparing network interfaces based on configuration...
c6401: Adapter 1: nat
c6401: Adapter 2: hostonly
==> c6401: Forwarding ports...
c6401: 22 (guest) => 2222 (host) (adapter 1)
==> c6401: Running 'pre-boot' VM customizations...
==> c6401: Booting VM...
==> c6401: Waiting for machine to boot. This may take a few minutes...
c6401: SSH address: 127.0.0.1:2222
c6401: SSH username: vagrant
c6401: SSH auth method: private key
Timed out while waiting for the machine to boot. This means that Vagrant was unable to communicate with the guest machine within the configured ("config.vm.boot_timeout" value) time period. If you look above, you should be able to see the error(s) that Vagrant had when attempting to connect to the machine. These errors are usually good hints as to what may be wrong. If you're using a custom box, make sure that networking is properly working and you're able to connect to the machine. It is a common problem that networking isn't setup properly in these boxes. Verify that authentication configurations are also setup properly, as well. If the box appears to be booting properly, you may want to increase the timeout ("config.vm.boot_timeout") value.
As a final step I get this summary error:
There was an error while executing `VBoxManage`, a CLI used by Vagrant for controlling VirtualBox.
The command and stderr is shown below.
Command: ["import", "/home/<me>/.vagrant.d/boxes/centos6.4/0/virtualbox/box.ovf", "--vsys", "0", "--vmname", "CentOS-6.4-x86_64_1456173504674_45962", "--vsys", "0", "--unit", "9", "--disk", "/home/<me>/VirtualBox VMs/CentOS-6.4-x86_64_1456173504674_45962/box-disk1.vmdk"]
Stderr: 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Interpreting /home/<me>/.vagrant.d/boxes/centos6.4/0/virtualbox/box.ovf... OK.
0%...
Progress state: VBOX_E_FILE_ERROR
VBoxManage: error: Appliance import failed
VBoxManage: error: Could not create the imported medium '/home/<me>/VirtualBox VMs/CentOS-6.4-x86_64_1456173504674_45962/box-disk1.vmdk'.
VBoxManage: error: VMDK: cannot write allocated data block in '/home/<me>/VirtualBox VMs/CentOS-6.4-x86_64_1456173504674_45962/box-disk1.vmdk' (VERR_DISK_FULL)
VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component ApplianceWrap, interface IAppliance
VBoxManage: error: Context: "RTEXITCODE handleImportAppliance(HandlerArg*)" at line 877 of file VBoxManageAppliance.cpp
Any ideas what might be going on?
Do you still have free space on your drive?
Generally, VERR_DISK_FULL indicates that the hard drive is full; VirtualBox cannot provision enough space for the disk image (vmdk/vdi) files.
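A quick way to confirm is to check the free space on the filesystem that holds the VM folder shown in the error (a minimal sketch; the placeholder path matches the log above):
df -h
df -h "/home/<me>/VirtualBox VMs"
If that filesystem is nearly full, free some space or move the VirtualBox default machine folder to a larger disk and re-run the import.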
Following the instructions on this page - http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html#setup, I'm getting the following error when trying to get Kubernetes up on a Mac running El Capitan, using Vagrant and VirtualBox. Where am I going wrong?
OS X El Capitan 10.11.2 (15C50)
Vagrant 1.8.1
VirtualBox 5.0.12 r104815
and trying to get a cluster up by executing these steps:
export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
The output below is a capture of having executed these two commands. I'm following these instructions: http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html#prerequisites
Unpacking kubernetes release v1.1.4
Creating a kubernetes on vagrant...
... Starting cluster using provider: vagrant
... calling verify-prereqs
... calling kube-up
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'minion-1' up with 'virtualbox' provider...
==> master: VirtualBox VM is already running.
==> minion-1: Importing base box 'kube-fedora21'...
==> minion-1: Matching MAC address for NAT networking...
==> minion-1: Setting the name of the VM: kubernetes_minion-1_1454028157203_24352
==> minion-1: Fixed port collision for 22 => 2222. Now on port 2200.
==> minion-1: Clearing any previously set network interfaces...
==> minion-1: Preparing network interfaces based on configuration...
minion-1: Adapter 1: nat
minion-1: Adapter 2: hostonly
==> minion-1: Forwarding ports...
minion-1: 22 (guest) => 2200 (host) (adapter 1)
==> minion-1: Running 'pre-boot' VM customizations...
==> minion-1: Booting VM...
==> minion-1: Waiting for machine to boot. This may take a few minutes...
minion-1: SSH address: 127.0.0.1:2200
minion-1: SSH username: vagrant
minion-1: SSH auth method: private key
minion-1:
minion-1: Vagrant insecure key detected. Vagrant will automatically replace
minion-1: this with a newly generated keypair for better security.
minion-1:
minion-1: Inserting generated public key within guest...
minion-1: Removing insecure key from the guest if it's present...
minion-1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> minion-1: Machine booted and ready!
==> minion-1: Checking for guest additions in VM...
==> minion-1: Configuring and enabling network interfaces...
==> minion-1: Mounting shared folders...
minion-1: /vagrant => /Users/lee/kubernetes
==> minion-1: Running provisioner: shell...
minion-1: Running: /var/folders/cb/lpcc0zbs441777bwsl1zrcbh0000gn/T/vagrant-shell20160128-14233-gm7iq9.sh
==> minion-1: Adding kubernetes-master to hosts file
==> minion-1: Provisioning network on minion
==> minion-1: Resolving Dependencies
==> minion-1: --> Running transaction check
==> minion-1: ---> Package flannel.x86_64 0:0.5.0-3.fc21 will be installed
==> minion-1: --> Finished Dependency Resolution
==> minion-1:
==> minion-1: Dependencies Resolved
==> minion-1:
==> minion-1: ================================================================================
==> minion-1: Package Arch Version Repository Size
==> minion-1: ================================================================================
==> minion-1: Installing:
==> minion-1: flannel x86_64 0.5.0-3.fc21 updates 1.6 M
==> minion-1:
==> minion-1: Transaction Summary
==> minion-1: ================================================================================
==> minion-1: Install 1 Package
==> minion-1: Total download size: 1.6 M
==> minion-1: Installed size: 7.0 M
==> minion-1: Downloading packages:
==> minion-1: warning:
==> minion-1: /var/cache/yum/x86_64/21/updates/packages/flannel-0.5.0-3.fc21.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 95a43f54: NOKEY
==> minion-1: Public key for flannel-0.5.0-3.fc21.x86_64.rpm is not installed
==> minion-1: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-21-x86_64
==> minion-1: Importing GPG key 0x95A43F54:
==> minion-1: Userid : "Fedora (21) <fedora@fedoraproject.org>"
==> minion-1: Fingerprint: 6596 b8fb abda 5227 a9c5 b59e 89ad 4e87 95a4 3f54
==> minion-1: Package : fedora-repos-21-2.noarch (@anaconda)
==> minion-1: From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-21-x86_64
==> minion-1: Running transaction check
==> minion-1: Running transaction test
==> minion-1: Transaction test succeeded
==> minion-1: Running transaction (shutdown inhibited)
==> minion-1: Installing : flannel-0.5.0-3.fc21.x86_64 1/1
==> minion-1:
==> minion-1: Verifying : flannel-0.5.0-3.fc21.x86_64 1/1
==> minion-1:
==> minion-1:
==> minion-1: Installed:
==> minion-1: flannel.x86_64 0:0.5.0-3.fc21
==> minion-1: Complete!
==> minion-1: Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
==> minion-1: Network configuration verified
==> minion-1: Disable swap memory to ensure proper QoS
==> minion-1: * INFO: sh -- Version 2015.11.09
==> minion-1:
==> minion-1: * INFO: System Information:
==> minion-1: * INFO: CPU: GenuineIntel
==> minion-1: * INFO: CPU Arch: x86_64
==> minion-1: * INFO: OS Name: Linux
==> minion-1: * INFO: OS Version: 3.17.4-301.fc21.x86_64
==> minion-1: * INFO: Distribution: Fedora 21
==> minion-1: * INFO: Installing minion
==> minion-1: * INFO: Found function install_fedora_deps
==> minion-1: * INFO: Found function install_fedora_stable
==> minion-1: * INFO: Found function install_fedora_stable_post
==> minion-1: * INFO: Found function install_fedora_restart_daemons
==> minion-1: * INFO: Found function daemons_running
==> minion-1: * INFO: Found function install_fedora_check_services
==> minion-1: * INFO: Running install_fedora_deps()
==> minion-1: which: no dnf in (/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
==> minion-1: * INFO: Adding SaltStack's COPR repository
==> minion-1:
==> minion-1:
==> minion-1: File contains no section headers.
==> minion-1: file: file:///etc/yum.repos.d/saltstack-salt-fedora-21.repo, line: 1
==> minion-1: '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">\n'
==> minion-1: * ERROR: Failed to run install_fedora_deps()!!!
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Your problem with the vagrant setup is currently an open issue (https://github.com/kubernetes/kubernetes/issues/20088#issuecomment-174528066). As a temporary fix you can do this to get it to work: https://stackoverflow.com/a/35015586/5834774
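If you want to confirm the failure mode before applying the workaround: the '<!DOCTYPE HTML PUBLIC ...' line in the provisioner output suggests the SaltStack COPR URL returned an HTML page instead of a yum .repo file. An optional check from the host could look like this (a sketch, not part of the linked fix):
vagrant ssh minion-1 -c "cat /etc/yum.repos.d/saltstack-salt-fedora-21.repo"
If the file starts with HTML rather than a [section] header, the repo download is indeed the culprit and the linked workaround applies.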