I am running Ubuntu 12.04 in a VM on a Fedora 18 host using QEMU/KVM and libvirtd. When I fire up virsh and run the list command, it doesn't show any domains running, even though my VM is running just fine. Any idea what I am doing wrong?
It may be caused by the user account from which you issued the virsh command. It's normal that you start your VM with sudo but see nothing under your other accounts.
You can see your VMs via sudo virsh list --all
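If you prefer not to use sudo, it may also help to point virsh at the system-wide libvirt instance explicitly; without this, an unprivileged user talks to its own per-user qemu:///session instance, which keeps a separate list of domains. A minimal example (the URI is the standard libvirt system connection, verify it matches your setup):
virsh --connect qemu:///system list --all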
I have a CentOS image running in Docker. I don't have root access to this image. The only thing I can control is the dockerfile.
Anyway, I have a yum command in my Dockerfile that installs postgresql. From the output I can see that the yum command succeeds.
Now when I SSH into the host and type "psql", the console reports command not found, most likely because the path is not set. But since I don't have root access, my hands are tied. I tried to use the locate command, but CentOS doesn't seem to have it by default.
My question is twofold:
How do I locate the postgresql client on a CentOS Docker image that I don't have root access to? I tried the expected default paths like /usr/lib or /etc/ but no luck.
Is there anything I can print/echo in my Dockerfile that could help me get the location of the postgresql client?
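Since the image is RPM-based, one low-privilege option is to ask the RPM database where yum placed the files, which needs neither root nor locate. A rough sketch for the Dockerfile, assuming the package is literally named postgresql (adjust to whatever your yum line installs):
RUN rpm -ql postgresql | grep '/bin/'         # list files owned by the package, including the psql binary
RUN find / -name psql -type f 2>/dev/null     # brute-force fallback search for the client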
I've got a problem completing the pgAdmin 4 installation via the sudo /usr/pgadmin4/bin/setup-web.sh command.
During this process the installer does not recognize that Apache is running and asks me if I want to start it:
The Apache web server is not running. We can enable and start the web server for you to finish pgAdmin 4 installation. Continue (y/n)? y
Then it just spits out some errors:
Too few arguments.
Error enabling . Please check the systemd logs
Too few arguments.
Error starting . Please check the systemd logs
So far I haven't found where the logs are stored.
As for Apache, I am quite sure that my server is running, because I can connect to it through a browser, phpMyAdmin is working properly, and service apache2 status returns * apache2 is running. To my understanding, apache2 is just a fancy name for the httpd service, and there is no other service called simply apache.
PostgreSQL seems to work properly from the command line; I haven't tested whether I can connect to it yet, but that shouldn't be the issue, right?
I am using
**PostgreSQL:** 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1)
**Ubuntu:** Ubuntu 20.04 LTS
**Server:** Apache/2.4.41 (Ubuntu)
I had the same issue on Debian 10 and Ubuntu 20. The /usr/pgadmin4/bin/setup-web.sh script uses 'uname -a', which doesn't contain the "Debian" identifier in its output. Updating it to read /proc/version instead allows APACHE to be set to the Debian variant, apache2.
Change:
UNAME=$(uname -a)
To:
UNAME=$(cat /proc/version)
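If you prefer not to edit the script by hand, a sed one-liner along these lines should make the same substitution (the path is the usual pgAdmin 4 install location, and -i.bak keeps a backup; double-check both before running):
sudo sed -i.bak 's|UNAME=\$(uname -a)|UNAME=$(cat /proc/version)|' /usr/pgadmin4/bin/setup-web.sh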
I had a similar problem with Ubuntu running inside WSL 2. I managed to resolve it by modifying the /usr/pgadmin4/bin/setup-web.sh script. I moved these lines outside of the conditional:
IS_DEBIAN=1
APACHE=apache2
This allowed the installation to progress beyond the Too few arguments error. There was still an error, however:
System has not been booted with systemd as init system (PID 1). Can't operate.
Error restarting apache2. Please check the systemd logs
I resolved this by running:
sudo service apache2 restart
After this I tried bringing up the admin page by visiting http://127.0.0.1/pgadmin4 from the Windows host. This still didn't work, and I had to connect using the Ubuntu machine's IP address (you can find it via ifconfig), which finally allowed me to see the login page.
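For reference, a couple of ways to look up that address from inside the WSL Ubuntu instance, in case ifconfig isn't installed (both commands ship with Ubuntu 20.04):
hostname -I          # prints the IP addresses assigned to the WSL instance
ip addr show eth0    # shows the eth0 interface, including its inet address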
I am trying to install Kubernetes on Windows Server 2016.
I tried to install minikube, and got some errors.
This is the tutorial that I followed:
https://www.assistanz.com/installing-minikube-on-windows-2016-server/
This is the command + error that I got:
PS C:\Windows\system32> minikube start –vm-driver=hyperv –hyperv-virtual-switch=Minikube
Starting local Kubernetes v1.10.0 cluster...
Starting VM... Downloading Minikube ISO
170.78 MB / 170.78 MB [============================================] 100.00% 0s
E1106 19:29:10.616564 11852 start.go:168] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path.
Retrying.
E1106 19:29:10.689675 11852 start.go:174] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
Does anyone know how to solve it?
I googled it, but no luck.
Thanks!
I was never able to get the config parameters to work with minikube start.
I was able to get past this error using the minikube config commands in PowerShell (should also work at a command prompt):
minikube config set vm-driver hyperv
minikube config set hyperv-virtual-switch ExternalSwitch
minikube config view
minikube delete
minikube start
For more information on the command, run: minikube config -h
Looking at the documentation you have provided, I have noticed that the screenshot shows a slight difference from the command they've quoted.
I have also found this command in another piece of documentation from Kubernetes here, showing the same command as the one in the screenshot.
I suggest you try the following command:
minikube start --vm-driver=hyperv --hyperv-virtual-switch=Minikube
It is true that the OP has pasted an incorrect command, because it uses - instead of --. I tried to pass those arguments to minikube and all you get is an instant error, so the issue must be somewhere else. I remember having a similar issue, and it got resolved after deleting the .kube and .minikube folders and trying to run it again.
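On Windows those two folders normally live in the user profile; a rough PowerShell sketch of that cleanup (this wipes all local cluster state, so double-check the paths first):
Remove-Item -Recurse -Force "$HOME\.kube"
Remove-Item -Recurse -Force "$HOME\.minikube"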
After taking a closer look, this tutorial is intended for installing minikube inside a Windows Server 2016 virtual machine, so you need hardware capable of nested virtualization:
Prerequisites: The Hyper-V host and guest must both be Windows Server 2016/Windows 10 Anniversary Update or later. VM configuration version 8.0 or greater. An Intel processor with VT-x and EPT technology -- nesting is currently Intel-only. There are some differences with virtual networking for second-level virtual machines. See "Nested Virtual Machine Networking".
So the main question is: is that true in your scenario? Are you trying to perform your steps on a Windows Server Hyper-V virtual machine with the nested virtualization feature?
If you confirm that, I have the technical means to check it in that scenario.
Otherwise I recommend using the "traditional way" of running minikube on Windows, following, for example, this tutorial.
When trying to run minikube with hyperkit, I was getting errors about xhyve not being installed. I installed that and reran minikube start --vm-driver hyperkit with no issues.
I was under the impression that hyperkit was a replacement for xhyve, not a supplement to it.
When I run ps I see both com.docker.hyperkit and docker-machine-driver-xhyve running.
How can I confirm that minikube is correctly using hyperkit?
Docker for Mac has changed its virtualization layer a few times over the last years, and that can confuse users after environment updates.
If the process list shows both com.docker.hyperkit and xhyve processes, it is probably due to a docker-machine environment that was previously set up using docker-machine-driver-xhyve.
You may consider cleaning up the installation by
stopping Docker (from the command line or from the tray icon),
then removing the machines created by the docker-machine tool.
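To remove those leftover machines, the docker-machine CLI itself can list and delete them; a minimal sketch (the machine name below is just an example, use whatever docker-machine ls reports):
docker-machine ls                        # list existing machines and their drivers
docker-machine rm xhyve-machine-name     # remove the stale xhyve-backed machine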
I can also suggest removing the current minikube installation using
minikube stop && minikube delete
and starting a fresh one with:
minikube start --v=10 --vm-driver=hyperkit
That will add more verbose output while the minikube environment is being built.
This will give you the current driver for the current machine. Replace the second "minikube" with the name of your profile if you're using the --profile flag.
$ cat ~/.minikube/machines/minikube/config.json | grep DriverName
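With the hyperkit driver in use, the matching line in that file should look roughly like this (the exact surrounding JSON varies by minikube version):
"DriverName": "hyperkit",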
Strange, considering Hyperkit is supposed to replace xhyve eventually.
Make sure Hyperkit is built/installed and referenced by your PATH.
And that you are using the latest docker-ce for Mac.
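A quick way to check both from a terminal (the brew formula is mentioned only as one common install route; adjust if you built hyperkit from source):
command -v hyperkit    # prints the binary's location if it is on your PATH
docker --version       # confirms which Docker release you are running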
Use this command to get a list of each hypervisor instance that's running with hyperkit:
$ ps -ef | grep hyperkit
If minikube is running in hyperkit then the name 'minikube' should show up in the output:
0 29305 1 0 Tue06PM ?? 515:01.32 /usr/local/bin/hyperkit -A -u -F /Users/me/.minikube/machines/minikube/hyperkit.pid -c 2 -m 2000M -s 0:0,...
The instance labeled as 'com.docker.hyperkit' is the process that's being used by Docker and is NOT the minikube instance.
I just installed CentOS 6.3 on a new computer and am unable to SSH to it from our computer running Fedora 16. They are both on the same network.
Some facts:
- I can ping it from the Fedora machine.
- I can SSH from the CentOS computer to itself.
- I have looked into hosts.allow and hosts.deny, set SELinux to permissive, and tried with iptables disabled on the Fedora computer.
I am fresh out of ideas...
Thanks
Do you have fail2ban running?
Do you have denyhosts running?
Do you have iptables allowing TCP 22? (See the example check after this list.)
Do you have a line in your sshd_config that refers to "AllowUsers"? (Most don't, but some do, and if yours does, you need your account listed on that line.)
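For the iptables question above, on CentOS 6 you can check for and, if needed, open port 22 roughly like this (inserting at the top of INPUT and saving are assumptions about a default setup, so review before applying):
iptables -L INPUT -n --line-numbers | grep 22    # is there already a rule covering port 22?
iptables -I INPUT -p tcp --dport 22 -j ACCEPT    # temporarily allow inbound SSH
service iptables save                            # persist the rule set on CentOS 6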
Can you run tail -f /var/log/secure on that machine while trying to log in from the second machine and spot the issue? If not, paste the output from that log here for me to comment on.
A long shot, but you might try service sshd restart and then try again to see if that helps. Go ahead and run tail /var/log/messages while restarting that daemon to see if you spot anything unusual. If you spot the issue, great; if you don't, post the output here for me to comment on.
Last, do cp /etc/ssh/sshd_config /etc/ssh/sshd_config.back, then take a known good working sshd_config from another machine, place it over the top of yours, restart the daemon, and try again.
My money is on seeing something that helps us in /var/log/secure.