Spent a couple of hours this evening on the #openbsd IRC channel troubleshooting a display issue. We couldn't figure this one out, though we had fun trying!
Specs:
USB stick and OpenBSD 6.2 image
Known good: OpenBSD image installed successfully from this USB to a virtual environment on a separate device, using these instructions
Desktop computer
Motherboard with a 64-bit Intel processor
On-board graphics only
Currently running Ubuntu 16.04.03 Server
BIOS (legacy enabled) set to boot USB first
Monitor
40" Toshiba LCD TV Model 40UX600U
Symptoms
Start computer on Ubuntu server, displays just fine, no issues
Boot to the OpenBSD USB stick: the BIOS screen runs and the self-check passes, then the monitor displays "Unsupported Video Signal". This is not an OpenBSD message, but rather from the monitor itself.
Done so far
Cleanly formatted USB (under supervision)
Installed from USB to a virtual environment on a separate device, no issues, loads OpenBSD just fine.
Boot computer to Ubuntu, display works perfectly
Boot computer to the OpenBSD USB, display shows the error message above.
Unplug and replug VGA cable
Power off, power on monitor
Suspicions
- Page 88 of the Toshiba monitor manual shows a table of Acceptable Signal Formats for PC IN. My hunch is that the graphics driver is incompatible with this monitor.
Questions
Is there anything I can do to make this monitor work with a new OpenBSD 6.2 install?
How can I check OpenBSD monitor compatibility before embarking next time?
Ubuntu has KMS support for the nVidia graphics card, but OpenBSD only supports the old UMS driver. The OpenBSD kernel (probably, I'm not sure) is using 640x480 as the resolution, while the Linux kernel is using the highest resolution supported by your monitor.
You have two options to "fix" the problem. One is to install OpenBSD on another computer or with a different monitor (you can also use a laptop with a USB<->HDD adapter); when the installer asks if you want graphics support (or something like that, I don't remember), reply yes. Finish the installation and reboot. Then move the HDD to the original computer and power on the machine. You will see the same message, but at some point the system will run xenodm (a graphical launcher for X11 sessions, like xdm), and then the monitor will work fine. Unfortunately, you can't see the console messages.
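If you go the xenodm route and it didn't get enabled during the install, a minimal sketch of turning it on from the moved disk (assuming a stock OpenBSD 6.2 base system, run as root) would look like this:

    rcctl enable xenodm    # start xenodm on every boot
    rcctl start xenodm     # start it now; X should switch the display to a mode the monitor accepts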
Don't change xorg.conf; your problem is not only related to the monitor. The other option (quite a bit better, imho) is to buy a cheap used ATI graphics card and just install OpenBSD. ATI graphics cards are fully supported (except the newest ones) and have KMS support. You will only see the message for two or three seconds, and after that you will see the console. You may still need to run the installer on a different computer or with a different monitor, but everything will work fine after that.
Anyway, OpenBSD uses only a standard mode for the basic console; it doesn't do anything weird or unsupported. So your monitor probably has some kind of problem with the lowest VGA resolution/frequency. The problem is that the OpenBSD kernel can't switch to a higher resolution during the boot process because it doesn't support nVidia cards at the kernel level. It uses a userland driver for nVidia cards, the way Linux/BSD/Unix systems traditionally did.
If you have an old (like 10 years or so) Linux LiveCD/installer, try to run it on your computer. You will see the same problem.
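If you can reach the machine some other way (over ssh, or once xenodm is up), a quick sanity check of which console/graphics driver the kernel attached is just a dmesg grep; this is only a sketch, and the exact driver names depend on your card:

    dmesg | grep -iE 'vga|wsdisplay|drm'    # shows whether the console sits on the plain vga driver or a KMS-capable one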
Related
It's been my understanding that the OS sits on top of the hardware. Is it more or less the same to run Windows from a MacBook? When installing SQL on a Windows partition, does it install similarly to an all-Windows setup?
I've heard the kernel is the main connector between the hardware and the basic OS, so would the Mac kernel cause potential differences in operation?
Would installing a Linux OS also adhere to these rules?
Thanks, and sorry for the simple question.
Generally, you are correct to say that installing different operating systems on the same hardware would be the same. You will be able to install both Windows and Linux on the same laptop (whether that is an Asus laptop, an HP, or whatever). Once you install an OS on some hardware, and the OS is able to recognize and utilize that hardware, then you are in the clear. What's important is to install an OS that is compatible with the architecture of the computer. So if you get a Linux distro that supports the x86 architecture, then you would have to install it on hardware with an x86 architecture.
Side note: Modern OSes are very smart and have a wide range of architecture support (list of Linux architecture support, Windows support for ARM; Apple also has a wide range of architecture support).
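If you want to check what architecture a machine actually reports before grabbing an installer, a one-liner like this works on Linux and macOS (a sketch; the exact output strings vary by platform):

    uname -m    # e.g. x86_64 on a 64-bit Intel box, arm64/aarch64 on ARM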
Since you are asking about a MacBook and Windows, the short answer is: there won't be a problem installing Windows on your Mac. Apple even gives you Boot Camp to do this easily (there are also quite a few recent tutorials on this topic).
So the end experience would be almost the same as having Windows on any other machine.
I've heard the kernel is the main connector between the hardware and the basic OS, so would the Mac kernel cause potential differences in operation?
This is true. The kernel is the heart of any OS, but once you have Windows running, it will be using its own heart and won't touch the Mac kernel. So if you remove macOS and install only Windows, then only the Windows kernel will be taking control of the Mac hardware. But if you load macOS, then the Mac kernel will be running and operating on the hardware.
Will Windows run faster on Mac hardware than macOS does on the same hardware? It's debatable, and I would assume not many studies have been done in that sphere. But, at least, it will run.
But what about dual-booting your MacBook with Linux? Technically, it is possible (and the principle is the same), but Apple has placed restrictions in its firmware, limiting the option of having both macOS and a Linux distro at the same time. What's so different here from the case with Windows? Mainly that the firmware of the MacBook (the software embedded in the hardware of the laptop) doesn't allow Linux to be installed. Maybe things have changed, but that's the (no longer so recent) news I know about (I guess there are still ways of installing Linux on Mac hardware).
My understanding is that Docker on Windows currently uses a "regular VM" under the hood. WSL 2 (and Docker) will switch to using a lightweight VM. But what does this actually mean? Is it just using a smaller initial memory footprint with some memory passthrough technique, or is there more to it?
TL;DR
The big change is the move from a layer that translates Linux system calls for the Windows kernel in WSL to a full Linux kernel shipped with WSL 2. This move dramatically cuts down on virtualization overhead.
Juicy Details
Directly from the DevBlogs Post on the announcement of WSL2:
Microsoft will be shipping a Linux kernel with Windows ... This kernel has been specially tuned for WSL 2. It has been optimized for size and performance to give an amazing Linux experience on Windows.
This is a departure from the ways of the current (as of writing) WSL, which doesn't make use of a proper Linux kernel, as demonstrated in the original WSL overview from 2016.
WSL executes unmodified Linux ELF64 binaries by virtualizing a Linux kernel interface on top of the Windows NT kernel.
The WSL LXCore service runs an interpreter of sorts for native Linux system calls, as well as running its own VolFs and DriveFs operations to provide file access between WSL and Windows 10. It essentially performs the role of a traditional VM's translation layer, such as the one in VirtualBox.
Citation: MSDN Blog
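One visible side effect of this design (my own illustration, not from the linked post): inside a WSL 1 distro the reported kernel version is a string emulated by that layer, whereas WSL 2 reports the real shipped kernel:

    uname -r    # WSL 1: something like 4.4.0-18362-Microsoft (emulated)
                # WSL 2: a real kernel version such as 4.19.x-microsoft-standard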
Little is known as of yet about the exact system employed for WSL 2; what we do know comes from the Build 2019 WSL 2 talk. To help answer the question regarding file-system changes and the lightweight VM:
Here we see that the Linux kernel runs alongside the NT kernel (as a Windows service) instead of as a virtualized environment on top of it. The lightweight VM likely comes into play to facilitate the necessary interactions between the two kernels.
This gives a peek into the inner workings of that interoperability layer. As discussed verbally in the Build 2019 talk, the two kernels serve files to each other via natively hosted file servers (inaccessible to Windows userspace by means other than WSL 2).
Again, much is still up in the air from our perspective as users, given the limited details available to us at the time of writing.
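For what it's worth, on Insider builds that already ship WSL 2, switching a distro between the two backends is exposed through wsl.exe; a sketch, where "Ubuntu" is whatever name wsl -l -v reports for your distro:

    wsl -l -v                      # list installed distros and which WSL version each one uses
    wsl --set-version Ubuntu 2     # convert the "Ubuntu" distro to the WSL 2 backend
    wsl --set-default-version 2    # make new distro installs default to WSL 2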
I have a Raspberry Pi with Raspbian Wheezy installed. I'm using the TightVNC server on the Raspberry Pi and RealVNC on my Mac to connect to it. However, when I log in with RealVNC, I'm given a new session, with my own desktop, applications, etc.
I want to log in to the SAME session already running on the Raspberry Pi, so I can refresh the browser, etc. (We're using this to display a company desktop application.)
How can I achieve this?
I don't believe this is supported by TightVNC (which I think only does "Virtual" sessions). But I may be wrong...
The answer here: https://serverfault.com/questions/27044/how-to-vnc-into-an-existing-x-session suggests a few alternatives (at least ones which work on Fedora-based Linux distros).
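For example, one of the usual suggestions from that thread is x11vnc, which attaches to the X session already shown on the Pi's own display; a sketch, assuming the desktop runs on display :0 as the pi user:

    sudo apt-get install x11vnc
    x11vnc -display :0 -auth /home/pi/.Xauthority -usepw -forever    # share the existing desktop, prompt to set a VNC password, keep serving after disconnects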
I know RealVNC can do it (it's known as "User Mode" or "Service Mode" as opposed to "Virtual Desktop" mode), but depending on your users, you may have to license it, or the free mode may suffice.
So I am a big fan of VMs; I actually got enough experience to switch my development box to a Linux distro. At this point I would like to get more experience with BSD, and hope to do this with a VM. So the question I have is: what configuration is correct?
BSD...
FreeBSD
OpenBSD
PC-BSD (I know it is FreeBSD with KDE, but it might be simpler to get started with)
Which virtual machine is best for these guests (on a Linux host)?
VMWare Workstation (have a license for 7)
Virtualbox 4
QEmu
Other?
Any suggestions from experts would be great. I was able to get FreeBSD and PC-BSD installed on VirtualBox 4; however, I get a horrible resolution that I can't seem to fix.
I found that the 'right virtual machine' requires some tinkering. VirtualBox ran Plan 9 really slowly; qemu+kvm ran it hundreds of times faster. qemu+kvm also ran an Ubuntu guest at what felt like faster-than-hardware speed (at least for booting :), but I've read accounts from people who say the exact opposite, that VirtualBox outpaced qemu+kvm. Test them both :) that way you get the experience and can tell which one is more usable for your environments.
As for the BSDs, I ran OpenBSD for years and really liked it. You probably can't go wrong with FreeBSD. Learning both wouldn't be a bad idea -- they have different feature sets and excel at different tasks.
Don't let KDE in PC-BSD sway you too much; the various KDE packages ought to be available in all their ports trees. Or try life without KDE or GNOME for a while.
I run FreeBSD 8-STABLE guests in VirtualBox 4.0.4, running on Windows (XP & 7) systems. It works, but there are some caveats. Seamless mode (which you might use with Linuxen) doesn't work, and it takes some configuring to get things set up exactly right. See http://wiki.freebsd.org/VirtualBox for the settings you need.
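As a rough sketch of the guest-side part (from memory, so treat the wiki page as the authoritative list): build the guest additions from ports inside the FreeBSD guest and enable their services in /etc/rc.conf:

    cd /usr/ports/emulators/virtualbox-ose-additions && make install clean
    echo 'vboxguest_enable="YES"' >> /etc/rc.conf
    echo 'vboxservice_enable="YES"' >> /etc/rc.conf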
I played with virtualized PC-BSD, and it worked about the same as FreeBSD, since it is FreeBSD. PC-BSD has some nice features for the newbie to take some of the pain out of installing software.
I have also tried NetBSD as a VirtualBox guest. It "works" (for some definitions of work), but you have to launch the VM with something along the lines of "vboxsdl.exe --nopatm --startvm [machine]". This worked for me on one Windows box but not on another. I didn't get around to seeing if X works.
I have not tried OpenBSD, but I seem to recall there being images out there, so it should work to some degree.
I don't have experience with other virtualization software, so can't help you there.
I am using VMware Server 1.0.7 on Windows XP SP3 at the moment to test software in virtual machines.
I have also tried Microsoft Virtual PC (I do not remember the version, it could be 2004 or 2007), and VMware was way faster at the time.
I have heard of Parallels and VirtualBox, but I did not have the time to try them out. Does anybody have benchmarks on how fast each of them is (or some other)?
I searched for benchmarks on the web, but found nothing useful.
I am looking primarily for free software, but if a paid option is really better than the free ones, I would pay for it.
Also, if you are using (or know of) a good virtualization software but have no benchmarks for it, please let me know.
From my experience of Parallels and VMware (on the PC and more extensively on the Mac) the difference between any 2 competing versions of the software is usually quite small and often 'reversed' in the next releases.
I never found Parallels to be much faster (or slower) than VMware - it often would be a case of the state of the VM I was running, the host machine itself and the app(s) I was running within the VM. If VMWare brought out a new release which did something faster, you could be sure that Parallels would improve their performance in that area in the next release, too.
In the end I settled on VMware Fusion, and the key reason for this was just that it played nicely with VMware Workstation on the PC. I had trouble taking Parallels VMs from the Mac to the PC and back again, and this worked fine with VMware. Finally, though this is less of a concern, I was unhappy that sometimes it felt as if Parallels would release a version without proper regression testing: you'd get the up-to-date version and find that networking was suddenly inexplicably broken until they released another patch a few days later. I doubt this is still the case, but VMware always felt a little more 'in control' and professional to me.
I'd go for a solution that you can get running in a stable fashion on your PC, that is compatible with your other requirements (such as your co-workers' platforms and your overall budget). You can waste your lifetime trying to measure which one is faster at any given task!
One other thing - it's worth checking the documentation that comes with the software, and any forums etc., before making judgements about performance. For instance, in my experience, throwing huge amounts of RAM at your VM (at the expense of free RAM in the host system) does NOT automatically make it faster; it's better to split the RAM up evenly, and certainly keep an eye on any recommended figure. In VMware, that recommended figure is a good guide.
You'll get the best performance if your hardware supports hardware virtualization, such as AMD's AMD-V or Intel's VT, and you enable this feature both in the computer's BIOS and in your virtualization software.
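If VirtualBox ends up on your shortlist, those knobs can be flipped per VM from the command line; a sketch, where "XP-test" is a hypothetical VM name:

    VBoxManage modifyvm "XP-test" --hwvirtex on    # use Intel VT / AMD-V if the CPU and BIOS expose it
    VBoxManage modifyvm "XP-test" --memory 1024    # guest RAM in MB; don't starve the host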
For Microsoft solutions, you need at least Virtual PC 2007 or Virtual Server 2005 R2 SP1, or Hyper-V on Windows Server 2008 (I don't expect you'll rebuild your system just to run Hyper-V, but I thought I'd mention it).
Subjectively I haven't noticed any difference between Virtual PC and VMware Workstation performance; I'm using VMware now as it supports USB virtualization, which Virtual PC doesn't.
You also generally need to install appropriate custom, virtualization-aware, drivers in the guest OS, as the standard drivers are expecting to talk to real hardware. In Virtual PC and Server these are called Additions, in VMware they are VMware Tools.
Anandtech has some great info on virtualization. Although there aren't any benchmarks, it provides great insight into why it is so difficult to do proper virtualization benchmarks. I cannot suggest a specific product, because it depends very much on your requirements.