Getting a VZVirtualMachine for a running virtual machine - Swift

I would like to write a thin wrapper around the Virtualization framework that offers Docker-style commands to start and stop virtual machines. One hurdle I found is the inability to create a VZVirtualMachineConfiguration and a VZVirtualMachine that refer to an already running machine.
So, if I want to use my cli tool like this:
# creates VZVirtualMachineConfiguration, VZVirtualMachine, starts VM and detaches process
vm-cli run --assorted-vm-parameters
vm-cli stop
# recreating the VZVirtualMachineConfiguration with the same hardware identifier and
# macMachineIdentifier gives a VZVirtualMachine that refers to a new, stopped virtual machine
Is there a way to control a virtual machine of another process without having to write a daemon that maintains a list of created virtual machines?
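For context, this is roughly what the recreate step looks like, sketched with the generic platform's VZGenericMachineIdentifier for brevity (the macOS platform additionally needs a hardware model and auxiliary storage, and the identifier file path here is just an example):
import Foundation
import Virtualization

// Reuse the machine identifier persisted by an earlier "vm-cli run", if any.
let idURL = URL(fileURLWithPath: "machine-id.bin")
let machineIdentifier: VZGenericMachineIdentifier
if let data = try? Data(contentsOf: idURL),
   let restored = VZGenericMachineIdentifier(dataRepresentation: data) {
    machineIdentifier = restored
} else {
    machineIdentifier = VZGenericMachineIdentifier()
    try? machineIdentifier.dataRepresentation.write(to: idURL)
}

let platform = VZGenericPlatformConfiguration()
platform.machineIdentifier = machineIdentifier

let configuration = VZVirtualMachineConfiguration()
configuration.platform = platform
// ... boot loader, cpuCount, memorySize and storage devices omitted; a real
// configuration needs them and should pass try configuration.validate() ...

let vm = VZVirtualMachine(configuration: configuration)
print(vm.state == .stopped)  // true: a brand-new, stopped VM owned by this process
As far as I can tell, a VZVirtualMachine only controls virtual machines created in the same process, so reusing the identifiers does not attach to the machine started by the earlier run.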

Related

Running a Node.js server in Azure Release Pipeline

I have a Node.js web server which, as part of a CD process, I want to deploy to a staging server using Azure Release Pipeline. The problem is that if I just run a PowerShell script:
# Run-Server.ps1
node my-server.js
the pipeline will hang, since the node process blocks the PowerShell session.
What I want is to be able to launch the service, and then in the next deployment just kill the node process and run it again with the new code.
So I figured I'd use Start-Process. If I run it locally:
> Start-Process node -ArgumentList ./server.js
I can then exit the PowerShell session and the server will continue running. So I thought I could implement it the same way in my Release Pipeline.
But it turns out that once the Release Pipeline finishes running, the server is no longer available - the node process is gone.
Can you help me figure out why that is? Is there another way of achieving this? I suppose it's a pretty common use case, so there must be best practices out there regarding how this should be done.
Another way to achieve this is to use a full-blown web server to host and manage the node process. For example, on Windows you could use IIS with the iisnode module. This is more reliable and gives you a few other benefits:
process management (automatic start, restart on failure, etc.)
security - you can configure the user that the node process will run as
scalability on multi-core CPUs
Then the process of app deployment would be just copying files to the right directory - the web server should pick up the change automatically.
By default, a pipeline job cleans up all of the child processes it spawned when it exits; this is what is killing your node server.
Set the Process.Clean variable to false to override the default behavior.
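For reference, a minimal sketch of where that variable could be set: in a classic release pipeline it goes into the pipeline's Variables tab; in a YAML pipeline it might look like this (the step shown is just the Start-Process call from the question):
variables:
  Process.Clean: 'false'  # keep child processes (the detached node server) alive after the job
steps:
- powershell: Start-Process node -ArgumentList ./server.js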

NixOS within NixOS?

I'm starting to play around with NixOS deployments. To that end, I have a repo with some packages defined, and a configuration.nix for the server.
It seems like I should then be able to test this configuration locally (I'm also running NixOS). I imagine it's a bad idea to change my global configuration.nix to point to the deployment server's configuration.nix (who knows what that will break); but is there a safe and convenient way to "try out" the server locally - i.e. build it and either boot into it or, better, start it as a separate process?
I can see docker being one way, of course; maybe there's nothing else. But I have this vague sense Nix could be capable of doing it alone.
There is a fairly standard way of doing this that is built into the default system.
Namely nixos-rebuild build-vm. This will take your current configuration file (by default /etc/nixos/configuration.nix), build it, and create a script that lets you boot the configuration in a virtual machine.
Once the build has finished, it leaves a result symlink in the current directory. You can then boot the virtual machine by running ./result/bin/run-$HOSTNAME-vm and play around with it.
TL;DR:
nixos-rebuild build-vm
./result/bin/run-$HOSTNAME-vm
nixos-rebuild build-vm is the easiest way to do this; however, you could also import the configuration into a NixOS container (see Chapter 47, Container Management, in the NixOS manual and the nixos-container command).
This would be done with something like:
containers.mydeploy = {
  privateNetwork = true;
  config = import ../mydeploy-configuration.nix;
};
Note that you would not want to specify the network configuration in mydeploy-configuration.nix if it's static as that could cause conflicts with the network subnet created for the container.
As you may already know, system configurations can coexist without any problems in the Nix store. The problem here is running more than one system at once. For this, you need isolation or virtualization tools like Docker, VirtualBox, etc.
NixOS Containers
NixOS provides an efficient implementation of the container concept, backed by systemd-nspawn instead of an image-based container runtime.
These can be specified declaratively in configuration.nix or imperatively with the nixos-container command if you need more flexibility.
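For the imperative route, a rough sketch (the container name and configuration file are examples; check nixos-container --help for the exact flags on your version):
sudo nixos-container create mydeploy --config-file ./mydeploy-configuration.nix
sudo nixos-container start mydeploy
sudo nixos-container root-login mydeploy   # get a shell inside the container
sudo nixos-container destroy mydeploy      # tear it down when done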
Docker
Docker was not designed to run an entire operating system inside a container, so it may not be the best fit for testing NixOS-based deployments, which expect and provide systemd and some services inside their units of deployment. While you won't get a good NixOS experience with Docker, Nix and Docker are a good fit.
UPDATE: Both 'raw' Nix packages and NixOS run in Docker. For example, Arion supports images from plain Nix, NixOS modules and 'normal' Docker images.
NixOps
To deploy NixOS inside NixOS it is best to use a technology that is designed to run a full Linux system inside.
It helps to have a program that manages the integration for you. In the Nix ecosystem, NixOps is the first candidate for this. You can use NixOps with its multiple backends, such as QEMU/KVM, VirtualBox, the (currently experimental) NixOS container backend, or you can use the none backend to deploy to machines that you have created using another tool.
Here's a complete example of using NixOps with QEMU/KVM.
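For orientation, a minimal NixOps network expression for the libvirtd (QEMU/KVM) backend looks roughly like this (option names are from NixOps 1.x and should be double-checked against your version):
# vm.nix
{
  machine = { config, pkgs, ... }: {
    deployment.targetEnv = "libvirtd";  # run this machine as a local QEMU/KVM guest
    services.openssh.enable = true;
  };
}
You would then run something like nixops create ./vm.nix -d test followed by nixops deploy -d test.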
Tests
If your goal is to run automated integration tests, you can make use of the NixOS VM testing framework. This uses Linux KVM virtualization (exposing /dev/kvm in the sandbox) to run integration tests on networks of virtual machines, and it runs them as a derivation. It is quite efficient because it does not have to create virtual machine images; instead it mounts the Nix store in the VM. These tests are "built" like any other derivation, making them easy to run.
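A minimal sketch of such a test, written against the pkgs.nixosTest interface in nixpkgs (attribute names may differ slightly between nixpkgs versions):
pkgs.nixosTest {
  name = "hello-vm";
  nodes.machine = { pkgs, ... }: {
    environment.systemPackages = [ pkgs.hello ];
  };
  testScript = ''
    machine.start()
    machine.wait_for_unit("multi-user.target")
    machine.succeed("hello")
  '';
}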
Nix store optimization
A unique feature of Nix is that you can often reuse the host Nix store, so being able to mount a host filesystem in the container/VM is a nice feature to have in your solution. If you are creating your own solution, depending on your needs, you may want to postpone this optimization, because it becomes a bit more involved if you want the container/VM to be able to modify the store. NixOS tests solve this with an overlay filesystem in the VM. Another approach may be to bind-mount the Nix store and forward the Nix daemon socket.
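A rough sketch of that last approach using systemd-nspawn (the container root path is a placeholder; any runtime that supports bind mounts works similarly):
sudo systemd-nspawn -D /path/to/container-root \
  --bind-ro=/nix/store \
  --bind=/nix/var/nix/daemon-socket \
  --setenv=NIX_REMOTE=daemon
Here NIX_REMOTE=daemon makes nix commands inside the container build through the host's nix-daemon instead of writing to the (read-only) store themselves.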

Options to restart services on multiple Azure VMs

We currently have multiple Azure VMs, all on the same virtual network. We would like to run a script which, in case of failure, restarts the services on a VM, but we want to run that script on all VMs at the same time (in parallel).
I have currently tried a runbook, which works, but it is not an option since it takes about 5 minutes to complete.
Another option seems to be Invoke-Command, but that would mean opening some ports (I am not sure whether an endpoint needs to be opened, since the machines are on the same virtual network), which is not very convenient.
Does someone have another idea, maybe?
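For reference, the Invoke-Command route mentioned above already fans out to all machines in parallel over WinRM (which is where the port/endpoint concern comes from); a minimal sketch with hypothetical VM and service names:
Invoke-Command -ComputerName vm-01, vm-02, vm-03 -ScriptBlock {
    Restart-Service -Name 'MyService'   # hypothetical service name
}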

How does the Sulley fuzzing framework procmon work on a virtual machine?

From my understanding, the process_monitor stores crashbin information locally. If this is running on a virtual machine and a test case causes the process and target machine to become unresponsive, vmcontrol would then revert to an earlier snapshot. How is the crashbin information displayed to the web interface, or accessed at this point if it was lost on the revert to an earlier snapshot?
After walking through most of the code in the Sulley environment, I found that the restart_target() method in the sessions.py module first calls for a restart of the virtual machine if vmcontrol is available, and then tries to restart the process via the procmon if it is available. By switching the order of these, I can avoid losing the log information from the crashbin unless the entire target machine becomes unresponsive.
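A sketch of the reordering described above (this is not Sulley's actual code; the names are simplified):
def restart_target(self, target):
    # Prefer restarting through the process monitor so crashbin data is preserved...
    if target.procmon:
        target.procmon.restart_target()
    # ...and only revert the VM snapshot when the whole target machine is unresponsive.
    elif target.vmcontrol:
        target.vmcontrol.restart_target()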

Should I use VirtualBox on a production server?

I just completed my Vagrant box for a product made by my company. I needed that because we're running the same product on different operating systems. I want to serve sites inside virtual machines, and I have questions:
Am I on the right track? Can a virtual machine be used as a production server?
If you say yes:
How should I keep VirtualBox running? Is there a script or something similar to restart it if something crashes?
What happens if somebody accidentally runs the "vagrant destroy" command? What should I do if I don't want to lose my database and user-uploaded files?
We have some import scripts that run at the beginning of every month. Sometimes they use 7 GB of RAM (running 1500 lines of MySQL code with lots of asynchronous instances). Can it be dangerous to run this inside VirtualBox?
Are there any case-study blog posts about this?
Vagrant is mainly for development environments. I personally recommend using a Type 1 (bare-metal) hypervisor; VirtualBox is a desktop virtualization tool (Type 2, running on top of a traditional OS) and is not recommended for production.
AWS is OK; the VMs run as Xen guests, and Xen runs on bare metal ;-)
I wouldn't.
The problem with Vagrant + VirtualBox is that these are development instances. I would look at Amazon Web Services for actually deploying your project into the wild.