I'm trying to build a project on a rather underpowered system (an Intel Compute Stick with 1 GB of RAM). Some of the compilation steps run out of memory. I've configured icecc so that it can send some jobs to a more powerful machine, but it seems that icecc will always run at least one job on the local machine.
I've tried setting ICECC_MAX_JOBS="0" in /etc/icecc/icecc.conf (and restarting iceccd), but the comments in this file say:
# Note: a value of "0" is actually interpreted as "1", however it
# also sets ICECC_ALLOW_REMOTE="no".
I also tried disabling the icecc daemon on the compute stick by running /etc/init.d/icecc stop. However, it seems that icecc is still putting one job on the local machine (perhaps, with the daemon off, it's putting all jobs on the local machine?).
The project is makefile-based, and it appears that I'm stuck on a bottleneck step where calling make with -j > 1 still only issues one job, and this compilation is exhausting the system's memory.
The only workaround I can think of is to compile on a different system and then ship the binaries back over, but I expect to enter a tweak/build/evaluate cycle on this platform, so I'd like to be able to work from the compute stick directly.
Both systems are running Ubuntu 14.04, if that helps.
I believe this is not supported: if there are network issues, icecc falls back to compiling on the host machine itself. The best solution would be to compile on the remote machine and copy back the resulting binary.
Have you tried setting ICECC_TEST_REMOTEBUILD in the client's terminal (where you run make)?
export ICECC_TEST_REMOTEBUILD=1
In my tests this always forces all sources to be compiled remotely.
Just remember that linking is always done on the local machine.
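For example, a session forcing remote builds might look like this (a sketch; /usr/lib/icecc/bin is the wrapper directory the Ubuntu icecc package usually installs, and -j4 is illustrative):
# Force all compile jobs to be scheduled remotely
export ICECC_TEST_REMOTEBUILD=1
# Put the icecc compiler wrappers first in PATH (path assumed from the Ubuntu package)
export PATH=/usr/lib/icecc/bin:$PATH
make -j4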
I am currently configuring a virtual machine to work as an agent within Azure (with Ubuntu as the image), where the additional configuration runs through a cloud-init file.
Among other things, I have the 'fix' below within bootcmd, plus multiple steps within runcmd.
However, the machine already reports the state "running" within the Azure portal while still going through the cloud configuration phase (cloud_config_modules). As a result, pipelines see the machine as ready for use while not everything is installed/configured yet, and they break.
I tried a couple of things that did not have the desired effect, after which I stumbled on the following article/bug:
The proposed solution worked; however, I switched to a RHEL image and it stopped working.
I noticed this image does not use walinuxagent as the solution states but waagent, so I tried replacing that, as in the example below, without any success.
bootcmd:
# Delay waagent until cloud-init's final stage has completed
- mkdir -p /etc/systemd/system/waagent.service.d
# printf instead of echo, so the \n is interpreted as a newline under every shell
- printf '[Unit]\nAfter=cloud-final.service\n' > /etc/systemd/system/waagent.service.d/override.conf
# Drop the After=multi-user.target ordering from cloud-final to avoid an ordering cycle
- sed "s/After=multi-user.target//g" /lib/systemd/system/cloud-final.service > /etc/systemd/system/cloud-final.service
- systemctl daemon-reload
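To verify whether the drop-in gets picked up after boot, I believe something like the following should show the override together with the unit (illustrative check):
# Print waagent.service along with any drop-in files systemd applied
systemctl cat waagent.service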
After this, I also tried moving the runcmd steps into bootcmd. This resulted in a boot that took ages and eventually froze.
Since I am not that familiar with RHEL or Linux overall, I wanted to ask whether anyone has suggestions I could additionally try.
(Perhaps some other configuration to make waagent wait on cloud-final.service?)
However the machine already had the state running, while still running the cloud configuration phase (cloud_config_modules).
Could you please be more specific? Where did you read the machine state?
The reason I ask is that cloud-init status will report status: running until cloud-init is done running, at which point it will report status: done
What is the purpose of waiting until cloud-init is done? I'm not sure exactly what you are expecting to happen, but here are a couple of things that might help.
If you want to execute a script "at the end" of cloud-init initialization, you can put the script directly in runcmd. If you want to wait for cloud-init from an external script, you can run cloud-init status --wait, which prints a visual indicator and eventually returns once cloud-init is complete.
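For example (both subcommands are part of the stock cloud-init CLI; --long just adds detail):
# Block until all cloud-init stages have finished; the exit code reflects success
cloud-init status --wait
# Inspect the final state in more detail
cloud-init status --long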
On all but the oldest Azure Linux VM images, cloud-init rather than WALinuxAgent acts as the VM provisioner. The VM is marked provisioned by cloud-init's Azure datasource very early during cloud-init processing (source), before any of the cloud-init modules configurable with user data run. WALinuxAgent is only responsible for provisioning Azure VM extensions. It does not appear to be possible to delay sending the 'VM ready' signal to Azure without modifying the VM image and patching the source code of the cloud-init Azure datasource.
New to MDT.
So I am following through the MS step by step guides:
https://learn.microsoft.com/en-us/windows/deployment/windows-10-poc
https://learn.microsoft.com/en-us/windows/deployment/windows-10-poc-mdt
I am at step 28 (in the second guide):
Deploy Windows 10 in a test lab using Microsoft Deployment Toolkit
Where the deployment wizard has been launched in a VM on the host system, and I have watched the process continue for an hour. It finally finishes, but it does not create the .wim on the server share as expected and as referred to in the Bootstrap.ini:
Bootstrap.ini
[Settings]
Priority=Default
[Default]
DeployRoot=\\SRV1\MDTBuildLab$
UserDomain=CONTOSO
UserID=MDT_BA
UserPassword=pass#word1
SkipBDDWelcome=YES
I have verified that the share "DeployRoot" exists and can be connected to using the provided credentials and that the share has the correct permissions to create/delete files.
Not sure what I'm missing, but my expectation was that a .wim should have been created in \\SRV1\MDTBuildLab$\Captures, but there is nothing in that folder.
Just before stopping, the deployment wizard reboots several times in quick succession, which doesn't appear correct to me; but as I have never witnessed a successful capture, I can't say for sure this isn't what's supposed to happen.
I'm not even sure where I can view any log files to figure out why it fails.
Any assistance appreciated!
Further Info:
Activated monitoring. It gets to step 86 of 93. The last thing I see is "Applying WinPE (BD)" or something similar, and then it restarts. Then several quick reboots occur (the loading bar appears for a second or two and then it reboots), which I think are failing. Finally it gives up! The process never completes!
When I attempt to mount the client REFW10X64-001.vhdx to check the logs, I am greeted with this message:
The disk image isn't initialized, contains partitions that aren't recognizable, or contains volumes that haven't been assigned drive letters. Please use the Disk Management snap-in to make sure that the disk, partitions, and volumes are in a usable state.
So it looks like the last step totally screwed the disk! Which would explain the last several boots failing to load anything.
So no errors no warnings, no logs, no finish and no wim generated.
How do I troubleshoot this?
I know this post is old, but the normal behavior would be as follows:
Using the boot image, you boot into WinPE
The task sequence is started and the OS gets applied to the disk
Reboot
Boot into full Windows where the task sequence also continues
Under full Windows, one of the last steps is that WinPE gets applied again
Reboot
Computer boots automatically into WinPE
The wim file gets created (WinPE runs from a RAM disk, and the regular C: drive (and any additional drives) is mirrored into the wim file)
Computer performs the FINISHACTION.
We would need at least BDD.log and smsts.log to troubleshoot further; during a task sequence these are normally written under C:\MININT\SMSOSD\OSDLOGS (or X:\MININT\SMSOSD\OSDLOGS while still in WinPE). My guess is that WinPE was not applied correctly.
I'm starting to play around with NixOS deployments. To that end, I have a repo with some packages defined, and a configuration.nix for the server.
It seems like I should then be able to test this configuration locally (I'm also running NixOS). I imagine it's a bad idea to point my global configuration.nix at the deployment server's configuration.nix (who knows what that would break); but is there a safe and convenient way to "try out" the server locally, i.e. build it and either boot into it or, better, start it as a separate process?
I can see Docker being one way, of course; maybe there's nothing else. But I have this vague sense Nix could be capable of doing it alone.
There is a fairly standard way of doing this that is built into the default system.
Namely nixos-rebuild build-vm. This will take your current configuration file (by default /etc/nixos/configuration.nix), build it, and create a script allowing you to boot the configuration in a virtual machine.
Once the script has finished, it will leave a symlink in the current directory. You can then boot the virtual machine by running ./result/bin/run-$HOSTNAME-vm and play around with it.
TL;DR:
nixos-rebuild build-vm
./result/bin/run-$HOSTNAME-vm
nixos-rebuild build-vm is the easiest way to do this; however, you could also import the configuration into a NixOS container (see Chapter 47. Container Management in the NixOS manual and the nixos-container command).
This would be done with something like:
containers.mydeploy = {
  privateNetwork = true;
  config = import ../mydeploy-configuration.nix;
};
Note that you would not want to specify the network configuration in mydeploy-configuration.nix if it's static, as that could cause conflicts with the network subnet created for the container.
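Once the container is declared, activating and entering it would look roughly like this (a sketch; mydeploy is the container name from the snippet above):
# Build and start the declarative container along with the rest of the system
sudo nixos-rebuild switch
# Get a root shell inside the running container
sudo nixos-container root-login mydeploy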
As you may already know, system configurations can coexist without any problems in the Nix store. The problem here is running more than one system at once. For this, you need isolation or virtualization tools like Docker, VirtualBox, etc.
NixOS Containers
NixOS provides an efficient implementation of the container concept, backed by systemd-nspawn instead of an image-based container runtime.
These can be specified declaratively in configuration.nix or imperatively with the nixos-container command if you need more flexibility.
Docker
Docker was not designed to run an entire operating system inside a container, so it may not be the best fit for testing NixOS-based deployments, which expect and provide systemd and some services inside their units of deployment. While you won't get a good NixOS experience with Docker, Nix and Docker are a good fit.
UPDATE: Both 'raw' Nix packages and NixOS run in Docker. For example, Arion supports images from plain Nix, NixOS modules and 'normal' Docker images.
NixOps
To deploy NixOS inside NixOS, it is best to use a technology that is designed to run a full Linux system.
It helps to have a program that manages the integration for you. In the Nix ecosystem, NixOps is the first candidate for this. You can use NixOps with its multiple backends, such as QEMU/KVM, VirtualBox, the (currently experimental) NixOS container backend, or you can use the none backend to deploy to machines that you have created using another tool.
Here's a complete example of using NixOps with QEMU/KVM.
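To give a flavor of the workflow (a sketch; mydeploy and network.nix are placeholder names, and the commands are the NixOps 1.x CLI):
# Create a deployment from a network expression, then build and deploy it
nixops create ./network.nix -d mydeploy
nixops deploy -d mydeploy
# SSH into a machine defined in the expression
nixops ssh -d mydeploy machine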
Tests
If your goal is to run automated integration tests, you can make use of the NixOS VM testing framework. This uses Linux KVM virtualization (expose /dev/kvm in the sandbox) to run integration tests on networks of virtual machines, and it runs them as a derivation. It is quite efficient because it does not have to create virtual machine images; it mounts the Nix store in the VM. These tests are "built" like any other derivation, making them easy to run.
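For instance, from a nixpkgs checkout a stock test can be built (i.e. run) like any other derivation (the attribute path is illustrative and test names vary between releases):
# Build and run the 'login' VM test for x86_64
nix-build nixos/release.nix -A tests.login.x86_64-linux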
Nix store optimization
A unique feature of Nix is that you can often reuse the host Nix store, so being able to mount a host filesystem in the container/VM is a nice feature to have in your solution. If you are creating your own solution, depending on your needs, you may want to postpone this optimization, because it becomes a bit more involved if you want the container/VM to be able to modify the store. NixOS tests solve this with an overlay file system in the VM. Another approach may be to bind-mount the Nix store and forward the Nix daemon socket.
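As an illustration of the read-only variant (a sketch using systemd-nspawn; the machine directory is a placeholder):
# Share the host's Nix store read-only with an ad-hoc container
sudo systemd-nspawn -D /var/lib/machines/mydeploy --bind-ro=/nix/store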
From my understanding, the process_monitor stores crashbin information locally. If this is running on a virtual machine and a test case causes the process and target machine to become unresponsive, vmcontrol would then revert to an earlier snapshot. How is the crashbin information displayed to the web interface, or accessed at this point if it was lost on the revert to an earlier snapshot?
After walking through most of the code in the Sulley environment, I found that the restart_target() method in the sessions.py module first restarts the virtual machine if vmcontrol is available, and only then tries to restart the process via the procmon if it's available. By switching the order of these, I can avoid losing the log information from the crashbin unless the entire target machine becomes unresponsive.
I understand the purpose of chef-client --daemonize, because it's a service that Chef Server can connect to and interact with.
But chef-solo is a command that simply brings the current system in line with its specifications and then is done.
So what is the point of chef-solo --daemonize, and what specifically does it do? For example, does it autodetect when the system falls out of line with spec? Does it do so via polling or tapping into filesystem events? How does it behave if you update the cookbooks and node files it depends on when it's already running?
You might also ask why chef-solo supports the --splay and --interval arguments.
Don't forget that chef-server is not the only source of data.
Configuration values can rely on a bunch of other sources (APIs, OHAI, DNS...).
The most classic one is OHAI - think of a cookbook that configures memcached. You would probably want to keep X amount of RAM for the operating system and give the rest to memcached.
Available RAM can be changed when running inside a VM, even without rebooting it.
That might be a good reason to run chef-solo as a daemon with frequent chef-runs, like you're used to when using chef-client with a chef-server.
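A daemonized invocation would look something like this (a sketch; the interval/splay values and file names are illustrative):
# Re-converge every 30 minutes, with up to 60 seconds of random splay
chef-solo --daemonize --interval 1800 --splay 60 -c solo.rb -j node.json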
As for your other questions:
Q: Does it autodetect when the system falls out of line with spec?
Does it do so via polling or tapping into filesystem events?
A: Chef doesn't respond to changes. Instead, it runs frequently and makes sure the current state is in sync with the desired state, which can be based on chef-server inventory, API calls, OHAI attributes, etc. The desired state is constructed from scratch on every Chef run, at the compile stage, when all the resources are generated. Read about it here
Q: How does it behave if you update the cookbooks and node files it depends on when it's already running?
A: Usually when running chef-solo, one uses the --json flag to specify a JSON file with node attributes and a run-list. When running in --daemonize mode with chef-solo, the node attributes are read only for the first run. For the rest of the runs, it's as if you were running it without a --json flag. I couldn't figure out a way to make it work as if you were running it with --json all over again, however, you can use the --override-runlist option to at least make the runlist stick.
Note that the attributes you're specifying in your JSON won't make it past the first run. This is possibly a bug.
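Per the workaround above, pinning the run-list across daemonized runs would look like this (the recipe name is illustrative):
# Make the run-list stick even though the JSON attributes are gone after the first run
chef-solo -c solo.rb --override-runlist 'recipe[memcached]'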