kernelspec not found after setting JUPYTER_PATH - jupyter

I am working in Google Vertex AI, which has a two-disk setup: a boot disk and a data disk, the latter mounted at /home/jupyter. I am trying to expose Python venv environments via kernelspec files, and to keep those environments exposed across repeated stop-start cycles. All of the default locations for kernelspec files are on the boot disk, which is ephemeral and recreated each time the VM is started (i.e., the exposed kernels vanish each time the VM is stopped). Conceptually, I want to use a VM start-up script to add a persistent data-disk path to the JUPYTER_PATH variable, since, according to the documentation, "Jupyter uses a search path to find installable data files, such as kernelspecs and notebook extensions." During interactive testing in the Terminal, I have not found this to be true. I have also tried setting the data directory variable, but it does not help.
export JUPYTER_PATH=/home/jupyter/envs
export JUPYTER_DATA_DIR=/home/jupyter/envs
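For concreteness, my interactive testing in the Terminal looked roughly like this (the environment and kernel names are illustrative):

# A venv on the data disk, with its kernelspec installed under the same prefix
# (this writes /home/jupyter/envs/share/jupyter/kernels/myenv/kernel.json)
source /home/jupyter/envs/myenv/bin/activate
python -m ipykernel install --prefix=/home/jupyter/envs --name=myenv
deactivate

# With the two exports above in place, check what Jupyter actually searches
jupyter --paths              # lists the config/data/runtime search paths in effect
jupyter kernelspec list      # in my testing, the kernel on the data disk is not listed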
I have a beginner's understanding of Jupyter and of the important ramifications of using two-disk systems. Could someone please help me understand:
(1) Why is Jupyter failing to search for kernelspec files on the JUPYTER_PATH or in the JUPYTER_DATA_DIR?
(2) If I am mistaken about how the search paths work, what is the best strategy for maintaining virtual environment exposure when Jupyter is installed on an ephemeral boot disk? (Note, I am aware of nb_conda_kernels, which I am specifically avoiding)
A related post focused on the start-up script can be found at this URL. Here I am more interested in the general Jupyter + two-disk use case.

Related

Install Postgres on a removable volume on Linux?

Cloud platforms like Linode.com often provide hot-pluggable storage volumes that you can easily attach and detach from a Linux virtual machine without restarting it.
I am looking for a way to install Postgres so that its data and configuration end up on a volume that I have mounted to the virtual machine. The end result should allow me to shut down the machine, detach the volume, spin up another machine with an identical version of Postgres already installed, attach the volume, and have Postgres work just like it did on the old machine, with all the data, file system permissions and server-wide configuration intact.
Is such a thing possible? Is there a reliable way to move installations (i.e. databases and configuration, not the actual binaries) of Postgres across machines?
CLARIFICATION: the virtual machine has two disks:
the "built-in" one, which is created when the VM is created and mounted at /. That's where Postgres gets installed, and you can't move this disk.
the hot-pluggable disk, which you can easily attach and detach from a running VM. This is where I want the Postgres data and configuration to live, so that I can detach the disk (after shutting down the VM to prevent data loss/corruption) and attach it to another VM when I want my data to move, and have it behave like it did on the old VM (i.e. no failures to start Postgres, no errors about permissions or missing files, etc.).
This works just fine. It is not really any different to starting and stopping PostgreSQL and not removing the disk. There are a couple of things to consider though.
You have to make sure PostgreSQL is stopped and writes are synced before unmounting the volume. Obvious enough, and you probably couldn't unmount before the sync completed anyway, but it is worth repeating.
You will want the same version of PostgreSQL, probably on the same version of the operating system with the same locales too. Different distributions might compile it with different options.
Although you can put configuration and data in the same directory hierarchy, most distros tend to put config in /etc. If you compile from source yourself this won't be a problem. Alternatively, you can usually override the default locations or, and this is probably simpler, bind-mount the data and config directories into the places your distro expects.
Note that if your storage allows you to connect the same volume to multiple hosts in some sort of "read only" mode that won't work.
Edit: steps from the comment moved into the body for easier reading; a rough shell sketch of steps 2–7 follows the list.
1. Start up PG, create a table, and put one row in it.
2. Stop PG.
3. Mount your volume at /mnt/db.
4. rsync /var/lib/postgresql/NN/main to /mnt/db/pg_data and /etc/postgresql/NN/main to /mnt/db/pg_etc.
5. Rename /var/lib/postgresql/NN/main by adding .OLD to the name, and do the same with the /etc directory.
6. Bind-mount the dirs from /mnt to replace them.
7. Restart PG.
8. Test.
9. Repeat.
10. Return to step 8 until you are happy.
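Roughly in shell, assuming PostgreSQL runs under systemd on a Debian/Ubuntu-style layout (the version number, device name and mount points are illustrative):

PG_VER=14                                    # adjust to your PostgreSQL major version (the NN above)
sudo systemctl stop postgresql               # step 2: make sure PG is stopped

sudo mkdir -p /mnt/db
sudo mount /dev/sdb1 /mnt/db                 # step 3: mount the hot-pluggable volume

# step 4: copy data and config, preserving ownership and permissions
sudo rsync -a /var/lib/postgresql/$PG_VER/main/ /mnt/db/pg_data/
sudo rsync -a /etc/postgresql/$PG_VER/main/     /mnt/db/pg_etc/

# step 5: keep the originals around, renamed out of the way
sudo mv /var/lib/postgresql/$PG_VER/main /var/lib/postgresql/$PG_VER/main.OLD
sudo mv /etc/postgresql/$PG_VER/main     /etc/postgresql/$PG_VER/main.OLD

# step 6: bind-mount the copies into the locations the distro expects
sudo mkdir -p /var/lib/postgresql/$PG_VER/main /etc/postgresql/$PG_VER/main
sudo mount --bind /mnt/db/pg_data /var/lib/postgresql/$PG_VER/main
sudo mount --bind /mnt/db/pg_etc  /etc/postgresql/$PG_VER/main

sudo systemctl start postgresql              # step 7: restart, then test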

NixOS within NixOS?

I'm starting to play around with NixOS deployments. To that end, I have a repo with some packages defined, and a configuration.nix for the server.
It seems like I should then be able to test this configuration locally (I'm also running NixOS). I imagine it's a bad idea to change my global configuration.nix to point to the deployment server's configuration.nix (who knows what that will break); but is there a safe and convenient way to "try out" the server locally - i.e. build it and either boot into it or, better, start it as a separate process?
I can see Docker being one way, of course; maybe there's nothing else. But I have this vague sense that Nix could be capable of doing it alone.
There is a fairly standard way of doing this that is built into the default system.
Namely nixos-rebuild build-vm. This will take your current configuration file (by default /etc/nixos/configuration.nix), build it, and create a script that lets you boot the configuration in a virtual machine.
Once the script has finished, it will leave a symlink in the current directory. You can then boot the virtual machine by running ./result/bin/run-$HOSTNAME-vm and play around with it.
TLDR;
nixos-rebuild build-vm
./result/bin/run-$HOSTNAME-vm
nixos-rebuild build-vm is the easiest way to do this; however, you could also import the configuration into a NixOS container (see Chapter 47, Container Management, in the NixOS manual and the nixos-container command).
This would be done with something like:
containers.mydeploy = {
  privateNetwork = true;
  config = import ../mydeploy-configuration.nix;
};
Note that you would not want to specify the network configuration in mydeploy-configuration.nix if it's static as that could cause conflicts with the network subnet created for the container.
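A quick sketch of trying it out, assuming the container block above was added to /etc/nixos/configuration.nix (the container name mydeploy comes from the snippet):

sudo nixos-rebuild switch                  # build the container and register it with systemd
sudo nixos-container start mydeploy        # not needed if you also set autoStart = true
sudo nixos-container root-login mydeploy   # get a root shell inside the container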
As you may already know, system configurations can coexist without any problems in the Nix store. The problem here is running more than one system at once. For this, you need isolation or virtualization tools like Docker, VirtualBox, etc.
NixOS Containers
NixOS provides an efficient implementation of the container concept, backed by systemd-nspawn instead of an image-based container runtime.
These can be specified declaratively in configuration.nix or imperatively with the nixos-container command if you need more flexibility.
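For the imperative route, a rough sketch (the container name and config path are illustrative):

sudo nixos-container create mydeploy --config-file ./mydeploy-configuration.nix
sudo nixos-container start mydeploy
sudo nixos-container status mydeploy    # should report "up"
sudo nixos-container destroy mydeploy   # remove it when you are done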
Docker
Docker was not designed to run an entire operating system inside a container, so it may not be the best fit for testing NixOS-based deployments, which expect and provide systemd and some services inside their units of deployment. While you won't get a good NixOS experience with Docker, Nix and Docker are a good fit.
UPDATE: Both 'raw' Nix packages and NixOS run in Docker. For example, Arion supports images from plain Nix, NixOS modules and 'normal' Docker images.
NixOps
To deploy NixOS inside NixOS it is best to use a technology that is designed to run a full Linux system inside it.
It helps to have a program that manages the integration for you. In the Nix ecosystem, NixOps is the first candidate for this. You can use NixOps with its multiple backends, such as QEMU/KVM, VirtualBox, the (currently experimental) NixOS container backend, or you can use the none backend to deploy to machines that you have created using another tool.
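A rough sketch of the NixOps workflow (the deployment name and the network.nix file, which defines the machines and chooses the backend, are illustrative):

nixops create ./network.nix -d mydeploy   # register the deployment from its Nix description
nixops deploy -d mydeploy                 # build the machines and activate them on the backend
nixops ssh -d mydeploy machine1           # log into one of the machines defined in network.nix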
Here's a complete example of using NixOps with QEMU/KVM.
Tests
If your goal is to run automated integration tests, you can make use of the NixOS VM testing framework. This uses Linux KVM virtualization (expose /dev/kvm in the sandbox) to run integration tests on networks of virtual machines, and it runs them as a derivation. It is quite efficient because it does not have to create virtual machine images; instead, it mounts the Nix store in the VM. These tests are "built" like any other derivation, making them easy to run.
Nix store optimization
A unique feature of Nix is that you can often reuse the host Nix store, so being able to mount a host filesystem in the container/VM is a nice feature to have in your solution. If you are creating your own solution, depending on your needs, you may want to postpone this optimization, because it becomes a bit more involved if you want the container/VM to be able to modify the store. NixOS tests solve this with an overlay file system in the VM. Another approach may be to bind-mount the Nix store and forward the Nix daemon socket.

Copying directories into minikube and persisting them

I am trying to copy some directories into the minikube VM to be used by some of the pods that are running. These include API credential files and template files used at run time by the application. I have found that you can copy files into the /home/docker/ directory using scp; however, these files are not persisted across reboots of the VM. I have read that files/directories are persisted if stored in the /data/ directory on the VM (among others), but I get permission denied when trying to copy files to these directories.
Are there:
A: Any directories in minikube that will persist data that aren't protected in this way
B: Any other ways of doing the above without running into this issue (could well be going about this the wrong way)
To clarify, I have already been able to mount the files from /home/docker/ into the pods using volumes, so it's just the persisting data I'm unclear about.
Kubernetes has dedicated object types for these sorts of things. API credential files you might store in a Secret, and template files (if they aren't already built into your Docker image) could go into a ConfigMap. Both of them can either get translated to environment variables or mounted as artificial volumes in running containers.
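For example (the object and file names here are illustrative, not taken from the question):

kubectl create secret generic api-credentials --from-file=./credentials.json
kubectl create configmap app-templates --from-file=./templates/

kubectl get secret api-credentials     # confirm the objects exist; reference them as
kubectl get configmap app-templates    # volumes or environment variables in the pod spec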
In my experience, trying to store data directly on a node isn't a good practice. It's common enough to have multiple nodes, to not directly have login access to those nodes, and for them to be created and destroyed outside of your direct control (imagine an autoscaler running on a cloud provider that creates a new node when all of the existing nodes are 90% scheduled). There's a good chance your data won't (or can't) be on the host where you expect it.
This does lead to a proliferation of Kubernetes objects and associated resources, and you might find a Helm chart to be a good way to tie them together. You can check the chart into source control along with your application, and deploy the whole thing in one shot. While it has a couple of useful features beyond just packaging resources together (a deploy-time configuration system, a templating language for the Kubernetes YAML itself), you can ignore these if you don't need them and just write a bunch of YAML files and a small control file.
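A minimal Helm 3 sketch (the chart and release names are illustrative):

helm create mychart              # scaffold a chart; replace the generated templates with your own YAML
helm install myapp ./mychart     # deploy the whole bundle in one shot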
For minikube, anything kept in the $HOME/.minikube/files directory on the host is copied by minikube into the / directory of the VM.
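For instance, to make a template file show up at /data/templates inside the VM (the file names are illustrative):

mkdir -p $HOME/.minikube/files/data/templates            # mirror the target path on the host
cp ./template.tpl $HOME/.minikube/files/data/templates/

minikube stop && minikube start                          # files are synced into the VM on start
minikube ssh -- ls /data/templates                       # should now show template.tpl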

How can I specify where my local developer's service fabric cluster is created?

My problem: I am learning Service Fabric, and doing simple tutorials, and the local cluster is filling up my C drive. I run the projects in Visual Studio. It first creates a cluster in a folder SfDevCluster. That takes up 842 MB of space. Then it deploys the services and web api sites. Remember, these are trivial tutorials with almost nothing in them. Now, I notice that I have a folder with a Size = 1.22 TB and Size on Disk of 9.4 GB. I'm not sure how to interpret that. But it consumes the remaining space on my C drive and sets off alarms.
I have other drives with lots of space. I would love to specify that those be used. Is there a way to do that with the service fabric cluster used by Visual Studio? Or is there a way to constrain the overly ambitious size allocations? And if you understand this, can you explain what these unusual folder sizes mean?
In the old days, I would have a hard drive with lots of space. But now, my developer machine has a much faster, but more expensive SSD drive, and space is at a premium. So I need more control of the cluster location.
You can set up a local cluster pointing to a non-system drive by running the DevClusterSetup script in PowerShell. You can find the script under %programfiles%\Microsoft SDKs\Service Fabric\ClusterSetup\. The command line you want is:
.\DevClusterSetup.ps1 -PathToClusterDataRoot <desired_app_and_data_location> -PathToClusterLogRoot <desired_tracelog_location>
If you already have a cluster running, this script will remove it and create a new one (note that this will delete any deployed apps and their data). Once you have the new cluster running, Visual Studio will automatically use that when you deploy locally.
As for the file sizes - this is mostly due to the log file used for replication of state stored in reliable collections. A large, sparse file is preallocated up-front, which is why you see a difference between size and size on disk. We are planning to make these values configurable so that they can be dialed down on local clusters.
In the Service Fabric SDK folder (C:\Program Files\Microsoft SDKs\ServiceFabric), you will find a ClusterSetup folder.
In there you will find ClusterManifestTemplate.json files for the different configurations of the local cluster. These are JSON configuration files used by the PowerShell scripts that create and manage the local Service Fabric cluster.
At the bottom of these files, the "fabricSettings" section sets the values of FabricDataRoot and FabricLogRoot based on "%systemDrive%". If you replace this with "D:", the local cluster should be created on the D drive.
After making these changes, I stopped my local fabric, deleted the current fabric folders from my C drive, and rebooted my machine. When I then start a debug session in VS 2017, it creates the local dev fabric on my D drive and deploys the application to that location. (I do notice that some empty folders are created on my C drive, but these are not used.)
You can also reset the local cluster once in a while. This can easily be done with the Service Fabric Local Cluster Manager application.

When is cloud-init run and how does it find its data?

I'm currently dealing with CoreOS, and so far I think I got the overall idea and concept. One thing that I did not yet get is execution of cloud-init.
I understand that cloud-init is a process that does some configuration for CoreOS. What I do not yet understand is…
When does CoreOS run cloud-init? On first boot? On each boot? …?
How does cloud-init know where to find its configuration data? I've seen that there is config-drive and that totally makes sense, but is this the only way? What exactly is the role of the user-data file? …?
CoreOS runs cloud-init a few times during the boot process. Right now this happens at each boot, but that functionality may change in the future.
The first pass is the OEM cloud-init, which is baked into the image to set up networking and other features required for that provider. This is done for EC2, Rackspace, Google Compute Engine, etc since they all have different requirements. You can see these files on Github.
The second pass is the user-data pass, which is handled differently per provider. For example, EC2 allows the user to input free-form text in their UI, which is stored in their metadata service. The EC2 OEM has a unit that reads this metadata and passes it to the second cloud-init run. On Rackspace/OpenStack, config-drive is used to mount a read-only filesystem that contains the user-data. The Rackspace and OpenStack OEMs know to mount and look for the user-data file at that location.
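For illustration, on a config-drive based provider the user-data is just a file on a small read-only filesystem (the label and path below follow the standard OpenStack config-drive layout):

sudo mkdir -p /mnt/config-drive
sudo mount -o ro /dev/disk/by-label/config-2 /mnt/config-drive   # config-drive v2 is labelled "config-2"
cat /mnt/config-drive/openstack/latest/user_data                 # the raw user-data passed at boot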
The latest version of CoreOS also has a flag to fetch a remote file to be evaluated for use with PXE booting.
The CoreOS distribution docs have a few more details as well.