Old ZFS recovery/upgrade strategy - upgrade

The motherboard of a ZFS-based NAS died, and I'm now trying to access the data and move it, or revive the NAS. Debian and ZFS haven't been updated since 2015 or so, however. What I can glean from the log-files is:
ZFS 0.6.4
ZFS pool version 5000
ZFS filesystem 5
Debian Wheezy
Linux 3.2.0-4
So far so good. This Debian is rather old, though, and ZFS and some dependencies have to be compiled by hand to get it all going again - the apt repos have been largely purged of this old stuff, it seems.
So, I'm wondering if it's safe to just spin up a modern Ubuntu, say, and simply import the ZFS pools there.
ZFS should get updated in any case, so it would be really neat if this just worked with Ubuntu 20.04, for example...
What came up after a bit of digging is that the ZFS pool version today is still 5000 according to Wikipedia. I can't find any information about what this "ZFS filesystem 5" refers to. I'm not sure at all what the right upgrade strategy is, or what the relevant documentation might be. Any pointers would be very welcome.

Here's what I did:
Install Ubuntu 20.04, install zfsutils-linux.
Run zpool import; this lists all the pools the system can find.
Run zpool import -f <poolname> (the -f is required because ZFS will otherwise complain that the "pool was previously in use from another system").
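
Put together, the whole sequence is roughly the following (a minimal sketch; the pool name "tank" is just a placeholder, and the bare zpool import shows the real names):

sudo apt install zfsutils-linux    # ZFS userland on Ubuntu 20.04
sudo zpool import                  # scan attached disks and list importable pools
sudo zpool import -f tank          # force-import; the pool was last used by another host
sudo zpool status tank             # check that the pool and its vdevs are healthy

Since the on-disk pool version is still 5000 (feature flags), a modern OpenZFS release can normally import a 0.6.4-era pool as-is; zpool upgrade is only needed later if you want to enable newer feature flags, and doing so makes the pool unreadable by the old system.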

Related

Red Hat needs-restarting

I'm having trouble testing "needs-restarting -r ; echo $?" on a Red Hat distribution. The command works for the case where a reboot is not required, but I have not been able to deliberately put the operating system into a reboot-required state, so I cannot verify the other half of the behaviour, that is, needs-restarting exiting with status 1. Is there any way to generate the need to reboot in a controlled manner on Red Hat?
The Red Hat knowledge base lists which packages require a system reboot after an update. If you can downgrade one of those packages, you can generate the reboot-required state. This is not recommended on production systems, though: glibc and kernel downgrades can cause problems. You can try it on a freshly installed RHEL server after running "yum update".
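A rough way to exercise both exit codes on a throw-away RHEL box (the package choice is only an example, and downgrading glibc is exactly the kind of thing not to do on a production system):

sudo yum update -y                           # start from a fully patched system
needs-restarting -r ; echo $?                # expect 0: no reboot required
sudo yum downgrade -y glibc glibc-common     # roll a reboot-relevant package backwards
needs-restarting -r ; echo $?                # should now report a reboot is needed and exit 1
sudo yum update -y glibc glibc-common        # bring the package back to current afterwards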

After updating macOS to Big Sur I can't start minikube

It works fine before the update.
After updating to Big Sur I can't start minikube.
minikube start --kubernetes-version=v1.19.2
Exiting due to K8S_INSTALL_FAILED: updating control plane: copy: copy: sudo test -d /var/tmp/minikube && sudo scp -t /var/tmp/minikube: Process exited with status 1
output: scp: /var/tmp/minikube/kubeadm.yaml.new: Read-only file system
scp: protocol error: expected control record
Maybe I need to add some settings? 🙂
Big Sur is a bit unstable. I use a 2015 MacBook Air and couldn't upgrade to Big Sur; I'm on Catalina right now, because the 128 GB of storage I have isn't enough to free up the 42 GB of space the upgrade needs.
But I also decided that staying on Catalina is better, because there are lots of comments about bugs and problems in Big Sur, and if I upgraded, my computer might get slower since it is 5 years old.
Install a snapshot build or a newer minikube release if one is compatible with Big Sur. If there isn't one, I'm afraid you have to wait for a Big Sur-compatible version.
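If minikube was installed through Homebrew, checking for a Big Sur-compatible release and recreating the VM would look something like this (a sketch; note that minikube delete throws away the existing cluster state):

brew update && brew upgrade minikube          # pick up the newest minikube release
minikube delete                               # discard the VM created before the macOS upgrade
minikube start --kubernetes-version=v1.19.2   # recreate the cluster from scratch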

Migrate data between WSL distributions

>wsl -l
Windows Subsystem for Linux Distributions:
Ubuntu-18.04 (Default)
Ubuntu
I made the mistake of setting up everything in the Ubuntu-18.04 version, which is locked to that specific version and doesn't allow dist upgrades (Please prove me wrong). This includes shell customizations, symlinks etc.
I would however like to upgrade Ubuntu every once in a while. What I do not want to do is manually find all the configuration files and copy them to the new distribution.
The Ubuntu distribution is the one from the Windows store; fresh clean with no modifications.
How do I get my data from the old distribution into the fresh Ubuntu store distribution? Or is there a way to upgrade the locked Ubuntu-18.04 distribution (also from the Windows store)?
I know of wsl --export and wsl --import, but as far as I can tell these keep the distribution as it is (in this case, with the lack of upgrades) and just place a copy of it in another folder, which does not solve the dist-upgrade problem.
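For completeness, the round-trip I mean is this (paths and names are just examples):

wsl --export Ubuntu-18.04 C:\backup\ubuntu-1804.tar
wsl --import Ubuntu-18.04-copy C:\WSL\ubuntu-1804-copy C:\backup\ubuntu-1804.tar

which just gives me a second 18.04 instance under a new name.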
I ended up using the free, legacy version of Aptik (I couldn't get the GTK version to launch). Even though it's no longer maintained, it's still working perfectly well, and only took me a single command to export everything and another single command to import everything again.
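In case it helps someone else, the pair of commands was roughly the following (flag names are from memory and may differ between Aptik releases, so check aptik --help); the first is run in the old Ubuntu-18.04 distribution, the second in the fresh Ubuntu one:

sudo aptik --backup-all     # in Ubuntu-18.04: export packages, repos, configs and home data
sudo aptik --restore-all    # in the new Ubuntu: import the same set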

Availability of Snapcraft on Alpine Linux

I was looking for compatibility between the snap package management system and Alpine Linux but could not find any relevant resources. Is there any plan to make it available on Alpine Linux? Is any progress being made in that regard?
To be clear: there are two components here: snapd, which is responsible for running snaps, and Snapcraft, which is responsible for building/creating snaps. You specifically asked about Snapcraft, which unlike snapd, is currently Ubuntu-specific. This is due to the fact that it assumes build- and stage-packages are debs, and tries to use apt (and apt python bindings) to get them.
This is currently changing to become more extensible, with RPM support probably being added first. Alpine will likely need apk support there.
Another feature coming soon will be to build in lxd containers by default. This may be the easier path, where Snapcraft can run natively on Alpine but then build packages using an Ubuntu container.
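To make the deb assumption concrete, a minimal snapcraft.yaml looks roughly like this (a hypothetical example project; the stage-packages entries are plain Debian package names that Snapcraft resolves through apt, which is exactly the part that does not map onto apk today):

name: hello-sketch
base: core18
version: '0.1'
summary: Tiny example snap
description: Illustrates deb-based stage-packages.
parts:
  hello:
    plugin: nil
    stage-packages:
      - hello

Once the LXD-backed build path becomes the default, building from a non-Ubuntu host would presumably be along the lines of snapcraft --use-lxd, with the actual build happening inside an Ubuntu container.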
If you're curious about snapd, you can see from this table that Alpine does not currently seem to be a target. However, please do log a bug requesting that it be put on the roadmap.

Can I run Postgres 8.4 and Postgres 9 on the same machine?

Is it possible to run Postgres 8.4 AND 9 at the same time (two installations)?
Thank you
Short answer: yes
Long answer:
You didn't specify your OS, so it's difficult to say exactly how to do it. On Debian/Ubuntu, for example, you can just install the second version from packages (postgresql-8.4 and postgresql-9.0) and everything works out of the box (thanks to postgresql-common). On other systems you probably need to do it manually using "low level" commands such as initdb and pg_ctl. Make sure the second installation (database cluster) uses a different port (for example 5433) and not the same data directory.
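For the manual route, the low-level sequence is roughly this (a sketch with example paths; each initdb/pg_ctl must be the binary shipped with the matching PostgreSQL version):

/usr/lib/postgresql/8.4/bin/initdb -D /var/lib/postgresql/8.4/main
/usr/lib/postgresql/9.0/bin/initdb -D /var/lib/postgresql/9.0/main
# set port = 5433 in /var/lib/postgresql/9.0/main/postgresql.conf
/usr/lib/postgresql/8.4/bin/pg_ctl -D /var/lib/postgresql/8.4/main -l /tmp/pg84.log start
/usr/lib/postgresql/9.0/bin/pg_ctl -D /var/lib/postgresql/9.0/main -l /tmp/pg90.log start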
Yes, provided the following three preconditions are satisfied:
Each PostgreSQL instance is listening on a unique IP/port (also check out pgbouncer: you can probably hide both copies of PostgreSQL behind a single IP/port and reduce your memory footprint by reducing the number of active connections)
You have enough SYSV shared memory available (this is frequently the limiting factor)
You use different PGDATA directories.
I can't recommend using pgbouncer enough.
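As a sketch, that pgbouncer front door could look like this (a hypothetical pgbouncer.ini excerpt; database names, ports and paths are examples):

[databases]
appdb_old = host=127.0.0.1 port=5432 dbname=appdb
appdb_new = host=127.0.0.1 port=5433 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

Clients then connect to port 6432 and choose the 8.4 or 9.0 cluster purely by database name.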
On Windows you don't need to do anything: the installer detects an existing installation, creates a unique data directory, and adjusts the port automatically.
For example, your first installation will listen on 5432 and your second installation will listen on 5433, as the installer configures this for you.
You always can; the question is how hard it will be to install two versions at the same time, and that depends on your operating system. On Red Hat-derived Linux systems, for example, this is very hard to do. The PostgreSQL RPM packages are only intended to have a single version installed at any one time. Sometimes the only reasonable way to proceed is to build your own PostgreSQL from source for the second version you want to install, which is an interesting adventure if you've never done it before.
On Debian Linux two versions at once is pretty easy. I believe it's straightforward on Windows too, but it may depend on which installer you are using.
Once you get two different versions of the database installed, only then do you have to worry about the things everyone else is talking about: making each database run on its own port and have its own installation directory. Those are often trivial compared with the work it takes to get two versions installed at once though.
Yes, you just put the data directories in different locations.
Yes you can. You'll need to run them on different ports and use different data directories.
The port and data directory can both be set in postgresql.conf.
There are, I believe, several other ways of specifying the data directory, including the PGDATA environment variable.
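For example, the second cluster's postgresql.conf might differ from the first only in these two settings (the port and path are illustrative):

port = 5433                                      # the first cluster keeps the default 5432
data_directory = '/var/lib/postgresql/9.0/main'

Alternatively, the data directory can be chosen per process, e.g. PGDATA=/var/lib/postgresql/9.0/main pg_ctl start.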