I'm developing a daemon application on an Ubuntu server that is managed by systemd.
I create a SHM file in /dev/shm/ with shm_open and close the file descriptor after calling mmap. At first the file exists, but it disappears after a while, perhaps when I log out from the server.
Perhaps this is controlled by the option RemoveIPC=yes in /etc/systemd/logind.conf.
My questions are:
Why does systemd clean up my shm file, but not the one created by PostgreSQL?
How can I modify my app to behave like PostgreSQL, so that we can reduce the management/maintenance work in production?
I found that the shared memory is still usable after the file has been cleaned up by systemd. Does this mean I can ignore the removal and keep using the mapping without recreating it?
I think your suspicion is right; see the documentation for details:
If systemd is in use, some care must be taken that IPC resources (including shared memory) are not prematurely removed by the operating system. This is especially of concern when installing PostgreSQL from source. Users of distribution packages of PostgreSQL are less likely to be affected, as the postgres user is then normally created as a system user.
The setting RemoveIPC in logind.conf controls whether IPC objects are removed when a user fully logs out. System users are exempt. This setting defaults to on in stock systemd, but some operating system distributions default it to off.
[...]
A “user logging out” might happen as part of a maintenance job or manually when an administrator logs in as the postgres user or something similar, so it is hard to prevent in general.
What is a “system user” is determined at systemd compile time from the SYS_UID_MAX setting in /etc/login.defs.
Packaging and deployment scripts should be careful to create the postgres user as a system user by using useradd -r, adduser --system, or equivalent.
Alternatively, if the user account was created incorrectly or cannot be changed, it is recommended to set
RemoveIPC=no
in /etc/systemd/logind.conf or another appropriate configuration file.
While this is talking about PostgreSQL, the same applies to your software. So take one of the recommended measures.
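For example, a minimal sketch of the two measures for your own daemon (the account name mydaemon and the drop-in file name are placeholders, not taken from your setup):

# Option 1: create the daemon's account as a system user, which is exempt from RemoveIPC
sudo useradd -r -s /usr/sbin/nologin mydaemon

# Option 2: keep IPC objects across logouts for all users, then restart logind
sudo mkdir -p /etc/systemd/logind.conf.d
printf '[Login]\nRemoveIPC=no\n' | sudo tee /etc/systemd/logind.conf.d/10-keep-ipc.conf
sudo systemctl restart systemd-logind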
By default, after a standard MarkLogic installation on CentOS, ML starts under the daemon user.
Everything works fine, except that I could not make a DB backup.
After some research, I found the KB article below:
https://docs.marklogic.com/guide/installation/procedures#id_32108
I wonder whether it is recommended to always set MARKLOGIC_USER to a named user for a Linux installation.
I guess that when running ML in production, ease of upgrade should be important.
Whether to run the MarkLogic process as the default daemon user or as a different, specified user is a matter of preference, though it is generally considered a best practice to run applications and services as a specified user.
https://refspecs.linuxbase.org/LSB_3.1.1/LSB-Core-generic/LSB-Core-generic/usernames.html
The daemon User ID/Group ID was used as an unprivileged User ID/Group ID for daemons to execute under in order to limit their access to the system. Generally daemons should now run under individual User ID/Group IDs in order to further partition daemons from one another.
That said, the daemon user is provided by default. If you configure MarkLogic to run as a different user, you need to ensure that user is created and provisioned properly.
The error that you encountered when running the backup was because the daemon user didn't have permission to create the backup directory.
You can address that by adjusting the filesystem permissions and continuing to run the MarkLogic process as the daemon user. If you choose to run the process as a different user, you still need to ensure that the chosen user has the necessary permissions to create files and directories in order to perform a backup.
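For example, a minimal sketch of the permissions route, assuming a hypothetical backup target of /space/backups and MarkLogic still running as the daemon user:

# create the backup target and hand it to the daemon user/group
sudo mkdir -p /space/backups
sudo chown daemon:daemon /space/backups
sudo chmod 750 /space/backups

After that, point the backup at /space/backups (or a subdirectory of it) when you configure it.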
So I've spent the better part of my day (and several searches before) looking for a workable solution to prevent data loss when the host of a PostgreSQL server installation gets rebooted or shut down. We maintain a number of Azure and on-prem servers and the number of times someone has inadvertently shut down the server without first ensuring Postgres is no longer flushing data to disk is far more frequent than it should be. Of note we are a Windows Server shop.
Our current best practice (which works if followed appropriately) is to stop the Postgres service, then watch disk writes to the Postgres data directory in Resource Monitor. Once nothing is writing to that directory, shut down the host. I have to think that there's a better way to ensure the host doesn't get shut down in a manner that leads to data corruption, regardless of adherence to the best practice (or, in some cases, because Windows Update mandates a reboot regardless of configured settings telling it not to reboot).
Some things I've considered, but have been unable to find solid answers for:
Create a scheduled task that uses the "On an event" trigger to monitor the System log for event 1074. It would have to be configured to "run whether the user is logged in or not". The script would cancel the shutdown command with shutdown /a, then run a script to gracefully shut down Postgres. I've seen mixed reports on whether the scheduled job would reliably trigger before Task Scheduler is terminated in the shutdown sequence.
Create a shutdown script using Group Policy. My question there is: will it wait for the script to complete before executing the shutdown?
How do you deal with data loss in your Postgres server Windows hosts?
First, if you register PostgreSQL as a Windows service, a shutdown of the machine will automatically shut down PostgreSQL first.
But even without that, a properly configured PostgreSQL server on proper hardware will never suffer data loss (unless you hit a rare PostgreSQL software bug). It is one of the basic requirements for a relational database to survive crashes without data loss.
To enumerate a few things that come to mind:
make sure that the PostgreSQL parameters fsync and synchronous_commit are set to on (a quick check is sketched after this list)
make sure that you are using a reliable file system for the data files and the WAL (a Windows network share is not a reliable file system)
make sure that any caches in your storage are battery-backed
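As a quick sanity check of the first point, you can query the live settings with psql (connection options are placeholders; adjust to your environment):

psql -U postgres -c "SELECT name, setting FROM pg_settings WHERE name IN ('fsync', 'synchronous_commit');"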
Cloud platforms like Linode.com often provide hot-pluggable storage volumes that you can easily attach and detach from a Linux virtual machine without restarting it.
I am looking for a way to install Postgres so that its data and configuration ends up on a volume that I have mounted to the virtual machine. The end result should allow me to shut down the machine, detach the volume, spin up another machine with an identical version of Postgres already installed, attach the volume and have Postgres work just like it did on the old machine with all the data, file system permissions and server-wide configuration intact.
Is such a thing possible? Is there a reliable way to move installations (i.e databases and configuration, not the actual binaries) of Postgres across machines?
CLARIFICATION: the virtual machine has two disks:
the "built-in" one which is created when the VM is created and mounted to /. That's where Postgres gets installed to and you can't move this disk.
the hot-pluggable disk which you can easily attach and detach from a running VM. This is where I want Postgres data and configuration to be so I can just detach the disk (after shutting down the VM to prevent data loss/corruption) and attach it to another VM when I want my data to move so it behaves like it did on the old VM (i.e. no failures to start Postgres, no errors about permissions or missing files, etc).
This works just fine. It is not really any different from starting and stopping PostgreSQL without removing the disk. There are a couple of things to consider, though.
You have to make sure PostgreSQL is stopped and all writes are synced before unmounting the volume. Obvious enough, and you probably couldn't unmount before the sync completed anyway, but it is worth repeating.
You will want the same version of PostgreSQL, probably on the same version of the operating system with the same locales too. Different distributions might compile it with different options.
Although you can put configuration and data in the same directory hierarchy, most distros tend to put config in /etc. If you compile from source yourself this won't be a problem. Alternatively, you can usually override the default locations or, and this is probably simpler, bind-mount the data and config directories into the places your distro expects.
Note that if your storage allows you to connect the same volume to multiple hosts in some sort of "read-only" mode, that won't work.
Edit: steps from comment moved into body for easier reading.
start up PG, create a table, and put one row in it.
Stop PG.
Mount your volume at /mnt/db
rsync /var/lib/postgresql/NN/main to /mnt/db/pg_data and /etc/postgresql/NN/main to /mnt/db/pg_etc
rename /var/lib/postgresql/NN/main by adding .OLD to the name, and do the same with the /etc directory
bind-mount the dirs from /mnt to replace them (a rough sketch of these copy and bind-mount steps follows the list)
restart PG
Test
Repeat: return to step 8 until you are happy.
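A rough sketch of the copy and bind-mount steps, assuming a Debian/Ubuntu-style layout, the volume already mounted at /mnt/db, and NN standing in for your PostgreSQL major version:

NN=14    # placeholder: substitute your actual major version
sudo rsync -a /var/lib/postgresql/$NN/main/ /mnt/db/pg_data/
sudo rsync -a /etc/postgresql/$NN/main/ /mnt/db/pg_etc/
sudo mv /var/lib/postgresql/$NN/main /var/lib/postgresql/$NN/main.OLD
sudo mv /etc/postgresql/$NN/main /etc/postgresql/$NN/main.OLD
sudo mkdir /var/lib/postgresql/$NN/main /etc/postgresql/$NN/main
sudo mount --bind /mnt/db/pg_data /var/lib/postgresql/$NN/main
sudo mount --bind /mnt/db/pg_etc /etc/postgresql/$NN/main
# the data directory must stay owned by postgres and not be world-readable
sudo chown postgres:postgres /mnt/db/pg_data
sudo chmod 700 /mnt/db/pg_data

If you want the bind mounts to survive a reboot, add them to /etc/fstab as well.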
We have a CentOS 8 (tried 7 as well) image, and I am adding some configuration to make it act as a router.
The issue is that, for some reason, the first time the instance is created, cloud-init doesn't read the network config we pass via the user-data metadata:
#cloud-config
network:
  version: 1
  etc...
We configure eth1 to use dhcp and get cloud-init to manage it, as well as add a route.
It works perfectly every time after the initial boot-up (and after a stop > start again).
To me it feels like cloud-init is not aware of the config, but when I log into the machine and run cloud-init query userdata I can see the data, and even then, if I run cloud-init clean && cloud-init init, it doesn't do anything. The same commands work fine if the machine has been rebooted.
Try running cloud-init analyze show both times (at instance creation and after the subsequent reboot) and check for any differences.
Sadly, cloud providers tend to abuse the abilities of cloud-init, though not entirely to their fault. cloud-init allows for customization of vendor- and user-provided configuration (who overrides what), changing the order of boot stages, and so on.
This is done mostly because different cloud providers need networking/provisioning/storage at different times. For example, AWS attaches storage after the network (EBS only), Azure provides the VM only after storage is attached and it's natively provided as NTFS (they really do format the drive if you need anything else), etc.
These shenanigans, while understandable (datacenter infrastructure defines user availability), make cloud-init's documentation merely a suggestion for the user to investigate.
In my experience, Azure is the closest to the original implementation. Possibly they haven't yet learned how to utilize its potential in their favor.
My general suggestion for any instance customization (it almost always works) is to write a script with write_files and execute it with runcmd, which runs in the final stage and therefore provides the best override opportunity (bootcmd, by contrast, runs early in boot). Editing hosts, changing firewall rules - most of this will not require a reboot.
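As an illustration only (the script path, interface, and route below are invented for the example, not taken from your setup), the write_files + runcmd pattern looks roughly like this:

#cloud-config
write_files:
  - path: /usr/local/sbin/setup-routing.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      # placeholder values: enable forwarding and add a route via eth1
      sysctl -w net.ipv4.ip_forward=1
      ip route replace 10.0.0.0/8 via 192.168.1.1 dev eth1
runcmd:
  - [ sh, /usr/local/sbin/setup-routing.sh ]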
I understand the purpose of chef-client --daemonize, because it's a service that Chef Server can connect to and interact with.
But chef-solo is a command that simply brings the current system in line with its specifications and is then done.
So what is the point of chef-solo --daemonize, and what specifically does it do? For example, does it autodetect when the system falls out of line with spec? Does it do so via polling or tapping into filesystem events? How does it behave if you update the cookbooks and node files it depends on when it's already running?
You might also ask why chef-solo supports the --splay and --interval arguments.
Don't forget that chef-server is not the only source of data.
Configuration values can rely on a bunch of other sources (APIs, OHAI, DNS...).
The most classic one is OHAI - think of a cookbook that configures memcached. You would probably want to keep X amount of RAM for the operating system and give the rest to memcached.
Available RAM can be changed when running inside a VM, even without rebooting it.
That might be a good reason to run chef-solo as a daemon with frequent chef-runs, like you're used to when using chef-client with a chef-server.
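For instance, a daemonized chef-solo run that re-converges every 30 minutes with up to 5 minutes of random splay might look like this (the paths are placeholders):

chef-solo --daemonize \
  --interval 1800 \
  --splay 300 \
  --config /etc/chef/solo.rb \
  --json-attributes /etc/chef/node.json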
As for your other questions:
Q: Does it autodetect when the system falls out of line with spec?
Does it do so via polling or tapping into filesystem events?
A: Chef doesn't respond to changes. Instead, it runs frequently and makes sure the current state is in sync with the desired state - which can be based on chef-server inventory, API calls, OHAI attributes, etc. The desired state is constructed from scratch every time you run Chef, at the compile stage when all the resources are generated. Read about it here
Q: How does it behave if you update the cookbooks and node files it depends on when it's already running?
A: Usually when running chef-solo, one uses the --json flag to specify a JSON file with node attributes and a run-list. When running chef-solo in --daemonize mode, the node attributes are read only for the first run. For the rest of the runs, it's as if you were running it without the --json flag. I couldn't figure out a way to make it work as if you were running it with --json all over again; however, you can use the --override-runlist option to at least make the run-list stick.
Note that the attributes you're specifying in your JSON won't make it past the first run. This is possibly a bug.
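So if you need the run-list to stick across the daemonized runs, something along these lines is the workaround referred to above (the recipe name and paths are placeholders):

chef-solo --daemonize --interval 1800 \
  --config /etc/chef/solo.rb \
  --json-attributes /etc/chef/node.json \
  --override-runlist 'recipe[memcached]'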