I have an OpenVZ VPS and a problem with clearing the cache:
echo 3 > /proc/sys/vm/drop_caches does not work in OpenVZ.
How can I clear this cache?
This was reported on the OpenVZ bug tracker. It has been resolved as RESOLVED WONTFIX
From Kir Kolyshkin (OpenVZ project leader) in the bug report:
All containers share the same page cache (although there is per-container accounting), so to drop caches of one single container we have to check each page:
1. Whether it belongs to the container or not -- supposing we do have that information, which I am not sure of
2. Whether this page is used by other containers.
So, while this is trivial on the host system, it is much less trivial for a container. And this is not a critical piece of functionality -- drop_caches is only useful for running various sorts of benchmarks.
Since you don't get your own kernel instance with OpenVZ, you are prevented from running the command.
As a result, in order to clear the cache, you must restart the VPS.
OpenVZ doesn't support clearing the cache. Could you try the steps below?
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
sudo echo 3 > /proc/sys/vm/drop_caches
echo 3 > /proc/sys/vm/drop_caches
echo 3 | sudo tee /proc/sys/vm/drop_caches
If these steps don't work:
Get a real non-OpenVZ machine (KVM, Xen, etc) and this will work just fine. With OpenVZ, you don't get your own kernel instance and as such, are restricted from performing commands like this.
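For comparison, on a system where you do own the kernel (the OpenVZ host node itself, or a KVM/Xen guest), the standard sequence works; a minimal sketch, assuming root access:
# Flush dirty pages to disk first, then drop page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches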
Related
I am attempting to debug some C# / .NET 5 code in WSL 2 with Ubuntu on Windows. I have WSL 2 set up with Windows 10 and want to test out creating a Systemd service. Unfortunately, it appears Systemd is not enabled with WSL 2 by default, even though a standard Ubuntu install does have it enabled by default. Is there any way to get Systemd enabled in WSL 2?
Note: See footnote at bottom of this answer for background on this Community Wiki.
There are several possible paths to enabling Systemd on WSL2 (but not WSL1). These are summarized here, with more detail provided below.
Option 1: Upgrade WSL to the latest application release (if supported by your system) and opt-in to the Systemd feature
Option 2: Run a Systemd-helper script designed for WSL2
Option 3: Manually run Systemd in its own namespace
And while not part of this question, for those simply looking to run certain applications that require Systemd, there are alternatives:
On WSL1 and WSL2:
Alternative 1: SysVInit scripts (e.g. sudo service <service_name> start) where available
Alternative 2: Manually configuring and running the service
On WSL2-only:
Alternative 3: Docker
Should you enable Systemd in WSL?
First, consider whether you should or need to enable Systemd in WSL. Enabling Systemd will automatically start a number of background services and tasks that you really may not need under WSL. As a result, it will also increase WSL startup times, although the impact will be dependent on your system. Check the Alternatives section below to see if there may be a better option that fits your needs. For example, the service command may do what you need without any additional effort.
More detail on each answer:
Option 1: Upgrade WSL to the latest application release (if supported by your system) and opt-in to the Systemd feature
Microsoft has now integrated Systemd support in the WSL2 application release (as opposed to the older "Windows feature" implementation).
Starting with WSL Application Release 1.0.0, this feature is available on both Windows 10 and Windows 11. Windows 10 users do need to be on UBR (update build revision) 2311 or later. The UBR is the last 4 digits of your full Windows build number (e.g. 10.0.19045.2311 for Windows 10 22H2). 2311 is installed with KB5020030, an optional Preview update, although if you are reading this later, it will likely be a later (non-Preview) monthly servicing update.
If you are on a supported Windows release, the WSL application with Systemd support can be installed:
Through the Microsoft Store (as "Windows Subsystem for Linux").
Or from the Releases page in the GitHub repo. To install a release manually:
Reboot (to make sure that WSL is not in use at all). A simple wsl --shutdown may work, but often will not.
Download the 1.0.0 (or later) release from the link above.
Start an Administrator PowerShell and:
Add-AppxPackage <path.to>/Microsoft.WSL_1.0.0.0_x64_ARM64.msixbundle
wsl --version # to confirm
To enable, start your Ubuntu (or other Systemd) distribution under WSL (typically just wsl ~ will work).
sudo -e /etc/wsl.conf
Add the following:
[boot]
systemd=true
Exit Ubuntu and again:
wsl --shutdown
Then restart Ubuntu.
sudo systemctl status
... should show your Systemd services.
Option 2: Run a Systemd-helper script designed for WSL2
There are a number of Systemd-enablement scripts available from various sources. Given the complexities involved in running Systemd under WSL, it is recommended that you:
Use one that is actively maintained
Attempt to understand, as much as possible, how they operate, and how they may impact other features and applications in your distribution(s) under WSL
When asking questions here or on any other site, disclose in the question which script you are using so that others can attempt to understand and/or reproduce your issue in the proper context
Several of the more popular projects that enable Systemd under WSL2 are:
Genie: 1.8k stars, last commit September 2022
Distrod: 1.4k stars, last commit July 2022
WSL2-Hacks: 1.1k stars, mostly instructional, with a supporting script example. Last commit January 2022
At the core, all of them operate on the same principles covered in the next option ...
Option 3: Manually run Systemd in its own namespace
One of the main issues with running Systemd in earlier versions of WSL is that both WSL's own init and Systemd expect to be PID 1. To get around this, it is possible to create a new namespace or container where Systemd can run as PID 1.
To see how this is done (at a very basic level):
Run:
sudo -b unshare --pid --fork --mount-proc /lib/systemd/systemd --system-unit=basic.target
This starts Systemd in a new namespace with its own PID mapping. Inside that namespace, Systemd will be PID 1 (as it must be, to function) and own all other processes. However, the "real" PID mapping still exists outside that namespace.
Note that this is a "bare minimum" command-line for starting Systemd. It will not have support for, at least:
Windows Interop (the ability to run Windows .exe)
The Windows PATH (which isn't necessary without Windows Interop anyway)
WSLg
The scripts and projects listed above do extra work to get these things working as well.
Wait a few seconds for Systemd to start up, then:
sudo -E nsenter --all -t $(pgrep -xo systemd) runuser -P -l $USER -c "exec $SHELL"
This enters the namespace, and you can now use ps -efH to see that systemd is running as PID 1 in that namespace.
At this point, you should be able to run systemctl.
And after proving to yourself that it's possible, it is recommended that you exit all WSL instances completely and then do a wsl --shutdown. Otherwise, some things will be "broken" until you do. They can likely be "fixed", but that's beyond the scope of this answer. If you are interested, please refer to the projects listed above to see how they handle these situations.
Alternative 1: SysVInit scripts (e.g. sudo service <service_name> start) where available
In Ubuntu, Debian, and some other distributions on WSL, many of the common system services still have the "old" init.d scripts available to be used in place of systemctl with Systemd units. You can see these by using ls /etc/init.d/.
So, for example, you can start ssh with sudo service ssh start, and it will run the /etc/init.d/ssh script with the start argument.
Even some non-default packages such as MySQL/MariaDB will install both the Systemd unit files and the old init.d scripts, so you can still use the service command for them as well.
On the other hand, some packages, like Elasticsearch, only install Systemd units. And some distributions only provide Systemd units for most (if not all) packages in their repositories.
Alternative 2: Manually configuring and running the service
For those services that don't have an init-script equivalent, it may be possible to run them "manually".
For simplicity, let's assume that the ssh init.d script wasn't available.
In this case, the "answer" is to figure out what the Systemd unit files are doing and attempt to replicate that manually. This can vary widely in complexity. But I'd start by looking at the Systemd unit file that you are trying to run:
less /lib/systemd/system/ssh.service
# Trimmed
[Service]
EnvironmentFile=-/etc/default/ssh
ExecStartPre=/usr/sbin/sshd -t
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
RuntimeDirectory=sshd
RuntimeDirectoryMode=0755
Some of the less relevant lines have been trimmed to make it easier to parse, but you can man systemd.exec, man systemd.service, and others to see what most of the options do.
In this case, when you sudo systemctl start ssh, it:
Reads environment variables (the $SSHD_OPTS) from /etc/default/ssh
Tests the config, exits if there is a failure
Makes sure the RuntimeDirectory exists with the specified permissions. This translates to /run/sshd (from man systemd.exec). This also removes the runtime directory when you stop the service.
Runs /usr/sbin/sshd with options
So, if you don't have any environment-based config, you could just set up a script (see the sketch after this list) to:
Make sure the runtime directory exists. Note that, since it is in /run, which is a tmpfs mount, it will be deleted after every restart of the WSL instance.
Set the permissions to 0755
Start /usr/sbin/sshd as root
... And you would have done the same thing manually without Systemd.
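For instance, a minimal shell sketch of those steps (a hypothetical script, not the real unit; it assumes the stock Debian/Ubuntu sshd paths shown in the unit file above):
#!/bin/sh
# Minimal stand-in for "systemctl start ssh" -- a sketch, run as root.
set -e
# RuntimeDirectory: /run is tmpfs, so re-create it on every WSL start
mkdir -p /run/sshd
chmod 0755 /run/sshd
# EnvironmentFile=-/etc/default/ssh (the leading "-" means "only if it exists")
[ -f /etc/default/ssh ] && . /etc/default/ssh
# ExecStartPre: test the config, then start the daemon (sshd forks on its own)
/usr/sbin/sshd -t
/usr/sbin/sshd $SSHD_OPTS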
Again, this is probably the simplest example. You might have much more to work through for more complex tasks.
Alternative 3: Docker
Many packages/services are available as Docker images. Docker typically runs very well under Ubuntu on WSL2 (specifically WSL2; it will not run on WSL1). If there isn't a SysVinit "service" script for the service you are trying to start, there may very well be a Docker image available that runs in a containerized environment.
Example: Elasticsearch, as in this question.
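For example, something along these lines (the image tag is illustrative, so check Docker Hub for current versions; discovery.type=single-node is the standard Elasticsearch setting for a one-node dev instance):
# Hypothetical example: run Elasticsearch without Systemd, in a container
docker run -d --name elasticsearch \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  elasticsearch:7.17.9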
Bonus #1: Doesn't interfere with other packages already installed (no dependency issues).
Bonus #2: The Docker images themselves pretty much never use Systemd, so you can often inspect the Dockerfile to see how the service is started without Systemd. For more information, see Alternative 2 above ("Manually configuring and running the service").
Microsoft recommends Docker Desktop for Windows for running Docker containers under WSL2.
Footnote This answer is being posted as a Community Wiki because it can apply to multiple Stack Overflow questions. It is originally based on answers to this Ask Ubuntu question. However, it is hoped that this wiki-answer can be continuously updated by the community as Systemd evolves on WSL.
This question has been chosen since:
It appears to be the most canonical, straightforward, "How do I enable Systemd on WSL?" question.
It is on-topic, as creating Systemd services is (or at least can be) unique to programming.
What exactly does the command sudo setsebool -P nis_enabled 1 do? It seems to have fixed strange access-denied errors when running RabbitMQ on CentOS 7. All I know is that it is somehow related to SELinux (which is black magic to me, and often the reason why various programs mysteriously fail to run).
I guess NIS = Network Information Service.
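For context, setsebool toggles a named SELinux boolean in the loaded policy, and -P makes the change persistent across reboots. A few standard commands to inspect the boolean yourself (assuming the policycoreutils and setools packages are installed):
# Show the current value of the boolean
getsebool nis_enabled
# Show its description in the loaded policy
sudo semanage boolean -l | grep nis_enabled
# List the allow rules that the boolean turns on
sudo sesearch -A -b nis_enabled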
I'm new to Postgres.
I updated the Dockerfile I use and successfully installed Postgresql on it. (My image runs Ubuntu 16.04 and I'm using Postgres 9.6.)
Everything worked fine until I tried to move the database to a Volume with docker-compose (that was after making a copy of the container's folder with cp -R /var/lib/postgresql /somevolume/.)
The issue is that Postgres just keeps crashing, as witnessed by supervisord:
2017-07-26 18:55:38,346 INFO exited: postgresql (exit status 1; not expected)
2017-07-26 18:55:39,355 INFO spawned: 'postgresql' with pid 195
2017-07-26 18:55:40,430 INFO success: postgresql entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-26 18:55:40,763 INFO exited: postgresql (exit status 1; not expected)
2017-07-26 18:55:41,767 INFO spawned: 'postgresql' with pid 197
2017-07-26 18:55:42,841 INFO success: postgresql entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-26 18:55:43,179 INFO exited: postgresql (exit status 1; not expected)
(and so on…)
Logs
It's not clear to me what's happening as /var/log/postgresql remains empty.
chown?
I suspect it has to do with the user. If I compare the data folder inside the container and the copy I made of it to the volume, the only difference is that the original is owned by postgres while the copy is owned by root.
I tried running chown -R postgres:postgres on the copy. The operation was performed successfully, however postmaster.pid remains owned by root and I think that would be the issue.
Questions
How can I get more information about the cause of the crash?
How can I make it so that postmaster.pid is owned by postgres?
Should I consider running postgres as root instead?
Any hint welcome.
EDIT: links to the Dockerfile and the docker-compose.yml.
I'll answer my own question:
Logs & errors
What made matters more complicated was that I was not getting any specific error message.
To change that, I disabled the [program:postgresql] section in supervisord and, instead, started postgres manually from the command-line (thanks to Miguel Marques for setting me on the right track with his comment.)
Then I finally got some useful error messages:
2017-08-02 08:27:09.134 UTC [37] LOG: could not open temporary statistics file "/var/run/postgresql/9.6-main.pg_stat_tmp/global.tmp": No such file or directory
Fixing the configuration
I fixed the error above with the following commands, eventually adding them to my Dockerfile:
mkdir -p /var/run/postgresql/9.6-main.pg_stat_tmp
chown -R postgres:postgres /var/run/postgresql/9.6-main.pg_stat_tmp
(Kudos to this guy for the fix.)
To make the data permanent, I also had to do this, for the volume to be accessible by postgres:
mkdir -p /var/lib/postgresql/9.6/main
chmod 700 /var/lib/postgresql/9.6/main
I also used initdb to initialize the data directory. BEWARE! This will erase any data found in that folder. Like so:
rm -R /var/lib/postgresql/9.6/main/*
ls /var/lib/postgresql/9.6/main/   # verify the directory is now empty
/usr/lib/postgresql/9.6/bin/initdb -D /var/lib/postgresql/9.6/main
Testing
After the above, I could finally run postgres properly. I used this command to run it and test from the command-line:
su postgres
/usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf # as per the Docker docs
To test, I kept it running and then, from another prompt, checked everything ran fine with this:
su postgres
psql
CREATE TABLE cities ( name varchar(80), location point );
INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
select * from cities; -- repeat this command after restarting the container to check that the data does persist
…making sure to restart the container and test again to check the data did persist.
And then I finally restored the [program:postgresql] section in supervisord.conf, rebuilt the image, and restarted the container, making sure everything ran fine (checking in particular tail /var/log/supervisor/supervisord.log), which it did.
(The command I used inside of supervisord.conf is also /usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf, as per this Docker article and other postgres+supervisord examples. Other options would have been using pg_ctl or an init.d script, but it's not clear to me why/when one would use those.)
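For reference, the relevant section of supervisord.conf then looks roughly like this (a sketch; user= and autorestart= are standard supervisord options I'm assuming here, matching the respawn behaviour visible in the logs above):
[program:postgresql]
user=postgres
command=/usr/lib/postgresql/9.6/bin/postgres -D /var/lib/postgresql/9.6/main -c config_file=/etc/postgresql/9.6/main/postgresql.conf
autorestart=true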
I spent a lot of time on this. Hopefully the detailed answer will help someone down the line.
P.S.: I did end up producing a minimal example of my issue. If that can help anyone, here they are: Dockerfile, supervisord.conf and docker-compose.yml.
I do not know if this is another way to achieve the same result (I'm new to Docker and Postgres too), but have you tried the official repository image for Postgres (https://hub.docker.com/_/postgres/)?
I'm getting the data out of the container by setting the environment variable PGDATA to '/var/lib/postgresql/data/pgdata' and binding it to an external volume on the run command:
docker run --name bd_TEST --network=my_network --restart=always -e POSTGRES_USER="superuser" -e POSTGRES_PASSWORD="myawesomepass" -e PGDATA="/var/lib/postgresql/data/pgdata" -v /var/local/db_data:/var/lib/postgresql/data/pgdata -itd -p 5432:5432 postgres:9.6
When the volume is empty, all the files are created by the image's startup script, and if they already exist, the database starts using them.
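Since the question uses docker-compose, the equivalent service definition would look roughly like this (a sketch of the same run command, untested):
# Hypothetical docker-compose.yml equivalent of the docker run above
version: "2"
services:
  db:
    image: postgres:9.6
    restart: always
    environment:
      POSTGRES_USER: superuser
      POSTGRES_PASSWORD: myawesomepass
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - /var/local/db_data:/var/lib/postgresql/data/pgdata
    ports:
      - "5432:5432"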
From past experience I can see what may be a problem. I can't say if this will help, but it is worth a try.
I would have added this as a comment, but I can't because my rep isn't high enough.
I've spied a couple of problems with how you have structured the statements in your Dockerfile. You have installed various things multiple times and also updated sporadically throughout the code. In my own files, I've noticed that this can lead to somewhat random behaviour of my services and installations because of the different layers.
This may not seem to solve your problem directly, but cleaning up your file as is outlined in the best practices has solved many Dockerfile problems for me in the past.
When I find such problems, one of the first places I start is the best practices for RUN. This has helped me solve tricky problems in the past, and I hope it will solve yours, or at least make them easier to track down.
Pay special attention to this part:
After building the image, all layers are in the Docker cache. Suppose you later modify apt-get install by adding an extra package:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y curl nginx
Docker sees the initial and modified instructions as identical and reuses the cache from previous steps. As a result the apt-get update is NOT executed because the build uses the cached version. Because the apt-get update is not run, your build can potentially get an outdated version of the curl and nginx packages.
After reading this I would start by consolidating all your dependencies.
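Concretely, the pattern the best-practices guide recommends is to chain update and install in a single RUN so they always invalidate together; a sketch using the packages from the snippet above:
FROM ubuntu:14.04
# One layer: the update can never be cached separately from the install
RUN apt-get update && apt-get install -y \
    curl \
    nginx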
In my case, having the same error, I debugged it until I found out that the disk was full; I increased the disk space to solve it.
(A stupid error with an easy fix; maybe reading this here saves someone some time.)
Also linking this question for other options:
Supervisord "exit status 1 not expected" running php script
https://serverfault.com/questions/537773/supervisor-process-exits-with-exit-status-1-not-expected/1076115#1076115
I'm using CentOS 7 via AWS.
I'd like to store MongoDB data on an attached EBS instead of the default /var/lib path.
However, when I edit /etc/mongod.conf to point to a new dbpath, I'm getting a permission denied error.
Permissions are set correctly to mongod:mongod on the dir.
What gives?
TL;DR - The issue is SELinux, which affects what daemons can access. Run setenforce 0 to temporarily disable.
You're using a flavour of Linux that uses SELinux.
From Wikipedia:
SELinux can potentially control which activities a system allows each user, process and daemon, with very precise specifications. However, it is mostly used to confine daemons like database engines or web servers that have more clearly defined data access and activity rights. This limits potential harm from a confined daemon that becomes compromised. Ordinary user-processes often run in the unconfined domain, not restricted by SELinux but still restricted by the classic Linux access rights.
To fix temporarily:
sudo setenforce 0
This should disable SELinux policies and allow the service to run.
To fix permanently:
Edit /etc/sysconfig/selinux and set this:
SELINUX=disabled
Then reboot.
The service should now start up fine.
The data dir will also work with Docker, i.e. something like:
docker run --name db -v /mnt/path-to-mounted-ebs:/data/db -p 27017:27017 mongo:latest
Warning: Both solutions DISABLE the security that SELinux provides, which will weaken your overall security. A better solution is to understand how SELinux works, and create a policy on your new data dir that works with mongod. See https://wiki.centos.org/HowTos/SELinux for a more complete tutorial.
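For instance, something along these lines should label a custom data dir for mongod (a sketch; the type name mongod_var_lib_t and the mount path are assumptions to adapt to your policy and EBS mount point):
# Permanently associate the SELinux type used for MongoDB data with the
# new directory (semanage comes from policycoreutils-python on CentOS 7)
sudo semanage fcontext -a -t mongod_var_lib_t "/mnt/ebs/mongodb(/.*)?"
# Apply the new label to the existing files
sudo restorecon -R -v /mnt/ebs/mongodb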
I have a question with respect to a MongoDB master/slave setup on the same machine.
I am using Ubuntu 12 as the OS.
Do I need to have two copies of MongoDB on the same machine?
If yes, how can I install it twice separately?
(sudo apt-get install mongodb-10gen)
Since all the linked questions are for Windows and this is a Linux command, I will divert from the "Possible duplicate" comment.
Yes, you can run multiple mongods on the same machine. Instead of installing multiple times, you just start each mongod with different options, like so:
./mongod --dbpath /foo/bar/otherpath --port some_other_port
source: https://serverfault.com/questions/296246/multiple-mongos-on-one-server
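For the master/slave case specifically, that would look roughly like this (a sketch using MongoDB's old --master/--slave flags, which were current in the mongodb-10gen era but have since been removed in favour of replica sets; the paths and ports are illustrative):
# Master on the default port
mongod --dbpath /data/master --port 27017 --master
# Slave on another port, pulling from the master
mongod --dbpath /data/slave --port 27018 --slave --source localhost:27017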
But it is not recommended to do this due to resource contention, especially for memory. It will be horrid even for a development server, and if you intend to put this setup into production, you might as well just go for one mongod.
If you want to run multiple instances on the same machine, it is instead recommended to use some form of isolation, such as containers or virtual machines. There are a few options out there.