restore complete filesystem to default security context - centos

I'm an SELinux newbie and had to change the security context of a Mercurial repo and config file on a CentOS box to get it served by httpd.
I accidentally issued "chcon -Rv --type=httpd_sys_script_exec_t /", which I could only stop after masses of files and directories had already been modified.
I read about restorecon to restore something to its default context, but it doesn't work for me; I get "permission denied".
What can I do to restore the whole filesystem to its selinux defaults?

You could try doing a fixfiles relabel to get things back in order. Otherwise, you could edit /etc/selinux/config and set the system to no longer enforce SELinux. Good luck!
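A minimal sketch of both suggestions, assuming a standard CentOS setup (run as root; a full relabel can take a while):
# relabel the whole filesystem from the default file-context database
fixfiles relabel
# or, as a last resort, stop enforcing SELinux for the current session
setenforce 0
# and make it permanent by setting SELINUX=permissive in /etc/selinux/config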

You could do any of the following to fix this.
fixfiles
create a file /.autorelabel and reboot the system.
restorecon -f file
Usually the conf file will be /etc/selinux/targeted/contexts/files/file_contexts
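A minimal sketch of the second and third options (run as root; the paths given to restorecon, and the list file name, are only examples):
# option 2: schedule a full relabel on the next boot
touch /.autorelabel
reboot
# option 3: restore default contexts for specific paths right away
restorecon -Rv /var/www /etc
# or feed restorecon a list of files, one path per line
restorecon -f /tmp/paths-to-fix.txt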


PostgreSQL : .pcppass file not found, tried other links

I am working on installing PgPoolAdmin on my local Ubuntu system before installing it on a server later. Currently, I am able to log in, but I keep getting the error "Could not read .pcppass file. File not found." I have tried this and many other resources, but no luck. Where is it looking for this file?
The username and password in pcp.conf are the same as here, except that the password is in plain text in .pcppass and md5 in pcp.conf. Is that correct?
I have pcp.conf in two locations: /var/www/html and /var/www/html/admin-tool/
Its contents:
#insert:hostname:port:username:password
*:*:akshay:PASSWORD
*:*:postgres:PASSWORD
Thank you.
.pcppass needs to be accessible by the user that runs your web server. For example, if you are serving pgpoolAdmin through apache2 with default paths and users, the following should solve the issue:
cp ~/.pcppass /var/www/.pcppass
chown www-data:www-data /var/www/.pcppass
chmod 600 /var/www/.pcppass
By default, the .pcppass file should be located in the user's $HOME directory. If you have created it elsewhere, set the $PCPPASSFILE environment variable to that file path. Make sure the file is in this format: hostname:port:username:password. Then you should be able to access the database.
Note: wildcards in the password file can sometimes cause errors; it is better to use exact host/port values, which is also better for security.
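A minimal sketch of a .pcppass entry with exact values, assuming PCP listens on localhost:9898 and the user is akshay (both are assumptions to adjust):
# ~/.pcppass (or the file named in $PCPPASSFILE), format hostname:port:username:password
localhost:9898:akshay:plainTextPassword
# if the file lives outside $HOME, point the tools at it
export PCPPASSFILE=/var/www/.pcppass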

make server backup, and keep owner with rsync

I recently configured a little server to test some services. Now, before upgrading or installing new software, I want to make an exact copy of my files, with owners, groups and permissions, and also the symlinks.
I tried rsync to keep the owner and group, but on the machine that receives the copy I lose them.
rsync -azp -H /directorySource/ myUser@192.168.0.30:/home/myUser/myBackupDirectory
My intention is to do this with the / folder, to keep all my configuration just in case; I have 3 services that have their own users and may make modifications in folders outside their home directories.
In the destination folder the files appear owned by my destination user. Whether I run the copy from the server or from the destination, it doesn't keep the users and groups! I created the same user, tried with sudo, and a friend even tried with a 777 folder :)
cp theoretically does the same job but doesn't work over ssh; anyway, I tried it on the server and got many errors. As I recall, tar also keeps permissions and owners, but it gave errors because the server is running, and restoring wouldn't be a fast process. I also remember the magic dd command, but I made a big partition. rsync looked like the best option for this, and for keeping the backup synchronized. I read that newer rsync versions handle owners well, but I already have the package upgraded.
Does anybody have an idea how to do this, or what the normal process is to keep my own server properly backed up, so I can restore it just by recreating the partition?
The services are Taiga (a project management platform), a git repository, a code reviewer, and so on; all are working well with nginx on Ubuntu Server. I haven't looked at other backup methods because I thought rsync with a cron job would do the job.
Your command would be fine, but you need to run as root user on the remote end (only root has permission to set file owners):
rsync -az -H /directorySource/ root@192.168.0.30:/home/myUser/myBackupDirectory
You also need to ensure that you use rsync's -o option to preserve owners, and -g to preserve groups, but as these are implied by -a your command is OK. I removed -p because that's also implied by -a.
You'll also need root access, on the local end, to do the reverse transfer (if you want to restore your files).
If that doesn't work for you (no root access), then you might consider doing this using tar. A proper archive is probably the correct tool for the job, and will contain all the correct user data. Again, root access will be needed to write that back to the file-system.
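As a rough sketch of the tar alternative (the archive name and paths are just examples; run it as root so owners are recorded and restored correctly):
# create an archive that keeps permissions, owners and symlinks
sudo tar -cpzf /tmp/backup.tar.gz --numeric-owner -C /directorySource .
# copy it to the backup machine
scp /tmp/backup.tar.gz myUser@192.168.0.30:/home/myUser/myBackupDirectory/
# later, restore it (again as root so the owners can be set)
sudo tar -xpzf backup.tar.gz --numeric-owner -C /restoreTarget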

postgresql initdb - directory not empty

I am installing Postgres 8.4 on an Ubuntu Lucid server (no, at the moment we are using the "lucid" LTS version on that server, so an upgrade is not possible yet, although we are going to start testing the system on Precise quite soon).
I have set up a separate partition with an ext4 filesystem for the /var/lib/postgresql/8.4/main directory. (Those of you who are really into Postgres installs know what is happening now...) Since ext4 puts a lost+found directory in the root of every filesystem, Postgres will not use that directory as its data directory, since it is initially not empty...
initdb: directory "/var/lib/postgresql/8.4/main" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/lib/postgresql/8.4/main" or run initdb
with an argument other than "/var/lib/postgresql/8.4/main".
The easiest way to proceed would be to remove the lost+found directory and recreate it after initdb has done its job. Could that cause any problems? Does lost+found have any special attributes or anything that makes it impossible to recreate, and is it needed at any time other than when fsck finds something it needs to put there?
Another way would be to unmount the .../main/ filesystem, init the database, temporarily mount the .../main/ filesystem somewhere else, move things over there, and then mount it back in place. That seems to be a bit more work than the "easiest way".
Or is there some way to make initdb ignore that the directory is not empty? (I couldn't see any command line switch for that.)
May a lost+found directory within postgres main directory cause any problems?
At the moment I am running the system on a virtual machine for testing, so it really doesn't matter if I mess up things, but before making this an official way of installing a mission-critical system, it would be nice to have some thoughts on this.
lost+found has preallocated blocks that make it easier for fsck to move data into it when the partition is short of free blocks. To recreate it, it is better to use the mklost+found command rather than mkdir.
If you don't recreate it, fsck will do it anyway when it's needed.
But if it comes to the point where fsck finds corruption within PGDATA, I'd think about going for a backup rather than counting on lost+found to retrieve anything.
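A minimal sketch of the "easiest way", assuming the stock Ubuntu paths for PostgreSQL 8.4 (verify them on your system before running anything):
# remove the empty lost+found so initdb sees an empty directory
sudo rmdir /var/lib/postgresql/8.4/main/lost+found
# make sure the mount point belongs to postgres, then initialise the cluster
sudo chown postgres:postgres /var/lib/postgresql/8.4/main
sudo -u postgres /usr/lib/postgresql/8.4/bin/initdb -D /var/lib/postgresql/8.4/main
# recreate lost+found with its preallocated blocks (mklost+found works in the current directory)
sudo sh -c 'cd /var/lib/postgresql/8.4/main && mklost+found'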

Moving MongoDB's data folder?

I have 2 computers in different places (so it's impossible to use the same wifi network).
One contains about 50 GB of data (MongoDB files) that I want to move to the second one, which has much more computational power for analysis. But how can I make MongoDB on the second machine recognize that folder?
When you start the mongod process you provide the argument --dbpath /directory, which is how it knows where the data folder is.
All you need to do is:
Stop the mongod process on the old computer and wait until it exits.
Copy the entire /data/db directory to the new computer.
Start the mongod process on the new computer, giving it the --dbpath /newdirectory argument.
The mongod on the new machine will use the folder you indicate with --dbpath. There is no need to "recognize" anything, as there is nothing machine-specific in that folder; it's just data.
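A rough sketch of those steps on Linux, assuming the default /data/db path, a systemd-managed mongod, and example host/target names:
# on the old computer: stop mongod and wait until it has exited
sudo systemctl stop mongod
# copy the whole data directory to the new computer
rsync -a /data/db/ newhost:/newdirectory/
# on the new computer: start mongod against the copied directory
mongod --dbpath /newdirectory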
I did this myself recently, and I wanted to provide some extra considerations to be aware of, in case readers (like me) run into issues.
The following information is specific to *nix systems, but it may be applicable with very heavy modification to Windows.
If the source data is in a mongo server that you can still run (preferred)
Look into and make use of mongodump and mongorestore. That is probably safer, and it's the official way to migrate your database.
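For example, a minimal dump-and-restore sketch (host, port, and the /backup/dump path are assumptions to adjust):
# on the old machine: dump all databases to BSON files
mongodump --host localhost --port 27017 --out /backup/dump
# move /backup/dump to the new machine (scp, external drive, ...), then on the new machine:
mongorestore --host localhost --port 27017 /backup/dump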
If you never made a dump and can't anymore
Yes, the data directory can be directly copied; however, you also need to make sure that the mongodb user has complete access to the directory after you copy it.
My steps are as follows. On the machine you want to transfer an old database to:
Edit /etc/mongod.conf and change the dbPath field to the desired location.
Use the following script as a reference, or tailor it and run it on your system, at your own risk.
I do not guarantee this works on every system, so please verify it manually.
I also cannot guarantee it works perfectly in every case.
WARNING: this will delete everything in the target data directory you specify.
I can say, however, that it worked on my system, and that it passes shellcheck.
The important part is simply copying over the old database directory, and giving mongodb access to it through chown.
#!/bin/bash
TARGET_DATA_DIRECTORY=/path/to/target/data/directory # modify this
SOURCE_DATA_DIRECTORY=/path/to/old/data/directory # modify this too
echo shutting down mongod...
sudo systemctl stop mongod
if test "$TARGET_DATA_DIRECTORY"; then
    echo removing existing data directory...
    sudo rm -rf "$TARGET_DATA_DIRECTORY"
fi
echo copying backed up data directory...
sudo cp -r "$SOURCE_DATA_DIRECTORY" "$TARGET_DATA_DIRECTORY"
sudo chown -R mongodb "$TARGET_DATA_DIRECTORY"
echo starting mongod back up...
sudo systemctl start mongod
sudo systemctl status mongod # for verification
On Windows it's quite easy: just move the data folder to the target location,
then run in cmd:
"C:\your\mongodb\bin-path\mongod.exe" --dbpath="c:\what\ever\path\data\db"
On Windows, in case you just need to configure a new path for the data, all you need to do is create a new folder, for example D:\dev\mongoDb-data, open C:\Program Files\MongoDB\Server\6.0\bin\mongod.cfg, and change the path there:
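The relevant part of mongod.cfg would then look something like this (the path being whatever folder you created):
storage:
  dbPath: D:\dev\mongoDb-data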
Then restart your PC and check the folder - it should contain new files/folders with data.
Maybe what you didn't do was export or dump the database.
Databases aren't portable, therefore they must be exported or created as a dump file.
Here is another question where the answer is explained further.

Stop Oracle from generating sqlnet.log file

I'm using DBD::Oracle in perl, and whenever a connection fails, the client generates a sqlnet.log file with error details.
The thing is, I already have the error trapped by perl, and in my own log file. I really don't need this extra information.
So, is there a flag or environment variable to stop the creation of sqlnet.log?
As the Oracle Documentation states: To ensure that all errors are recorded, logging cannot be disabled on clients or Names Servers.
You can follow DCookie's suggestion and use /dev/null as the log directory. You can use NUL: on Windows machines.
From Metalink:
The logging is automatic; there is no way to turn logging off, but since you are on a Unix server, you can redirect the log file to a null device, thus eliminating the problem of disk space consumption.
In the SQLNET.ORA file, set LOG_DIRECTORY_CLIENT and LOG_DIRECTORY_SERVER equal to a null device.
For example:
LOG_DIRECTORY_CLIENT = /dev/null
LOG_FILE_CLIENT = /dev/null
in SQLNET.ORA suppresses client logging completely.
To disable the listener from logging, set this parameter in the LISTENER.ORA file:
logging_listener = off
Are your clients on Windows or *nix? If on *nix, you can set LOG_DIRECTORY_CLIENT=/dev/null in your sqlnet.ora file. I'm not sure you can do much for a Windows client.
EDIT: It doesn't look like it's possible on Windows. The best you could do would be to set the sqlnet.ora parameter above to a fixed location and create a scheduled task to delete the file as desired.
Okay, as Thomas points out, there is a null device on Windows; use the same paradigm.
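On a Windows client the equivalent sqlnet.ora entry would be a sketch like this, using the NUL: device mentioned above:
LOG_DIRECTORY_CLIENT = NUL: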
IMPORTANT: DO NOT SET "LOG_FILE_CLIENT=/dev/null". This causes the permissions of /dev/null to be reset each time the Oracle library is initialized, and if your umask does not permit the world readable/writable bits, those get removed from /dev/null, provided you have permission to chmod that file, i.e. you are running as root.
And running as root may happen in something trivial, like running php --version with the OCI PHP extension present!
full details here:
http://lists.pld-linux.org/mailman/pipermail/pld-devel-en/2014-May/023931.html
Instead, you should use a path inside a directory that doesn't exist:
LOG_FILE_CLIENT = /dev/impossible/path
and hope nobody ever creates the directory /dev/impossible :)
For Windows, NUL is probably fine, as it's not an actual file there...