We are using Rundeck 2.8.4.1. We have a number of Rundeck jobs that generate a huge number of log files in /var/lib/rundeck/logs.
We have housekeeping and backup jobs in place to purge the older logs from the filesystem and the DB.
My concern is: can we change the /var/lib/rundeck/logs location in the Rundeck framework.properties file to a directory under /data, which has more filesystem space than /var? That is, changing
framework.logs.dir=/var/lib/rundeck/logs
To
framework.logs.dir=/data/rundeck/logs
Yes, just change the line framework.logs.dir=/var/lib/rundeck/logs to framework.logs.dir=/your/new/path (tested on 2.8.4 and 3.2.6). Here you have more information about that.
Keep in mind that this setting is for execution logs, not for Rundeck's "system" logs.
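A minimal sketch of the full change, assuming a package-based install where framework.properties lives in /etc/rundeck and the service user is rundeck (both assumptions; adjust to your install):
# create the new log location and hand it to the Rundeck service user
sudo mkdir -p /data/rundeck/logs
sudo chown -R rundeck:rundeck /data/rundeck/logs
# edit /etc/rundeck/framework.properties and set:
#   framework.logs.dir=/data/rundeck/logs
# restart Rundeck so the new setting takes effect (service name may differ)
sudo service rundeckd restart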
I accidentally deleted the volume of a Docker Mongo container (mongo-data:/data/db). I have a copy of that folder, but now when I run docker-compose up the MongoDB container doesn't start and fails with "mongo_1 exited with code 14". More details of the error and the mongo-data folder are below. Can someone help me please?
in docker-compose.yml
volumes:
- ./mongo-data:/data/db
Restore from backup files
A step-by-step process to repair the corrupted files from a failed MongoDB in a Docker container:
Before you start, make a copy of the files!
Make sure you know which version of the image was running in the container.
Spawn a new container to run the repair process as follows (a concrete example is sketched after these steps):
docker run -it -v <data folder>:/data/db <image-name>:<image-version> mongod --repair
Once the files are repaired, you can start the containers from docker-compose.
If the repair fails, it usually means that the files are corrupted beyond repair. There is still a chance to recover the data by exporting it as described here.
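For example, assuming the original image was mongo:4.2, the copied data sits in ./mongo-data and the compose service is named mongodb (all assumptions), the repair and restart could look like:
# run a one-off container against the copied data files and repair them
docker run -it --rm -v "$(pwd)/mongo-data":/data/db mongo:4.2 mongod --repair
# once the repair succeeds, bring the stack back up as usual
docker-compose up -d mongodb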
How to secure proper backup files
The database is constantly working with the files, so the files are constantly changing on disk. In addition, the database keeps some of the changes in internal memory buffers before they are flushed to the filesystem. Although database engines do a very good job of ensuring that the database can recover from an abrupt failure by using a 2-stage commit process (first update the transaction log, then the data file), when the files are copied there can be corruption that prevents the database from recovering.
The reason for such corruption is that the copy process is not aware of the database write process's progress, which creates a race condition. In very simple terms, while the database is in the middle of writing, the copy process may create a copy of a file that is only half-updated, hence corrupted.
When the database writer is in the middle of writing to the files, we call them hot files. "Hot files" is a term from the OS perspective; MongoDB also uses the term "hot backup", which is a term from the MongoDB perspective. A hot backup means that the backup was taken while the database was running.
To take a proper snapshot (ensuring the files are cold) you need to follow the procedure explained here. In short, the command db.fsyncLock() that is issued during this process tells the database engine to flush all buffers and stop writing to the files. This makes the files cold, while the database itself remains hot; hence the difference between the terms hot files and hot backup. Once the copy is done, the database is told to start writing to the filesystem again by issuing db.fsyncUnlock().
Note that the process is more complex and can change between versions of the database. I give a simplification of it here, in order to illustrate the point about the problems with file snapshots. To secure a proper and consistent backup, always follow the documented procedure for the database version that you use.
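A simplified sketch of that sequence from the mongo shell, following the flow described above (the copy step stands in for whatever snapshot/copy mechanism you use, and the /data/db path is an assumption):
db.fsyncLock()     // flush all buffers and block writes to the data files
// ... copy /data/db (or take the filesystem snapshot) from another terminal ...
db.fsyncUnlock()   // allow the database to write to the filesystem again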
Suggested backup method
The preferred backup should always be the data-dump method, since this assures that you can restore even in the case of upgraded/downgraded database engines. MongoDB provides a very useful tool called mongodump that can be used to create database backups by dumping the data instead of copying the files.
For more details on how to use the backup tools, as well as the other backup methods, read the MongoDB Backup Methods chapter of the MongoDB documentation.
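A hedged sketch of a dump-based backup and restore with mongodump/mongorestore (the host, port and /backup/mongodump output directory are assumptions):
# dump all databases from the running mongod to a directory
mongodump --host localhost --port 27017 --out /backup/mongodump
# later, restore that dump into a (possibly different) mongod instance
mongorestore --host localhost --port 27017 /backup/mongodump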
I've been given a project to extract data from a PostgreSQL database. I've no previous experience with PostgreSQL but the project I have is to bug fix existing code, so all the logic to connect to the engine and get data is already in place.
The problem I have is that the database has been given to me in the form of the folders and files straight from the source HDD, not a backup (which isn't going to happen, so "get the customer to give you a backup instead" isn't an option here).
The folders also contained the actual PostgreSQL binaries, so I looked at the version (9.4.14) and downloaded the nearest (9.4.18) from the PostgreSQL site and installed it. Now all I have to do is somehow get it to look at the data files I was given.
I tried the obvious approach of copying the contents of the data folder into the installed data folder, but after that the PostgreSQL service won't start.
I did find an option in the conf file:
#data_directory = 'ConfigDir'
I changed this to:
data_directory = 'C:\customer\data'
But again the service won't start after this.
The data directory used by the service is defined through the service command line, which overrides any setting defined in postgresql.conf.
You need to re-create the service in order to change the data directory, e.g.:
Remove the service:
pg_ctl unregister -N postgresql-9.1
postgresql-9.1 is the "real" name of the service, not the "Display Name". You can see that in the properties of the service inside the "services" app.
Then re-create the service with the correct data directory:
pg_ctl register -D c:\customer\data -N postgresql-9.1
Another way of "debugging" startup errors in Windows, is to start Postgres from the command line (not through the service) because some errors during startup are not logged in the Postgres logfile but they are displayed on the command line. You can do that with e.g.:
pg_ctl start -D c:\customer\data`
If the bin directory is not in your PATH you need to specify the full path to it on the command line, e.g.: c:\Postgres9.1\bin\pg_ctl
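Putting it together for the 9.4 install from the question, the sequence might look like the following; the service name postgresql-x64-9.4 and the bin path are assumptions, so check the real name in the Services app first:
REM remove the existing service, then register it again against the customer's data directory
"C:\Program Files\PostgreSQL\9.4\bin\pg_ctl" unregister -N postgresql-x64-9.4
"C:\Program Files\PostgreSQL\9.4\bin\pg_ctl" register -N postgresql-x64-9.4 -D "C:\customer\data"
REM or, for debugging, start it directly on the command line instead of as a service
"C:\Program Files\PostgreSQL\9.4\bin\pg_ctl" start -D "C:\customer\data"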
One of my servers has a virus, and the Postgres service on Windows is not running. I don't have a backup, I'm using Odoo 8, and even the Odoo service is not running.
Is it possible to restore a database using only an OID directory, which from what I know is the database directory of Postgres?
I assume you mean the /data/base/<oid> directory. Unfortunately, it's not enough. Some settings are stored outside the database OID directory, as you called it.
For example:
/data/global/ - cluster users' settings (passwords, roles etc.)
/data/pg_xlog/ - WAL entries, possibly with transaction changes not yet transferred to the database files.
/data/pg_tblspc/ - tablespaces
You need the whole /data directory. Read more about PHYSICAL BACKUP.
Edit:
So, if the whole /data directory is available to you, you can restore the database to another server. There's one thing you should remember: the destination Postgres cluster must be the same version, e.g. 9.4.1. When the first and second numbers match (e.g. 9.2.10 and 9.2.16) this should also work most of the time. Keeping that in mind, you just need to replace the /data/ directory on the destination server with your source /data directory (the destination server must be stopped during that operation).
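A rough sketch of that replacement on a Linux destination server, assuming a matching 9.4.x install, a Debian-style data directory under /var/lib/postgresql/9.4/main and the postgres OS user (all assumptions):
# stop the destination cluster before touching its files
sudo service postgresql stop
# move the existing data directory aside and drop in the copy from the source server
sudo mv /var/lib/postgresql/9.4/main /var/lib/postgresql/9.4/main.old
sudo cp -a /path/to/source/data /var/lib/postgresql/9.4/main
sudo chown -R postgres:postgres /var/lib/postgresql/9.4/main
# start the cluster again and watch the logs for recovery messages
sudo service postgresql start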
I am installing Postgres 8.4 on an Ubuntu Lucid server (no, at the moment we are using the "lucid" LTS version on that server, so an upgrade is not possible yet, although we are going to start testing the system on Precise quite soon).
I have set up a separate partition for the /var/lib/postgresql/8.4/main directory with an ext4 file system. (Those of you who are really into Postgres installs know what is happening now...) Since ext4 puts a lost+found directory in the root of every file system, Postgres will not use that directory as its data directory, since it is initially not empty...
initdb: directory "/var/lib/postgresql/8.4/main" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/lib/postgresql/8.4/main" or run initdb
with an argument other than "/var/lib/postgresql/8.4/main".
The easiest way to proceed would be to remove the lost+found and recreate it after initdb has done its job - could that cause any problems? Does lost+found have any special attributes or anything else that makes it impossible to recreate, and is it needed at any other time than when fsck finds something it needs to put there?
Another way would be to unmount the .../main/ file system, init the database, temporarily mount the .../main/ filesystem somewhere else, move things over there and mount it in place. That seems to be a bit more work than the "easiest way".
Or is there some way to make initdb ignore that the directory is not empty? (I couldn't see any command-line switch for that.)
Could a lost+found directory within the Postgres main directory cause any problems?
At the moment I am running the system on a virtual machine for testing, so it really doesn't matter if I mess up things, but before making this an official way of installing a mission-critical system, it would be nice to have some thoughts on this.
lost+found has preallocated blocks that make it easier for fsck to move data into it when the partition is short of free blocks. To create it, it is better to use the mklost+found command rather than mkdir.
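A short sketch of that approach, assuming the partition is mounted at /var/lib/postgresql/8.4/main and that mklost+found (from e2fsprogs) is installed:
# remove the pre-allocated lost+found so initdb sees an empty directory
sudo rm -r /var/lib/postgresql/8.4/main/lost+found
# ... run initdb / let the postgres packaging create the cluster here ...
# recreate lost+found (with its pre-allocated blocks) in the filesystem root
cd /var/lib/postgresql/8.4/main && sudo mklost+found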
If you don't recreate it, fsck will do it anyway when it's needed.
But if it comes to the point where fsck finds corruption within PGDATA, I'd think about going for a backup rather than counting on lost+found to retrieve anything.
I have looked at the postgres documentation and the synopsis below is given:
pg_resetxlog [-f] [-n] [-o oid] [-x xid] [-e xid_epoch] [-m mxid] [-O mxoff] [-l timelineid,fileid,seg] datadir
But at no point in the documentation do they explain what the datadir is.
Is it %postgres-path%/9.0/data or could it be %postgres-path%/9.0/data/pg_xlog?
Also, if I want to change my xlog directory, can I simply move the items in my current pg_xlog directory and run the command to point to another directory? (Assume my current pg_xlog directory is /data1/postgres/data/pg_xlog and the directory I want the logs to go to is /data2/pg_xlog.)
Would the following command achieve what I've just described?
mv /data1/postgres/data/pg_xlog /data2/pg_xlog
pg_resetxlog /data2
pg_resetxlog is a tool of last resort for getting your database running again after:
You deleted files you shouldn't have from pg_xlog;
You restored a file system level backup that omitted the pg_xlog directory due to a backup system configuration mistake (this happens more than you'd think, people think "it has log in the name so it must be unimportant; I'll leave it out of the backups").
File-system corruption due to a hardware fault or hard drive failure damaged your data directory; or potentially even
a PostgreSQL bug or operating system bug damaged the write-ahead logs (exceedingly rare).
As the manual says:
pg_resetxlog clears the write-ahead log (WAL) [...]. This
function is sometimes needed if these files have become corrupted. It
should be used only as a last resort, when the server will not start
due to such corruption.
Do not run pg_resetxlog unless you know exactly what you are doing and why. If you are unsure, ask on the pgsql-general mailing list or on https://dba.stackexchange.com/.
pg_resetxlog may corrupt your database, as the documentation warns. If you have to use it, you should REINDEX, dump your database(s), re-initdb, and reload your databases. Do not just continue using the damaged cluster. As per the documentation:
After running this command, it should be possible to start the server,
but bear in mind that the database might contain inconsistent data due
to partially-committed transactions. You should immediately dump your
data, run initdb, and reload. After reload, check for inconsistencies
and repair as needed.
If you simply want to move your write-ahead log directory to another location, you should:
Stop PostgreSQL
Move pg_xlog
Add a symbolic link from the old location to the new location
Start PostgreSQL
Or, as the documentation says:
It is advantageous if the log is located on a different disk from the
main database files. This can be achieved by moving the pg_xlog
directory to another location (while the server is shut down, of
course) and creating a symbolic link from the original location in the
main data directory to the new location.
If PostgreSQL fails to start, you've done something wrong. Do not use pg_resetxlog to "fix" it. Undo your changes and work out what you did wrong.
Move the contents of your pg_xlog directory to the desired location, e.g. /home/foo/pg_xlog (with the server stopped):
mv pg_xlog/* /home/foo/pg_xlog
Delete the pg_xlog directory
rm -rf pg_xlog
Create a soft-link of pg_xlog
ln -s /home/foo/pg_xlog pg_xlog
Verify the link
ls -lrt pg_xlog
Note: pg_resetxlog is not the right tool for moving pg_xlog; please read
http://www.postgresql.org/docs/9.2/static/app-pgresetxlog.html
The data directory corresponds to the data_directory entry in the postgresql.conf file, or the PGDATA environment variable, and it can also be queried live in SQL with the SHOW data_directory statement. It does not point to the pg_xlog directory, but one level above.
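For example, from a shell (connection options are assumptions):
# ask the running server where its data directory actually is
psql -U postgres -c "SHOW data_directory;"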
To change the location of the WAL files, the PG server must be shut down, the pg_xlog directory and its contents moved to the new location, a symbolic link created from the old location to the new location, and the server restarted. pg_resetxlog should not be used for this, as it may discard the latest transactions (this tool is typically used in crash-recovery situations when all else fails).
You should never manually touch the WAL files, that is perfectly clear.
If there are dangling files in the pg_xlog directory, that is, files with a corresponding *.done entry in the archive_status sub-folder that need to be cleaned up manually, that can be accomplished with the SQL command
CHECKPOINT;
which forces a transaction checkpoint, and that includes cleaning up old WAL segment files.
See the documentation for 9.3; the command exists in all current versions of PostgreSQL.