How to create multiple znodes at once? - apache-zookeeper

Suppose we have a znode named 'my-node'.
I want to create this structure: /my-node/node-level-1/node-level-2.
The create command will throw an error because the intermediate node node-level-1 is missing.
How can I create all missing nodes automatically with one command, the way mkdir -p does for directories?
The documentation doesn't say anything about this. Is it impossible?
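As far as I know the plain CLI create is not recursive, but common client libraries ship mkdir -p-style helpers: Kazoo's ensure_path() in Python and Curator's creatingParentsIfNeeded() in Java. The underlying logic is just creating each ancestor in turn, ignoring "node exists" errors; a minimal sketch (the create callback here is a stand-in for a real ZooKeeper client call such as zk.create):

```python
def ancestor_paths(path):
    """Yield every prefix of a znode path, shallowest first:
    /a/b/c -> /a, /a/b, /a/b/c."""
    parts = [p for p in path.split("/") if p]
    for i in range(1, len(parts) + 1):
        yield "/" + "/".join(parts[:i])

def ensure_path(path, create):
    # `create` stands in for a real client call; a real implementation
    # would catch and ignore the "node already exists" error.
    for p in ancestor_paths(path):
        create(p)

created = []
ensure_path("/my-node/node-level-1/node-level-2", created.append)
print(created)
# -> ['/my-node', '/my-node/node-level-1', '/my-node/node-level-1/node-level-2']
```

With Kazoo the equivalent one-liner is zk.ensure_path("/my-node/node-level-1/node-level-2").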

Related

Can't restore OpenLDAP to new server

I am trying to back up the config and database files from an existing LDAP server to move it to a different freshly installed server (Ubuntu 18.04). I followed the steps given here: https://tylersguides.com/articles/backup-restore-openldap/ to use slapcat to create both config and data ldif files.
When I execute slapadd on the server side,
slapadd -n 0 -F /etc/openldap/slapd.d -l /backups/config.ldif works fine, but executing
slapadd -n 1 -F /etc/openldap/slapd.d -l /backups/data.ldif gives the following error:
Database number selected via -n is out of range
Must be in the range 0 to 0 (the number of configured databases)
All the sites I have been able to find regarding this migration process follow steps similar to the ones above, but none of them mention anything about preconfiguring the number of databases or anything like that. I'm not sure how to proceed from here.
I'm almost sure you made a mistake during the dump and the content of the second file is the same as the config file.
Try removing -F /etc/openldap/slapd.d from the second slapadd.
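One quick way to check that suspicion before re-running slapadd is to compare the two dump files byte for byte; a small sketch (the /backups paths are the ones from the question):

```python
import hashlib

def file_digest(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# If the digests match, data.ldif is just another copy of the config
# dump and the slapcat step for database 1 needs to be redone.
# same = file_digest("/backups/config.ldif") == file_digest("/backups/data.ldif")
```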

docker-compose mongo needs read permissions for all

The mongo image has this nice mount option to initialize a database: you can mount ./initdb.d/ and any *.js files in it will be executed.
However, my project files and directories are 600 and 700 respectively, owned by redsandro:redsandro. The mongo image cannot read them.
It doesn't matter if I add group read + execute (dir) (i.e. 640 and 750). Only when I add read permissions for all on the file, and read + execute permissions for all on the directory (i.e. 644 and 755, let's call that "plan C"), will the mongo image execute the script.
Is there a way I can keep my files private on my machine (e.g. no permissions for all AKA umask 007) and still have the mongo image read them?
Update: I'm looking for a way to do this with docker-compose options, variables, environment etc. Not with building a custom image. E.g. some images that have similar problems allow me to simply set user: 1000 (uid for local user). This doesn't work with the mongo image.
You can copy the files instead of mounting the directory. E.g. your Dockerfile can be something like this:
FROM mongo
COPY --chown=mongodb:mongodb /host/path/to/scripts/* /docker-entrypoint-initdb.d/
This means that each time you change the scripts you will need to rebuild the image, not just re-create the container. It's a tiny layer on top of the base image, so the build shouldn't take much time.
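For completeness, a hypothetical docker-compose.yml that builds and runs the thin image above (service name and port are illustrative; the build context must contain the Dockerfile and the scripts):

```yaml
# Illustrative compose file; `build: .` replaces the stock `image: mongo`
services:
  mongo:
    build: .
    ports:
      - "27017:27017"
```

With this, docker-compose up --build re-copies the scripts whenever they change.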

Postgres equivalent to Oracle's "DIRECTORY" objects

Is it possible to create a "DIRECTORY" object in Postgres?
If not, can someone help me with a solution for how to implement it in PostgreSQL?
Not the best option, but you could use:
COPY (select 1) TO PROGRAM 'mkdir --mode=777 -p /path/to/your/directory/'
Note that only the last part of the directory path gets the permissions set in mode.
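That last point can be demonstrated outside Postgres (GNU coreutils assumed): with mkdir --mode=777 -p, only the leaf directory gets the requested mode, while intermediate directories are created according to the umask.

```shell
# Show that --mode only applies to the leaf directory
umask 022
base=$(mktemp -d)
mkdir --mode=777 -p "$base/a/b/c"
stat -c '%a' "$base/a"      # intermediate directory: 755 (from umask)
stat -c '%a' "$base/a/b/c"  # leaf directory: 777 (from --mode)
```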
There is no equivalent concept to an "Oracle directory" in Postgres.
The alternatives depend on why the "Oracle directory" is needed.
If the directory is needed to read and write files on the database server, then this can be done through Generic File Access Functions. Access to those functions is restricted to superusers (details in the linked section of the manual). If regular users should be able to use them, the best thing would be to create wrapper functions and then grant execute on those functions to the users in question.
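As a sketch, such a wrapper could look like this (function and role names are illustrative; pg_read_file() is one of the generic file access functions, and SECURITY DEFINER makes the function run with its owner's privileges rather than the caller's):

```sql
-- Hypothetical superuser-owned wrapper around pg_read_file()
CREATE FUNCTION read_server_file(path text)
RETURNS text
LANGUAGE sql
SECURITY DEFINER
AS $$ SELECT pg_read_file(path) $$;

-- Lock it down, then grant it to the role that needs it
REVOKE ALL ON FUNCTION read_server_file(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION read_server_file(text) TO some_app_role;
```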
For security reasons, only directories inside the database cluster can be accessed.
But it's possible to create symlinks inside the data directory that point to directories outside it. Access privileges on those directories need to be properly set up for the postgres operating system user (the one under which the postgres process is started).
If the directory is needed to access e.g. CSV files through Oracle's external tables, then there is no need for a "directory". The file_fdw foreign data wrapper can access files outside the data directory (provided access privileges have been set up correctly on the file system level).
The question doesn't even make sense really. PostgreSQL is a database management system. It doesn't have files and directories.
The closest parallel I can think of is schemas - see CREATE SCHEMA.
Now, if you want to use COPY to write output to the server's disk and want to create a directory to put that output in... then no, there's nothing like that. But you can use PL/Perlu or PL/Pythonu to do it easily enough.

Making PostgreSQL create directory for new tablespace

I would like to be able to do something like
CREATE TABLESPACE bob LOCATION 'C:\a\b\c\d\e\f\bob'
without needing to create the whole directory tree beforehand.
This is because I have Java code that creates tablespaces on the fly, and I would like to be able to run it on a separate machine (so it couldn't mkdir() or anything).
Is there any sort of Postgres configuration that would make Postgres create the appropriate directory tree by itself?
You could try doing the mkdir directly in a Postgres stored procedure using PL/sh or any of your favorite PL/* languages that are available for PostgreSQL.
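For example, a minimal sketch using the untrusted plpython3u language (assuming it is installed on the server; untrusted languages and functions in them can only be created by superusers, and the directory is created as the postgres OS user):

```sql
-- Hypothetical mkdir -p equivalent callable before CREATE TABLESPACE
CREATE FUNCTION mkdir_p(path text) RETURNS void
LANGUAGE plpython3u
AS $$
import os
os.makedirs(path, exist_ok=True)  # creates the whole tree, no error if present
$$;
```

The Java code could then call SELECT mkdir_p('C:\a\b\c\d\e\f\bob') before issuing CREATE TABLESPACE.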

Moving MongoDB's data folder?

I have 2 computers in different places (so it's impossible to use the same wifi network).
One contains about 50 GB of data (MongoDB files) that I want to move to the second one, which has much more computation power, for analysis. But how can I make MongoDB on the second machine recognize that folder?
When you start the mongod process you provide an argument to it, --dbpath /directory, which is how it knows where the data folder is.
All you need to do is:
stop the mongod process on the old computer. wait till it exits.
copy the entire /data/db directory to the new computer
start mongod process on the new computer giving it --dbpath /newdirectory argument.
The mongod on the new machine will use the folder you indicate with --dbpath. There is no need to "recognize" anything, as there is nothing machine-specific in that folder; it's just data.
I did this myself recently, and I wanted to provide some extra considerations to be aware of, in case readers (like me) run into issues.
The following information is specific to *nix systems, but it may be applicable with very heavy modification to Windows.
If the source data is in a mongo server that you can still run (preferred)
Look into and make use of mongodump and mongorestore. That is probably safer, and it's the official way to migrate your database.
If you never made a dump and can't anymore
Yes, the data directory can be directly copied; however, you also need to make sure that the mongodb user has complete access to the directory after you copy it.
My steps are as follows. On the machine you want to transfer an old database to:
Edit /etc/mongod.conf and change the dbPath field to the desired location.
Use the following script as a reference, or tailor it and run it on your system, at your own risk.
I do not guarantee this works on every system --> please verify it manually.
I also cannot guarantee it works perfectly in every case.
WARNING: will delete everything in the target data directory you specify.
I can say, however, that it worked on my system, and that it passes shellcheck.
The important part is simply copying over the old database directory, and giving mongodb access to it through chown.
#!/bin/bash
TARGET_DATA_DIRECTORY=/path/to/target/data/directory # modify this
SOURCE_DATA_DIRECTORY=/path/to/old/data/directory # modify this too
echo shutting down mongod...
sudo systemctl stop mongod
if test "$TARGET_DATA_DIRECTORY"; then
    echo removing existing data directory...
    sudo rm -rf "$TARGET_DATA_DIRECTORY"
fi
echo copying backed up data directory...
sudo cp -r "$SOURCE_DATA_DIRECTORY" "$TARGET_DATA_DIRECTORY"
sudo chown -R mongodb "$TARGET_DATA_DIRECTORY"
echo starting mongod back up...
sudo systemctl start mongod
sudo systemctl status mongod # for verification
Quite easy for Windows: just move the data folder to the target location, then run in cmd:
"C:\your\mongodb\bin-path\mongod.exe" --dbpath="c:\what\ever\path\data\db"
In the case of Windows, if you just need to configure a new path for the data, all you need to do is create a new folder, for example D:\dev\mongoDb-data, open C:\Program Files\MongoDB\Server\6.0\bin\mongod.cfg, and change the dbPath there.
Then restart your PC. Check the folder: it should contain new files/folders with data.
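For reference, the relevant part of mongod.cfg would look something like this (standard YAML config; the path is the example folder from above):

```yaml
# mongod.cfg — point storage at the new data folder
storage:
  dbPath: D:\dev\mongoDb-data
```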
Maybe what you didn't do was export or dump the database.
Databases aren't portable and therefore must be exported or created as a dump file.
Here is another question where the answer is explained further.