I want to set up Liquibase (using Docker) for a PostgreSQL database running locally (not in a container). I followed multiple tutorials, including the one on Docker Hub.
As suggested, I've created a liquibase.docker.properties file in my <PATH TO CHANGELOG DIR>:
classpath: /liquibase/changelog
url: jdbc:postgresql://localhost:5432/mydb?currentSchema=public
changeLogFile: changelog.xml
username: myuser
password: mypass
to be able to run docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties <COMMAND>.
When I run [...] generateChangeLog, I get the following output (with option --logLevel info):
[2021-04-27 06:08:20] INFO [liquibase.integration] No Liquibase Pro license key supplied. Please set liquibaseProLicenseKey on command line or in liquibase.properties to use Liquibase Pro features.
Liquibase Community 4.3.3 by Datical
####################################################
##   _     _             _ _                      ##
##  | |   (_)           (_) |                     ##
##  | |    _  __ _ _   _ _| |__   __ _ ___  ___   ##
##  | |   | |/ _` | | | | | '_ \ / _` / __|/ _ \  ##
##  | |___| | (_| | |_| | | |_) | (_| \__ \  __/  ##
##  \_____/_|\__, |\__,_|_|_.__/ \__,_|___/\___|  ##
##              | |                               ##
##              |_|                               ##
##                                                ##
##  Get documentation at docs.liquibase.com       ##
##  Get certified courses at learn.liquibase.com  ##
##  Free schema change activity reports at        ##
##  https://hub.liquibase.com                     ##
##                                                ##
####################################################
Starting Liquibase at 06:08:20 (version 4.3.3 #52 built at 2021-04-12 17:08+0000)
BEST PRACTICE: The changelog generated by diffChangeLog/generateChangeLog should be inspected for correctness and completeness before being deployed.
[2021-04-27 06:08:22] INFO [liquibase.diff] changeSets count: 1
[2021-04-27 06:08:22] INFO [liquibase.diff] changelog.xml does not exist, creating and adding 1 changesets.
Liquibase command 'generateChangeLog' was executed successfully.
It looks like the command ran "successfully", but I could not find the file changelog.xml in the local directory I mounted, i.e. <PATH TO CHANGELOG DIR>. The mount itself must be working, since the container successfully reads liquibase.docker.properties and connects to the database.
First I thought I might have to tell Docker that it is allowed to write to my disk, but writing should already be supported, according to the description on Docker Hub:
The /liquibase/changelog volume can also be used for commands that write output, such as generateChangeLog
What am I missing? Thanks in advance for any help!
Additional information
Output of docker inspect:
"Mounts": [
{
"Type": "bind",
"Source": "<PATH TO CHANGELOG DIR>",
"Destination": "/liquibase/changelog",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
...
],
When you run generateChangeLog, the path to the file should be specified as /liquibase/changelog/changelog.xml, even though for update it needs to be changelog.xml.
Example:
docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties --changeLogFile=/liquibase/changelog/changelog.xml generateChangeLog
For generateChangeLog, the changeLogFile argument is the specific path to the file to output vs. a path relative to the classpath setting that update and other commands use.
When you include a command line argument as well as a defaultsFile like above, the command line argument wins. That lets you leverage the same default settings while overriding specific ones when particular commands need more or different values.
Details
There is a distinction between operations that are creating files and ones that are reading existing files.
With Liquibase, you almost always want to use paths that are relative to directories in the classpath, like the examples do. The specified changeLogFile gets stored in the tracking system, so if you ever run the same changelog but reference it in a different way (because you moved the root directory or are running from a different machine), Liquibase will see it as a new file and attempt to re-run already-run changesets.
That is why the documentation has classpath: /liquibase/changelog and changeLogFile: com/example/changelog.xml. The update operation looks in the /liquibase/changelog dir for a file called com/example/changelog.xml, finds it, and stores the path as com/example/changelog.xml.
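For illustration, the layout the docs' example assumes would look like this on the host (a sketch; com/example is just the documentation's example path):

<PATH TO CHANGELOG DIR>/              <- mounted at /liquibase/changelog, the classpath root
    com/example/changelog.xml         <- referenced as changeLogFile: com/example/changelog.xml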
GenerateChangeLog is one of those "not always relative to the classpath" cases because it needs to know where to store the file. If you just specify the output changeLogFile as changelog.xml, it creates that file relative to your process's working directory, which is not what you need or expect.
TL;DR
Prefix the changelog filename with /liquibase/changelog/ and pass it as a command line argument:
[...] --changeLogFile /liquibase/changelog/changelog.xml generateChangeLog
See Nathan's answer for details.
Explanation
I launched the container with -it and overrode the entrypoint to get an interactive shell within the container (see this post):
docker run --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog -it --entrypoint /bin/bash liquibase/liquibase -s
Executing ls yields the following:
liquibase@ubuntu-rafael:/liquibase$ ls
ABOUT.txt UNINSTALL.txt docker-entrypoint.sh liquibase
GETTING_STARTED.txt changelog examples liquibase.bat
LICENSE.txt changelog.txt lib liquibase.docker.properties
README.txt classpath licenses liquibase.jar
Notable here are docker-entrypoint.sh, which actually executes the liquibase command, and the folder changelog, which is mounted to my local <PATH TO CHANGELOG DIR> (my .properties file is in there).
Now I ran the same command as before but now inside the container:
sh docker-entrypoint.sh --defaultsFile=/liquibase/changelog/liquibase.docker.properties --logLevel info generateChangeLog
I got the same output as above, but guess what shows up when running ls again:
ABOUT.txt changelog examples liquibase.docker.properties
GETTING_STARTED.txt changelog.txt lib liquibase.jar
LICENSE.txt changelog.xml ...
The changelog actually exists! But it was created in the wrong directory...
If you prefix the changelog filename with /liquibase/changelog/, the container is able to write it to your local (mounted) disk.
P.S. This means that the "Complete Example" using "a properties file" described here does not work. I will open an issue for that.
UPDATE
Specifying the absolute path is only necessary for commands that write a new file, e.g. generateChangeLog (see Nathan's answer). But it is better practice to pass the absolute path via the command line, so that you can keep the shared settings in the defaults file.
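For example, the two kinds of invocation would then look like this (a sketch reusing the flags and paths from above):

# read-only command: changeLogFile comes from the defaults file, relative to the classpath
docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties update

# writing command: override changeLogFile with the absolute path inside the container
docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties --changeLogFile=/liquibase/changelog/changelog.xml generateChangeLog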
Related
In my loop, I run a dbt command and save the output to a .yml file. The following command works and generates a schema in my .yml file accurately:
for file in models/l30_mart/*.sql; do
table=$(basename "$file" .sql)
dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > test.yml
done
However, in the example above, I am saving the test.yml file in the root directory. When I try to save the file in another path for example models/l30_mart/test.yml like this, it doesn't work:
for file in models/l30_mart/*.sql; do
table=$(basename "$file" .sql)
dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > models/l30_mart/test.yml
done
In this case, when I open the test.yml file, I see this:
12:06:42 Running with dbt=1.0.1
12:06:43 Encountered an error:
Compilation Error
The schema file at models/l30_mart/test.yml is invalid because no version is specified. Please consult the documentation for more information on schema.yml syntax:
https://docs.getdbt.com/docs/schemayml-files
What am I missing out on?
If I try something like this to save different files with the extracted tablename variable as the filename, it also doesn't work:
for file in models/l30_mart/*.sql; do
table=$(basename "$file" .sql)
dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > models/l30_mart/$table.yml
done
In this case, the files either have this output:
20:39:44 Running with dbt=1.0.1
20:39:45 Encountered an error:
Compilation Error
The schema file at models/l30_mart/firsttable.yml is invalid because no version is specified. Please consult the documentation for more information on schema.yml syntax:
https://docs.getdbt.com/docs/schemayml-files
or this (e.g. in the secondtablename.yml file):
20:39:48 Running with dbt=1.0.1
20:39:49 Encountered an error:
Parsing Error
Error reading dbt_4flow: l30_mart/firstablename.yml - Runtime Error
Syntax error near line 2
------------------------------
1 | 20:39:44 Running with dbt=1.0.1
2 | 20:39:45 Encountered an error:
3 | Compilation Error
4 | The schema file at models/l30_mart/firsttablename.yml is invalid because no version is specified. Please consult the documentation for more information on schema.yml syntax:
5 |
Raw Error:
------------------------------
mapping values are not allowed in this context
in "<unicode string>", line 2, column 31
Note that the secondtablename.yml mentions the firsttablename.yml.
I don't know dbt, but the likely explanation is that dbt parses all *.yml files in that target directory when you call it. Since the shell opens the output redirection to the *.yml file before starting dbt, the file already exists (but is initially empty) when dbt is called. Since dbt expects the file to contain a version, you get an error.
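You can observe this shell behavior without dbt at all (a quick demonstration; sleep merely stands in for a slow command):

rm -f models/l30_mart/test.yml
sleep 5 > models/l30_mart/test.yml &   # the shell creates the (empty) file immediately
ls -la models/l30_mart/test.yml        # the file already exists before 'sleep' finishes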
To check whether this assessment is correct, write into a temporary file:
for file in models/l30_mart/*.sql; do
    target_file=$(mktemp)
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > "$target_file"
    mv "$target_file" models/l30_mart/test.yml
done
(Be aware of mktemp shenanigans if you're using macOS)
Edit: Since dbt seems to be affected by the files existing, you can also try to generate all files and move them into the correct directory afterwards:
target_dir=$(mktemp -d)
for file in models/l30_mart/*.sql; do
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > "$target_dir/$table.yml"
done
mv "$target_dir"/*.yml models/l30_mart/
rmdir "$target_dir"
I'm looking to disable some SELinux modules (set them to off) and create others in modules.conf. I don't see an obvious way of updating modules.conf: I tried adding my changes as a modules.conf patch, but it failed because modules.conf gets built rather than just downloaded by Buildroot, so it is not available for patching like other things under the refpolicy directory:
Build window output:
>>> refpolicy 2.20190609 Patching
Applying 0001-refpolicy-update-modules-conf.patch using patch:
can't find file to patch at input line 3
I did see in the log that there is a support/sedoctool.py script that autogenerates policy/modules.conf, which is why that file is NOT patchable like most other things in the refpolicy.
The relevant section of the buildroot/output/build/refpolicy-2.20190609/Makefile:
# policy building support tools
support := support
genxml := $(PYTHON) $(support)/segenxml.py
gendoc := $(PYTHON) $(support)/sedoctool.py
<...snip...>
########################################
#
# Create config files
#
conf: $(mod_conf) $(booleans) generate

$(booleans) $(mod_conf): conf.intermediate

.INTERMEDIATE: conf.intermediate
conf.intermediate: $(polxml)
	@echo "Updating $(booleans) and $(mod_conf)"
	$(verbose) $(gendoc) -b $(booleans) -m $(mod_conf) -x $(polxml)
Part of the hsmlinux build.log showing the sedoctool.py (gendoc) being run:
Updating policy/booleans.conf and policy/modules.conf
.../build-buildroot-sawshark/buildroot/output/host/usr/bin/python3 support/sedoctool.py -b policy/booleans.conf -m policy/modules.conf -x doc/policy.xml
I'm sure there is a standard way of doing this; it just doesn't seem to be documented anywhere I can find.
Thanks.
Turns out that the sedoctool.py script is reading the doc/policy.xml. Looking at sedoctool.py:
#modules enabled and disabled values
MOD_BASE = "base"
MOD_ENABLED = "module"
MOD_DISABLED = "off"
<...snip...>
def gen_module_conf(doc, file_name, namevalue_list):
"""
Generates the module configuration file using the XML provided and the
previous module configuration.
"""
# If file exists, preserve settings and modify if needed.
# Otherwise, create it.
<...snip...>
mod_name = node.getAttribute("name")
mod_layer = node.parentNode.getAttribute("name")
<...snip...>
if mod_name and mod_layer:
file_name.write("# Layer: %s\n# Module: %s\n" % (mod_layer,mod_name))
if required:
file_name.write("# Required in base\n")
file_name.write("#\n")
if [mod_name, MOD_DISABLED] in namevalue_list:
file_name.write("%s = %s\n\n" % (mod_name, MOD_DISABLED))
# If the module is set as enabled.
elif [mod_name, MOD_ENABLED] in namevalue_list:
file_name.write("%s = %s\n\n" % (mod_name, MOD_ENABLED))
# If the module is set as base.
elif [mod_name, MOD_BASE] in namevalue_list:
file_name.write("%s = %s\n\n" % (mod_name, MOD_BASE))
So sedoctool.py has the nice feature described in its comment: "If file exists, preserve settings and modify if needed." A complete modules.conf can therefore be added as a whole-file patch (refpolicy-2.20190609/policy/modules.conf) with the unwanted modules set to "off", and the script will update it as needed based on the desired policy.
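A minimal sketch of such a patched-in modules.conf (the module names are only examples; the format mirrors what gen_module_conf writes):

# Layer: kernel
# Module: kernel
# Required in base
#
kernel = base

# Layer: services
# Module: ssh
#
ssh = module

# Layer: services
# Module: apache
#
apache = off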
One more detail: in the next stage of the refpolicy Makefile (Building), the updated modules.conf is deleted at the very beginning, which clashes with sedoctool's ability to preserve the patched version of modules.conf. So I patched out the removal in the Building stage of the Makefile.
>>> refpolicy 2.20190609 Building
<...snip...>
rm -f policy/modules.conf
The Makefile in refpolicy-2.20190609 has this line that I patched out because we are patching in our own modules.conf:
bare: clean
<...snip...>
$(verbose) rm -f $(mod_conf)
That patch looks like:
--- BUILDROOT/Makefile 2020-08-17 13:25:06.963804709 -0400
+++ FIX/Makefile 2020-08-17 19:25:29.540607763 -0400
@@ -636,7 +636,6 @@
$(verbose) rm -f $(modxml)
$(verbose) rm -f $(tunxml)
$(verbose) rm -f $(boolxml)
- $(verbose) rm -f $(mod_conf)
$(verbose) rm -f $(booleans)
$(verbose) rm -fR $(htmldir)
$(verbose) rm -f $(tags)
BTW, creating a patch with a complete new file in pp1:
diff -crB --new-file pp0 pp1 > pp0.patch
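Here pp0 is the pristine tree and pp1 is a copy with the new file added. The resulting patch then needs to land where Buildroot picks up refpolicy patches; one way is a global patch directory (a sketch, the board path is an example):

# with BR2_GLOBAL_PATCH_DIR="board/myboard/patches" set in the Buildroot config:
mkdir -p board/myboard/patches/refpolicy
cp pp0.patch board/myboard/patches/refpolicy/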
I'm learning OpenShift Origin, and in the master container I found a number of config files:
[root@openshift] cd /var/lib/origin
[root@openshift origin]# find . -name *kubeconfig
./openshift.local.config/node-localhost/node.kubeconfig
./openshift.local.config/master/admin.kubeconfig
./openshift.local.config/master/openshift-master.kubeconfig
[root@openshift origin]# find . -name *config.yaml
./openshift.local.config/node-localhost/node-config.yaml
./openshift.local.config/master/master-config.yaml
I also found these files when inspecting the origin container:
$ docker inspect 671fb8df3752 | grep config
"--master-config=/var/lib/origin/openshift.local.config/master/master-config.yaml",
"--node-config=/var/lib/origin/openshift.local.config/node-localhost/node-config.yaml"
"/var/lib/origin/openshift.local.config:/var/lib/origin/openshift.local.config:z",
"Source": "/var/lib/origin/openshift.local.config",
"Destination": "/var/lib/origin/openshift.local.config",
"KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig",
"--master-config=/var/lib/origin/openshift.local.config/master/master-config.yaml",
"--node-config=/var/lib/origin/openshift.local.config/node-localhost/node-config.yaml"
Could you help me schematize/summarize the role and use of each of these files?
Specifically when executing commands of this type:
oadm policy add-scc-to-group anyuid system:authenticated --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig
must they be directed at each of the configurations I have found, or only at a specific one?
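For reference, a client command reads a single kubeconfig, selected either per invocation with --config or globally via the KUBECONFIG environment variable; a sketch using the admin credentials found above:

# per command:
oadm policy add-scc-to-group anyuid system:authenticated --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig

# or for the whole shell session:
export KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig
oc whoami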
When I try to enable the PostGIS extension on my database, I receive the following:
postgis=# CREATE EXTENSION postgis;
ERROR: could not load library "/usr/pgsql-9.3/lib/rtpostgis-2.1.so": libhdf5.so.6: cannot open shared object file: No such file or directory
I used find -name to find the files:
[root@digihaul3-pc /]# find -name rtpostgis-2.1.so
./usr/pgsql-9.3/lib/rtpostgis-2.1.so
[root@digihaul3-pc /]# find -name libhdf5.so.6
./usr/lib64/mpich2/lib/libhdf5.so.6
./usr/pgsql-9.3/lib/libhdf5.so.6
./usr/lib/mpich2/lib/libhdf5.so.6
Credit to Thinking Monkey on this post.
It is for Fedora 15, but I tried everything else and this actually fixed my issue and allowed me to install the PostGIS extensions. It doesn't take long to do.
Thinking Monkey's post:
Checked whether /etc/ld.so.conf has a reference to the path /usr/lib64/mpich2/lib by running ldconfig -p | grep libhdf5, which did not output anything.
On checking, /etc/ld.so.conf contains the line include ld.so.conf.d/*.conf.
Checked the files in the directory ld.so.conf.d. One of the conf files there was /etc/ld.so.conf.d/atlas-x8664.conf, which contained /usr/lib64/atlas.
So I:
created a file called gdal.conf in the directory ld.so.conf.d.
Added the string /usr/lib64/mpich2/lib to the file.
Ran ldconfig.
Now, ldconfig -p | grep libhdf5 lists the paths to the libhdf5 files.
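Condensed into commands, the fix looks like this (gdal.conf is just the filename chosen above):

echo '/usr/lib64/mpich2/lib' > /etc/ld.so.conf.d/gdal.conf
ldconfig
ldconfig -p | grep libhdf5   # should now list the libhdf5 paths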
After doing the above, postgis raster support installation went smoothly.
I'm initializing spot instances running a derivative of the standard Ubuntu 13.04 AMI by pasting a shell script into the user-data field.
This works. The script runs. But it's difficult to debug because I can't figure out where the output of the script is being logged, if anywhere.
I've looked in /var/log/cloud-init.log, which seems to contain plenty of material relevant to debugging cloud-init itself, but nothing about my script. I grepped in /var/log and found nothing.
Is there something special I have to do to turn logging on?
The output of user-data scripts already goes to /var/log/cloud-init-output.log by default, on AWS, DigitalOcean, and most other cloud providers. You don't need to set up any additional logging to see it.
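So the first place to look on the instance is simply:

tail -n 100 /var/log/cloud-init-output.log   # stdout/stderr of your user-data script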
You could create a cloud-config file (with "#cloud-config" at the top) for your userdata, use runcmd to call the script, and then enable output logging like this:
output: {all: '| tee -a /var/log/cloud-init-output.log'}
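Put together, such a user-data file might look like this (a sketch; the script path is a hypothetical placeholder):

#cloud-config
output: {all: '| tee -a /var/log/cloud-init-output.log'}
runcmd:
  - /usr/local/bin/my-bootstrap.sh   # hypothetical script shipped in the image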
So, I tried to replicate your problem. Usually I work with Cloud Config, so I just created a simple test user-data script like this:
#!/bin/sh
echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
echo "I am out of the output file...somewhere?"
yum search git # just for fun
ls
exit 0
Notice that, with CloudInit shell scripts, the user-data "will be executed at rc.local-like level during first boot. rc.local-like means 'very late in the boot sequence'"
After logging in to my instance (a Scientific Linux machine), I first looked at /var/log/boot.log and there I found:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
I am out of the file. Log file somewhere?
Loaded plugins: changelog, kernel-module, priorities, protectbase, security, tsflags, versionlock
126 packages excluded due to repository priority protections
9 packages excluded due to repository protections
epel/pkgtags                                    | 581 kB 00:00
=============================== N/S Matched: git ===============================
GitPython.noarch : Python Git Library
cgit.x86_64 : A fast web interface for git
...
... (more yum search output)
...
bin etc lib lost+found mnt proc sbin srv tmp var
boot dev home lib64 media opt root selinux sys usr
(other unrelated stuff)
So, as you can see, my script ran and was correctly logged.
Also, as expected, I had my forced log 'output.txt' in /root/output.txt with the content:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
So... I am not really sure what is happening in your script.
Make sure you're exiting the script with
exit 0 #or some other code
If it still doesn't work, you should provide more info, like your script, your boot.log, your /etc/rc.local, and your cloudinit.log.
BTW: what is your cloud-init version?