How to change the order of rsync in symfony deployment task

I want to deploy part of my symfony application: essentially a single module.
My plan is to exclude all files first, and then include only the files of
my new module.
For deployment I use the following symfony task
php symfony project:deploy production -t
The -t option performs a dry run and prints all files that this rsync run would transfer.
The content of config/rsync_exclude.txt is just *, since I want to exclude everything:
*
In config/rsync_include.txt I list all the files and folders for inclusion:
config/
config/mysupermodule.yml
lib/model/doctrine/
lib/model/doctrine/MySuperclass.php
lib/model/doctrine/MySuperclassTable.php
lib/
lib/MySuperLibrary/
lib/MySuperLibrary/*
The symfony task builds the following rsync command:
rsync --dry-run -azC --force --delete --progress --exclude-from=config/rsync_exclude.txt --include-from=config/rsync_include.txt -e "ssh -p22" ./ user@www.server.com:/test_deployment/
Problem 1: The task doesn't sync any files.
Solution to 1: Change the order: include first, then exclude.
I figured out that it works if I restate my requirement like this:
I want to include all files of my new module first, and then exclude all
the others.
This means using the following command:
rsync --dry-run -azC --force --delete --progress --include-from=config/rsync_include.txt --exclude-from=config/rsync_exclude.txt -e "ssh -p22" ./ user@www.server.com:/test_deployment/
The rsync works.
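This behaviour follows from rsync's filter semantics: the first matching rule wins, so a catch-all * exclude placed before the include rules shadows all of them. A minimal sketch to confirm the ordering effect on the command line (the destination path is just a placeholder, and -n keeps it a dry run):
rsync -n -av --include='lib/' --include='lib/MySuperLibrary/***' --exclude='*' ./ /tmp/deploy-test/
rsync -n -av --exclude='*' --include='lib/' --include='lib/MySuperLibrary/***' ./ /tmp/deploy-test/
The first command lists the module files; the second transfers nothing.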
Problem 2: How can I change this order of rsync options when using the symfony task?
The symfony task always excludes first, then includes.
Solution to 2: ?

It is NOT possible out of the box.
But you can edit the deployment task in lib/task/project/sfProjectDeployTask.class.php.
Replace this (lines 145 to 154 in symfony 1.4):
if (file_exists($options['rsync-dir'].'/rsync_exclude.txt'))
{
  $parameters .= sprintf(' --exclude-from=%s/rsync_exclude.txt', $options['rsync-dir']);
}

if (file_exists($options['rsync-dir'].'/rsync_include.txt'))
{
  $parameters .= sprintf(' --include-from=%s/rsync_include.txt', $options['rsync-dir']);
}
with this:
if (file_exists($options['rsync-dir'].'/rsync_include.txt'))
{
  $parameters .= sprintf(' --include-from=%s/rsync_include.txt', $options['rsync-dir']);
}

if (file_exists($options['rsync-dir'].'/rsync_exclude.txt'))
{
  $parameters .= sprintf(' --exclude-from=%s/rsync_exclude.txt', $options['rsync-dir']);
}
In short: swap these two if statements.
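After the swap, the same dry run from the question can be used to confirm that the includes are now passed before the excludes:
php symfony project:deploy production -t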

Let me suggest changing your approach.
Use only the exclude file, and exclude just the directories that changed but that you don't want to sync.
If your modules/, app/, ... directories haven't changed, there is no need to list them in the exclude file anyway: they will remain the same on both servers.
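For instance, config/rsync_exclude.txt could then be as small as the sketch below (the entries are illustrative; symfony's stock exclude file already ships with similar ones for VCS metadata, cache, log and uploads):
.svn
/cache/*
/log/*
/web/uploads/*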


How to update modules.conf for SELINUX in BUILDROOT?

I'm looking to disable some SELinux modules (set them to off) and create others in modules.conf. I don't see an obvious way of updating modules.conf: I tried adding my changes as a modules.conf patch, but that failed because modules.conf gets built and is not just downloaded by Buildroot, so it is not available for patching like other things under the refpolicy directory:
Build window output:
>>> refpolicy 2.20190609 Patching
Applying 0001-refpolicy-update-modules-conf.patch using patch:
can't find file to patch at input line 3
I did see in the log that there is a support/sedoctool.py that autogenerates the policy/modules.conf file, so the file is NOT patchable like most other things in the refpolicy.
The relevant section of the buildroot/output/build/refpolicy-2.20190609/Makefile:
# policy building support tools
support := support
genxml := $(PYTHON) $(support)/segenxml.py
gendoc := $(PYTHON) $(support)/sedoctool.py
<...snip...>
########################################
#
# Create config files
#
conf: $(mod_conf) $(booleans) generate

$(booleans) $(mod_conf): conf.intermediate
.INTERMEDIATE: conf.intermediate
conf.intermediate: $(polxml)
	@echo "Updating $(booleans) and $(mod_conf)"
	$(verbose) $(gendoc) -b $(booleans) -m $(mod_conf) -x $(polxml)
Part of the hsmlinux build.log showing the sedoctool.py (gendoc) being run:
Updating policy/booleans.conf and policy/modules.conf
.../build-buildroot-sawshark/buildroot/output/host/usr/bin/python3 support/sedoctool.py -b policy/booleans.conf -m policy/modules.conf -x doc/policy.xml
I'm sure there is a standard way of doing this; it just doesn't seem to be documented anywhere I can find.
Thanks.
Turns out that the sedoctool.py script is reading the doc/policy.xml. Looking at sedoctool.py:
#modules enabled and disabled values
MOD_BASE = "base"
MOD_ENABLED = "module"
MOD_DISABLED = "off"
<...snip...>
def gen_module_conf(doc, file_name, namevalue_list):
    """
    Generates the module configuration file using the XML provided and the
    previous module configuration.
    """
    # If file exists, preserve settings and modify if needed.
    # Otherwise, create it.
    <...snip...>
        mod_name = node.getAttribute("name")
        mod_layer = node.parentNode.getAttribute("name")
        <...snip...>
        if mod_name and mod_layer:
            file_name.write("# Layer: %s\n# Module: %s\n" % (mod_layer, mod_name))
            if required:
                file_name.write("# Required in base\n")
            file_name.write("#\n")
            if [mod_name, MOD_DISABLED] in namevalue_list:
                file_name.write("%s = %s\n\n" % (mod_name, MOD_DISABLED))
            # If the module is set as enabled.
            elif [mod_name, MOD_ENABLED] in namevalue_list:
                file_name.write("%s = %s\n\n" % (mod_name, MOD_ENABLED))
            # If the module is set as base.
            elif [mod_name, MOD_BASE] in namevalue_list:
                file_name.write("%s = %s\n\n" % (mod_name, MOD_BASE))
So sedoctool.py has the nice feature described in its comment ("If file exists, preserve settings and modify if needed."). That means a complete modules.conf can simply be added as a whole-file patch at refpolicy-2.20190609/policy/modules.conf, with the unwanted modules set to "off", and the script will update it as needed based on the desired policy.
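Given the writer code above, the entries in such a patched policy/modules.conf would look like this (the module names here are purely illustrative):
# Layer: services
# Module: ntp
#
ntp = off

# Layer: system
# Module: logging
#
logging = module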
One more detail: in the next stage of the refpolicy Makefile (Building), the updated modules.conf is deleted at the start, which clashes with sedoctool's ability to preserve the patched version of modules.conf. So I also patched out the removal in the Building stage of the Makefile.
>>> refpolicy 2.20190609 Building
<...snip...>
rm -f policy/modules.conf
The Makefile in refpolicy-2.20190609 has this line that I patched out because we are patching in our own modules.conf:
bare: clean
<...snip...>
$(verbose) rm -f $(mod_conf)
That patch looks like:
--- BUILDROOT/Makefile	2020-08-17 13:25:06.963804709 -0400
+++ FIX/Makefile	2020-08-17 19:25:29.540607763 -0400
@@ -636,7 +636,6 @@
 	$(verbose) rm -f $(modxml)
 	$(verbose) rm -f $(tunxml)
 	$(verbose) rm -f $(boolxml)
-	$(verbose) rm -f $(mod_conf)
 	$(verbose) rm -f $(booleans)
 	$(verbose) rm -fR $(htmldir)
 	$(verbose) rm -f $(tags)
BTW, creating a patch where pp1 contains a complete new file:
diff -crB --new-file pp0 pp1 > pp0.patch
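As an aside, Buildroot can also apply such patches from a global patch directory, which avoids touching the package's own files; the directory name below is hypothetical, but BR2_GLOBAL_PATCH_DIR is the real option:
BR2_GLOBAL_PATCH_DIR="board/mycompany/patches"
# patches are then picked up from board/mycompany/patches/refpolicy/*.patch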

How to dynamically create a file with git info, include it in the image and save it on the build system

We have several developers working on a project. The areas we are concerned about (and we regularly modify) are kernel, our custom code, and the yocto space itself.
We'd like to create a file at some point in the process (do_fetch, or do_install?) that contains info about what's being built. Such as the git branch name and hash for each of the repos above. We would then install that file (or files if need be) onto the image as well as archive it away on a centralized server.
I know that some of this info is available in the buildhistory, but I'm not sure if it is there when we'd like to install and package.
Getting the branch and hash should be easy via shell commands in the recipe functions.
Before I go off and hack something out, I thought I'd ask if there is a standard way to do something similar to this.
Thanks!
In case you need to include custom information, a nice way is to create a custom bbclass in your layer, defined as follows:
DEPENDS += "git-native"

do_rootfs_save_versions() {
    # Do custom tasks here, like getting layer names and linked SHA numbers
    # Store this information in a file and deploy it to ${DEPLOY_DIR_IMAGE}
}
ROOTFS_POSTPROCESS_COMMAND += "do_rootfs_save_versions;"
Then, include the bbclass in your image file
IMAGE_CLASSES += "<bbclass_name>"
It is very useful when you want to determine the layer versions, image name, etc. running on the target.
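As a concrete illustration, here is a minimal sketch of what the class body could do, assuming every layer in BBLAYERS is a git checkout (the output file name is my own choice, not part of the original answer):
DEPENDS += "git-native"

do_rootfs_save_versions() {
    # truncate first so repeated builds don't keep appending
    : > ${DEPLOY_DIR_IMAGE}/layer-versions.txt
    # record each layer's branch name and commit hash
    for layer in ${BBLAYERS}; do
        echo "$(basename $layer): $(git -C $layer rev-parse --abbrev-ref HEAD) $(git -C $layer rev-parse HEAD)" >> ${DEPLOY_DIR_IMAGE}/layer-versions.txt
    done
}
ROOTFS_POSTPROCESS_COMMAND += "do_rootfs_save_versions;"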
OK, here is what I did.
I added appends to the do_install functions of the recipes I wanted to track, writing their manifests to the top of the build dir:
do_install_append () {
    # record the kernel source revision and current branch in the build dir
    echo ${SRCPV} > ${TOPDIR}/kernel_manifest.txt
    git rev-parse --abbrev-ref HEAD >> ${TOPDIR}/kernel_manifest.txt
}
Added a new bbclass in our meta- dir:
DEPENDS += "git-native"
do_rootfs_save_manifests[nostamp] = "1"
do_rootfs_save_manifests() {
date > ${TOPDIR}/buildinfo.txt
hostname >> ${TOPDIR}/buildinfo.txt
git config user.name >> ${TOPDIR}/buildinfo.txt
cp ${TOPDIR}/buildinfo.txt ${IMAGE_ROOTFS}/usr/custom_space/
if [ ! -f ${TOPDIR}/kernel_manifest.txt ]; then
echo "kernel_manifest empty: Rebuild or run cleanall on it's recipe" > ${TOPDIR}/error_kernel_manifest.txt
cp ${TOPDIR}/error_kernel_manifest.txt ${IMAGE_ROOTFS}/usr/custom_space/
else
cp ${TOPDIR}/kernel_manifest.txt ${IMAGE_ROOTFS}/usr/custom_space/
if [ -f ${TOPDIR}/error_kernel_manifest.txt ]; then
rm ${TOPDIR}/error_kernel_manifest.txt
fi
fi
if [ ! -f ${TOPDIR}/buildhistory/metadata-revs ]; then
echo " metadata_revs empty: Make sure INHERIT += \"buildhistory\" and" > ${TOPDIR}/error_yocto_manifest.txt
echo " BUILDHISTORY_COMMIT = "1" are in your local.conf " >> ${TOPDIR}/error_yocto_manifest.txt
cp ${TOPDIR}/error_yocto_manifest.txt ${IMAGE_ROOTFS}/usr/custom_space/
else
if [ -f ${TOPDIR}/error_yocto_manifest.txt ]; then
rm ${TOPDIR}/error_yocto_manifest.txt
fi
cp ${TOPDIR}/buildhistory/metadata-revs ${TOPDIR}/yocto_manifest.txt
cp ${TOPDIR}/buildhistory/metadata-revs ${IMAGE_ROOTFS}/usr/custom_space/yocto_manifest.txt
fi
}
ROOTFS_POSTPROCESS_COMMAND += "do_rootfs_save_manifests;"
Added the following lines to the image recipes that should use this process:
IMAGE_CLASSES += "manifest"
inherit ${IMAGE_CLASSES}
Thanks for the help!

Yocto: Install different config files based on MACHINE type or target image

I've got a couple of HW platforms (same cpu, etc.) that require different asound.conf files.
The way I'm controlling the target platform is via the MACHINE variable and the target image (e.g., MACHINE=machine_1 nice bitbake machine-1-bringup-image).
Normally, if simply replacing the conf file, I'd create an alsa-state.bbappend with a do_install_append function that replaces it.
However, since the different HW platforms require different conf files, I'm unsure how to handle it.
I've tried putting some logic into the bbappend's do_install_append function, but it's not working out. It doesn't always pick up the correct file (as if it thinks nothing has changed and reuses the previously cached conf?).
Here's an example of one of the append files that I've tried:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += " \ file://asound_MACHINE1.conf \
file://asound_MACHINE2.conf \ "
do_install_append() {
echo " alsa-state.bbappend MACHINE: ${MACHINE}"
if [ "${MACHINE}" = "machine_1" ]; then
echo " machine_1"
echo " installing ${WORKDIR}/asound_MACHINE1.conf to ${D}${sysconfdir}/asound.conf"
install -m 644 ${WORKDIR}/asound_MACHINE1.conf {D}${sysconfdir}/asound.conf
else
echo " installing ${WORKDIR}/asound_MACHINE2.conf to ${D}${sysconfdir}/asound.conf"
install -m 644 ${WORKDIR}/asound_MACHINE2.conf ${D}${sysconfdir}/asound.conf
fi
}
I can see the correct echoes in the logs per the logic.
At any rate I don't think that the path I'm going down is the best way to deal with this.
Is there a 'standard' way to have different files installed based on either the target image or MACHINE variable?
do_install_append () {
    # install common things here
}

do_install_append_machine-1 () {
    # install machine-1 specific things here
}

do_install_append_machine-2 () {
    # install machine-2 specific things here
}
The value of MACHINE is automatically added to OVERRIDES, which can be used at the end of a function append to have a MACHINE-specific addition to a function.
Maybe useful: https://www.yoctoproject.org/docs/2.4/mega-manual/mega-manual.html#var-OVERRIDES
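The same OVERRIDES mechanism also works for variables, not just function appends. A small sketch (the file name is invented for illustration; note that Yocto 3.4 and newer spell the separator : instead of _):
SRC_URI_append_machine-1 = " file://machine-1-extra.cfg"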
In your particular case you can have configuration files in machine-specific directories (just a specific configuration file for each machine). OpenEmbedded will fetch the most specific one. The directory structure in your recipe directory will look like:
files/<machine1>/asound.conf
files/<machine2>/asound.conf
And your alsa-state.bbappend will contain just one line (you don't need to change do_install because alsa-state.bb already installs asound.conf):
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
BTW: We are using that setup to have a specific asound.state file per machine in our project.
Moreover, OpenEmbedded will detect that SRC_URI contains a machine-specific file and change the PACKAGE_ARCH accordingly, see: https://www.yoctoproject.org/docs/2.5/mega-manual/mega-manual.html#var-SRC_URI_OVERRIDES_PACKAGE_ARCH
A few more words on machine-, distro- or arch-specific files: OE tries to fetch the most specific file in the file:// fetcher. It also searches directories named after the distro (e.g. files/<distro>/asound.conf) and the architecture (e.g. armv7a, arm). This can be useful if you want a file specific to some set of devices. More information: https://www.yoctoproject.org/docs/2.5/mega-manual/mega-manual.html#var-FILESOVERRIDES and also https://www.yoctoproject.org/docs/2.5/mega-manual/mega-manual.html#best-practices-to-follow-when-creating-layers (section "Place Machine-Specific Files in Machine-Specific Locations")
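Putting this answer together, the bbappend layout would look roughly like the sketch below; the recipes-bsp path mirrors where oe-core keeps alsa-state, but adjust it to your own layer:
meta-yourlayer/recipes-bsp/alsa-state/alsa-state.bbappend
meta-yourlayer/recipes-bsp/alsa-state/files/machine_1/asound.conf
meta-yourlayer/recipes-bsp/alsa-state/files/machine_2/asound.conf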
The above answer by clsulliv worked better than advertised. For future reference, below is the append file I used:
FILESEXTRAPATHS_prepend:= "${THISDIR}/${PN}:"
SRC_URI += " \
file://machine1_asound.conf \
file://machine2_asound.conf \
"
do_install_append_machine1() {
echo " machine1"
echo " installing ${WORKDIR}/machine1_asound.conf to ${D}${sysconfdir}/asound.conf"
install -m 644 ${WORKDIR}/machine1_asound.conf ${D}${sysconfdir}/asound.conf
}
do_install_append_machine2() {
echo " machine2"
echo " installing ${WORKDIR}/machine2_asound.conf to ${D}${sysconfdir}/asound.conf"
install -m 644 ${WORKDIR}/machine2_asound.conf ${D}${sysconfdir}/asound.conf
}
Thanks for the help!
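A caveat for readers on newer releases: since Yocto 3.4 (honister) the override separator is : rather than _, so the appends above would be written like this:
do_install:append:machine1() {
    install -m 644 ${WORKDIR}/machine1_asound.conf ${D}${sysconfdir}/asound.conf
}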

How to fix "Test reports were found but none of them are new. Did tests run?" in Jenkins

I am getting the error "Test reports were found but none of them are new. Did tests run?" when trying to send unit test results by email. The reason is that I have a dedicated Jenkins job that imports the artifacts from a test job to itself and sends the test results by email. I'm doing this because I don't want Jenkins to email all the developers during the night :) so I am postponing the email sending, since Jenkins itself sadly does not support delayed email notifications.
However, by the time the "send test results by email" job executes, the tests are hours old and I get the error as specified in the question title. Any ideas on how to get around this problem?
You could try updating the timestamps of the test reports as a build step ("Execute shell script"). E.g.
cd path/to/test/reports
touch *.xml
mvn clean test
via the terminal or Jenkins. This generates new test reports.
The other answer that says to cd path/to/test/reports and touch *.xml didn't work for me, but mvn clean test did.
Updating the last-modified date can also be achieved in Gradle itself if desired:
task jenkinsTest {
    inputs.files test.outputs.files
    doLast {
        def timestamp = System.currentTimeMillis()
        test.testResultsDir.eachFile { it.lastModified = timestamp }
    }
}
build.dependsOn(jenkinsTest)
As mentioned here: http://www.practicalgradle.org/blog/2011/06/incremental-tests-with-jenkins/
Here's an updated version for Jenkinsfile (Declarative Pipeline):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
                script {
                    def testResults = findFiles(glob: 'build/reports/**/*.xml')
                    for (xml in testResults) {
                        touch xml.getPath()
                    }
                }
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'build/libs/**/*.jar', fingerprint: true
            junit 'build/reports/**/*.xml'
        }
    }
}
Because Gradle caches results from previous builds, I ran into the same problem.
I fixed it by adding this line to my publish stage:
sh 'find . -name "TEST-*.xml" -exec touch {} \\;'
So my file is like this:
....
stage('Unit Tests') {
    sh './gradlew test'
}
stage('Publish Results') {
    // Fool Jenkins into thinking the test results are new
    sh 'find . -name "TEST-*.xml" -exec touch {} \\;'
    junit '**/build/test-results/test/TEST-*.xml'
}
Had the same issue for jobs running repeatedly (every 30 minutes).
For the job, go to Configure → Build → Advanced and, within the Switches section, add:
--stacktrace
--continue
--rerun-tasks
This worked for me:
Navigate to the report directory: cd /report_directory
Delete all older reports: rm *.xml
Add junit report_directory/*.xml in the pipeline
Rerun the test script, then navigate to Build Number → Test Result
Make sure you have one successful build without any failure; only after that will you be able to see the reports.
Make sure that you have specified the correct path under "Test report XMLs" in the Jenkins configuration, such as target/surefire-reports/*.xml.
There is no need to touch *.xml, as Jenkins won't complain even if the test results XML files have not changed.
If you use a Windows slave, you can 'touch' the results in a Groovy pipeline stage with PowerShell:
powershell 'ls "junitreports\\*.*" | foreach-object { $_.LastWriteTime = Get-Date }'
It happens if you are using a test report that was not modified by that job in that run.
For test purposes, if you are working with an already-created file, add the commands below to the Jenkins job under Build > Execute Shell:
chmod -R 775 /root/.jenkins/workspace/JmeterTest/output.xml
echo " " >> /root/.jenkins/workspace/JmeterTest/output.xml
The commands above change the file's timestamp, so the error won't be displayed.
Note: to achieve the same in Execute Shell, do not try renaming the file with mv etc.; it won't work. Only appending to the file (and deleting the appended content again) changes the file timestamp in a way that works.
For me, commands like chmod -R 775 test-results.xml or touch test-results.xml did not work due to a permission error. As a workaround, set a new file in the test report settings and add a command that copies the old XML report file to that new file.
You can add the following shell command to your "Pre Steps" section when configuring your job on Jenkins:
mvn clean test
This cleans and reruns the tests.
Here's an updated version of the Gradle task that touches each test result file.
From the Jenkins pipeline script, just call the "testAndTouchTestResult" task instead of the "test" task.
The code below uses Kotlin syntax:
tasks {
    register("testAndTouchTestResult") {
        setGroup("verification")
        setDescription("touch Test Results for Jenkins")
        inputs.files(test.get().outputs)
        doLast {
            val timestamp = System.currentTimeMillis()
            fileTree(test.get().reports.junitXml.destination).forEach { f ->
                f.setLastModified(timestamp)
            }
        }
    }
}
The solution for me was deleting node_modules and changing the Node version (from 7.1 to 8.4) on Jenkins. That's it.

How do I loop over several files, keeping the base name for further processing?

I have multiple text files that need to be tokenised, POS-tagged and NER-tagged. I am using the C&C taggers and have run their tutorial, but I am wondering if there is a way to tag multiple files rather than one by one.
At the moment I am tokenising the files as follows:
bin/tokkie --input working/tutorial/example.txt --quotes delete --output working/tutorial/example.tok
then Part-of-Speech tagging:
bin/pos --input working/tutorial/example.tok --model models/pos --output working/tutorial/example.pos
and lastly Named Entity Recognition:
bin/ner --input working/tutorial/example.pos --model models/ner --output working/tutorial/example.ner
I am not sure how I would go about creating a loop to do this while keeping each output file name the same as the input, with the extension reflecting the tagging that has been applied. I was thinking of a Bash script, or perhaps Perl, to read the directory, but I am not sure how to invoke the C&C commands so the script does the right thing.
At the moment I am doing it manually, and it's pretty time-consuming to say the least!
Untested, likely needs some directory mangling.
use autodie qw(:all);
use File::Basename qw(basename);

for my $text_file (glob 'working/tutorial/*.txt') {
    my $base_name = basename($text_file, '.txt');

    system 'bin/tokkie',
        '--input'  => "working/tutorial/$base_name.txt",
        '--quotes' => 'delete',
        '--output' => "working/tutorial/$base_name.tok";

    system 'bin/pos',
        '--input'  => "working/tutorial/$base_name.tok",
        '--model'  => 'models/pos',
        '--output' => "working/tutorial/$base_name.pos";

    system 'bin/ner',
        '--input'  => "working/tutorial/$base_name.pos",
        '--model'  => 'models/ner',
        '--output' => "working/tutorial/$base_name.ner";
}
In Bash:
#!/bin/bash
dir='working/tutorial'
for file in "$dir"/*.txt
do
    noext=${file%.txt}
    bin/tokkie --input "$file" --quotes delete --output "$noext.tok"
    bin/pos --input "$noext.tok" --model models/pos --output "$noext.pos"
    bin/ner --input "$noext.pos" --model models/ner --output "$noext.ner"
done
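One possible refinement, not part of the original answer: if any tagging stage can fail, stopping at the first error keeps later stages from reading stale or missing intermediate files. Only the shell options change:
#!/bin/bash
set -euo pipefail   # abort the whole run on the first failed tagger
# ... same loop as above ...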