When I ran SITL simulation in WSL2, I got the following message:
john@DESKTOP-0P475SS:~/ardupilot/ArduCopter$ sim_vehicle.py
SIM_VEHICLE: Start
SIM_VEHICLE: Killing tasks
...
Checking for program 'rsync' : /usr/bin/rsync
'configure' finished successfully (9.977s)
SIM_VEHICLE: Building
SIM_VEHICLE: "/home/john/ardupilot/modules/waf/waf-light" "build" "--target" "bin/arducopter"
Waf: Entering directory `/home/john/ardupilot/build/sitl'
Command ['/usr/bin/git', 'rev-parse', '--short=8', 'HEAD'] returned 128
SIM_VEHICLE: Build failed
SIM_VEHICLE: Killing tasks
john@DESKTOP-0P475SS:~/ardupilot/ArduCopter$
What is going wrong, and what does the following mean?
Waf: Entering directory `/home/john/ardupilot/build/sitl'
Command ['/usr/bin/git', 'rev-parse', '--short=8', 'HEAD'] returned 128
I am trying to use the Snyk Security Scan task in an Azure Pipeline (Classic). My application runtime is .NET and the framework is ASP.NET 4.4.1. There is no issue with authentication, as I had created a valid Snyk service connection.
When I run my pipeline, it gives the error "Could not detect supported target files in D:\a\1\s".
Log of the failed Snyk Security Scan task:
##[debug]debug=undefined
##[debug]task result: Failed
** We have a problem! :( **
##[error]There was an error when attempting to execute the process 'C:\Program Files\nodejs\npm.cmd'. This may indicate the process failed to start. Error: spawn C:\windows\system32\cmd.exe ENOENT
##[debug]Processed: ##vso[task.issue type=error;]There was an error when attempting to execute the process 'C:\Program Files\nodejs\npm.cmd'. This may indicate the process failed to start. Error: spawn C:\windows\system32\cmd.exe ENOENT
There was an error when attempting to execute the process 'C:\Program Files\nodejs\npm.cmd'. This may indicate the process failed to start. Error: spawn C:\windows\system32\cmd.exe ENOENT
##[debug]Processed: ##vso[task.complete result=Failed;]There was an error when attempting to execute the process 'C:\Program Files\nodejs\npm.cmd'. This may indicate the process failed to start. Error: spawn C:\windows\system32\cmd.exe ENOENT
I had this same problem and needed to take advantage of the "Custom path to manifest file to test" field with an absolute path to the packages file: C:\agent\_work\[buildId]\s\[solution folder]\[project folder]\packages.config
This was running on a private build server.
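For reference, the equivalent in a YAML pipeline is, as far as I know, the targetFile input of the Snyk task. A rough sketch; the connection name and the manifest path below are placeholders, not your actual layout:
- task: SnykSecurityScan@1
  inputs:
    serviceConnectionEndpoint: 'my-snyk-connection'   # placeholder service connection name
    testType: 'app'
    targetFile: 'MySolution/MyProject/packages.config' # path to the manifest the scanner should test
    failOnIssues: true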
In AppVeyor I use the statement:
- initexmf --admin --force --mklinks
but due to a problem it gives this message:
initexmf --admin --force --mklinks
Sorry, but "MiKTeX Configuration Utility" did not succeed for the following reason:
Script configuration file not found.
The log file hopefully contains the information to get MiKTeX going again:
C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
The system cannot find the path specified.
Command exited with code 1
Because of the error code the process terminates, and I can no longer type the C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log file, so it is a bit hard to debug...
Questions:
How do I continue after an error?
How do I stop after outputting the log file (exit 1?)?
To run a script on failure, use the on_failure section; for example, to push initexmf_admin.log to the build artifacts:
on_failure:
- appveyor PushArtifact C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
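To also see the log in the build output before the worker is torn down, the same section can dump the file first with the cmd type builtin (the type line is my addition; the path is copied from the error message above):
on_failure:
  - type C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
  - appveyor PushArtifact C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log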
I am trying to build a swupdate image by running bitbake swupdate-image, but I am getting the following errors:
ERROR: swupdate-2019.04-r0 do_package: SYSTEMD_SERVICE_swupdate value swupdate.service does not exist
ERROR: swupdate-2019.04-r0 do_package:
ERROR: swupdate-2019.04-r0 do_package: Function failed: systemd_populate_packages
ERROR: Logfile of failure stored in: /home/panther2/warrior/build/tmp/work/corei7-64-poky-linux/swupdate/2019.04-r0/temp/log.do_package.22017
ERROR: Task (/home/panther2/warrior/sources/meta-swupdate/recipes-support/swupdate/swupdate_2019.04.bb:do_package) failed with exit code '1'
I am building the warrior Yocto branch. The error says that swupdate.service does not exist, but swupdate.service does exist under meta-swupdate/recipes-support/swupdate/swupdate. Any help is really appreciated. Thanks for your time.
swupdate-image is a rescue system - it generates a ramdisk. Care was taken about footprint - it runs with SysV init and not with systemd. If you want to build swupdate-image with systemd as init, add your own swupdate-image.bbappend and rearrange the list of packages.
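A minimal sketch of the systemd side of that switch, using the standard Yocto variables in your configuration (conf/local.conf or your distro .conf); this covers only the init-manager change, not the package rearrangement in the bbappend:
# enable systemd as the init manager and drop sysvinit backfill
DISTRO_FEATURES_append = " systemd"
VIRTUAL-RUNTIME_init_manager = "systemd"
DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"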
I have a Python script on server_A that connects to server_B via SSH and calls a local rsync command to reset directory B with a fresh set of files. The script on A then rsyncs an additional set of files over to B. My hope was to run this on a schedule in Rundeck. However, it errors every time it runs, with the output below. What am I doing wrong?
Remote command failed with exit status 1
Failed: NonZeroResultCode: Remote command failed with exit status 1
Execution failed: 9 in project Test: [Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [server_A: NonZeroResultCode: Remote command failed with exit status 1]}, Node failures: {server_A=[NonZeroResultCode: Remote command failed with exit status 1]}, flow control: Continue, status: failed]
Exit status 1 was returned by the command you called. What are you running?
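A quick way to see what actually returns 1 is to run the same step by hand on server_A, as the same user the Rundeck node executor uses (the user name and paths below are placeholders, not taken from the question):
ssh rundeck@server_B 'rsync -a --delete /srv/fresh/ /srv/dir_B/'
echo "remote step exit status: $?"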
Every build has failed since Tuesday. I'm not exactly sure what happened. The Phing targets (clean/prepare) are being executed properly, and the unit tests pass with flying colors, with only a warning for duplicate code (not a reason for failure). I tried removing the phpDoc target to see if that was causing the error, but the build still failed.
Started by user chris
Updating file://localhost/projects/svn/ips-com/trunk
At revision 234
no change for file://localhost/projects/svn/ips-com/trunk since the previous build
[trunk] $ /opt/phing/bin/phing clean prepare -logger phing.listener.NoBannerLogger
Buildfile: /var/lib/hudson/.hudson/jobs/IPS/workspace/trunk/build.xml

IPS > clean:
[echo] Clean...
[delete] Deleting directory /var/lib/hudson/.hudson/jobs/IPS/workspace/build

IPS > prepare:
[echo] Prepare...
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage-html
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/docs
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/app

BUILD FINISHED

Total time: 1.0244 second

[workspace] $ /bin/bash -xe /tmp/hudson3259012225710915845.sh
+ cd trunk/tests
+ /usr/local/bin/phpunit --verbose -d memory_limit=512M --log-junit ../../build/logs/phpunit.xml --coverage-clover ../../build/logs/coverage/clover.xml --coverage-html ../../build/logs/coverage-html/
PHPUnit 3.5.0 by Sebastian Bergmann.

IPS
  Default_IndexControllerTest .
  Default_AuthControllerTest ......
  Manage_UsersControllerTest .....
    testDeleteInvalidUserId ..
    testGetPermissionsForInvalidUserId ..
  Audit_OverviewControllerTest ............

Time: 14 seconds, Memory: 61.00Mb

OK (28 tests, 198 assertions)

Writing code coverage data to XML file, this may take a moment.
Generating code coverage report, this may take a moment.

Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0

[workspace] $ /bin/bash -xe /tmp/hudson1439023061736436000.sh
+ /usr/local/bin/phpcpd --log-pmd ./build/logs/cpd.xml ./trunk
phpcpd 1.3.2 by Sebastian Bergmann.

Found 1 exact clones with 6 duplicated lines in 2 files:
  - library/Ips/Form/Decorator/SplitInput.php:8-14
  - library/Ips/Form/Decorator/FeetInches.php:10-16

0.04% duplicated lines out of 16585 total lines of code.

Time: 4 seconds, Memory: 19.50Mb

[DRY] Skipping publisher since build result is FAILURE
Publishing Javadoc
[xUnit] [INFO] - Starting to record.
[xUnit] [WARNING] - Can't create the path /var/lib/hudson/.hudson/jobs/IPS/workspace/generatedJUnitFiles. Maybe the directory already exists.
[xUnit] [INFO] - Processing PHPUnit-3.4 (default)
[xUnit] [INFO] - [PHPUnit-3.4 (default)] - 1 test report file(s) were found with the pattern 'build/logs/phpunit.xml' relative to '/var/lib/hudson/.hudson/jobs/IPS/workspace' for the testing framework 'PHPUnit-3.4 (default)'.
[xUnit] [INFO] - Converting '/var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/phpunit.xml' .
[xUnit] [INFO] - Stopping recording.
Publishing Clover coverage report...
Publishing Clover XML report...
Publishing Clover coverage results...
Finished: FAILURE
What changed since Tuesday? Try to manually run exactly the same commands that Hudson runs, from the same directory Hudson starts them in (usually the job's workspace directory), and of course under the user account that Hudson runs as.
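For this job that would look roughly like the following, with the commands and paths copied from the console log above (the hudson account name is an assumption; use whatever user the daemon actually runs as):
sudo -u hudson /bin/bash -xe -c '
  # same working directory Hudson uses for the Phing step
  cd /var/lib/hudson/.hudson/jobs/IPS/workspace/trunk
  /opt/phing/bin/phing clean prepare -logger phing.listener.NoBannerLogger
  # same working directory and flags as the failing shell step
  cd tests
  /usr/local/bin/phpunit --verbose -d memory_limit=512M --log-junit ../../build/logs/phpunit.xml --coverage-clover ../../build/logs/coverage/clover.xml --coverage-html ../../build/logs/coverage-html/
'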
There are several possibilities, ranging from standard group settings on a directory, to permissions, to other things outside of Hudson. Was Hudson upgraded? Was a plugin upgraded? Was the OS or PHP upgraded? Was there a change in the default or user .profile or .env (or the equivalent files)? Does another process access the workspace?
Once I had the problem that, all of a sudden, my deployment scripts did not run anymore. The mystery was that I could still run the script from the command line with the Hudson user account. The reason was simple but took a while to uncover: there had been a Java upgrade from 5 to 6, and both versions were available. Comparing the environment variables revealed a difference in the PATH. The new path was set in the global .profile, but Hudson does not open an interactive shell, so .profile is never executed. If you have a problem like this, put the initialization in the .env file (or whatever the filename is on your system), because that is run regardless of whether the shell is interactive. Alternatively, you can configure Hudson to set the variable at the master or node/slave level.
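One way to do that comparison (the file names are arbitrary): capture the environment from inside a temporary Hudson build step, capture it again from an interactive shell under the same account, and diff the two:
env | sort > /tmp/hudson.env        # run this inside a Hudson shell build step
env | sort > /tmp/interactive.env   # run this from an interactive login as the same user
diff /tmp/interactive.env /tmp/hudson.env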
If you want a command not to break the build as a failure, add a #! line (shebang) in front of the command: by default Hudson runs shell build steps with /bin/bash -xe, and the -e flag aborts the step at the first non-zero exit status, which produces this behaviour. A custom shebang prevents those flags from being applied.
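For example, a sketch of the phpcpd step from the log above with a custom shebang (the || true is my addition, to swallow the exit status of that one command so the step ends with status 0):
#!/bin/bash
# no -e here, so a non-zero exit status no longer aborts the step
/usr/local/bin/phpcpd --log-pmd ./build/logs/cpd.xml ./trunk || true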