Are there any reasons why this is not a good idea? I ask because I constantly experience very, very inconsistent results. For example, while setting up my GH Actions over the last few days, I must have run at least 200 workflows. However, for the first time ever, I am now seeing this error:
Run ruby/setup-ruby#v1
with:
ruby-version: 3.0.2
bundler-cache: true
bundler: default
working-directory: .
cache-version: 0
env:
BUNDLE_GEMS__CONTRIBSYS__COM: ***
ImageOS: ubuntu20
Modifying PATH
Entries added to PATH to use selected Ruby:
/opt/hostedtoolcache/Ruby/3.0.2/x64/bin
Downloading Ruby
https://github.com/ruby/ruby-builder/releases/download/toolcache/ruby-3.0.2-ubuntu-20.04.tar.gz
Took 0.71 seconds
Extracting Ruby
/usr/bin/tar -xz -C /opt/hostedtoolcache/Ruby/3.0.2 -f /home/ubuntu/actions-runner-2/_work/_temp/7d0937cf-69b1-4c73-b1bd-7386fca820a2
/usr/bin/tar: x64/lib: Cannot utime: No such file or directory
/usr/bin/tar: Exiting with failure status due to previous errors
Took 0.52 seconds
Error: The process '/usr/bin/tar' failed with exit code 2
I have absolutely no clue whatsoever why this would be presenting itself. If I re-run the same workflow, the error goes away. I'm not sure if this is because one runner is conflicting with another while trying to access the /opt/hostedtoolcache/ directory or something else.
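If the cause really is two runs unpacking into /opt/hostedtoolcache at the same time, one workaround I'm considering is serializing runs of the workflow with a concurrency group at the top level of the workflow file (just a sketch; the group name below is a placeholder):
# allow only one run of this workflow at a time, so jobs don't race for the tool cache
concurrency:
  group: ruby-toolcache-${{ github.workflow }}
  cancel-in-progress: false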
Here's the exact same job re-run without any issues:
In AppVeyor I use the command:
- initexmf --admin --force --mklinks
but due to a problem it gives the message:
initexmf --admin --force --mklinks
Sorry, but "MiKTeX Configuration Utility" did not succeed for the following reason:
Script configuration file not found.
The log file hopefully contains the information to get MiKTeX going again:
C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
The system cannot find the path specified.
Command exited with code 1
Because of the error code the process terminates, so I can no longer print C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log with type, which makes this a bit hard to debug...
My questions:
How can I continue after an error?
How can I stop after outputting the file (exit 1?)
To run a script on failure, use the on_failure section; for example, to push initexmf_admin.log to the build artifacts:
on_failure:
- appveyor PushArtifact C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
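For context, here is roughly where that section would sit in appveyor.yml (a sketch; the install section shown is just the command from the question, your file may differ):
install:
  - initexmf --admin --force --mklinks

on_failure:
  - appveyor PushArtifact C:\ProgramData\MiKTeX\2.9\miktex\log\initexmf_admin.log
The on_failure scripts run after the build has failed, so the log file is still available to push as an artifact even though the failing command aborted the build.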
I am building a Yocto-based AGL image (for more details: automotivelinux.org).
The error below occurred during the build process (do_rootfs).
In packagegroup-agl-demo-platform.bb, packagegroup-agl-image-ivi is declared as a runtime dependency:
RDEPENDS_${PN} += "\
packagegroup-agl-image-ivi \
"
I can build packagegroup-agl-image-ivi successfully on its own, but when building the whole agl-demo-platform image the following happens:
ERROR: agl-demo-platform-1.0-r0 do_rootfs: Unable to install packages. Command '/LTSI4.9/LTSI4.4/build/tmp/work/m3ulcb-agl-linux/agl-demo-platform/1.0-r0/opkg.conf -t /LTSI4.9/build/tmp/work/m3ulcb-agl-linux/agl-demo-platform/1.0-r0/temp/ipktemp/ -o /LTSI4.9/build/tmp/work/m3ulcb-agl-linux/agl-demo-platform/1.0-r0/rootfs --force_postinstall --prefer-arch-to-version install
run-postinsts
screen
kernel-modules
packagegroup-agl-devel
packagegroup-core-eclipse-debug
mc packagegroup-core-tools-profile
kernel-module-vsp2 kernel-module-pvrsrvkm
kernel-module-vspm-if
opkg packagegroup-core-tools-debug
psplash kernel-module-vspm
packagegroup-core-ssh-openssh
packagegroup-agl-demo-platform
omx-user-module kernel-devicetree'
returned 1:
Solver encountered 1 problem(s):
Problem 1/1:
- package packagegroup-agl-demo-platform-1.0-r0.all requires packagegroup-agl-image-ivi, but none of the providers can be installed
Solution 1:
- do not ask to install a package providing packagegroup-agl-demo-platform
ERROR: agl-demo-platform-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /LTSI4.9/build/tmp/work/m3ulcb-agl-linux/agl-demo-platform/1.0-r0/temp/log.do_rootfs.14498
ERROR: Task (/LTSI4.9/meta-agl-demo/recipes-platform/images/agl-demo-platform.bb:do_rootfs) failed with exit code '1'
Can anyone help me out in this case?
I tried two approaches, and both worked.
First method: I cleaned all related packages and rebuilt the whole image.
$ bitbake -c cleanall -c cleansstate <recipes>
Here <recipes> consisted of all build-time and runtime dependent packages, though it can be a bit confusing for inexperienced users to work out which ones those are.
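For example, for this image the clean-and-rebuild cycle looked roughly like this (recipe names taken from the error above; a sketch, not the exact command history):
bitbake -c cleanall packagegroup-agl-image-ivi packagegroup-agl-demo-platform
bitbake agl-demo-platform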
Second method: I wiped out the build/tmp/, cache/ and sstate-cache/ folders and rebuilt all Yocto packages.
After that the error did not appear any more. This is a bad idea if you are in a critical period of time, since everything is rebuilt from scratch, but if you have free time it is helpful.
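Roughly, assuming it is run from the top of the build directory:
# remove all build output and shared state, then rebuild everything
rm -rf tmp/ cache/ sstate-cache/
bitbake agl-demo-platform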
I am working on a Linux From Scratch project and I've run into some (potential) issues. In chapter 6.49: Libffi-3.2.1, I executed the "sed", "configure" and "make" commands successfully but when I executed "make check", it appears that every test fails:
MAKE x86_64-unknown-linux-gnu : 0 * check
...stuff...
Test run by root on Fri Jul 13 23:51:21 2018
Native configuration is x86_64-unknown-linux-gnu
=== libffi tests ===
Schedule of variations:
unix
Running target unix
...a lot of failures...
FAIL: libffi.call/va_struct3.c -W -Wall -Wno-psabi -Os (test for excess errors)
FAIL: libffi.call/va_struct3.c -W -Wall -Wno-psabi -O2 -fomit-frame-pointer (test for excess errors)
=== libffi Summary ===
# of unexpected failures 685
# of unresolved testcases 685
I've been following the book pretty closely, but maybe I missed something along the way. Should I even worry about this or should I just install it anyway and move on?
Let me know if more information is needed.
Nevermind, I solved it. I basically went back and redid everything.
I usually copy and paste the commands from the LFS book into a bash script and run it that way (if I can get away with it). Apparently I forgot to write one for the Diffutils package. This seems strange to me, because I wouldn't have thought that a missing script would cause issues with a test suite that seems to use DejaGnu, let alone let me get that far without any trouble.
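To illustrate what I mean, the throwaway script for a package looks something like this (Diffutils shown, with the commands as given in the book; the /sources path is the usual LFS layout on my system):
#!/bin/bash
set -e                      # abort on the first failing command
cd /sources
tar -xf diffutils-*.tar.xz
cd diffutils-*/
./configure --prefix=/usr
make
make check
make install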
Anyway, my LFS is up and running now! 😁
I'm trying to compile Slic3r 1.2.9 (Git 65a23b) on Raspbian, and running sudo perl Build.PL --verbose fails while building the Perl module Time-HiRes-1.9754:
...
--> Working on Time::HiRes
Fetching http://www.cpan.org/authors/id/J/JH/JHI/Time-HiRes-1.9754.tar.gz ... OK
Configuring Time-HiRes-1.9754 ... FAIL
! Timed out (> 60s). Use --verbose to retry.
! Configure failed for Time-HiRes-1.9754. See /root/.cpanm/work/1520227993.988/build.log for details.
The log file shows a little more information, but I've never worked with Perl and I don't know where to start debugging:
$ tail /root/.cpanm/work/1520234788.2186/build.log
Looking for clock_getres()... found.
Looking for clock_nanosleep()... found.
Looking for clock()... found.
Looking for working futimens()... found.
Looking for working utimensat()... found.
You seem to have subsecond timestamp setting.
Looking for stat() subsecond timestamps...
Trying struct stat st_atimespec.tv_nsec...-> FAIL Timed out (> 60s). Use --verbose to retry.
-> N/A
-> FAIL Configure failed for Time-HiRes-1.9754. See /root/.cpanm/work/1520234788.2186/build.log for details.
I've posted an issue with Slic3r on GitHub, but I haven't had any suggestions yet - presumably it's not actually a problem with Slic3r itself.
What should I do next to work out what's going wrong?
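Following the hint in the log itself ("Use --verbose to retry"), the next thing I plan to try is running just that module by hand with cpanm and a longer configure timeout (a sketch; I'm assuming cpanm's --configure-timeout option here, and 600 seconds is an arbitrary choice):
sudo cpanm --verbose --configure-timeout 600 Time::HiRes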
Every build has failed since Tuesday. I'm not exactly sure what happened. The Phing targets (clean/prepare) are executed properly, and the unit tests pass with flying colors, with only a warning about duplicate code (not a reason for a failure). I tried removing the phpDoc target to see if that was causing the error, but the build still failed.
Started by user chris
Updating file://localhost/projects/svn/ips-com/trunk
At revision 234
no change for file://localhost/projects/svn/ips-com/trunk since the previous build
[trunk] $ /opt/phing/bin/phing clean prepare -logger phing.listener.NoBannerLogger
Buildfile: /var/lib/hudson/.hudson/jobs/IPS/workspace/trunk/build.xml

IPS > clean:
[echo] Clean...
[delete] Deleting directory /var/lib/hudson/.hudson/jobs/IPS/workspace/build

IPS > prepare:
[echo] Prepare...
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage-html
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/docs
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/app

BUILD FINISHED

Total time: 1.0244 second

[workspace] $ /bin/bash -xe /tmp/hudson3259012225710915845.sh
+ cd trunk/tests
+ /usr/local/bin/phpunit --verbose -d memory_limit=512M --log-junit ../../build/logs/phpunit.xml --coverage-clover ../../build/logs/coverage/clover.xml --coverage-html ../../build/logs/coverage-html/
PHPUnit 3.5.0 by Sebastian Bergmann.

IPS
Default_IndexControllerTest .
Default_AuthControllerTest ......
Manage_UsersControllerTest .....
testDeleteInvalidUserId ..
testGetPermissionsForInvalidUserId ..
Audit_OverviewControllerTest ............

Time: 14 seconds, Memory: 61.00Mb

OK (28 tests, 198 assertions)

Writing code coverage data to XML file, this may take a moment.
Generating code coverage report, this may take a moment.

Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0

[workspace] $ /bin/bash -xe /tmp/hudson1439023061736436000.sh
+ /usr/local/bin/phpcpd --log-pmd ./build/logs/cpd.xml ./trunk
phpcpd 1.3.2 by Sebastian Bergmann.

Found 1 exact clones with 6 duplicated lines in 2 files:
library/Ips/Form/Decorator/SplitInput.php:8-14
library/Ips/Form/Decorator/FeetInches.php:10-16

0.04% duplicated lines out of 16585 total lines of code.

Time: 4 seconds, Memory: 19.50Mb

[DRY] Skipping publisher since build result is FAILURE
Publishing Javadoc
[xUnit] [INFO] - Starting to record.
[xUnit] [WARNING] - Can't create the path /var/lib/hudson/.hudson/jobs/IPS/workspace/generatedJUnitFiles. Maybe the directory already exists.
[xUnit] [INFO] - Processing PHPUnit-3.4 (default)
[xUnit] [INFO] - [PHPUnit-3.4 (default)] - 1 test report file(s) were found with the pattern 'build/logs/phpunit.xml' relative to '/var/lib/hudson/.hudson/jobs/IPS/workspace' for the testing framework 'PHPUnit-3.4 (default)'.
[xUnit] [INFO] - Converting '/var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/phpunit.xml' .
[xUnit] [INFO] - Stopping recording.
Publishing Clover coverage report...
Publishing Clover XML report...
Publishing Clover coverage results...
Finished: FAILURE
What changed since Tuesday? Try to manually run exactly the same commands that Hudson runs, from the same directory that Hudson starts them in (usually the job's workspace directory), and of course with the user account that Hudson is started under.
There are several possibilities, ranging from group membership on a directory, to permissions, to other things outside of Hudson. Was Hudson upgraded? Was a plugin upgraded? Was the OS or PHP upgraded? Was there a change in the default or user .profile or .env (or the equivalent files)? Does another process access the workspace?
Once I had the problem that all of a sudden my deployment scripts did not run anymore. The mystery was that I could still run the script from the command line with the Hudson user account. The reason was simple but took a while to uncover: there had been a Java upgrade from 5 to 6, and both versions were available. After comparing the environment variables, there was a difference in the PATH. The new path was set in the global .profile, but Hudson does not open an interactive shell, so the .profile is never executed. If you have a problem like this, you can put the initialization in the .env file (or whatever the filename is on your system), because that is run regardless of whether the shell is interactive. Alternatively, you can configure Hudson to set the variable at the master or node/slave level.
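To make the manual comparison concrete, something like the following, using the commands and paths from the console output above (the hudson account name is an assumption, use whatever user the daemon runs as):
su - hudson      # note: a login shell reads .profile, which the Hudson daemon itself does not
cd /var/lib/hudson/.hudson/jobs/IPS/workspace
/opt/phing/bin/phing clean prepare -logger phing.listener.NoBannerLogger
cd trunk/tests
/usr/local/bin/phpunit --verbose -d memory_limit=512M --log-junit ../../build/logs/phpunit.xml --coverage-clover ../../build/logs/coverage/clover.xml --coverage-html ../../build/logs/coverage-html/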
If you want a failing command not to mark the build as a failure, put a shebang line such as #!/bin/bash at the top of the shell build step. Without one, Hudson runs the step with sh -xe, and the -e flag makes the step fail on the first command that returns a non-zero exit code.
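For example, the phpcpd step from the output above could be written like this in the shell build step (a sketch; the trailing || true is only needed if the command's own exit status should also be ignored):
#!/bin/bash
# an explicit shebang stops Hudson/Jenkins from invoking the step with sh -xe
/usr/local/bin/phpcpd --log-pmd ./build/logs/cpd.xml ./trunk || true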