This is the first time I have ever used make. I am trying to install the Julia language, so I cloned it from GitHub:
git clone git://github.com/JuliaLang/julia.git
The instructions then say to enter the Julia directory and type make. It ran for a very long time - I ate a pizza.
When I got back, typing julia did not work. Towards the end of the installation, I got a long error message:
/usr/bin/install -c -m 644 libpcre.pc libpcreposix.pc libpcrecpp.pc '/home/john/Downloads/julia/usr/lib/pkgconfig'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   130  100   130    0     0    243      0 --:--:-- --:--:-- --:--:--   337
  0     0    0 8773k    0     0   310k      0 --:--:--  0:00:28 --:--:--     0
curl: (28) Operation too slow. Less than 1 bytes/sec transferred the last 15 seconds
curl: (6) name lookup timed out
make[2]: *** [openblas-v0.2.8.tar.gz] Error 6
make[1]: *** [julia-release] Error 2
make: *** [release] Error 2
I tried sudo make - putting sudo in front seems to solve everything, but not this time:
gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
make[2]: *** [openblas-v0.2.8/config.status] Error 2
make[1]: *** [julia-release] Error 2
make: *** [release] Error 2
What steps can I take to make sure Julia installs properly?
I need version 2.0 so I can use IJulia with my IPython notebook. If there is an easier way that avoids compiling from source, I would just do that.
The problem is that the makefile is trying to download a file (curl is a command line program that acts like a web browser, and is often used to download files from websites).
However, for whatever reason (maybe the internet was tired), the download failed and timed out.
The reason it fails now with the unexpected end of file error is that (a) the download gave you part of a file before it failed, and (b) the makefile you're using is badly written so it didn't clean up the partly-downloaded file on failure.
So, that file exists and thus make won't try to download it, but it's only partial so when you try to uncompress it, it fails.
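As an aside, a more defensive download recipe removes the partial file whenever the transfer fails, so the next run starts clean. A minimal sketch of the idea (the URL is illustrative, not the real OpenBLAS one):

# fetch, but delete the partial file on failure so a re-run retries cleanly
curl -fL -o openblas-v0.2.8.tar.gz 'https://example.com/openblas-v0.2.8.tar.gz' || { rm -f openblas-v0.2.8.tar.gz; exit 1; }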
You should delete the file it tried to download by hand (with something like rm -f openblas-v0.2.8.tar.gz) then re-run make. Maybe the internet has woken up, or drunk some coffee, and the download will work this time.
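Concretely, something like this (a sketch assuming the checkout lives in ~/Downloads/julia; use the find line first if the tarball landed somewhere else):

cd ~/Downloads/julia                      # wherever the clone lives
find . -name 'openblas-v0.2.8.tar.gz'     # locate the truncated download
rm -f deps/openblas-v0.2.8.tar.gz         # assumption: it sits under deps/
make                                      # re-run; make will fetch the file again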
Related
Are there any reasons why this is not a good idea? I ask because I constantly experience very, very inconsistent results. For example, while setting up my GH Actions over the last few days, I must have run at least 200 workflows. However, for the first time ever, I am now seeing this error:
Run ruby/setup-ruby@v1
with:
ruby-version: 3.0.2
bundler-cache: true
bundler: default
working-directory: .
cache-version: 0
env:
BUNDLE_GEMS__CONTRIBSYS__COM: ***
ImageOS: ubuntu20
Modifying PATH
Entries added to PATH to use selected Ruby:
/opt/hostedtoolcache/Ruby/3.0.2/x64/bin
Downloading Ruby
https://github.com/ruby/ruby-builder/releases/download/toolcache/ruby-3.0.2-ubuntu-20.04.tar.gz
Took 0.71 seconds
Extracting Ruby
/usr/bin/tar -xz -C /opt/hostedtoolcache/Ruby/3.0.2 -f /home/ubuntu/actions-runner-2/_work/_temp/7d0937cf-69b1-4c73-b1bd-7386fca820a2
/usr/bin/tar: x64/lib: Cannot utime: No such file or directory
/usr/bin/tar: Exiting with failure status due to previous errors
Took 0.52 seconds
Error: The process '/usr/bin/tar' failed with exit code 2
I have absolutely no clue whatsoever why this would be presenting itself. If I re-run the same workflow, the error goes away. I'm not sure if this is because one runner is conflicting with another while trying to access the /opt/hostedtoolcache/ directory or something else.
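If that hunch about two runners colliding is right, the failure can be staged in miniature by yanking a directory out from under a running tar (everything below is illustrative, not from the real runner):

mkdir -p /tmp/toolcache-demo
tar -xzf ruby-3.0.2.tar.gz -C /tmp/toolcache-demo &   # extraction in progress
sleep 0.1 && rm -rf /tmp/toolcache-demo/x64/lib       # another process removes a dir
wait $! || echo "tar exited with $?"                  # tar typically exits 2 with errors
                                                      # like "Cannot utime: No such file or directory"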
The exact same job, re-run, completes without any issues.
I have a QEMU VM running an image of the Linux kernel 4.14.78.
On the host machine (a server with 96 cores), I am trying to compile a new update for the kernel with some changes I have made.
To make this process faster, I was using the host machine to compile for the target VM.
To do that I follow these steps (sketched as plain commands right after this list):
copy the /boot/config-4.14.78 file from the VM to the host
put the copied file into the kernel source-code root directory, renaming it to .config
run make clean to clean the tree
run make menuconfig just to update the configs
run make -j$(nproc)
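For reference, the same steps as commands (the VM hostname and the source-tree path are assumptions):

scp vm:/boot/config-4.14.78 ~/linux-4.14.78/.config   # copy the VM's config over
cd ~/linux-4.14.78
make clean                # start from a clean tree
make menuconfig           # just to refresh the configuration
make -j"$(nproc)"         # parallel build across all 96 cores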
However, I am getting this error:
AS arch/x86/purgatory/setup-x86_64.o
CC arch/x86/purgatory/sha256.o
AS arch/x86/purgatory/entry64.o
CC arch/x86/purgatory/string.o
In file included from scripts/selinux/mdp/mdp.c:49:
./security/selinux/include/classmap.h:245:2: error: #error New address family defined, please update secclass_map.
245 | #error New address family defined, please update secclass_map.
| ^~~~~
make[3]: *** [scripts/Makefile.host:102: scripts/selinux/mdp/mdp] Error 1
make[2]: *** [scripts/Makefile.build:587: scripts/selinux/mdp] Error 2
make[2]: *** Waiting for unfinished jobs....
In file included from scripts/selinux/genheaders/genheaders.c:19:
./security/selinux/include/classmap.h:245:2: error: #error New address family defined, please update secclass_map.
245 | #error New address family defined, please update secclass_map.
| ^~~~~
CHK scripts/mod/devicetable-offsets.h
make[3]: *** [scripts/Makefile.host:102: scripts/selinux/genheaders/genheaders] Error 1
make[2]: *** [scripts/Makefile.build:587: scripts/selinux/genheaders] Error 2
make[1]: *** [scripts/Makefile.build:587: scripts/selinux] Error 2
make[1]: *** Waiting for unfinished jobs....
I have checked what causes this, and it turns out that it is because of these definitions:
include/linux/socket.h:211:#define AF_MAX 44 /* For now.. */
include/linux/socket.h:260:#define PF_MAX AF_MAX
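A quick way to double-check what the tree itself defines is to grep it directly:

grep -n 'define AF_MAX' include/linux/socket.h    # expect 44 here
grep -n 'define PF_MAX' include/linux/socket.h    # PF_MAX is aliased to AF_MAX
grep -n 'PF_MAX' security/selinux/include/classmap.h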
Then, I followed this solution to print out the definition of PF_MAX at preprocessing time, and it turns out that PF_MAX is 45:
In file included from scripts/selinux/mdp/mdp.c:49:
./security/selinux/include/classmap.h:247:9: note: #pragma message: 45
247 | #pragma message(STRING(PF_MAX))
| ^~~~~~~
./security/selinux/include/classmap.h:250:2: error: #error New address family defined, please update secclass_map.
250 | #error New address family defined, please update secclass_map.
| ^~~~~
This 45 makes no sense to me, because I just checked that it is supposed to be 44.
I wonder if the build is considering the host machine instead of the target?
P.S.: These steps work fine on my local machine, which is an 8-core machine; here is its kernel version:
uname -a
Linux campes-note 5.4.86 #1 SMP Fri Jan 1 16:26:25 -03 2021 x86_64 x86_64 x86_64 GNU/Linux
UPDATE 1:
I tried to compile the kernel without any of my changes, following the steps mentioned above, and it did not compile either; I get the same error.
UPDATE 2:
I found out that, somehow, the compilation is looking at the host's /usr/src/linux-headers-x.x.x files.
Instead, it should use headers matching the target version.
To fix that, I tried to follow this tutorial, but I did not succeed; I am stuck on one of the steps it describes.
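One hedged way to confirm the suspicion: preprocess a one-liner the same way a host tool is compiled and see which socket.h gets pulled in. If the output shows paths under /usr/include, the host's headers are being used:

# show where <linux/socket.h> resolves from for a host-compiled program
echo '#include <linux/socket.h>' | gcc -E -x c - | grep 'socket\.h' | head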
(Gathered from the now removed comments)
I have tried myself to build v4.14.78 followed by the latest available v4.14.214. I have found that the former fails while the latter builds. So, I have bisected down to v4.14.116, the first version that builds correctly. Then I simply looked into the changes and found commit 760f8522ce08 ("selinux: use kernel linux/socket.h for genheaders and mdp") in the Linux stable tree, which fixes the issue.
You may try to cherry-pick it to your branch and compile again.
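A sketch of how that might look from inside your source tree (the remote URL is the usual linux-stable tree; adjust it if you track a different one):

git remote add stable https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
git fetch stable
git cherry-pick 760f8522ce08    # selinux: use kernel linux/socket.h for genheaders and mdp
make -j"$(nproc)"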
I cloned yay with git clone https://aur.archlinux.org/yay.git. I enter the directory and run makepkg -sic, but unfortunately I get this error:
==> Making package: yay 10.1.0-1 (Mon 26 Oct 2020 06:25:36 AM +0330)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
-> Downloading yay-10.1.0.tar.gz...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   118  100   118    0     0     97      0  0:00:01  0:00:01 --:--:--    97
100  339k  100  339k    0     0   103k      0  0:00:03  0:00:03 --:--:--  168k
==> Validating source files with sha256sums...
yay-10.1.0.tar.gz ... Passed
==> Extracting sources...
-> Extracting yay-10.1.0.tar.gz with bsdtar
==> Starting build()...
go build -v -trimpath -mod=readonly -modcacherw -ldflags '-s -w -extldflags "-Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now" -X "main.yayVersion=10.1.0" -X "main.localePath=/usr/share/locale/"' -buildmode=pie -o yay
go: github.com/Jguer/go-alpm/v2@v2.0.1: Get "https://gocenter.io/github.com/%21jguer/go-alpm/v2/@v/v2.0.1.mod": dial tcp 35.230.74.213:443: i/o timeout
make: *** [Makefile:127: yay] Error 1
==> ERROR: A failure occurred in build().
Aborting...
Are you getting the error consistently? If so, it looks like the host you are using to build does not have access to GoCenter.
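If it is consistent, two hedged checks: see whether the proxy is reachable at all, and if not, point Go at a different module proxy before building (GOPROXY is a standard Go environment variable; the proxy chosen below is just an example):

curl -sI https://gocenter.io | head -n 1          # is the host reachable?
export GOPROXY=https://proxy.golang.org,direct    # fall back to another proxy
makepkg -sic                                      # retry the build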
exec: "gcc": executable file not found in $PATH
Check the prerequisites; see https://wiki.archlinux.org/index.php/Ar … Repository
But most of the time the problem is with gcc, which can be solved in the following way:
pamac install base-devel
choose a gcc version from the list
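A hedged pre-flight check before re-running makepkg (pamac is Manjaro's tool; on plain Arch the pacman line below does the same job):

command -v gcc || echo "gcc is missing"   # is a compiler on PATH at all?
sudo pacman -S --needed base-devel        # pulls in gcc, make, and friends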
Hi, I use Fedora 23 to do matrix computations, so I am trying to install CLAPACK-3.2.1 on my computer.
My procedure:
1. download clapack.tgz (version 3.2.1) from www.netlib.org/clapack -> done
2. cd CLAPACK-3.2.1 and cp make.inc.example make.inc -> done
3. make f2clib -> done properly
4. make blaslib -> done properly
5. make (this takes a while) -> problem starts here.
During make, two errors occur. The error message is this:
make[2]: Leaving directory '/home/optics/CLAPACK/TESTING/EIG'
NEP: Testing Nonsymmetric Eigenvalue Problem routines
./xeigtstz < nep.in > znep.out 2>&1
/bin/sh: line 1: 9412 Segmentation fault (core dumped) ./xeigtstz < nep.in > znep.out 2>&1
Makefile:438: recipe for target 'znep.out' failed
make[1]: *** [znep.out] Error 139
make[1]: Leaving directory '/home/optics/CLAPACK/TESTING'
Makefile:44: recipe for target 'lapack_testing' failed
make: *** [lapack_testing] Error 2
==============================================================================
I cannot understand this. Please help me deal with these errors.
I have also encountered this problem and was able to resolve it by increasing the stack size using ulimit as suggested here. The following worked for me:
$ ulimit -s 100000
Followed by running make as usual. If you would like a primer on what this command does, check out this question: What does “ulimit -s unlimited” do?. Basically, it raises the limit on the stack, the scratch space in memory allocated to a thread.
In Kubuntu 17.10 it worked in this way:
ulimit -s unlimited
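Putting the two answers together: ulimit is a shell builtin, so it has to run in the same shell that then runs make (prefixing it with sudo does not carry the new limit over). A minimal sketch:

ulimit -s              # inspect the current soft stack limit (in KB)
ulimit -s unlimited    # or a large value such as 100000
make                   # re-run the failing build in this same shell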
Every build has failed since Tuesday. I'm not exactly sure what happened. The Phing targets (clean/prepare) are being executed properly, and the unit tests are passing with flying colors, with only a warning for duplicate code (not a reason for a failure). I tried removing the phpDoc target to see if that was causing the error, but the build still failed.
Started by user chris
Updating file://localhost/projects/svn/ips-com/trunk
At revision 234
no change for file://localhost/projects/svn/ips-com/trunk since the previous build
[trunk] $ /opt/phing/bin/phing clean prepare -logger phing.listener.NoBannerLogger
Buildfile: /var/lib/hudson/.hudson/jobs/IPS/workspace/trunk/build.xml

IPS > clean:
[echo] Clean...
[delete] Deleting directory /var/lib/hudson/.hudson/jobs/IPS/workspace/build

IPS > prepare:
[echo] Prepare...
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage-html
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/docs
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/app

BUILD FINISHED
Total time: 1.0244 second

[workspace] $ /bin/bash -xe /tmp/hudson3259012225710915845.sh
+ cd trunk/tests
+ /usr/local/bin/phpunit --verbose -d memory_limit=512M --log-junit ../../build/logs/phpunit.xml --coverage-clover ../../build/logs/coverage/clover.xml --coverage-html ../../build/logs/coverage-html/
PHPUnit 3.5.0 by Sebastian Bergmann.

IPS
 Default_IndexControllerTest .
 Default_AuthControllerTest ......
 Manage_UsersControllerTest .....
 testDeleteInvalidUserId ..
 testGetPermissionsForInvalidUserId ..
 Audit_OverviewControllerTest ............

Time: 14 seconds, Memory: 61.00Mb

OK (28 tests, 198 assertions)

Writing code coverage data to XML file, this may take a moment.
Generating code coverage report, this may take a moment.

Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0

[workspace] $ /bin/bash -xe /tmp/hudson1439023061736436000.sh
+ /usr/local/bin/phpcpd --log-pmd ./build/logs/cpd.xml ./trunk
phpcpd 1.3.2 by Sebastian Bergmann.

Found 1 exact clones with 6 duplicated lines in 2 files:
library/Ips/Form/Decorator/SplitInput.php:8-14
library/Ips/Form/Decorator/FeetInches.php:10-16

0.04% duplicated lines out of 16585 total lines of code.
Time: 4 seconds, Memory: 19.50Mb

[DRY] Skipping publisher since build result is FAILURE
Publishing Javadoc
[xUnit] [INFO] - Starting to record.
[xUnit] [WARNING] - Can't create the path /var/lib/hudson/.hudson/jobs/IPS/workspace/generatedJUnitFiles. Maybe the directory already exists.
[xUnit] [INFO] - Processing PHPUnit-3.4 (default)
[xUnit] [INFO] - [PHPUnit-3.4 (default)] - 1 test report file(s) were found with the pattern 'build/logs/phpunit.xml' relative to '/var/lib/hudson/.hudson/jobs/IPS/workspace' for the testing framework 'PHPUnit-3.4 (default)'.
[xUnit] [INFO] - Converting '/var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/phpunit.xml' .
[xUnit] [INFO] - Stopping recording.
Publishing Clover coverage report...
Publishing Clover XML report...
Publishing Clover coverage results...
Finished: FAILURE
What changed since Tuesday? Try manually running exactly the same commands that Hudson runs, from the same directory Hudson starts them in (usually the job's workspace directory), and of course with the user account Hudson runs under.
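A hedged sketch of that replay, with the paths taken from the console output above:

sudo -u hudson /bin/bash          # switch to the account Hudson runs under
cd /var/lib/hudson/.hudson/jobs/IPS/workspace/trunk
/opt/phing/bin/phing clean prepare -logger phing.listener.NoBannerLogger
cd tests
/usr/local/bin/phpunit --verbose -d memory_limit=512M --log-junit ../../build/logs/phpunit.xml --coverage-clover ../../build/logs/coverage/clover.xml --coverage-html ../../build/logs/coverage-html/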
There are several possibilities, ranging from group ownership of a directory, to permissions, to other things outside of Hudson. Was Hudson upgraded? Was a plugin upgraded? Was the OS or PHP upgraded? Was there a change in the default or user .profile or .env (or the equivalent files)? Does another process access the workspace? ...
Once I had the problem that all of a sudden my deployment scripts did not run anymore. The mystery was that I could still run the script from the command line with the Hudson user account. The reason was simple but took a while to uncover: there had been a Java upgrade from 5 to 6, and both versions were available. After comparing the environment variables, there was a difference in the PATH. The new path was set in the global .profile, but Hudson does not open an interactive shell, so the .profile is not executed. If you have a problem like this, you can put the initialization in the .env file (or whatever the filename is on your system), because that is read regardless of whether the shell is interactive. Alternatively, you can configure Hudson to set it at the master or node/slave level.
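When you suspect an environment difference like this, one way to pin it down is to diff the two environments directly (a sketch; sudo -u only approximates the daemon's environment):

env | sort > /tmp/env.interactive                            # your login shell
sudo -u hudson /bin/bash -c 'env | sort' > /tmp/env.hudson   # roughly Hudson's view
diff /tmp/env.interactive /tmp/env.hudson                    # look for PATH differences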
If you want a command not to break the build as a failure, add a #! line of your own at the top of the shell build step; that stops Hudson from invoking the script with the -xe flags, which produce this behaviour.
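For example (a sketch; the phpcpd line is just the command from the log above):

#!/bin/bash
# custom shebang: Hudson now runs this script without -xe, so echoing and
# exit-on-error are off; the || true swallows phpcpd's non-zero exit
# status so the step itself still exits 0
/usr/local/bin/phpcpd --log-pmd ./build/logs/cpd.xml ./trunk || true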