cabal install scion-browser fails on Ubuntu 12.04 because haskeline needs Cabal library version >= 1.16 - eclipse

I installed EclipseFP, the Haskell plugin for Eclipse, on my Ubuntu 12.04 machine running Eclipse 3.7.2 and ghc(i) 7.4.1. Every time I start Eclipse, EclipseFP asks me to install the helper executables scion-browser (0.2.12) and buildwrapper (0.7.2), but ultimately fails to install both.
Trying cabal install scion-browser (or cabal install haskeline) on the command line fails with
Resolving dependencies...
cabal: Error: some packages failed to install:
haskeline-0.7.1.2 failed during the configure step. The exception was:
user error (The package requires Cabal library version -any && >=1.16 but no
suitable version is installed.)
Whereas cabal install buildwrapper fails with
Resolving dependencies...
Configuring buildwrapper-0.7.7...
Building buildwrapper-0.7.7...
Preprocessing library buildwrapper-0.7.7...
[1 of 7] Compiling Language.Haskell.BuildWrapper.Base ( src/Language/Haskell/BuildWrapper/Base.hs, dist/build/Language/Haskell/BuildWrapper/Base.o )
[2 of 7] Compiling Language.Haskell.BuildWrapper.GHCStorage ( src/Language/Haskell/BuildWrapper/GHCStorage.hs, dist/build/Language/Haskell/BuildWrapper/GHCStorage.o )
src/Language/Haskell/BuildWrapper/GHCStorage.hs:542:22:
Couldn't match expected type `scientific-0.2.0.1:Data.Scientific.Scientific'
with actual type `Number'
In the pattern: I l
In the pattern: Number (I l)
In the pattern: Just (Number (I l))
cabal: Error: some packages failed to install:
buildwrapper-0.7.7 failed during the building phase. The exception was:
ExitFailure 1
Any help would be greatly appreciated, as I don't seem to be able to find any Google hits for either error.
EDIT:
After reinstalling Haskell (it seems I had two versions of containers installed, which runhaskell Setup.hs configure --user rightly complained about), I can now configure BuildWrapper, but building it fails with the following error:
[3 of 7] Compiling Language.Haskell.BuildWrapper.GHC ( src/Language/Haskell/BuildWrapper/GHC.hs, dist/build/Language/Haskell/BuildWrapper/GHC.o )
src/Language/Haskell/BuildWrapper/GHC.hs:522:37:
The function `showPpr' is applied to two arguments,
but its type `a0 -> String' has only one
In the second argument of `(++)', namely `showPpr dflags bname'
In the expression: "show " ++ showPpr dflags bname
In an equation for `exprS': exprS = "show " ++ showPpr dflags bname

The issue with BuildWrapper is due to a breaking change in aeson; see https://github.com/JPMoresmau/BuildWrapper/issues/20. You can get the buildwrapper source code from GitHub (which fixes the bounds and adapts the code) or force the install of aeson 0.6.
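For example, a minimal sketch of forcing the older aeson with a constraint (the exact bound may need adjusting to your package set; aeson 0.7 is where the Number/Scientific change landed):
# keep aeson in the 0.6 series so buildwrapper's Number pattern match still compiles
cabal install buildwrapper --constraint='aeson < 0.7'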
For haskeline I'm not sure; can you try to install haskeline on its own?

I think you should install a newer Cabal library, according to the message: The package requires Cabal library version -any && >=1.16.
cabal update
cabal install cabal
cabal install cabal-install
cabal --version
should be:
using version 1.20.0.0 of the Cabal library
If not, fix your PATH; the new cabal binary is probably in ~/.cabal/bin (see the sketch after these steps).
then:
cabal install haskeline
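A minimal sketch of the PATH fix, assuming the default cabal-install layout where user-installed binaries land in ~/.cabal/bin:
# put the freshly installed cabal ahead of the system one for this shell
export PATH="$HOME/.cabal/bin:$PATH"
# and make it permanent for future shells
echo 'export PATH="$HOME/.cabal/bin:$PATH"' >> ~/.bashrc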

I had the same error (The function `showPpr' is applied to two arguments), both while installing with cabal and building from source.
I tried with an older version of aeson:
cabal install buildwrapper --constraint=aeson==0.6.2.1
Be careful to replace the aeson version with whichever version you have installed that is closest to 0.6.2.1 (see the check sketched below).
It produced a lot of warnings, but it still built successfully.
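A quick way to see which aeson versions are registered with GHC before choosing the constraint (assuming a standard ghc-pkg setup):
# lists every registered aeson version, including hidden ones
ghc-pkg list aeson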

ModuleNotFoundError: No module named 'cartopy' when import SkewT from metpy.plots under Python3

When trying to import SkewT into my python3 code on a Mac (Mojave 10.14.6):
from metpy.plots import SkewT
I get the error:
ModuleNotFoundError: No module named 'cartopy'
pip3 install cartopy gives the output
Collecting cartopy
Downloading https://files.pythonhosted.org/packages/e5/92/fe8838fa8158931906dfc4f16c5c1436b3dd2daf83592645b179581403ad/Cartopy-0.17.0.tar.gz (8.9MB)
|████████████████████████████████| 8.9MB 616kB/s
Installing build dependencies ... done
Getting requirements to build wheel ... error
ERROR: Complete output from command /usr/local/opt/python/bin/python3.7 /usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py get_requires_for_build_wheel /tmp/tmpj50b1vfe:
ERROR: setup.py:171: UserWarning: Unable to determine GEOS version. Ensure you have 3.3.3 or later installed, or installation may fail.
'.'.join(str(v) for v in GEOS_MIN_VERSION), ))
Proj 4.9.0 must be installed.
----------------------------------------
ERROR: Command "/usr/local/opt/python/bin/python3.7 /usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py get_requires_for_build_wheel /tmp/tmpj50b1vfe" failed with error code 1 in /private/tmp/pip-install-b5cu8485/cartopy
To start, I tried to install Proj and geos, but pip3 only lists version 0.1.0 for proj and 0.2.2 for geos. Before I get too far down this rabbit hole, I thought I'd see if anyone else has encountered this problem. Thanks!
So it looks like MetPy 0.10 accidentally picked up a hard dependency on CartoPy, which we did not really plan. You can track our resolution of that here.
CartoPy depends on a lot of compiled libraries that are not pip-installable unfortunately. Your best bet is to look at CartoPy's install instructions. If you're using Anaconda or Canopy, those distributions have pre-built CartoPy packages available.
One option to work around this is to install MetPy 0.9:
pip install metpy==0.9
Do you use conda? The easiest way to remedy this problem is to install CartoPy (or MetPy, for that matter) via conda, so that all of the right dependencies are also downloaded: conda install -c conda-forge cartopy or conda install -c conda-forge metpy. Pip doesn't bring all of them in together, which is what leads to this problem.
Thanks. Without conda, I was also able to complete this (more painful) installation; a quick import check is sketched after the steps:
- brew install geos
- brew install proj
- pip3 install cython
- pip3 install git+https://github.com/SciTools/cartopy.git#master 
(see http://louistiao.me/posts/installing-cartopy-on-mac-osx-1011/)
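To sanity-check the result, an import test like this should succeed once cartopy builds correctly (a sketch, assuming the same python3 that ran the installs):
# should print 'ok' if the SkewT import no longer fails
python3 -c "from metpy.plots import SkewT; print('ok')"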

vscode assistance with building cpptools needed

Ubuntu 18.04 ARM64
I have downloaded and built vscode, and this appears to be working.
I can see the extension marketplace and install extensions. The C/C++ IntelliSense, debugging, and code browsing extension installs, but sometimes it gets a dependency install failure:
Updating C/C++ dependencies...
Downloading package 'Mono Framework Assemblies' (5368 KB) Done!
Installing package 'Mono Framework Assemblies'
Failed at stage: installPackages
Error: end of central directory record signature not found
It seems to succeed on the second attempt but I'm not convinced.
So I have cloned vscode-cpptools and would like to build it myself, but I'm not sure what dependencies it has or how to build it correctly.
Any tips appreciated!
So following the build and debug guide at:
$ git clone -b release https://github.com/Microsoft/vscode-cpptools
$ cd vscode-cpptools/Extension
$ npm install       # should install all dependencies, but it forgets gulp
$ npm install gulp  # install it manually; I wonder what else it forgets
$ vsce package      # should trigger the build and make the .vsix package
So the package is created, but when I try to install it via the vscode extensions view I get:
Unable to start the C/C++ language server. IntelliSense features will be disabled. Error: Missing binary at ~/.vscode-oss-dev/extensions/ms-vscode.cpptools-0.22.1/bin/Microsoft.VSCode.CPP.Extension.linux.
Methinks there are a lot of other dependencies that are missing!
Looking in the Extension bin folder, two important binaries are missing:
Microsoft.VSCode.CPP.Extension.linux
Microsoft.VSCode.CPP.IntelliSense.Msvc.linux
I also tried this on Intel Ubuntu 18.04, and while the Intel build appeared to do a whole lot more, it also failed to build the binaries.
Found the answer here: github.com/Microsoft/vscode-cpptools/issues/429, which indicates there is no support for AArch64/ARM64 at this point in time.
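For reference, a quick way to confirm which architecture you are building on (a generic check, not specific to vscode-cpptools):
# prints aarch64 on ARM64 Ubuntu and x86_64 on Intel
uname -m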

module not found: org.scala-sbt#sbt;1.1.6

I installed sbt from the terminal with the following commands:
echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823
sudo apt-get update
sudo apt-get install sbt
on my Ubuntu 18.04 machine, with Java version:
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-0ubuntu0.18.04.1-b11)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode)
The installation was successful, but when I tried to start sbt from the terminal, I got:
https://repo.scala-sbt.org/scalasbt/ivy-snapshots/org.scala-sbt/sbt/1.1.6/ivys/ivy.xml
::::::::::::::::::::::::::::::::::::::::::::::
:: UNRESOLVED DEPENDENCIES ::
::::::::::::::::::::::::::::::::::::::::::::::
:: org.scala-sbt#sbt;1.1.6: not found
::::::::::::::::::::::::::::::::::::::::::::::
:::: ERRORS
Server access Error: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty url=https://repo1.maven.org/maven2/org/scala-sbt/sbt/1.1.6/sbt-1.1.6.pom
Server access Error: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty url=https://repo1.maven.org/maven2/org/scala-sbt/sbt/1.1.6/sbt-1.1.6.jar
Server access Error: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty url=https://repo.scala-sbt.org/scalasbt/maven-releases/org/scala-sbt/sbt/1.1.6/sbt-1.1.6.pom
Server access Error: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty url=https://repo.scala-sbt.org/scalasbt/maven-releases/org/scala-sbt/sbt/1.1.6/sbt-1.1.6.jar
Server access Error: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty url=https://repo.scala-sbt.org/scalasbt/maven-snapshots/org/scala-sbt/sbt/1.1.6/sbt-1.1.6.pom
Server access Error: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty url=https://repo.scala-sbt.org/scalasbt/maven-snapshots/org/scala-sbt/sbt/1.1.6/sbt-1.1.6.jar
Server access Error: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty url=https://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt/1.1.6/ivys/ivy.xml
Server access Error: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty url=https://repo.scala-sbt.org/scalasbt/ivy-snapshots/org.scala-sbt/sbt/1.1.6/ivys/ivy.xml
What is wrong?
Update
developer#monad:~$ sudo apt-get purge openjdk-8-jdk java-common
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
openjdk-11-jre-headless : Depends: ca-certificates-java but it is not going to be installed
Depends: java-common (>= 0.28) but it is not going to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
List of installed Java versions:
developer#monad:~$ update-java-alternatives --list
java-1.11.0-openjdk-amd64 1101 /usr/lib/jvm/java-1.11.0-openjdk-amd64
java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64
Update 2
sudo apt-get install ca-certificates-java
Reading package lists... Done
Building dependency tree
Reading state information... Done
ca-certificates-java is already the newest version (20170930ubuntu1).
ca-certificates-java set to manually installed.
The following packages were automatically installed and are no longer required:
libice-dev libsm-dev libxt-dev
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
The root cause is a conflict between openjdk-11-jdk (which is the default in Ubuntu 18.04) and the sbt package's settings. It has already been fixed in Debian and will be included in Ubuntu shortly. Meanwhile, the simplest workaround is to downgrade your Java to version 8. Other solutions involving ca-certificates-java are much more complicated.
First remove conflicting packages:
sudo apt-get remove --purge openjdk* java-common default-jdk
sudo apt-get remove --purge sbt
sudo apt-get autoremove --purge
Check whether you successfully removed all related packages with:
sudo update-alternatives --config java
The system should report that there is no Java available to configure; otherwise this workaround fails.
Then reinstall required packages:
sudo apt-get install openjdk-8-jdk sbt
Test by:
sbt compile
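It may also be worth confirming which Java the shell now picks up before running sbt (a generic check, assuming openjdk-8 reinstalled cleanly):
# should report a 1.8.x OpenJDK build
java -version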
The problem is with the Java certificates (ca-certificates-java), so you need to run these commands:
Reinstall JDK
$ sudo apt-get purge openjdk-8-jdk java-common
$ sudo apt-get install openjdk-8-jdk
Run sbt
$ sbt
I would recommend you install Java/Scala/sbt totally independently of the OS packages. It does not matter (or should not matter!) for a developer which specific version of Java was installed by apt-get, yum or whatever.
From the developer's perspective, it may even make sense to test your application under several versions of Java. I've seen this situation before: "it works" under Java 1.8.x but "it fails" under Java 1.8.y.
So, from a developer's perspective, you would be interested in quickly switching between versions of Java, so that you can quickly make sure your application works properly under those several different versions, no matter which specific version is installed on your specific version of the OS.
If you liked the idea, this is how it works:
Download manually all JDK versions you like and uncompress them under a given folder under your home folder, say: $HOME/tools. I have a script which automagically installs a certain version of the JDK for you: https://github.com/frgomes/bash-scripts/blob/master/scripts/user-install/install-java.sh
Adjust environment variables in a virtualenv-like fashion. It's easier than you think and you don't even need virtualenv. All you need is to source a shell script at login time which defines JAVA_HOME, pointing to a given version of Java under your $HOME/tools (a minimal sketch follows after these steps). This is an example: https://github.com/frgomes/bash-scripts/blob/master/bashrc-virtualenvs/j8s12/bin/postactivate
Install Scala and sbt versions by hand, or use my automated script at https://github.com/frgomes/bash-scripts/blob/master/scripts/user-install/install-scala.sh
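A minimal sketch of such an activation script, assuming a JDK was unpacked under $HOME/tools (the folder name jdk1.8.0_171 is only illustrative):
# ~/tools/java8-env.sh -- source this to select a JDK for the current shell
export JAVA_HOME="$HOME/tools/jdk1.8.0_171"
export PATH="$JAVA_HOME/bin:$PATH"
Usage would then be: source ~/tools/java8-env.sh && java -version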
This fixed it for me.
sudo update-ca-certificates -f
In my case, I solved it by removing .m2, .ivy & .sbt (a sketch follows below).
The next time I executed sbt, it fixed everything.
I believe I had broken sbt while playing with the cached dependencies or something else.
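A sketch of that cleanup, assuming the default cache locations in the home directory (the Ivy cache usually lives in ~/.ivy2); this forces sbt to re-resolve everything on its next run:
# wipe the Maven, Ivy and sbt caches/settings
rm -rf ~/.m2 ~/.ivy2 ~/.sbt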

Error F_SETLK when building bazel on CentOS 6.5

I am working on building and installing tensorflow on my institution's cluster computer, which is running CentOS 6.5.
Obviously, the first step is building and installing bazel. The build works just fine, but when I try to run the bazel binary, I get the following error:
Error: unexpected result from F_SETLK: Function not implemented
gcc version is 4.7.2
java version is jdk1.8.0_65
Edit: I have also tried compiling gcc 4.9.4 and building with that version, and I have tried building both the latest dist of Bazel and the 0.3.1 release from the git repo. All variants give the same error.
This happens if the filesystem where Bazel tries to install itself (unpack its embedded tools) doesn't support locking.
The workaround (until the relevant issue is resolved) is to specify a path on a local, writable (and file-lockable) filesystem for --output_user_root, for example:
bazel --output_user_root=/usr/local/$USER/bazelout build <targets>
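If you are not sure whether a given filesystem supports the fcntl-style (F_SETLK) locks Bazel needs, a quick probe like this can tell you; it is only a sketch, and the path should be replaced with a file on the filesystem you intend to use:
# exits quietly if an F_SETLK-style lock can be taken, raises an error otherwise
python -c "import fcntl; f = open('/usr/local/$USER/.locktest', 'w'); fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)"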

Strange behaviour of rpm command on error in %pre section of a rpm package

I've got several versions of a package. The last of them has an error in the %pre section that terminates the install script:
mypak-0.0.1-1.el6.noarch.rpm
mypak-0.0.1-2.el6.noarch.rpm
mypak-0.0.1-3.el6.noarch.rpm <-- bad package
All of my packages have debug output in the %pre, %post, %preun and %postun sections.
I install the first package:
rpm -Uhv mypak-0.0.1-1.el6.noarch.rpm
Output (param is the parameter passed to script sections) is:
Preparing... ########################################### [100%]
pre: 0.0.1-1.el6 ; param = 1
1:mypak ########################################### [100%]
post: 0.0.1-1.el6 ; param = 1
Then I try to update my package and (accidentally) launch the rpm command with all the packages at once:
rpm -Uhv mypak-0.0.1-*
warning: package mypak = 0.0.1-1.el6 was already added, replacing with mypak > 0.0.1-2.el6
warning: package mypak = 0.0.1-2.el6 was already added, replacing with mypak > 0.0.1-3.el6
Preparing... ########################################### [100%]
pre: 0.0.1-3.el6 ; param = 2
!!!version 3 is bad!!!
error: %pre(mypak-0.0.1-3.el6.noarch) scriptlet failed, exit status 1
error: install: %pre scriptlet failed (2), skipping mypak-0.0.1-3.el6
preun: 0.0.1-1.el6 ; param = 0
postun: 0.0.1-1.el6 ; param = 0
As you can see, my package was removed in the end. Moreover, the package is removed even if other packages depend on it. I don't even get any warnings about corrupted dependencies!
If I install my packages one after another, I don't have that problem. In this case installation of the third package fails and that's all. The previous version of my package is still in place.
I think this is really strange behaviour. Is it a bug in rpm, or am I missing something?
I use rpm 4.8.0 on CentOS 6.5.
RPM will ignore older versions of identically named packages when installing (by substituting the newer version). Either rename some of the packages, or don't install multiple versions of an identically named package in the same transaction.
%pre failures when upgrading are tricky. The newer package will not install if %pre fails, and (on an upgrade, when an older version is already installed) the already installed package will be removed. The best answer here is: don't rely on %pre failures in packaging while installing. Instead, add Provides:/Requires: so that the package (and transaction) failure occurs during dependency checking (a sketch is given below), or design a different means of testing a dynamic property than %pre, either in configuration, or in documentation, or by renaming and obsoleting the previous package to get more reliable packaging.
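A minimal sketch of pushing the check into dependency metadata instead of %pre; the capability name mysite-prepared is purely illustrative, and something else in your environment would have to provide it:
# excerpt from mypak.spec -- the transaction now fails during dependency
# checking if the capability is absent, before any scriptlet runs
Requires: mysite-prepared >= 1.0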
I wrote about this behaviour to the RPM mailing list, and Lubos Kardos answered that it is a bug; it has now been fixed.
Mailing list thread: http://lists.rpm.org/pipermail/rpm-list/2015-April/001740.html
Commit: https://github.com/rpm-software-management/rpm/commit/c7fa7b2fd7205b73c833831ab9f8c311f40b2ff1