I have an open source Scala project (https://github.com/lucidsoftware/xtract). The build on Travis CI consistently hangs while running tests, for both Scala 2.11 and 2.12. Sometimes it hangs after compiling and before any output from the tests; other times it hangs in the middle of running tests. I have tried several iterations of changing the Travis config, including trying with and without sudo: false, different versions of sbt, splitting up the sbt commands in different ways, using both oraclejdk8 and openjdk8, etc.
Am I doing something wrong or is this a bug?
Sample failure: https://travis-ci.org/lucidsoftware/xtract/jobs/280974227
My .travis.yml:
language: scala
scala:
  - 2.11.11
  - 2.12.3
jdk:
  - openjdk8
dist: trusty
sudo: false
cache:
  directories:
    - $HOME/.ivy2/cache
    - $HOME/.sbt/
after_success:
  - sbt ++$TRAVIS_SCALA_VERSION package
  - |
    if [ -n "$TRAVIS_TAG" ] || ([ "$TRAVIS_PULL_REQUEST" == false ] && [ "$TRAVIS_BRANCH" == master ])
    then
      mkdir ~/.pgp
      echo $PGP_PUBLIC | base64 --decode > ~/.pgp/pubring
      echo $PGP_SECRET | base64 --decode > ~/.pgp/secring
      echo "Publishing snapshot"
      sbt ++$TRAVIS_SCALA_VERSION xtract/publishSigned xtractTesting/publishSigned
    fi
deploy:
  api_key: $GITHUB_AUTH
  file:
    - xtract-core/target/**/*.jar
    - testing/target/**/*.jar
  file_glob: true
  provider: releases
  skip_cleanup: true
  on:
    tags: true
install: sbt ++$TRAVIS_SCALA_VERSION update
before_cache:
  # Avoid unnecessary cache updates
  - find $HOME/.ivy2 -name "ivydata-*.properties" -print -delete
  - find $HOME/.sbt -name "*.lock" -print -delete
EDIT
Failure with -debug option on sbt: https://travis-ci.org/lucidsoftware/xtract/jobs/281081862
The last thing it does is
[debug] Running TaskDef(com.lucidchart.open.xtract.DefaultXmlReadersSpec, specs2 Specification fingerprint, false, [SuiteSelector])
EDIT 2
Some notes: this project has multiple subprojects. The build pauses while running tests, and the tests are in their own subproject because they depend on both the core code and a separate subproject that provides specs2 matchers specific to this project.
It's killed by the 10-minute no-output timeout. Maybe the memory limits cause too much swapping. The JVM options are:
-Xms2048M
-Xmx2048M
-Xss6M
-XX:MaxPermSize=512M
I think I've finally figured out what was going on.
I was able to reproduce the problem in the travisci/ci-garnet:packer-1512502276-986baf0 docker container. To get sbt running, though, I had to find and install the sbt-launch.jar for version 1.1.1, because the installed bootstrapper doesn't work for any version 1.0 or higher. I also deleted several folders for other languages from the home directory to free disk space for downloading artifacts.
After it stalled I took a thread dump of the java process (by sending it a QUIT signal).
The output included this:
Found one Java-level deadlock:
=============================
"specs2-6":
  waiting to lock monitor 0x00007fc6a4b9fb68 (object 0x00000000997e39f0, a sbt.internal.inc.classpath.ClasspathFilter),
  which is held by "specs2-3"
"specs2-3":
  waiting to lock monitor 0x00007fc6d0df7298 (object 0x0000000098f700b0, a sbt.internal.inc.classpath.ClasspathUtilities$$anon$1),
  which is held by "specs2-6"
So I knew there was a deadlock which was preventing it from progressing.
After some googling I found a bug for mockito (https://github.com/mockito/mockito/issues/1067).
A workaround is to disable parallelExecution for tests.
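For reference, a minimal sketch of that workaround in build.sbt (assuming sbt 1.x slash syntax; on older sbt the equivalent is parallelExecution in Test := false):

// Run test suites one at a time so the specs2 runner threads can't deadlock
// on sbt's shared test classloaders.
Test / parallelExecution := false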
I got the same issue. Add this to your build.sbt:
logLevel := Level.Debug
so that you can use the debug log to check what is going on. In my case sbt was looking for:
sbt-chain: module revision found in cache: com.fasterxml.jackson#jackson-parent;2.8
[debug] tried /home/travis/.ivy2/local/com.fasterxml.jackson/jackson-bom/2.8.11/jars/jackson-bom.jar
[debug] tried https://repo1.maven.org/maven2/com/fasterxml/jackson/jackson-bom/2.8.11/jackson-bom-2.8.11.jar
[debug] CLIENT ERROR: Not Found url=https://repo1.maven.org/maven2/com/fasterxml/jackson/jackson-bom/2.8.11/jackson-bom-2.8.11.jar
[debug] tried /home/travis/.sbt/preloaded/com.fasterxml.jackson/jackson-bom/2.8.11/jars/jackson-bom.jar
[debug] tried file:////home/travis/.sbt/preloaded/com/fasterxml/jackson/jackson-bom/2.8.11/jackson-bom-2.8.11.jar
[debug] tried https://repo1.maven.org/maven2/com/fasterxml/jackson/jackson-bom/2.8.11/jackson-bom-2.8.11.jar
[debug] CLIENT ERROR: Not Found url=https://repo1.maven.org/maven2/com/fasterxml/jackson/jackson-bom/2.8.11/jackson-bom-2.8.11.jar
I have multiple installations of Eclipse (2021-12) + PyDev (9.3.0.202203051235), all using IronPython (2.7) and all running on Windows 10. They all run the scripts as expected, but one installation has much more verbose console output when debugging, almost as if a tracing option is enabled. I've tried reinstalling, deleting workspaces, deleting '.metadata' folders, etc. All the project settings seem identical.
Any ideas how to minimize the console output? Something in the registry?
Expected Console output:
pydev debugger: starting (pid: 15312)
Actual Console output:
1.99s - Using GEVENT_SUPPORT: False
0.00s - Using GEVENT_SHOW_PAUSED_GREENLETS: False
0.00s - pydevd __file__: C:\\Eclipse-2021-12-R\plugins\org.python.pydev.core_9.3.0.202203051235\pysrc\pydevd.py
0.11s - Initial arguments: (['C:\\Eclipse-2021-12-R\\plugins\\org.python.pydev.core_9.3.0.202203051235\\pysrc\\pydevd.py', '--multiprocess', '--protocol-http', '--print-in-debugger-startup', '--vm_type', 'python', '--client', '127.0.0.1', '--port', '60413', '--file', 'C:\\Test.py'],)
0.00s - Current pid: 8884
pydev debugger: starting (pid: 8884)
Those should only appear if you add an environment variable asking for them to be shown.
i.e., something like:
PYDEVD_DEBUG=1
PYDEV_DEBUG=1
Maybe you have such an environment variable set in your launch configuration, interpreter configuration, or elsewhere in your system?
You may want to check the os.environ of the running program to see what's set there.
My e2e test task sends some HTTP requests to the server. I want to start that server (Play framework based) on a separate JVM, then start the tests which hit the server and let them finish, then stop the server.
I looked through many SO threads; so far I have found these options:
use sbt-sequential
use sbt-revolver
use alias
But in my experiments setting fork doesn't work, i.e. it still blocks execution when the server is started:
fork := true
fork in run := true
fork in Test := true
fork in IntegrationTest := true
The startServer/stopServer examples in the sbt docs also seem to block.
I also tried just starting the server in the background from the shell, but the server is quickly shut down, similar to this question:
nohup sbt -Djline.terminal=jline.UnsupportedTerminal web/run < /dev/null > /tmp/sbt.log 2>&1 &
Related questions:
scala sbt test run setup and cleanup command once on multi project
How do I start a server before running a test suite in SBT?
fork doesn't run the task in parallel - it just makes sure that tests run in a separate JVM, which helps with things like shutdown hooks or disconnecting from services that don't handle resource release properly (e.g. a DB connection that never calls disconnect).
If you want to use the same sbt session to start the server AND run tests against that instance (which sounds like an easily breakable antipattern, BTW), you can use something like the following (an alias chaining these commands is sketched after the list):
reStart
it:test
reStop
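One way to chain those steps is an sbt command alias; this is just a sketch and assumes sbt-revolver is already on the plugin classpath (it provides reStart/reStop):

// In build.sbt: start the server, run the integration tests, stop the server.
addCommandAlias("e2e", ";reStart ;it:test ;reStop")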
However, that would be tricky: reStart returns immediately, so the tests would start once the server setup has started but not necessarily completed. That means race conditions, failing tests, or blocking all tests until the server finishes starting.
This is why nobody does it. A much easier solution is to:
start the server in the test in some beforeAll method, and make this method complete only after the server is responding to queries (see the sketch after this list)
shut it down in some afterAll method (or handle both of these using something like cats.effect.Resource or similar)
and, depending on the situation:
running tests sequentially to avoid starting two instances at the same time, or
generating a config for each test so that they can be run in parallel without clashing on port allocations
Anything else is just a hack that is going to fail sooner rather than later.
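A minimal sketch of that beforeAll/afterAll approach, assuming ScalaTest; MyServer is a hypothetical handle to whatever starts and stops your server, and the health-check URL/port are placeholders:

import org.scalatest.BeforeAndAfterAll
import org.scalatest.funsuite.AnyFunSuite
import scala.annotation.tailrec
import scala.util.Try

class E2ESpec extends AnyFunSuite with BeforeAndAfterAll {

  override def beforeAll(): Unit = {
    MyServer.start() // hypothetical: launch the server under test
    waitUntilUp("http://localhost:9195/health", retries = 30) // block until it responds
  }

  override def afterAll(): Unit =
    MyServer.stop() // hypothetical: shut the server down again

  // Poll the URL once a second so the tests only start after the server answers.
  @tailrec
  private def waitUntilUp(url: String, retries: Int): Unit =
    if (retries <= 0) sys.error(s"server at $url never came up")
    else if (Try(scala.io.Source.fromURL(url).mkString).isSuccess) ()
    else { Thread.sleep(1000); waitUntilUp(url, retries - 1) }

  test("server answers a simple request") {
    assert(Try(scala.io.Source.fromURL("http://localhost:9195/health").mkString).isSuccess)
  }
}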
Answering my own question, what we ended up doing is:
use "sbt stage" to create a standalone server jar and run script for the Play web app (in web/target/universal/stage/bin/)
create a run_integration_tests.sh shell script that starts the server, waits 30 seconds and then starts the tests
add a runIntegrationTests task in build.sbt which calls run_integration_tests.sh, and hook it into it:test
run_integration_tests.sh:
#!/bin/bash
CURDIR=$(pwd)
echo "Starting integration/e2e test runner"
date > runner.log
export JAVA_OPTS="-Dplay.server.http.port=9195 -Dconfig.file=$CURDIR/web/conf/application_test.conf -Xmx2G"
rm -f "$CURDIR/web/target/universal/stage/RUNNING_PID"
echo "Starting server"
nohup web/target/universal/stage/bin/myapp >>runner.log 2>&1 &
pid=$!   # capture the server's PID so we can kill/wait on it later
echo "Webserver PID is $pid"
echo "Waiting for server start"
sleep 30
echo "Running the tests"
sbt "service/test:run-main com.blah.myapp.E2ETest"
ERR="$?"
echo "Tests Done at $(date), killing server"
kill $pid
echo "Waiting for server exit"
wait $pid
echo "All done"
if [ $ERR -ne 0 ]; then
  cat runner.log
  exit "$ERR"
fi
build.sbt:
lazy val runIntegrationTests = taskKey[Unit]("Run integration tests")

runIntegrationTests := {
  val s: TaskStreams = streams.value
  s.log.info("Running integration tests...")
  val shell: Seq[String] = Seq("bash", "-c")
  val runTests: Seq[String] = shell :+ "./run_integration_tests.sh"
  if ((runTests !) == 0) {
    s.log.success("Integration tests successful!")
  } else {
    s.log.error("Integration tests failed!")
    throw new IllegalStateException("Integration tests failed!")
  }
}

lazy val root = project.in(file("."))
  .aggregate(service, web, tools)
  .configs(IntegrationTest)
  .settings(Defaults.itSettings)
  .settings(
    publishLocal := {},
    publish := {},
    (test in IntegrationTest) := (runIntegrationTests dependsOn (test in IntegrationTest)).value
  )
Calling sbt in CI/Jenkins:
sh 'sbt clean coverage test stage it:test'
I created a test module by following all the conventions, but when I run the test, I get the following message:
collecting 0 items
Here's my directory hierarchy:
integration_tests (directory) -> tests (directory) -> test_integration_use_cases.py (Python file)
And this is the content of the file:
import pytest
from some_tests.integration_tests.backbone.SomeIntegrationTestBase import SomeIntegrationTestBase


class TestSomeIntegration(SomeIntegrationTestBase):
    @pytest.mark.p1
    def test_some_integration_use_cases(self):
        print("**** Executing integration tests ****")
        result = self.execute_test(4)
        assert (True == result)
When I run the following command:
pytest test_integration_use_cases.py
I see the following result without any errors:
collecting 0 items
FYI: I am running this on a development machine (like Vagrant).
So I had the same problem as you, even after following all the recommended conventions. My application structure was as follows:
Application
-- API
   app.py
-- docs
-- venv
-- tests
   -- unit_test
      test_factory
      ...
...
I, however, resolved the issue by moving the tests directory under the API package, so that my application structure looked as below:
Application
-- API
   app.py
   -- tests
      -- unit_test
         test_factory
         ...
-- docs
-- venv
...
Although pytest is supposed to auto-discover the tests, it seems to do that only if they are placed under the application root. Check out the pytest docs for Flask.
I also found this resource helpful.
I am trying to use Apache Zeppelin (0.7.2, net install running locally on a Mac) to explore data loaded from an s3 bucket. The data seems to load just fine, as the command:
val p = spark.read.textFile("s3a://sparkcookbook/person")
gives the result:
p: org.apache.spark.sql.Dataset[String] = [value: string]
However, when I try to call methods on the object p, I get an error. For example:
p.take(1)
results in:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.rdd.RDDOperationScope$
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:225)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:308)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2795)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
My conf/zeppelin-env.sh is the same as the default, except that I have amazon access key and secret key environment variables defined there. In the Spark interpreter in the Zeppelin notebook, I have added the following artifacts:
org.apache.hadoop:hadoop-aws:2.7.3
com.amazonaws:aws-java-sdk:1.7.9
com.fasterxml.jackson.core:jackson-core:2.9.0
com.fasterxml.jackson.core:jackson-databind:2.9.0
com.fasterxml.jackson.core:jackson-annotations:2.9.0
(I think only the first two are necessary). The two commands above work fine in the Spark shell, just not in the Zeppelin notebook (see How to use s3 with Apache spark 2.2 in the Spark shell for how that was set up).
So it seems that there is a problem with one of the Jackson libraries. Maybe I'm using the wrong artifacts above for the Zeppelin interpreter?
UPDATE: Following the advice in the proposed answer below, I removed the jackson jars that came with Zeppelin, and replaced them with the following:
jackson-annotations-2.6.0.jar
jackson-core-2.6.7.jar
jackson-databind-2.6.7.jar
And replaced the artifacts with these, so my artifacts are now:
org.apache.hadoop:hadoop-aws:2.7.3
com.amazonaws:aws-java-sdk:1.7.9
com.fasterxml.jackson.core:jackson-core:2.6.7
com.fasterxml.jackson.core:jackson-databind:2.6.7
com.fasterxml.jackson.core:jackson-annotations:2.6.0
The error I get, however, from running the above commands is the same.
UPDATE 2: As per the suggestion, I removed the jackson libraries from the list of artifacts, since they are now already in the jars/ folder - the only added artifacts are now the aws artifacts above. I then cleaned the classpath by entering the following in the notebook (as per the instructions):
%spark.dep
z.reset()
I get a different error now:
val p = spark.read.textFile("s3a://sparkcookbook/person")
p.take(1)
p: org.apache.spark.sql.Dataset[String] = [value: string]
java.lang.NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer$.handledType()Ljava/lang/Class;
at com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.<init>(ScalaNumberDeserializersModule.scala:49)
at com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.<clinit>(ScalaNumberDeserializersModule.scala)
at com.fasterxml.jackson.module.scala.deser.ScalaNumberDeserializersModule$class.$init$(ScalaNumberDeserializersModule.scala:61)
at com.fasterxml.jackson.module.scala.DefaultScalaModule.<init>(DefaultScalaModule.scala:20)
at com.fasterxml.jackson.module.scala.DefaultScalaModule$.<init>(DefaultScalaModule.scala:37)
at com.fasterxml.jackson.module.scala.DefaultScalaModule$.<clinit>(DefaultScalaModule.scala)
at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:82)
at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:225)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:308)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
UPDATE 3: As per the suggestion in a comment on the proposed answer below, I cleaned the classpath by deleting all the files in the local repo:
rm -rf local-repo/*
I then restarted the Zeppelin server. To check the class path, I executed the following in the notebook:
val cl = ClassLoader.getSystemClassLoader
cl.asInstanceOf[java.net.URLClassLoader].getURLs.foreach(println)
This gave the following output (I include only the jackson libraries from the output here, otherwise the output is too long to paste):
...
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-annotations-2.1.1.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-annotations-2.2.3.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-core-2.1.1.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-core-2.2.3.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-core-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-databind-2.1.1.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-databind-2.2.3.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-jaxrs-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-mapper-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/local-repo/2CT9CPAA9/jackson-xc-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/lib/jackson-annotations-2.6.0.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/lib/jackson-core-2.6.7.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/lib/jackson-databind-2.6.7.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-annotations-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-core-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-core-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-databind-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/zeppelin-0.7.2-bin-netinst/jackson-mapper-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-annotations-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-core-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-core-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-databind-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-jaxrs-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-mapper-asl-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-module-paranamer-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-module-scala_2.11-2.6.5.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/jackson-xc-1.9.13.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/json4s-jackson_2.11-3.2.11.jar
file:/Users/shafiquejamal/allfiles/scala/spark/spark-2.1.0-bin-hadoop2.7/jars/parquet-jackson-1.8.1.jar
...
It seems that multiple versions are fetched from the repo. Should I exclude the older versions? If so, how do I do that?
Use these jar versions:
aws-java-sdk-1.7.4.jar
hadoop-aws-2.6.0.jar
like in this script : https://github.com/2dmitrypavlov/sparkDocker/blob/master/zeppelin.sh
Do not use the packages/artifacts mechanism; instead download the jars and put them in a path, let's say "/root/jars/", then edit your zeppelin-env.sh by running this command from the zeppelin/conf dir:
echo 'export SPARK_SUBMIT_OPTIONS="--jars /root/jars/mysql-connector-java-5.1.39.jar,/root/jars/aws-java-sdk-1.7.4.jar,/root/jars/hadoop-aws-2.6.0.jar"'>>zeppelin-env.sh
After that, restart Zeppelin.
The code at the link above is pasted below (just in case the link becomes stale):
#!/bin/bash
# Download jars
cd /root/jars
wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.39/mysql-connector-java-5.1.39.jar
cd /usr/share/
wget http://archive.apache.org/dist/zeppelin/zeppelin-0.7.1/zeppelin-0.7.1-bin-all.tgz
tar -zxvf zeppelin-0.7.1-bin-all.tgz
cd zeppelin-0.7.1-bin-all/conf
cp zeppelin-env.sh.template zeppelin-env.sh
echo 'export MASTER=spark://'$MASTERZ':7077'>>zeppelin-env.sh
echo 'export SPARK_SUBMIT_OPTIONS="--jars /root/jars/mysql-connector-java-5.1.39.jar,/root/jars/aws-java-sdk-1.7.4.jar,/root/jars/hadoop-aws-2.6.0.jar"'>>zeppelin-env.sh
echo 'export ZEPPELIN_NOTEBOOK_STORAGE="org.apache.zeppelin.notebook.repo.VFSNotebookRepo, org.apache.zeppelin.notebook.repo.zeppelinhub.ZeppelinHubRepo"'>>zeppelin-env.sh
echo 'export ZEPPELINHUB_API_ADDRESS="https://www.zeppelinhub.com"'>>zeppelin-env.sh
echo 'export ZEPPELIN_PORT=9999'>>zeppelin-env.sh
echo 'export SPARK_HOME=/usr/share/spark'>>zeppelin-env.sh
cd ../bin/
./zeppelin.sh
You are probably using a Jackson version that is too recent. Even Spark 2.3 is still on 2.6.7. Downgrade, and make sure that all your Jackson JARs are consistent.
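If you want to see which copy actually wins at runtime, a quick check you can paste into a Scala paragraph (plain JVM reflection, nothing Zeppelin-specific):

// Print the jar that jackson-databind's ObjectMapper was loaded from;
// it should point at a single, consistent 2.6.x artifact.
val loadedFrom = classOf[com.fasterxml.jackson.databind.ObjectMapper]
  .getProtectionDomain.getCodeSource.getLocation
println(loadedFrom)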
The MetaCPAN Travis CI coverage builds are quite slow. See https://travis-ci.org/metacpan/metacpan-web/builds/238884497 This is likely in part because we're not successfully ignoring the /local folder that gets created by Carton as part of our build. See https://coveralls.io/builds/11809290
We're using perl-helpers to help with our Travis configuration. I thought I should be able to use the DEVEL_COVER_OPTIONS environment variable in order to fix this, but I guess I don't have the correct incantation. I've included the entire config below because a few snippets out of context seemed misleading.
language: perl
perl:
  - "5.22"
matrix:
  fast_finish: true
  allow_failures:
    - env: COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - env: USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
env:
  global:
    # Carton --deployment only works on the same version of perl
    # that the snapshot was built from.
    - DEPLOYMENT_PERL_VERSION=5.22
    - DEVEL_COVER_OPTIONS="-ignore ^local/"
  matrix:
    # Get one passing run with coverage and one passing run with Test::Vars
    # checks. If run together they more than double the build time.
    - COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
    - USE_CPANFILE_SNAPSHOT=true
before_install:
  - git clone git://github.com/travis-perl/helpers ~/travis-perl-helpers
  - source ~/travis-perl-helpers/init
  - npm install -g less js-beautify
  # Pre-install from backpan to avoid upgrade breakage.
  - cpanm -n http://cpan.metacpan.org/authors/id/M/ML/MLEHMANN/common-sense-3.6.tar.gz
  - cpanm -n App::cpm Carton
install:
  - cpan-install --coverage # installs coverage prereqs, if enabled
  - 'cpm install `test "${USE_CPANFILE_SNAPSHOT}" = "false" && echo " --resolver metadb" || echo " --resolver snapshot"`'
before_script:
  - coverage-setup
script:
  # Devel::Cover isn't in the cpanfile
  # but if it's installed into the global dirs this should work.
  - carton exec prove -lr -j$(test-jobs) t
after_success:
  - coverage-report
notifications:
  email:
    recipients:
      - olaf#seekrit.com
    on_success: change
    on_failure: always
  irc: "irc.perl.org#metacpan-travis"
# Use newer travis infrastructure.
sudo: false
cache:
  directories:
    - local
The syntax for the Devel::Cover options on the command line is a bit odd: the values need to be comma-separated, at least when you use PERL5OPT.
DEVEL_COVER_OPTIONS="-ignore,^local/"
See for example https://github.com/simbabque/AWS-S3/blob/master/.travis.yml#L26, where it's a whole lot of stuff with commas.
PERL5OPT=-MDevel::Cover=-ignore,"t/",+ignore,"prove",-coverage,statement,branch,condition,path,subroutine prove -lrs t