In a multi-module Scala project I'm running several integration tests where I use scala-ssh (v. 0.8) to connect to a remote machine via SSH and transfer a file from there.
If I run an integration test once in an sbt session, everything works as expected - I can connect to the machine and download any file. The related bits of Scala code are:
private lazy val fileInventory: AnsibleYamlFileInventory = {
  val inventory = SSH(ansibleHost, HostResourceConfig()) { client =>
    client.fileTransfer { scp =>
      val tmpLocalFile = Files.createTempFile("inventory", ".yaml")
      scp.download(remoteYamlInventoryFile, tmpLocalFile.toAbsolutePath.toString)
      new AnsibleYamlFileInventory(tmpLocalFile)
    }
  }
  inventory.fold(s => throw new RuntimeException(s), identity)
}
The problem occurs if I try to run the same test (or another integration test) within the same sbt session. I get the same error message as mentioned here:
14:32:11.751 [reader] ERROR net.schmizz.sshj.transport.TransportImpl - Dying because - {}
net.schmizz.sshj.common.SSHRuntimeException: null
at net.schmizz.sshj.common.Buffer.readPublicKey(Buffer.java:432)
at net.schmizz.sshj.transport.kex.AbstractDHG.next(AbstractDHG.java:75)
at net.schmizz.sshj.transport.KeyExchanger.handle(KeyExchanger.java:367)
at net.schmizz.sshj.transport.TransportImpl.handle(TransportImpl.java:509)
at net.schmizz.sshj.transport.Decoder.decode(Decoder.java:107)
at net.schmizz.sshj.transport.Decoder.received(Decoder.java:175)
at net.schmizz.sshj.transport.Reader.run(Reader.java:60)
Caused by: java.security.GeneralSecurityException: java.security.spec.InvalidKeySpecException: key spec not recognised
at net.schmizz.sshj.common.KeyType$3.readPubKeyFromBuffer(KeyType.java:156)
at net.schmizz.sshj.common.Buffer.readPublicKey(Buffer.java:430)
... 6 common frames omitted
Caused by: java.security.spec.InvalidKeySpecException: key spec not recognised
at org.bouncycastle.jcajce.provider.asymmetric.util.BaseKeyFactorySpi.engineGeneratePublic(Unknown Source)
at org.bouncycastle.jcajce.provider.asymmetric.ec.KeyFactorySpi.engineGeneratePublic(Unknown Source)
at java.security.KeyFactory.generatePublic(KeyFactory.java:334)
at net.schmizz.sshj.common.KeyType$3.readPubKeyFromBuffer(KeyType.java:154)
... 7 common frames omitted
If I kill that sbt session and relaunch another one, I can again run only a single integration test before the problem reoccurs.
I have already installed the JCE 8 files as suggested. So I'm wondering what I need to fix so that multiple tests can run successfully, each SSHing into that remote machine one after the other.
After some debugging I found that the problem was due to BouncyCastle, which remains registered as a JCE provider when a follow-up test runs and causes problems. This shows up in the logs as:
INFO net.schmizz.sshj.common.SecurityUtils - BouncyCastle already registered as a JCE provider
I decided to add the security provider dynamically before the tests and remove it after they are done.
def doTests(): Unit = {
  import java.security.Security
  import org.bouncycastle.jce.provider.BouncyCastleProvider

  // Register BouncyCastle for the duration of these tests.
  Security.addProvider(new BouncyCastleProvider)

  "Some test" should {
    "be BLABLA" in {
      assert(...) // some test
    }
  }

  "Some other test" should {
    "be BLABLABLA" in {
      assert(...) // some other test
    }
  }

  // Remove it again so it does not leak into the next suite in the same sbt session.
  Security.removeProvider(BouncyCastleProvider.PROVIDER_NAME)
}
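If your suites use ScalaTest, the same idea can be expressed with suite-level hooks instead of inline statements. A minimal sketch, assuming ScalaTest 3.x's AnyWordSpec (specs2 offers a comparable BeforeAfterAll trait); the spec name and test body are placeholders:

import java.security.Security
import org.bouncycastle.jce.provider.BouncyCastleProvider
import org.scalatest.BeforeAndAfterAll
import org.scalatest.wordspec.AnyWordSpec

// Sketch: register BouncyCastle once for this suite and deregister it afterwards,
// so a stale provider does not leak into the next integration test in the same sbt session.
class SshIntegrationSpec extends AnyWordSpec with BeforeAndAfterAll {

  override def beforeAll(): Unit =
    Security.addProvider(new BouncyCastleProvider)

  override def afterAll(): Unit =
    Security.removeProvider(BouncyCastleProvider.PROVIDER_NAME)

  "Some test" should {
    "be BLABLA" in {
      succeed // placeholder for the real ssh-based assertions
    }
  }
}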
I'm using the embedded Kafka server in my test as described here: https://micronaut-projects.github.io/micronaut-kafka/latest/guide/#kafkaEmbedded. The problem is that I'm getting the following error when I run the test:

io.micronaut.context.exceptions.BeanContextException: Error processing bean [Definition: org.app.messaging.TestConsumer] method definition [void receive(String msg)]: Failed to inject value for parameter [testService] of method [setTestService] of class: org.app.messaging.TestConsumer

Any ideas on how to fix this?
Here's what the test looks like:
void "test run kafka embedded server"() {
given:
ApplicationContext applicationContext = ApplicationContext.run(
Collections.singletonMap(
AbstractKafkaConfiguration.EMBEDDED, true
)
)
when:
AbstractKafkaConsumerConfiguration config = applicationContext.getBean(AbstractKafkaConsumerConfiguration)
Properties props = config.getConfig()
then:
props[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] == 9091
when:
KafkaEmbedded kafkaEmbedded = applicationContext.getBean(KafkaEmbedded)
then:
kafkaEmbedded.kafkaServer.isPresent()
kafkaEmbedded.zkPort.isPresent()
cleanup:
applicationContext.close()
}
Placing the test anywhere other than the root package seems to cause multiple "bean definition not found" issues. There is no ComponentScan support in the framework, so the only thing that worked for me was to move the test file to the root package. There are some ideas in https://github.com/micronaut-projects/micronaut-core/issues/511 if you're experiencing similar issues with a CLI app. However, that didn't work for me when using the embedded server and embedded Kafka.
Could anyone offer some guidance on getting started with ESAPI in a non-web context?
I came up with this little test that validates a string with DefaultValidator.isValidCreditCard, but I got some web-container dependency errors.
The following method is called from a JUnit test:
@Override
public ValidationErrorList creditCard(String value) {
    this.value = value;
    ValidationErrorList errorList = new ValidationErrorList();
    try {
        isValid = validator.isValidCreditCard(null, value, false, errorList);
    } catch (Exception ie) {
        System.out.println(">>> CCValidator: [" + value + "] " + ie.getMessage());
        messages = (ArrayList) errorList.errors();
    }
    // return the error list itself (the declared return type), not the ArrayList of messages
    return errorList;
}
This is the relevant part of the error I get (of course, I'm not running in a container):
Attempting to load ESAPI.properties via file I/O.
Attempting to load ESAPI.properties as resource file via file I/O.
Found in 'org.owasp.esapi.resources' directory: C:\foundation\validation\providers\esapi\ESAPI.properties
Loaded 'ESAPI.properties' properties file
Attempting to load validation.properties via file I/O.
Attempting to load validation.properties as resource file via file I/O.
Found in 'org.owasp.esapi.resources' directory: C:\foundation\validation\providers\esapi\validation.properties
Loaded 'validation.properties' properties file
SecurityConfiguration for Encoder.AllowMixedEncoding not found in ESAPI.properties. Using default: false
SecurityConfiguration for Encoder.AllowMixedEncoding not found in ESAPI.properties. Using default: false
javax/servlet/ServletRequest
java.lang.NoClassDefFoundError: javax/servlet/ServletRequest
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.owasp.esapi.util.ObjFactory.make(ObjFactory.java:74)
at org.owasp.esapi.ESAPI.httpUtilities(ESAPI.java:121)
at org.owasp.esapi.ESAPI.currentRequest(ESAPI.java:70)
at org.owasp.esapi.reference.Log4JLogger.log(Log4JLogger.java:434)
...
Calls to other ESAPI.xxx() methods also raise dependency errors.
Any advice on getting started would be appreciated.
Best,
jose
ESAPI has a servlet filter API that requires javax.servlet.ServletRequest to be on the classpath. ESAPI is owned by OWASP, the Open Web Application Security Project, so it is designed with web applications in mind.
If you're not writing a web application, then it's either a console application or a rich client application. If you don't expect it to connect to the outside world, the main secure practices you really need to worry about are always using safely parameterized queries and properly escaping any data passed into your application from a source that IS connected to the outside world. For that, the only thing you need is OWASP's Java Encoder project.
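A minimal sketch of those two practices, assuming the OWASP Java Encoder library (org.owasp.encoder) is on the classpath and a plain JDBC connection is available; the object, table, and column names are illustrative:

import java.sql.{Connection, ResultSet}
import org.owasp.encoder.Encode // OWASP Java Encoder

object SecureBasics {

  // Escape untrusted input before it is ever rendered into HTML output.
  def toSafeHtml(untrusted: String): String =
    Encode.forHtml(untrusted)

  // Use a parameterized query instead of concatenating user input into SQL.
  def findUser(conn: Connection, name: String): ResultSet = {
    val stmt = conn.prepareStatement("SELECT id, name FROM users WHERE name = ?")
    stmt.setString(1, name) // the JDBC driver handles the escaping, not your code
    stmt.executeQuery()
  }
}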
I have a fairly simple project deployed as a JAR. I am starting up a supervisor actor that confirms it is booting up by sending out the following log message:
[akka://service-kernel/user/Tracker] Starting new Tracker
However, when I reference the actor via actorFor locally with sbt run, it is found without a problem. In production, I use the same .actorFor("akka://service-kernel/user/Tracker") and it throws a NullPointerException. I can confirm via the logs that, in production, the Tracker has sent out its confirmation that it booted up.
Are there any issues when using a Microkernel deployed to a JAR to make actor references?
Edit
I suspect that both the way I reference the system and the way Akka treats the startup class are related to the issue. Since I have specified a startup class called ServiceKernel, I am performing the reference as ServiceKernel.system.actorFor. I will provide an answer if this is confirmed.
Confirmed that it was related to the startup class handling the Microkernel.
The ServiceKernel mentioned above is used in the start script to boot up the Microkernel JAR: ./start com.package.ServiceKernel. In an sbt shell this isn't needed, so the alternative class I provided works well for referencing an Actor System.
However, in a Microkernel the ServiceKernel appears to use a different Actor System altogether, so if you reference that system (like I did), actorFor lookups will always fail. I solved the problem by passing the system down through the boot classes into the specific class where I was making the actorFor reference, and it worked. I did it like this (pseudo-code):
class ServiceKernel extends Bootable {
  val system = ActorSystem("service-kernel")

  def startup = {
    system.actorOf(Props(new Boot(isDev, system))) ! Start
  }
}
And then passing it to an HttpApi class:
class Boot(val isDev: Boolean, system: ActorSystem) extends Actor with SprayCanHttpServerApp {
  def receive = {
    case Start =>
      // set up the HTTP server
      val service = system.actorOf(Props(new HttpApi(system)), "tracker-http-api")
  }
}
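For completeness, a sketch of what the receiving end might look like; HttpApi is hypothetical here, and the point is simply that the lookup goes through the ActorSystem that was passed down rather than through ServiceKernel.system:

import akka.actor.{Actor, ActorSystem}

// Hypothetical sketch: the passed-in system is the one the Microkernel actually started,
// so actorFor lookups against it resolve the Tracker correctly.
class HttpApi(system: ActorSystem) extends Actor {
  val tracker = system.actorFor("akka://service-kernel/user/Tracker")

  def receive = {
    case msg => tracker forward msg
  }
}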
I have a Play! project where I would like to add some code coverage information. So far I have tried JaCoCo and scct. The former has the problem that it is based on bytecode, so it reports missing tests for methods that are autogenerated by the Scala compiler, such as copy or canEqual. scct seems a better option, but either way I get many errors during tests.
Let me stick with scct. Essentially, I get errors for every test that tries to connect to the database. Many of my tests load fixtures into an in-memory H2 database and then make some assertions. My Global.scala contains
override def onStart(app: Application) {
  SessionFactory.concreteFactory = Some(() => connection)

  def connection() = {
    Session.create(DB.getConnection()(app), new MySQLInnoDBAdapter)
  }
}
while the tests are usually enclosed in a block like
class MySpec extends Specification {
  def app = FakeApplication(additionalConfiguration = inMemoryDatabase())

  "The models" should {
    "be five" in running(app) {
      Fixtures.load()
      MyModels.all.size should be_==(5)
    }
  }
}
The line running(app) allows me to run a test in the context of a working application connected to an in-memory database, at least usually. But when I run code coverage tasks, such as scct coverage:doc, I get a lot of errors related to connecting to the database.
What is even weirder is that there are at least four different errors, such as:
ObjectExistsException: Cache play already exists
SQLException: Attempting to obtain a connection from a pool that has already been shutdown
Configuration error [Cannot connect to database [default]]
No suitable driver found for jdbc:h2:mem:play-test--410454547
Why is it that launching the tests in the default configuration can connect to the database, while running them under scct (or JaCoCo) fails to initialize the cache and the database?
specs2 tests run in parallel by default. Play disables parallel execution for the standard unit test configuration, but scct uses a different configuration so it doesn't know not to run in parallel.
Try adding this to your Build.scala:
.settings(parallelExecution in ScctPlugin.ScctTest := false)
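For context, in a Play 2.0-era project/Build.scala this setting hangs off the main project definition. A sketch, where the application name, version, and dependency list are placeholders:

import sbt._
import Keys._
import PlayProject._

object ApplicationBuild extends Build {
  val appName         = "my-app"
  val appVersion      = "1.0-SNAPSHOT"
  val appDependencies = Seq.empty

  val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA)
    // ScctPlugin comes from the scct sbt plugin; disable parallel test runs in its config
    .settings(parallelExecution in ScctPlugin.ScctTest := false)
}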
Alternatively, you can add sequential at the beginning of your test classes to force all possible run configurations to run sequentially. I still have both in my files, as I think I had some problems with the Build.scala solution at one point when I was using an early release candidate of Play.
A better option for Scala code coverage is Scoverage, which gives statement-level coverage.
https://github.com/scoverage/scalac-scoverage-plugin
Add to project/plugins.sbt:
addSbtPlugin("com.sksamuel.scoverage" % "sbt-scoverage" % "1.0.1")
Then run SBT with
sbt clean coverage test
You need to add sequential at the beginning of your Specification.
class MySpec extends Specification {
  sequential

  "MyApp" should {
    //...//
  }
}
I am having problems running multiple functional specs (using specs2), specifically tests that start a TestServer, open an HTMLUNIT browser, and navigate to a page to check an element. The page in question loads the elements that we test on an ajax request. The wait for the element to be present times out with the error message below.
Code snippet:
trait CommonSteps extends BaseSpecfication {
  val testServer: TestServer = TestServer(3333)
  val testServerBaseURL: String = "http://localhost:3333/"

  override def map(fs: => Fragments) =
    Step(testServer.start()) ^ super.map(fs) ^ Step(testServer.stop())
}

class FunctionalTest1 extends Specification with CommonSteps { def is =
  ...

  ... extends When[...] {
    val browser: TestBrowser = TestBrowser.of(HTMLUNIT)
    browser.goTo(testServerBaseURL + "/some_path")
    browser
  }

  ... extends Then[...] {
    browser.await.until("element that is loaded on ajax request").isPresent()
    ...
  }
}
We get the error:
Caused by: java.sql.SQLException: Attempting to obtain a connection from a pool that has already been shutdown.
Stack trace of location where pool was shutdown follows:
java.lang.Thread.getStackTrace(Thread.java:1479)
com.jolbox.bonecp.BoneCP.captureStackTrace(BoneCP.java:543)
com.jolbox.bonecp.BoneCP.shutdown(BoneCP.java:159)
com.jolbox.bonecp.BoneCPDataSource.close(BoneCPDataSource.java:123)
play.api.db.BoneCPApi.shutdownPool(DB.scala:387)
play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:252)
play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:250)
scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
scala.collection.immutable.List.foreach(List.scala:45)
play.api.db.BoneCPPlugin.onStop(DB.scala:250)
play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:75)
play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:74)
scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
scala.collection.immutable.List.foreach(List.scala:45)
play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
scala.Option.map(Option.scala:133)
play.api.Play$.stop(Play.scala:73)
play.core.server.NettyServer.stop(NettyServer.scala:73)
While the test works when run in isolation, we get the error when running two or more of them together.
It seems to be related to this issue, though my example is a Scala Play app. Can anyone confirm that this issue is fixed in a newer version of Play? Or is there a workaround to avoid this error in Play 2.0.1?
This issue will be fixed in Play 2.0.2, which is currently in RC state. It is safe to upgrade from 2.0.1 to 2.0.2, as everything is backwards compatible.
Thanks to @guillaume-bort for providing this information.
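For reference, if the Play version is managed through sbt, upgrading is typically a one-line change. A sketch, assuming the standard Play 2.0.x sbt plugin coordinates:

// project/plugins.sbt: bump the Play sbt plugin from 2.0.1 to 2.0.2
// (organization/artifact assumed to match the standard Play 2.0.x distribution)
addSbtPlugin("play" % "sbt-plugin" % "2.0.2")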