install4j: how to parse JSON in a form component

I'm trying to parse JSON in an action script for a form component.
I tried:
import javax.json.JsonObject;
and got the following on a test compile:
Failed to compile script
1. ERROR in /private/var/folders/3t/l3dvn7tx1j76wsx17xfjcpww0000gn/T/script10053200329813627000.java.dir/code/Completion.java (at line 1)
import javax.json.JsonObject;
The import javax.json cannot be resolved
How do I achieve the equivalent of this import so I can parse JSON? I started going down the rabbit hole of creating a JMOD for javax.json, but that's starting to seem like the wrong path.

The package javax.json is not part of the JRE; it is a Java EE 7 API. You can download the JAR file from
https://mvnrepository.com/artifact/javax.json/javax.json-api/1.1.4
and add it on the "Installer->Screens & Actions->Custom code" step. Then the class will be available in all scripts.
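Once the JAR is on the custom code classpath, a script can parse JSON roughly like the following. This is a minimal sketch: note that javax.json-api contains only the interfaces, so at run time you may also need an implementation JAR such as org.glassfish:javax.json (verify this for your setup).

import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

public class ParseExample {
    public static void main(String[] args) {
        // Illustrative payload; in a real form component this would come from user input.
        String payload = "{\"name\":\"demo\",\"port\":8080}";
        try (JsonReader reader = Json.createReader(new StringReader(payload))) {
            JsonObject obj = reader.readObject();
            System.out.println(obj.getString("name") + ":" + obj.getInt("port")); // demo:8080
        }
    }
}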

Related

Cucumber Test Runner file: not executing step definitions

I am building a REST Assured API test framework with Cucumber. (This is a new adventure for me, so apologies if this seems basic.)
Below is how I set up my test runner file.
package cucumber.Options;
import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
@RunWith(Cucumber.class)
@CucumberOptions(features="src/test/java/features", glue={"stepDefinitions"}, strict=true, stepNotifications=true)
public class TestRunner{
}
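(For reference, glue={"stepDefinitions"} makes Cucumber scan the stepDefinitions package for annotated methods; a minimal step definition there would look roughly like the sketch below - the class name and step text are illustrative, not my actual code.)

package stepDefinitions; // must match the glue setting in the runner

import io.cucumber.java.en.Given;

public class SimpleSteps {
    // Matches a feature line such as: Given I print a test message
    @Given("I print a test message")
    public void iPrintATestMessage() {
        System.out.println("step executed");
    }
}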
I have made sure the glue matches my step definition file; I've also tried adding the full path.
However, no matter what I set as the glue, my step definitions are not being executed.
As soon as I run the TestRunner file as a JUnit test, JUnit marks the run as complete.
I set up a simple scenario where I just print a test output to the console, and the console never shows the output.
I even tried setting my glue to a filename that doesn't exist to see if I got an error, but I get the same result as above.
And whenever I click one of the lines in the feature file I get the error "Test class not found in selected project".
Has anyone experienced the same behavior with Eclipse?

Import Leaflet.awesome-marker in an Angular CLI component

I want to use the Leaflet.awesome-marker plugin in my Angular project.
I installed the package through yarn and imported it in my component using
import * as awesome from 'leaflet.awesome-marker';
But I receive the following error:
Cannot find module 'leaflet.awesome-marker'
Doing the same thing with the geojson module works fine; why not with this one?
Try importing leaflet.awesome-markers, not leaflet.awesome-marker - the package name ends in a plural "markers":
import * as awesome from 'leaflet.awesome-markers';

Can't import the Svg library in Elm?

I'm trying to use Svg and Svg.Attributes and getting this error message:
I cannot find module 'Svg'.
Module 'Main' is trying to import it.
Potential problems could be:
* Mispelled the module name
* Need to add a source directory or new dependency to elm-package.json
I'm certain there aren't any spelling errors because I copied and pasted the imports from a tutorial. Where do I install this library?
The tutorial I'm going through is the one on elm-lang.org, specifically the section on time.
You need the elm-lang/svg package as a dependency in your elm-package.json. Run elm package install elm-lang/svg in the project directory.
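After the install, your elm-package.json should list the dependency, roughly like this (the version ranges shown are illustrative and may differ for your Elm version):

"dependencies": {
    "elm-lang/core": "5.0.0 <= v < 6.0.0",
    "elm-lang/svg": "2.0.0 <= v < 3.0.0"
}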

How to import libraries in Spark Notebook

I'm having trouble importing magellan-1.0.4-s_2.11 in Spark Notebook. I've downloaded the JAR from https://spark-packages.org/package/harsha2010/magellan and have tried placing SPARK_HOME/bin/spark-shell --packages harsha2010:magellan:1.0.4-s_2.11 in the "Start of Customized Settings" section of the spark-notebook file in the bin folder.
Here are my imports:
import magellan.{Point, Polygon, PolyLine}
import magellan.coord.NAD83
import org.apache.spark.sql.magellan.MagellanContext
import org.apache.spark.sql.magellan.dsl.expressions._
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
And my errors...
<console>:71: error: object Point is not a member of package org.apache.spark.sql.magellan
import magellan.{Point, Polygon, PolyLine}
^
<console>:72: error: object coord is not a member of package org.apache.spark.sql.magellan
import magellan.coord.NAD83
^
<console>:73: error: object MagellanContext is not a member of package org.apache.spark.sql.magellan
import org.apache.spark.sql.magellan.MagellanContext
I then tried to import the new library like any other library by placing it into the main script like so:
$lib_dir/magellan-1.0.4-s_2.11.jar"
This didn't work, and I'm left scratching my head wondering what I've done wrong. How do I import libraries such as magellan into Spark Notebook?
Try evaluating something like
:dp "harsha2010" % "magellan" % "1.0.4-s_2.11"
It will load the library into Spark, allowing it to be imported - assuming it can be obtained through the Maven repo. In my case it failed with the message:
failed to load 'harsha2010:magellan:jar:1.0.4-s_2.11 (runtime)' from ["Maven2 local (file:/home/dev/.m2/repository/, releases+snapshots) without authentication", "maven-central (http://repo1.maven.org/maven2/, releases+snapshots) without authentication", "spark-packages (http://dl.bintray.com/spark-packages/maven/, releases+snapshots) without authentication", "oss-sonatype (https://oss.sonatype.org/content/repositories/releases/, releases+snapshots) without authentication"] into /tmp/spark-notebook/aether/b2c7d8c5-1f56-4460-ad39-24c4e93a9786
I think the file was too big and the connection was interrupted before the whole file could be downloaded.
Workaround
So I downloaded the JAR manually from:
http://dl.bintray.com/spark-packages/maven/harsha2010/magellan/1.0.4-s_2.11/
and copied it into the:
/tmp/spark-notebook/aether/b2c7d8c5-1f56-4460-ad39-24c4e93a9786/harsha2010/magellan/1.0.4-s_2.11
And then the :dp command worked. Try calling it first, and if it fails, copy the JAR into the right path to make things work.
Better solution
I should investigate why the download failed so I can fix it properly... or put that library in my local M2 repo. But that should get you going.
I would suggest checking this:
https://github.com/spark-notebook/spark-notebook/blob/master/docs/metadata.md#import-download-dependencies
and
https://github.com/spark-notebook/spark-notebook/blob/master/docs/metadata.md#add-spark-packages
I think the :dp magic command is deprecated; instead, you should add your custom dependencies in the notebook metadata. Go to the menu Edit > Edit notebook metadata and add something like:
"customDeps": [
"harsha2010 % magellan % 1.0.4-s_2.11"
]
Once done, you will need to restart the kernel; you can then check in the browser console whether the package is downloaded properly.
The easy way: set or add the EXTRA_CLASSPATH environment variable to point to the .jar file you downloaded:
export EXTRA_CLASSPATH=</link/to/your.jar> on Linux/macOS, or set EXTRA_CLASSPATH=</link/to/your.jar> on Windows.

Why does the Scala IDE not allow multiple package definitions at the top of a file?

In Scala it is common practice to stack package statements to allow shorter imports, but when I load a file that uses stacked packages into the Scala IDE and attempt an import starting with the same organization, I get a compiler error from what appears to be the presentation compiler. The code compiles fine in sbt outside the IDE.
An example code snippet is as follows:
package com.coltfred
package util
package time
import com.github.nscala_time.time.Imports._
On the import I get the error: object github is not a member of package com.coltfred.util.com.
If I move the import to a single line the error goes away, but we've used this practice frequently in our code base, so changing them all to single-line package statements would be a pain.
Why is this happening, and is there anything I can do to fix it?
Edit:
I used the eclipse-sbt plugin to generate the eclipse project file for this. The directory structure is what it should be and all of the dependencies are in the classpath.
Edit 2:
It turns out there was a file in the test tree of the util package (which should have been in the same package) that had a duplicate package statement at the top. I didn't check the test tree because it shouldn't affect the compilation of the main tree, but apparently I was wrong.
Not sure why the Scala IDE doesn't like this, but you can force the import to start at the top level using _root_:
import _root_.com.github.nscala_time.time.Imports._
See if that avoids irritating the IDE.
This is a common annoyance, one that annoyed paulp into an attempt to fix it. His idea was that a directory that doesn't contribute class files shouldn't be taken as a package: if you can take util as scala.util, you should do so in preference to foo.util where that util is empty.
The util directory is the usual suspect, because who doesn't have a util directory lying around, and in particular, ./util?
apm@mara:~/tmp/coltfred$ mkdir -p com/coltfred/util/time
apm@mara:~/tmp/coltfred$ mkdir -p com/coltfred/util/com
apm@mara:~/tmp/coltfred$ vi com/coltfred/util/time/test.scala
apm@mara:~/tmp/coltfred$ scalac com/coltfred/util/time/test.scala
./com/coltfred/util/time/test.scala:5: error: object github is not a member of package com.coltfred.util.com
import com.github.nscala_time.time._
^
one error found
apm@mara:~/tmp/coltfred$ cat com/coltfred/util/time/test.scala
package com.coltfred
package util
package time
import com.github.nscala_time.time._
class Test
apm@mara:~/tmp/coltfred$
To debug, find out where the offending package is getting loaded from.