Using Groovy (ActiveXObject) to Import a DLL in SoapUI

I am using a Groovy script to use a class from a DLL in SoapUI via ActiveXObject, and I am getting the following error each time I invoke it:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script15.groovy: 3: unable to resolve class ActiveXObject
 @ line 3, column 15.
   def signLib = new ActiveXObject('iop.systemtest.tools.signlib.SignLibForSoapUI')
                 ^
org.codehaus.groovy.syntax.SyntaxException: unable to resolve class ActiveXObject @ line 3, column 15.
    at org.codehaus.groovy.ast.ClassCodeVisitorSupport.addError(ClassCodeVisitorSupport.java:148)
    at ...
Below is the piece of the script I am using:
import org.codehaus.groovy.scriptom.*
def signLib = new ActiveXObject('iop.systemtest.tools.signlib.SignLibForSoapUI');
Does anyone have any idea?

Related

How to mock an S3AFileSystem locally for testing spark.read.csv with pytest?

What I'm trying to do
I am attempting to unit test an equivalent of the following function, using pytest:
def read_s3_csv_into_spark_df(s3_uri, spark):
    df = spark.read.csv(
        s3_uri.replace("s3://", "s3a://")
    )
    return df
The test is defined as follows:
def test_load_csv(self, test_spark_session, tmpdir):
    # here I 'upload' a fake csv file using the tmpdir fixture and moto's mock_s3 decorator
    # now that the fake csv file is uploaded to s3, I try to read it into a spark df using my function
    baseline_df = read_s3_csv_into_spark_df(
        s3_uri="s3a://bucket/key/baseline.csv",
        spark=test_spark_session
    )
In the above test, the test_spark_session fixture used is defined as follows:
@pytest.fixture(scope="session")
def test_spark_session():
    test_spark_session = (
        SparkSession.builder.master("local[*]").appName("test").getOrCreate()
    )
    return test_spark_session
The problem
I am running pytest on a SageMaker notebook instance, using python 3.7, pytest 6.2.4, and pyspark 3.1.2. I am able to run other tests by creating the DataFrame using test_spark_session.createDataFrame, and then performing aggregations. So the local spark context is indeed working on the notebook instance with pytest.
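For reference, a test along these lines (a made-up example just to illustrate, not from my actual suite) passes with the same fixture:

def test_aggregation(test_spark_session):
    # Build a small DataFrame in memory and aggregate it -- no S3 involved.
    df = test_spark_session.createDataFrame([("a", 1), ("a", 2)], ["k", "v"])
    result = df.groupBy("k").agg({"v": "sum"}).collect()
    assert result[0]["sum(v)"] == 3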
However, when I attempt to read the csv file in the test I described above, I get the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o84.csv.
E : java.lang.RuntimeException: java.lang.ClassNotFoundException: Class
org.apache.hadoop.fs.s3a.S3AFileSystem not found
How can I test this function without actually uploading any csv files to S3?
I have also tried providing the S3 uri using s3:// instead of s3a://, but got a different, related error: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3".
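One direction I am considering (a sketch only, not a verified solution): the ClassNotFoundException suggests that the hadoop-aws module, which contains org.apache.hadoop.fs.s3a.S3AFileSystem, is not on Spark's classpath, and as far as I understand moto's mock_s3 decorator only patches Python-side boto3 calls, so the JVM-side S3A client would also need a real HTTP endpoint such as moto's standalone server mode. A fixture along these lines (the hadoop-aws version, the port, and the dummy credentials are assumptions):

import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def test_spark_session():
    # hadoop-aws provides org.apache.hadoop.fs.s3a.S3AFileSystem; the version
    # should match the Hadoop build bundled with pyspark 3.1.2 (3.2.0 here is
    # an assumption -- check the hadoop jars shipped with pyspark).
    spark = (
        SparkSession.builder.master("local[*]")
        .appName("test")
        .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.2.0")
        # Point the S3A connector at a locally running moto server
        # (e.g. `moto_server -p 5000`, depending on the moto version);
        # the port and credentials are placeholders.
        .config("spark.hadoop.fs.s3a.endpoint", "http://127.0.0.1:5000")
        .config("spark.hadoop.fs.s3a.access.key", "testing")
        .config("spark.hadoop.fs.s3a.secret.key", "testing")
        .config("spark.hadoop.fs.s3a.path.style.access", "true")
        .getOrCreate()
    )
    yield spark
    spark.stop()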

Dataproc: functools.partial no attribute '__module__' error for pyspark UDF

I am using GCP/Dataproc for some spark/graphframe calculations.
In my private spark/hadoop standalone cluster, I have no issue using functools.partial when defining a PySpark UDF. But now, with GCP/Dataproc, I have the issue below.
Here are some basic settings to check whether partial works well or not:
import pyspark.sql.functions as F
import pyspark.sql.types as T
from functools import partial

def power(base, exponent):
    return base ** exponent
In the main function, functools.partial works well in ordinary cases as we expect:
# see whether partial works as it is
square = partial(power, exponent=2)
print "*** Partial test = ", square(2)
But if I put this partial(power, exponent=2) function into a PySpark UDF as below,
testSquareUDF = F.udf(partial(power, exponent=2),T.FloatType())
testdf = inputdf.withColumn('pxsquare',testSquareUDF('px'))
I have this error message:
Traceback (most recent call last):
  File "/tmp/bf297080f57a457dba4d3b347ed53ef0/gcloudtest-partial-error.py", line 120, in <module>
    testSquareUDF = F.udf(square,T.FloatType())
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1971, in udf
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1955, in _udf
  File "/opt/conda/lib/python2.7/functools.py", line 33, in update_wrapper
    setattr(wrapper, attr, getattr(wrapped, attr))
AttributeError: 'functools.partial' object has no attribute '__module__'
ERROR: (gcloud.dataproc.jobs.submit.pyspark) Job [bf297080f57a457dba4d3b347ed53ef0] entered state [ERROR] while waiting for [DONE].
=========
I had no issue of this kind with my standalone cluster.
My Spark cluster version is 2.1.1.
The GCP Dataproc cluster's is 2.2.x.
Can anyone see what prevents me from passing the partial function to the UDF?
As discussed in the comments, the issue was with Spark 2.2. Since Spark 2.3 is also supported by Dataproc, just using --image-version=1.3 when creating the cluster fixes it.
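If recreating the cluster is not an option, a possible workaround (just a sketch; square_fn is an illustrative name) is to hand F.udf a plain function that closes over the partial rather than the partial object itself, since the traceback shows update_wrapper failing on functools.partial's missing __module__:

import pyspark.sql.functions as F
import pyspark.sql.types as T
from functools import partial

def power(base, exponent):
    return base ** exponent

square = partial(power, exponent=2)

# A plain def has the __module__ attribute that functools.update_wrapper
# copies inside F.udf, so the partial object is never inspected directly.
def square_fn(base):
    return float(square(base))  # cast so the value matches the declared FloatType

testSquareUDF = F.udf(square_fn, T.FloatType())
testdf = inputdf.withColumn('pxsquare', testSquareUDF('px'))  # inputdf as in the question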

flink: sortPartition(0, Order.ASCENDING) error: "not found: value Order"

I am running the following code and getting "error: not found: value Order".
I am not able to figure out the reason. What am I doing wrong?
Version: Flink v0.9.1 (Hadoop 1), not using Hadoop; local execution; shell: Scala shell
Scala-Flink> val data_avg = data_split.map{x=> ((x._1), (x._2._2/x._2._1))}.sortPartition(0, Order.ASCENDING).setParallelism(1)
<console>:16: error: not found: value Order
       val data_avg = data_split.map{x=> ((x._1), (x._2._2/x._2._1))}.sortPartition(0, Order.ASCENDING).setParallelism(1)
The problem is that the enum Order is not automatically imported by Flink's Scala shell. Therefore, you have to add the following import manually.
import org.apache.flink.api.common.operators.Order

How to use JDBC from within the Spark/Scala interpreter (REPL)?

I'm attempting to access a database in the Scala interpreter for Spark, but am having no success.
First, I have imported the DriverManager, and I have added my SQL Server JDBC driver to the class path with the following commands:
scala> import java.sql._
import java.sql._
scala> :cp sqljdbc41.jar
The REPL crashes with a long dump message:
Added 'C:\spark\sqljdbc41.jar'. Your new classpath is:
";;C:\spark\bin\..\conf;C:\spark\bin\..\lib\spark-assembly-1.1.1-hadoop2.4.0.jar;;C:\spark\bin\..\lib\datanucleus-api-jdo-3.2.1.jar;C:\spark\bin\..\lib\datanucleus-core-3.2.2.jar;C:\spark\bin\..\lib\datanucleus-rdbms-3.2.1.jar;;C:\spark\sqljdbc41.jar"
Replaying: import java.sql._
error:
while compiling: <console>
during phase: jvm
library version: version 2.10.4
compiler version: version 2.10.4
reconstructed args:
last tree to typer: Apply(constructor $read)
symbol: constructor $read in class $read (flags: <method> <triedcooking>)
symbol definition: def <init>(): $line10.$read
tpe: $line10.$read
symbol owners: constructor $read -> class $read -> package $line10
context owners: class iwC -> package $line10
== Enclosing template or block ==
Template( // val <local $iwC>: <notype>, tree.tpe=$line10.iwC
"java.lang.Object", "scala.Serializable" // parents
ValDef(
private
"_"
<tpt>
<empty>
)
...
== Expanded type of tree ==
TypeRef(TypeSymbol(class $read extends Serializable))
uncaught exception during compilation: java.lang.AssertionError
java.lang.AssertionError: assertion failed: Tried to find '$line10' in 'C:\Users\Username\AppData\Local\Temp\spark-28055904-e7d2-4052-9354-ae3769266cb4' but it is not a directory
That entry seems to have slain the compiler. Shall I replay
your session? I can re-run each line except the last one.
I am able to run a Scala program with the driver and everything works just fine.
How can I initialize my REPL to allow me to access data from SQL Server through JDBC?
It looks like the interactive :cp command does not work on Windows. But I found that if I launch the Spark shell with the following command, the JDBC driver is loaded and available:
C:\spark> .\bin\spark-shell --jars sqljdbc41.jar
In this case, I had copied my jar file into the C:\spark folder.
(One can also use --help to see the other commands available at launch.)

Scala + stax compile problem during deploy process

I developed an app in scala-ide (the Eclipse plugin) with no errors or warnings. Now I'm trying to deploy it to the stax cloud:
$ stax deploy
But it fails to compile it:
compile:
[scalac] Compiling 2 source files to /home/gleontiev/workspace/rss2lj/webapp/WEB-INF/classes
error: error while loading FlickrUtils, Scala signature FlickrUtils has wrong version
expected: 4.1
found: 5.0
/home/gleontiev/workspace/rss2lj/src/scala/example/snippet/DisplaySnippet.scala:8: error: com.folone.logic.FlickrUtils does not have a constructor
val dispatcher = new FlickrUtils("8196243#N02")
^
error: error while loading Photo, Scala signature Photo has wrong version
expected: 4.1
found: 5.0
/home/gleontiev/workspace/rss2lj/src/scala/example/snippet/DisplaySnippet.scala:9: error: value link is not a member of com.folone.logic.Photo
val linksGetter = (p:Photo) => p.link
^
/home/gleontiev/workspace/rss2lj/src/scala/example/snippet/DisplaySnippet.scala:15: error: com.folone.logic.FlickrUtils does not have a constructor
val dispatcher = new FlickrUtils("8196243#N02")
^
/home/gleontiev/workspace/rss2lj/src/scala/example/snippet/DisplaySnippet.scala:16: error: value medium1 is not a member of com.folone.logic.Photo
val picsGetter = (p:Photo) => p.medium1
^
/home/gleontiev/workspace/rss2lj/src/scala/example/snippet/RefreshSnippet.scala:12: error: com.folone.logic.FlickrUtils does not have a constructor
val dispatcher = new FlickrUtils("8196243#N02")
^
7 errors found
ERROR: : The following error occurred while executing this line:
/home/gleontiev/workspace/rss2lj/build.xml:61: Compile failed with 7 errors; see the compiler error output for details.
I see it is complaining about two errors: the first one is the FlickrUtils class constructor, which is defined like this:
class FlickrUtils(val userId: String) {
  // ...
}
The second one is that two fields are reportedly missing from the Photo class, which is:
class Photo(val photoId: String, val userId: String, val secret: String, val server: String) {
  private val _medium1 = "/sizes/m/in/photostream"
  val link = "http://flickr.com/photos/" + userId + "/" + photoId
  val medium1 = link + _medium1
}
It seems like the stax SDK uses the wrong compiler (?). How do I make it use the right one? If that is not the issue, what is the problem here, and what are some ways to resolve it?
Edit: $ scala -version says
Scala code runner version 2.8.0.final -- Copyright 2002-2010, LAMP/EPFL
I tried compiling everything with scalac manually, putting everything in its place, and running stax deploy afterwards -- same result.
I actually resolved this by moving the FlickrUtils and Photo classes to the package where the snippets originally are, but I still don't get why it was not able to compile and use them from the other package.