com.ibm.db2.jcc.a.SqlException: tableName is an undefined name - db2

Our code runs in WebSphere Application Server 7.0.0.37 and connects to a couple of DB2 databases. Occasionally we get the exception "com.ibm.db2.jcc.a.SqlException: "SCHEMA.TABLENAME" is an undefined name". This happens every day when the load is particularly high. Please advise whether this has anything to do with the connection settings in WebSphere. Below is the stack trace:
[4/6/16 21:54:08:083 PDT] 000000f7 SystemErr R com.ibm.db2.jcc.a.SqlException: "SCHEMA.TABLENAME" is an undefined name.
    at com.ibm.db2.jcc.a.zc.e(zc.java:1606)
    at com.ibm.db2.jcc.a.zc.a(zc.java:1206)
    at com.ibm.db2.jcc.c.eb.h(eb.java:149)
    at com.ibm.db2.jcc.c.eb.a(eb.java:43)
    at com.ibm.db2.jcc.c.r.a(r.java:30)
    at com.ibm.db2.jcc.c.tb.g(tb.java:152)
    at com.ibm.db2.jcc.a.zc.n(zc.java:1186)
    at com.ibm.db2.jcc.a.ad.db(ad.java:1761)
    at com.ibm.db2.jcc.a.ad.d(ad.java:2203)
    at com.ibm.db2.jcc.a.ad.W(ad.java:1276)
    at com.ibm.db2.jcc.a.ad.execute(ad.java:1260)
    at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecute(WSJdbcPreparedStatement.java:942)
    at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.execute(WSJdbcPreparedStatement.java:618)

Connection settings in WebSphere Application Server can impact the schema that is used (based on the user name of the connection); however, that would be consistent and would not vary with whether the database is under load. If you've observed that the problem occurs under load but not otherwise, it's possible there is a threading bug.
To clarify the problem:
Is SCHEMA.TABLENAME the schema and table that you expect to be used? If not, which part is unexpected: the schema, the table name, or both?
Alternatively, if SCHEMA.TABLENAME is expected, do queries against this same schema/table work when not under load?
Is the SCHEMA schema specified explicitly in the application, or implicitly from the connection user name or data source settings?
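If the schema is coming implicitly from the connection user, pinning it explicitly removes one variable. Below is a minimal sketch (plain JDBC in Scala rather than a WebSphere data source; the host, port, database, credentials, and table names are placeholders) using the IBM JDBC driver's currentSchema property, which sets the implicit qualifier for unqualified table names. The same property can typically be set as a custom property on the WebSphere data source.

import java.sql.DriverManager
import java.util.Properties

val props = new Properties()
props.setProperty("user", "dbuser")          // placeholder credentials
props.setProperty("password", "dbpass")
// Pin the implicit qualifier so unqualified names always resolve to SCHEMA.*,
// independent of which user the pooled connection was opened with.
props.setProperty("currentSchema", "SCHEMA")

val conn = DriverManager.getConnection("jdbc:db2://dbhost:50000/MYDB", props)
try {
  // With currentSchema set, this resolves to SCHEMA.TABLENAME.
  val stmt = conn.prepareStatement("SELECT 1 FROM TABLENAME FETCH FIRST 1 ROW ONLY")
  stmt.executeQuery()
} finally {
  conn.close()
}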


setoid_rewrite failed with MathClasses Coq

I have been trying to solve the following for quite a while now.
Require Import
Coq.Classes.Morphisms
MathClasses.interfaces.abstract_algebra
MathClasses.interfaces.vectorspace
MathClasses.misc.workaround_tactics
MathClasses.theory.setoids
MathClasses.theory.groups.
Lemma f_equiv' `{Equiv A} `{f : A -> A} :
f = f -> forall x y, x = y -> f x = f y.
Proof.
intros.
f_equiv.
assumption.
Qed.
Goal forall `{HVS : VectorSpace K V}, forall α : K, α · mon_unit = mon_unit.
Proof.
intros.
setoid_rewrite <- right_identity at 1.
setoid_rewrite <- right_inverse with (x := α · mon_unit) at 2 3.
setoid_rewrite associativity.
apply f_equiv' with (f := fun v => v & - (α · mon_unit)).
{ cbv; intros ?? Hxy; now rewrite Hxy. }
setoid_rewrite <- distribute_l.
setoid_rewrite left_identity. (* Error: setoid rewrite failed *)
As written in the last line, the setoid_rewrite fails with this error message:
Error: setoid rewrite failed: Unable to satisfy the following constraints:
UNDEFINED EVARS:
?X6739==[K V Ke Kplus Kmult Kzero Kone Knegate Krecip Ve Vop Vunit Vnegate
sm HVS α |- relation V] (internal placeholder) {?r}
?X6740==[K V Ke Kplus Kmult Kzero Kone Knegate Krecip Ve Vop Vunit Vnegate
sm HVS α (do_subrelation:=do_subrelation)
|- Proper (equiv ==> ?r) (scalar_mult α)] (internal placeholder) {?p}
?X6840==[K V Ke Kplus Kmult Kzero Kone Knegate Krecip Ve Vop Vunit Vnegate
sm HVS α |- relation V] (internal placeholder) {?r0}
?X6841==[K V Ke Kplus Kmult Kzero Kone Knegate Krecip Ve Vop Vunit Vnegate
sm HVS α (do_subrelation:=do_subrelation)
|- Proper (?r ==> ?r0 ==> flip impl) equiv] (internal placeholder) {?p0}
?X6842==[K V Ke Kplus Kmult Kzero Kone Knegate Krecip Ve Vop Vunit Vnegate
sm HVS α |- ProperProxy ?r0 (α · mon_unit)] (internal placeholder) {?p1}
TYPECLASSES:?X6739 ?X6740 ?X6840 ?X6841 ?X6842
SHELF:||
FUTURE GOALS STACK:?X6842 ?X6841 ?X6840 ?X6740 ?X6739 ?X6611 ?X6610 ?X6609
?X6608 ?X6607 ?X6606 ?X6605||?X64 ?X62 ?X60 ?X58 ?X57 ?X56 ?X55 ?X54 ?X53
?X52 ?X51 ?X50 ?X49 ?X48 ?X47 ?X46 ?X45 ?X44 ?X43 ?X42
I have tried changing notations, using cbv, as suggested in this question.
How can I use the left_identity lemma without the error appearing?
I'm not an expert in the details of how the unification with the implicit variables works, so I can't explain why the rewrite fails, but I've encountered it enough to at least give a "hack" solution.
Before the final "rewrite left_identity", do a
pose proof scalar_mult_proper.
The context now contains a proof, which the rewrite tactic is able to use, saying that it is OK to rewrite "under" scalar multiplication, so you can now finish the proof as expected with
rewrite left_identity.
reflexivity.
(Btw, you don't need the f_equiv' lemma for this proof; simple rewriting is enough.)
The problem you encountered is one that I run into now and then. To me it is a usability bug, or perhaps a bug in my expectation of how instance resolution works, and I would also love a mechanistic explanation of this behaviour.
Some background for those who didn't paste the code into a Coq session to see what happens:
The goal just before the rewrite that fails is
α · (mon_unit & mon_unit) = α · mon_unit
Here "=" is notation for equiv which is Ve in this context (which is the equality relation for vectors), and · is notation for scalar_mult, and & is notation for the semigroup operation, i.e. vector addition in this case. And we want to rewrite one of the arguments of scalar_mult inside the equiv relation. Therefore we need instances of type Proper that enables this. These instances already exist. In particular we have
scalar_mult_proper
: Proper (equiv ==> equiv ==> equiv) sm
which I found with Search Proper scalar_mult. To use this instance we need a lot of implicit variables to be filled in. Here is the full list:
Print scalar_mult_proper.
scalar_mult_proper =
λ (R M : Type) (Re : Equiv R) (Rplus : Plus R) (Rmult : Mult R)
(Rzero : Zero R) (Rone : One R) (Rnegate : Negate R)
(Me : Equiv M) (Mop : SgOp M) (Munit : MonUnit M)
(Mnegate : Negate M) (sm : ScalarMult R M) (Module0 : Module R M),
let (_, _, _, _, _, _, scalar_mult_proper) := Module0 in scalar_mult_proper
: ∀ (R M : Type) (Re : Equiv R) (Rplus : Plus R)
(Rmult : Mult R) (Rzero : Zero R) (Rone : One R)
(Rnegate : Negate R) (Me : Equiv M) (Mop : SgOp M)
(Munit : MonUnit M) (Mnegate : Negate M)
(sm : ScalarMult R M),
Module R M → Proper (equiv ==> equiv ==> equiv) sm
Almost all of those values should be filled in automatically, but for some reason setoid_rewrite fails to do that by itself.
However, when I add a copy of this rule to the context with pose proof, the implicit variables are filled in, and setoid_rewrite can use the rule without getting confused about which values should be used for R, M, Rplus, Rnegate, Me, Module0 and all the other arguments to scalar_mult_proper.

How to convert an Array of Any elements into a DataFrame in Spark Scala?

I have an Array of type Array[(Any, Any, Any)]. For example:
l1 = [(a,b,c),(d,e,f),(x,y,z)]
I want to convert it to a Dataframe as:
c1 c2 c3
a b c
d e f
x y z
I tried converting an existing DataFrame to a list:
val l1 = test_df.select("c1", "c2", "c3").rdd.map(x =>
  (x(0), x(1), x(2))).collect()
println(l1)
val c = Seq(l1).toDF("c1", "c2", "c3")
c.show()
But it is throwing this error:
Exception in thread "main" java.lang.ClassNotFoundException: scala.Any
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
In PySpark:
l1 = [('a','b','c'),('d','e','f'),('x','y','z')]
sdf=spark.createDataFrame(l1)
sdf.show()
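For the Scala side, here is a minimal sketch (assuming a SparkSession named spark is in scope, as in spark-shell). The root cause of the ClassNotFoundException is the element type: Spark can derive an encoder for concrete types such as String, but not for scala.Any. So give the tuples concrete types and call toDF on the sequence itself, rather than wrapping the whole array in another Seq:

import spark.implicits._

// Concrete (String, String, String) tuples instead of (Any, Any, Any);
// Spark has no encoder for scala.Any.
val l1 = Seq(("a", "b", "c"), ("d", "e", "f"), ("x", "y", "z"))
val df = l1.toDF("c1", "c2", "c3")
df.show()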

What 28 frames are elided when dividing by zero in the REPL?

scala> 5 / 0
java.lang.ArithmeticException: / by zero
... 28 elided
Twenty-eight frames elided for a simple arithmetic expression?! What are these frames, why does Scala need that many to do safe division, and why are they being elided in the first place?
scala> import scala.util.Try
import scala.util.Try
scala> Try(5/0)
res2: scala.util.Try[Int] = Failure(java.lang.ArithmeticException: / by zero)
scala> res2.recover { case e: ArithmeticException => e.printStackTrace }
java.lang.ArithmeticException: / by zero
at $line8.$read$$iw$$iw$$anonfun$1.apply$mcI$sp(<console>:13)
at $line8.$read$$iw$$iw$$anonfun$1.apply(<console>:13)
at $line8.$read$$iw$$iw$$anonfun$1.apply(<console>:13)
at scala.util.Try$.apply(Try.scala:192)
at $line8.$read$$iw$$iw$.<init>(<console>:13)
at $line8.$read$$iw$$iw$.<clinit>(<console>)
at $line8.$eval$.$print$lzycompute(<console>:7)
at $line8.$eval$.$print(<console>:6)
at $line8.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:923)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:74)
at scala.tools.nsc.MainGenericRunner.run$1(MainGenericRunner.scala:87)
at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:98)
at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:103)
at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
res3: scala.util.Try[AnyVal] = Success(())
The elided lines are basically the REPL's overhead of reading a line, compiling it into a synthetic class, and constructing an instance of that class, which is where the code written in the REPL is actually evaluated. You can see those frames in the trace above: the reflection calls (NativeMethodAccessorImpl, Method.invoke) and the interpreter machinery (IMain, ILoop, MainGenericRunner). They belong to the interpreter rather than to your code, which is why the REPL elides them.
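If you want to count or inspect the suppressed frames yourself, the Try mechanism from the question already gives you the full trace. A small sketch using only the standard library:

import scala.util.Try

// Capture the ArithmeticException and examine its stack trace directly.
val frames = Try(5 / 0).failed.get.getStackTrace
frames.foreach(println) // the interpreter frames the REPL would elide
println(frames.length)  // roughly the elided count, plus the Try wrapper frames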

Scala reduce() operation on an RDD[Array[Int]]

I have an RDD of one-dimensional arrays. I am trying to do a very basic reduce operation to sum up the values at the same position of the array from the various partitions.
I am using:
var z = x.reduce((a, b) => a + b)
or
var z = x.reduce(_ + _)
But I am getting an error saying:
type mismatch; found: Array[Int], required: String
I looked it up and found this link:
Is there a better way for reduce operation on RDD[Array[Double]]
So I tried using
import spire.implicits._
So now I don't have any compilation error, but after running the code I am getting a java.lang.NoSuchMethodError. I have provided the entire error below. Any help would be appreciated.
java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at spire.math.NumberTag$Integral$.<init>(NumberTag.scala:9)
at spire.math.NumberTag$Integral$.<clinit>(NumberTag.scala)
at spire.std.BigIntInstances.$init$(bigInt.scala:80)
at spire.implicits$.<init>(implicits.scala:6)
at spire.implicits$.<clinit>(implicits.scala)
at main.scala.com.ucr.edu.SparkScala.HistogramRDD$$anonfun$9.apply(HistogramRDD.scala:118)
at main.scala.com.ucr.edu.SparkScala.HistogramRDD$$anonfun$9.apply(HistogramRDD.scala:118)
at scala.collection.TraversableOnce$$anonfun$reduceLeft$1.apply(TraversableOnce.scala:190)
at scala.collection.TraversableOnce$$anonfun$reduceLeft$1.apply(TraversableOnce.scala:185)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:185)
at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$15.apply(RDD.scala:1012)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$15.apply(RDD.scala:1010)
at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2125)
at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2125)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
From my understanding you're trying to reduce the items by position in the arrays. As an aside, the NoSuchMethodError on scala.Product.$init$ typically indicates a Scala binary-version mismatch (for example, a Spire build for Scala 2.12 running on Scala 2.11); zipping the arrays by hand avoids that dependency entirely. You should consider zipping your arrays while reducing the RDD:
// Assumes a SparkSession named ss; the implicits provide the Array[Int] encoder.
import org.apache.spark.rdd.RDD
import ss.implicits._

val a: RDD[Array[Int]] = ss.createDataset[Array[Int]](Seq(Array(1, 2, 3), Array(4, 5, 6))).rdd
a.reduce { case (left, right) =>
  val zipped = left.zip(right)            // pair up elements at the same index
  zipped.map { case (i1, i2) => i1 + i2 } // element-wise sum
}.foreach(println)
This outputs:
5
7
9
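The same element-wise reduce works on an RDD built directly with sc.parallelize (assuming a SparkContext named sc is in scope), without going through a Dataset:

val rdd = sc.parallelize(Seq(Array(1, 2, 3), Array(4, 5, 6)))
val summed = rdd.reduce((left, right) => left.zip(right).map { case (x, y) => x + y })
summed.foreach(println) // prints 5, 7, 9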

Unity .obj import is regrouping sub-objects

I have a .obj file representing a map (generated by OSM2World) that contains multiple sub-objects such as benches, bridges, and buildings. These sub-objects follow the naming convention ObjectName+index.
Here is an extract from the file:
...
o Building1536
usemtl BUILDING_DEFAULT_0
f 67932 67933 67930
f 67933 67931 67930
f 67930 67931 67928
f 67931 67929 67928
v -47.185 7.5 280.942
f 67928 67929 71008
v -47.185 0.0 280.942
f 67929 71009 71008
f 71008 71009 67932
f 71009 67933 67932
usemtl ROOF_DEFAULT_0
f 67928 67932 67930
f 71008 67932 67928
o Building1537
usemtl BUILDING_DEFAULT_0
f 65093 65094 65091
f 65094 65092 65091
f 65091 65092 65089
f 65092 65090 65089
f 65089 65090 63035
f 65090 63036 63035
f 63035 63036 63033
f 63036 63034 63033
f 63033 63034 65093
f 63034 65094 65093
usemtl ROOF_DEFAULT_0
f 65091 63033 65093
f 65089 63035 63033
f 65089 63033 65091
...
When I import it into a Unity project, the ObjectNameX sub-objects are "abstracted" into a single "Building" object.
When I open it in Blender, I get the expected result: multiple separate ObjectNameX objects.
Any ideas on how to avoid this behavior?
Thank you for your help.