Is there a function like IsCallable? - autohotkey

I'd like to have an AutoHotkey function, named for example IsCallable, that can tell me whether an object (or anything that can be stored in an AHK variable) is callable.
A callable object includes:
a normal function name given as a string.
a function object returned by Func("somefuncname").
a BoundFunc object.
The key point is: if fnobj is callable, then the user can write %fnobj%(...) to actually call something.
The code below clarifies my idea:
test_check_callable()
{
    fnstr := "fnhello"
    fnobjA := Func("fnhello")
    fnobjB := fnobjA.bind("0th")
    fnobjC := fnobjB
    %fnstr%()
    %fnobjA%("A1")
    %fnobjB%("B1")
    fnobjC.("C1", "C2")
    r1 := IsCallable(fnstr) ; true
    r2 := IsCallable(fnobjA) ; true
    r3 := IsCallable(fnobjB) ; true
    r4 := IsCallable(fnobjC) ; true
    e1 := IsCallable("NoSuch") ; false
    e2 := IsCallable({key1:"value1"}) ; false
}

fnhello(args*)
{
    Dbgwin_Output("fnhello() parameters: " args.Length())
    for i,arg in args
    {
        Dbgwin_Output(Format(" param{} = {}", i, arg))
    }
}

IsCallable(pobj)
{
    ; How to implement this? IsFunc? IsObject?
}
I expect r1, r2, r3, r4 to all be true, and e1, e2 to be false.
I'm using AutoHotkey 1.1.36.2.
PS: Dbgwin_Output() is implemented here: https://github.com/chjfth/AmHotkey/blob/master/libs/debugwin.ahk

If you used AHK v2, you could make use of HasMethod. I'd recommend switching to AHK v2; it's already at the RC3 stage.
Something like this should work pretty well and cover all the basic use cases:
fnStr := "fnHello"
fnObjA := %fnStr%
fnObjB := fnobjA.bind("0th")
fnObjC := fnObjB
class HelloClass
{
Call() => MsgBox("Hello")
}
fnClass := HelloClass()
class NotCallable
{
}
no1 := "NoSuch"
no2 := {key: "value"}
classNo := NotCallable()
MsgBox(
"`"fnHello`": " IsCallable("fnHello") "`n" ;support pure strings
"fnStr: " IsCallable(fnStr) "`n" ;support string objects
"fnObjA: " IsCallable(fnObjA) "`n" ;support user-defined function objects
"fnObjB: " IsCallable(fnObjB) "`n" ;support bound function objects
"fnObjC: " IsCallable(fnObjC) "`n" ;same as fnObjA
"`"MsgBox`": " IsCallable("MsgBox") "`n" ;support built-in functions as pure strings
"MsgBox: " IsCallable(MsgBox) "`n" ;support built-in functions
"fnClass: " IsCallable(fnClass) "`n`n" ;support user defined classes
"`"NoSuch`": " IsCallable("NoSuch") "`n"
"no1: " IsCallable(no1) "`n"
"no2: " IsCallable(no2) "`n"
"classNo: " IsCallable(classNo) "`n"
)
fnHello(param := "")
{
MsgBox("hi " param)
}
IsCallable(inp)
{
if (HasMethod(inp))
return true
try
{
inp := %inp%
}
catch
{
return false
}
return HasMethod(inp)
}
Result:
"fnHello": 1
fnStr: 1
fnObjA: 1
fnObjB: 1
fnObjC: 1
"MsgBox": 1
MsgBox: 1
fnClass: 1
"NoSuch": 0
no1: 0
no2: 0
classNo: 0

Related

Copy files from COM device using AHK

I have a function which can effectively copy a file from my Android device:
GetDeviceFolder(deviceName) {
    shell := ComObjCreate("Shell.Application")
    computer := shell.Namespace("::{20d04fe0-3aea-1069-a2d8-08002b30309d}")
    for item in computer.Items
        if item.Name = deviceName
            return item.GetFolder()
}

save_data_file()
{
    GuiControlGet, phonename
    GuiControlGet, datapath
    GuiControlGet, savepath
    phone := GetDeviceFolder(phonename)
    phone.ParseName(datapath).InvokeVerb("copy")
}
However, I can't figure out how to "paste" it to a local drive. I know it's in the clipboard because I can paste it manually after running this function.
The local disk also needs to be handled via COM.
Example:
GetDeviceFolder(deviceName) {
    shell := ComObjCreate("Shell.Application")
    computer := shell.Namespace("::{20d04fe0-3aea-1069-a2d8-08002b30309d}")
    for item in computer.Items
        if item.Name = deviceName
            return item.GetFolder()
}

save_data_file(src, dest) {
    src := StrSplit(src, "\", , 2)
    dest := StrSplit(dest, "\", , 2)
    GetDeviceFolder(src[1]).ParseName(src[2]).InvokeVerb("copy")
    GetDeviceFolder(dest[1]).ParseName(dest[2]).InvokeVerb("paste")
}

save_data_file("Phone Name\Internal Storage\Download\5a5f641e9893c.jpg", "Disk Name (E:)\incoming")
I did it using this helper function:
InvokeVerb(path, menu, validate=True) {
    ;by A_Samurai
    ;v 1.0.1 http://sites.google.com/site/ahkref/custom-functions/invokeverb
    objShell := ComObjCreate("Shell.Application")
    if InStr(FileExist(path), "D") || InStr(path, "::{") {
        ;~ MsgBox % path
        objFolder := objShell.NameSpace(path)
        ;~ MsgBox % namespace(path) . "k"
        objFolderItem := objFolder.Self
    }
    else {
        SplitPath, path, name, dir
        ;~ MsgBox % path . "`n" name . "`n" . dir
        ;~ loop, % path0
        ;~ MsgBox % path%A_index%
        objFolder := objShell.NameSpace(dir)
        objFolderItem := objFolder.ParseName(name)
    }
    if validate {
        colVerbs := objFolderItem.Verbs
        colVerbs.Count
        loop % colVerbs.Count {
            verb := colVerbs.Item(A_Index - 1)
            retMenu := verb.name
            StringReplace, retMenu, retMenu, &
            if (retMenu = menu) {
                verb.DoIt
                Return True
            }
        }
        Return False
    } else
        objFolderItem.InvokeVerbEx(Menu)
}
Then I just did this:
InvokeVerb(savepath, "Paste", "false")

Kubernetes operator-sdk: How to delete a controller?

We have developed a bunch of controllers and APIs. We need to delete some controllers, but we are unable to find a way to delete the APIs and controllers.
We looked at the available options, but there is no flag to delete the APIs.
operator-sdk --help
CLI tool for building Kubernetes extensions and tools.
Usage:
operator-sdk [flags]
operator-sdk [command]
Examples:
The first step is to initialize your project:
operator-sdk init [--plugins=<PLUGIN KEYS> [--project-version=<PROJECT VERSION>]]
<PLUGIN KEYS> is a comma-separated list of plugin keys from the following table
and <PROJECT VERSION> a supported project version for these plugins.
Plugin keys | Supported project versions
-------------------------------------+----------------------------
ansible.sdk.operatorframework.io/v1 | 3
declarative.go.kubebuilder.io/v1 | 2, 3
go.kubebuilder.io/v2 | 2, 3
go.kubebuilder.io/v3 | 3
helm.sdk.operatorframework.io/v1 | 3
kustomize.common.kubebuilder.io/v1 | 3
quarkus.javaoperatorsdk.io/v1-alpha | 3
For more specific help for the init command of a certain plugins and project version
configuration please run:
operator-sdk init --help --plugins=<PLUGIN KEYS> [--project-version=<PROJECT VERSION>]
Default plugin keys: "go.kubebuilder.io/v3"
Default project version: "3"
Available Commands:
alpha Alpha-stage subcommands
bundle Manage operator bundle metadata
cleanup Clean up an Operator deployed with the 'run' subcommand
completion Load completions for the specified shell
create Scaffold a Kubernetes API or webhook
edit Update the project configuration
generate Invokes a specific generator
help Help about any command
init Initialize a new project
olm Manage the Operator Lifecycle Manager installation in your cluster
pkgman-to-bundle Migrates packagemanifests to bundles
run Run an Operator in a variety of environments
scorecard Runs scorecard
version Print the operator-sdk version
Flags:
-h, --help help for operator-sdk
--plugins strings plugin keys to be used for this subcommand execution
--project-version string project version (default "3")
--verbose Enable verbose logging
There is not an automated way to remove APIs via the operator-sdk.
There are a couple of ways to do it. If your operator is fairly simple, you could just scaffold a new operator and copy the code you want into it.
Otherwise, you'll have to remove it by hand. I created a dummy operator, committed it, and then added a new API to get this diff, which can be used to see what you'll need to delete. (This is using the master branch; it may be different depending on the version you are using.)
diff --git a/PROJECT b/PROJECT
index ca36be5..0bb71be 100644
--- a/PROJECT
+++ b/PROJECT
@@ -16,4 +16,13 @@ resources:
kind: Memcached
path: github.com/example/memcached-operator/api/v1alpha1
version: v1alpha1
+- api:
+ crdVersion: v1
+ namespaced: true
+ controller: true
+ domain: example.com
+ group: cache
+ kind: Memcached2
+ path: github.com/example/memcached-operator/api/v1alpha1
+ version: v1alpha1
version: "3"
diff --git a/api/v1alpha1/zz_generated.deepcopy.go b/api/v1alpha1/zz_generated.deepcopy.go
index 7730cf5..8211ded 100644
--- a/api/v1alpha1/zz_generated.deepcopy.go
+++ b/api/v1alpha1/zz_generated.deepcopy.go
@@ -51,6 +51,95 @@ func (in *Memcached) DeepCopyObject() runtime.Object {
return nil
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *Memcached2) DeepCopyInto(out *Memcached2) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
+ out.Spec = in.Spec
+ out.Status = in.Status
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Memcached2.
+func (in *Memcached2) DeepCopy() *Memcached2 {
+ if in == nil {
+ return nil
+ }
+ out := new(Memcached2)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *Memcached2) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *Memcached2List) DeepCopyInto(out *Memcached2List) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ListMeta.DeepCopyInto(&out.ListMeta)
+ if in.Items != nil {
+ in, out := &in.Items, &out.Items
+ *out = make([]Memcached2, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Memcached2List.
+func (in *Memcached2List) DeepCopy() *Memcached2List {
+ if in == nil {
+ return nil
+ }
+ out := new(Memcached2List)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *Memcached2List) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *Memcached2Spec) DeepCopyInto(out *Memcached2Spec) {
+ *out = *in
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Memcached2Spec.
+func (in *Memcached2Spec) DeepCopy() *Memcached2Spec {
+ if in == nil {
+ return nil
+ }
+ out := new(Memcached2Spec)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *Memcached2Status) DeepCopyInto(out *Memcached2Status) {
+ *out = *in
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Memcached2Status.
+func (in *Memcached2Status) DeepCopy() *Memcached2Status {
+ if in == nil {
+ return nil
+ }
+ out := new(Memcached2Status)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *MemcachedList) DeepCopyInto(out *MemcachedList) {
*out = *in
diff --git a/config/crd/kustomization.yaml b/config/crd/kustomization.yaml
index 8b7bb5b..5d83219 100644
--- a/config/crd/kustomization.yaml
+++ b/config/crd/kustomization.yaml
@@ -3,17 +3,20 @@
# It should be run by config/default
resources:
- bases/cache.example.com_memcacheds.yaml
+- bases/cache.example.com_memcached2s.yaml
#+kubebuilder:scaffold:crdkustomizeresource
patchesStrategicMerge:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
#- patches/webhook_in_memcacheds.yaml
+#- patches/webhook_in_memcached2s.yaml
#+kubebuilder:scaffold:crdkustomizewebhookpatch
# [CERTMANAGER] To enable cert-manager, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
#- patches/cainjection_in_memcacheds.yaml
+#- patches/cainjection_in_memcached2s.yaml
#+kubebuilder:scaffold:crdkustomizecainjectionpatch
# the following config is for teaching kustomize how to do kustomization for CRDs.
diff --git a/config/samples/kustomization.yaml b/config/samples/kustomization.yaml
index 42654aa..9c62d32 100644
--- a/config/samples/kustomization.yaml
+++ b/config/samples/kustomization.yaml
@@ -1,4 +1,5 @@
## Append samples you want in your CSV to this file as resources ##
resources:
- cache_v1alpha1_memcached.yaml
+- cache_v1alpha1_memcached2.yaml
#+kubebuilder:scaffold:manifestskustomizesamples
diff --git a/controllers/suite_test.go b/controllers/suite_test.go
index 97d4bfb..ffce919 100644
--- a/controllers/suite_test.go
+++ b/controllers/suite_test.go
@@ -65,6 +65,9 @@ var _ = BeforeSuite(func() {
err = cachev1alpha1.AddToScheme(scheme.Scheme)
Expect(err).NotTo(HaveOccurred())
+ err = cachev1alpha1.AddToScheme(scheme.Scheme)
+ Expect(err).NotTo(HaveOccurred())
+
//+kubebuilder:scaffold:scheme
k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
diff --git a/main.go b/main.go
index b2bedfd..443397e 100644
--- a/main.go
+++ b/main.go
@@ -85,6 +85,13 @@ func main() {
setupLog.Error(err, "unable to create controller", "controller", "Memcached")
os.Exit(1)
}
+ if err = (&controllers.Memcached2Reconciler{
+ Client: mgr.GetClient(),
+ Scheme: mgr.GetScheme(),
+ }).SetupWithManager(mgr); err != nil {
+ setupLog.Error(err, "unable to create controller", "controller", "Memcached2")
+ os.Exit(1)
+ }
//+kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
I'm not sure if this is something that we could add to the operator-sdk right now, but it would be worth filing an issue, which we will discuss at our triage meeting. https://github.com/operator-framework/operator-sdk/issues/new?assignees=&labels=&template=feature-request.md&title=

RegWrite not writing to the registry

ValueType := A_Args[1]
KeyName := A_Args[2]
ValueName := A_Args[3]
ValueData := A_Args[4]

Loop, %0%
    params .= A_Space %A_Index%

; https://autohotkey.com/docs/Run#RunAs
full_command_line := DllCall("GetCommandLine", "str")
if !(A_IsAdmin or RegExMatch(full_command_line, " /restart(?!\S)")) {
    try {
        if A_IsCompiled
            Run *RunAs "%A_ScriptFullPath%" "%params%" /restart
        else
            Run *RunAs "%A_AhkPath%" /restart "%A_ScriptFullPath%" "%params%"
    }
    ExitApp
}

RegWrite, % ValueType, % KeyName, % ValueName, % ValueData
Why is RegWrite not writing to the registry when I pass parameters to the script?
A_LastError codes: code 87 means an invalid parameter. What are you passing to RegWrite?
Here's one function I use for debugging. If isCondition is true, it shows a custom error message and stops everything.
fAbort(isCondition, sFuncName, sNote, dVars:="") {
    If isCondition {
        sAbortMessage := % sFuncName ": " sNote
            . "`n`nA_LineNumber: """ A_LineNumber """`nErrorLevel: """ ErrorLevel """`nA_LastError: """ A_LastError """`n"
        For sName, sValue in dVars
            sAbortMessage .= "`n" sName ": """ sValue """"
        MsgBox, 16,, % sAbortMessage
        ExitApp
    }
}
After a RegWrite it could be used like this:
fAbort(ErrorLevel ; 1, if RegWrite unsuccessful.
    , "Script or function name here" ; Could use A_ThisFunc for current function name.
    , "Registry write unsuccessful." ; Your custom message here.
    , { x: "blabla", y: 13 } ; Additional vars you want to see in the msgbox.
    )
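Applied to the script from the question, the same helper could be dropped in right after the RegWrite to show exactly which values survived the elevated restart. This is a hedged example reusing the asker's variable names; the "RegWrite check" label is just a placeholder:
RegWrite, % ValueType, % KeyName, % ValueName, % ValueData
; If the write failed, dump the four values that actually reached RegWrite.
fAbort(ErrorLevel
    , "RegWrite check"
    , "Registry write unsuccessful."
    , { ValueType: ValueType, KeyName: KeyName, ValueName: ValueName, ValueData: ValueData })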

How to update postgres JSONB with variable params in golang?

I have a table in cockroachdb/postgres as below:
column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden
+-------------+-----------+-------------+----------------+--------------------+-----------+-----------+
id | STRING | false | NULL | | {primary} | false
student | JSONB | true | NULL | | {} | false
(2 rows)
id | student
+--------------------+----------------------------------------------+
1 | {"name": "Albert"}
2 | {"name": "Bob", "state": "CA"}
I am trying to update student with a variable number of params - for example, sometimes update age, sometimes update age and country, etc. How do we do this in golang?
Here is what I have so far, but it does not work.
package main

import (
    "database/sql"
    "fmt"

    _ "github.com/lib/pq"
)

var (
    DB        = "testdb"
    DBUSER    = "testuser"
    TESTTABLE = "ttab"
    CONNSTR   = "dbname=" + DB + " user=" + DBUSER + " host=localhost port=26257 sslmode=disable"
    DBDRIVER  = "postgres"
)

var (
    NAME    = "name"
    AGE     = "age"
    STATE   = "state"
    COUNTRY = "country"
)

func update(si map[string]string, id string) error {
    silen := len(si)
    if silen == 0 {
        return fmt.Errorf("si cannot be empty")
    }
    keys := []string{}
    values := []string{}
    for k, v := range si {
        keys = append(keys, k)
        values = append(values, v)
    }
    db, err := sql.Open(DBDRIVER, CONNSTR)
    if err != nil {
        return err
    }
    defer db.Close()
    sqlCmd := "UPDATE " + TESTTABLE + " SET student = student || jsonb_object($1, $2) WHERE id = $3"
    _, err = db.Exec(sqlCmd, keys, values, id)
    return err
}

func main() {
    s := make(map[string]string)
    s[AGE] = "22"
    s[COUNTRY] = "USA"
    if err := update(s, "1"); err != nil {
        fmt.Printf("err: %v\n", err)
    }
}
Error:
[root#bin]# ./updatedb
update error: sql: converting argument $1 type: unsupported type []string, a slice of string
[root#bin]#
For anyone coming to this later, make sure to read libpq's documentation on passing array values to the driver.
To pass an array into the libpq driver, you must wrap the slice in a pq.Array first. In the example above, the code should look like this:
_, err := db.Exec(sqlCmd, pq.Array(keys), pq.Array(values), id)
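For completeness, here is a minimal sketch of the corrected update function from the question with that change applied. It reuses DBDRIVER, CONNSTR and TESTTABLE from the question and imports lib/pq non-blank so pq.Array is available; treat it as an illustration rather than a tested drop-in implementation:
import (
    "database/sql"
    "fmt"

    "github.com/lib/pq" // non-blank import so pq.Array can be referenced
)

func update(si map[string]string, id string) error {
    if len(si) == 0 {
        return fmt.Errorf("si cannot be empty")
    }
    keys := make([]string, 0, len(si))
    values := make([]string, 0, len(si))
    for k, v := range si {
        keys = append(keys, k)
        values = append(values, v)
    }
    db, err := sql.Open(DBDRIVER, CONNSTR)
    if err != nil {
        return err
    }
    defer db.Close()
    // jsonb_object($1, $2) builds a JSONB object from two parallel text arrays;
    // pq.Array tells the driver how to encode the Go slices as Postgres arrays.
    sqlCmd := "UPDATE " + TESTTABLE + " SET student = student || jsonb_object($1, $2) WHERE id = $3"
    _, err = db.Exec(sqlCmd, pq.Array(keys), pq.Array(values), id)
    return err
}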

cannot capture NumberFormatException in a unit test

I have a unit test which has to fail on purpose, but I cannot capture the exception, which is weird.
This is how the csv file looks:
curva;clase;divisa;rw
AED_FXDEP;OIS;AED;240,1000
ARS :Std;6m;ARS;240
AUD_CALMNY_DISC;OIS;AUD;169.7056275
AUD_DEPO_BBSW;6m;AUD;169.7056275
AUD_DEPO_BBSW;6m;AUD;
And this is the content of the json schema file:
{"type" : "struct","fields" : [ {"name" : "curve","type" : "string","nullable" : false}, {"name":"class", "type":"string", "nullable":false}, {"name":"currency", "type":"string", "nullable":false}, {"name":"rw", "type":"string","nullable":false} ]
I think it is self-explanatory: the last line of the csv has an empty field, and that is not permitted. The exception is clear, a NumberFormatException, because you cannot create a number from an empty value. I want to catch the exception in the unit test; why can't I reach it?
This is the code that provokes the exception:
try {
    val validateGenericFile : Boolean = CSVtoParquet.validateGenericCSV(pathCSVWithHeaderWithErrors,
        pathCurvasJsonSchemaWithDecimal,
        _nullValue,
        _delimiter,
        sc,
        sqlContext)
    //never reach!
    Assert.assertTrue(validateGenericFile)
} catch {
    case e:NumberFormatException => Assert.assertTrue("ERROR! " + e.getLocalizedMessage, false)
    case ex:Exception => Assert.assertTrue("ERROR! " + ex.getLocalizedMessage, false)
} finally {
    println("Done testValidateInputFilesFRTBSTDES436_WithErrors!")
}
The method validateGenericCSV looks like this:
val myDfWithCustomSchema = _sqlContext.read.format("com.databricks.spark.csv").
    option("header", "true").
    option("delimiter", _delimiter).
    option("nullValue", _nullValue).
    option("mode", "FAILFAST").
    schema(mySchemaStructType).
    load(fileToReview)

var finallyCorrect : Boolean = true
var contLinesProcessed = 1
try {
    //this line provokes the exception!
    val myArray = myDfWithCustomSchema.collect
    var contElementosJson = 0
    var isTheLineCorrect: Boolean = true
    myArray.foreach { elem =>
        println("Processing line with content: " + elem)
        for (myElem <- myList) {
            val actualType = myElem.`type`
            val actualName = myElem.name
            val actualNullable = myElem.nullable
            if (contElementosJson == myList.size) {
                contElementosJson = 0
            }
            if (actualType == "string") {
                val theField = elem.getString(contElementosJson)
                val validatingField: Boolean = theField.isInstanceOf[String]
                isTheLineCorrect = validatingField && !((theField == "" || theField == null) && !actualNullable)
                contElementosJson += 1
                if (!isTheLineCorrect) {
                    finallyCorrect = false
                    println("ATTENTION! an empty string chain. " + "Check this field " + actualName + " in the csv file, which should be a " + actualType + " according with the json schema file, can be nullable? " + actualNullable + " isTheLineCorrect? " + isTheLineCorrect)
                }
            } else if (actualType == "integer") {
                val theField = elem.get(contElementosJson)
                val validatingField: Boolean = theField.isInstanceOf[Integer]
                isTheLineCorrect = validatingField && !((theField == "" || theField == null) && !actualNullable)
                contElementosJson += 1
                if (!isTheLineCorrect) {
                    finallyCorrect = false
                    println("ATTENTION! an empty string chain. " + "Check this field " + actualName + " in the csv file, which should be a " + actualType + " according with the json schema file, can be nullable? " + actualNullable + " isTheLineCorrect? " + isTheLineCorrect)
                }
            } else if (actualType.startsWith("decimal")) {
                val theField = elem.get(contElementosJson)
                val validatingField: Boolean = theField.isInstanceOf[java.math.BigDecimal]
                isTheLineCorrect = validatingField && !((theField == "" || theField == null) && !actualNullable)
                contElementosJson += 1
                if (!isTheLineCorrect) {
                    finallyCorrect = false
                    println("ATTENTION! an empty string chain. " + "Check this field " + actualName + " in the csv file, which should be a " + actualType + " according with the json schema file, can be nullable? " + actualNullable + " isTheLineCorrect? " + isTheLineCorrect)
                }
            } else {
                println("Attention! trying to process a column of type " + actualType + " that is not expected to be processed. Check this.")
            }
        } //for
        contLinesProcessed += 1
    } //foreach
} catch {
    //NEVER REACHED! why????
    case e:NumberFormatException => throw e
    case ex:Exception => throw ex
}
Why is the NumberFormatException never caught within the validateGenericCSV method?
UPDATE
I have modified these lines:
case e:NumberFormatException => Assert.assertTrue("ERROR! " + e.getLocalizedMessage,true)
case ex:Exception => Assert.assertTrue("ERROR! " + ex.getLocalizedMessage,true)
for these lines:
case e:NumberFormatException => Assert.assertTrue("ERROR! " + e.getLocalizedMessage,false)
case ex:Exception => Assert.assertTrue("ERROR! " + ex.getLocalizedMessage,false)
The same error; my problem is that I cannot reach the catch clauses when the exception happens!
Thank you
When inspecting the stack trace, we can see the following:
ERROR! Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 2, localhost): java.lang.NumberFormatException
Spark is a distributed computing framework. The NumberFormatException is taking place remotely at one of the executors while processing a task. Spark gets a TaskFailure from that executor and propagates the exception wrapped in a org.apache.spark.SparkException to the action that triggered the materialization of the computation: the .collect() method in this specific case.
If we would like to get the reason behind the failure, we can use ex.getCause.
In practical terms we will have something like this snippet:
catch {
    case ex: Exception if ex.getCause.getClass == classOf[NumberFormatException] => Assert.fail("Number parsing failed: " + ex.getLocalizedMessage)
    case ex: Exception => Assert.fail(...)
}
The test won't fail because Assert.assertTrue(..., true) does not fail: assertTrue fails when the second parameter is false, but not when it's true.
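A minimal, hypothetical JUnit illustration of that last point (the class and test names here are invented, not from the thread):
import org.junit.{Assert, Test}

class AssertTrueDemo {
    @Test def assertTrueOnlyFailsOnFalse(): Unit = {
        // Passing `true` never surfaces the message; the assertion simply passes.
        Assert.assertTrue("never shown", true)
        // Assert.assertTrue("shown on failure", false) // would fail the test
        // Assert.fail("always fails")                  // fails unconditionally
    }
}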