Problem loading resources in multiple builds - scala

I'm a beginner learning Scala/sbt. For a multi-project build like this example:
|-- Bar
| |-- build.sbt
| +-- src
| |-- main
| | |-- java
| | |-- resources
| | | +-- config
| | | +-- app.properties
| | +-- scala
| | +-- Bar.scala
| +-- test
| |-- java
| +-- resources
|-- Foo
| |-- build.sbt
| +-- src
| |-- main
| | |-- java
| | |-- resources
| | +-- scala
| | +-- Foo.scala
| +-- test
| |-- java
| +-- resources
|-- build.sbt
|-- project
| |-- Build.scala
In Bar.scala, I tried to load a file from the resources directory, but it isn't being found:
val resourcesPath = getClass.getResource("config/app.properties")
println(resourcesPath.getPath)
>> Exception in thread "main" java.lang.NullPointerException
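For what it's worth, getClass.getResource with a relative path is resolved against the package of the calling class, so a file under src/main/resources is usually looked up either with a leading slash or via the class loader. A minimal sketch (illustrative only, reusing the names from the tree above, not the original code):

object Bar {
  def main(args: Array[String]): Unit = {
    // Leading slash: resolve from the classpath root, which maps to src/main/resources.
    val viaClass = getClass.getResource("/config/app.properties")

    // Class-loader lookup: always relative to the classpath root, no leading slash.
    val viaLoader = getClass.getClassLoader.getResource("config/app.properties")

    // getResource returns null when the resource is missing, so guard before use.
    println(Option(viaClass).map(_.getPath).getOrElse("not found via getClass"))
    println(Option(viaLoader).map(_.getPath).getOrElse("not found via class loader"))
  }
}

Either form only finds the file if Bar/src/main/resources is on the runtime classpath, i.e. when the code is run from the Bar subproject (or from a project that depends on it).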

Related

resolve choice for Glue dataframe not working

I have a Glue DynamicFrame with the following structure; due to some historical data we have differences in the structure.
When I try to change the structure, resolveChoice is not working.
|-- logs: array
| |-- element: struct
| | |-- ...
| | |-- target: struct
| | | |-- ...
| | | |-- details: struct
| | | | |-- ...
| | | | |-- reviews: struct
| | | | | |-- comment: string
| | | | | |-- generalReason: choice
| | | | | | |-- array
| | | | | | | |-- element: array
| | | | | | | | |-- element: choice
| | | | | | | | | |-- int
| | | | | | | | | |-- string
| | | | | | |-- string
| | | | | | |-- struct
| | | | | | | |-- deleted: long
| | | | | | | |-- brandId: string
| | | | | | | |-- brand: string
| | | | | | | |-- name_JP: string
| | | | | | | |-- status: boolean
| | | | | | | |-- modifiedDate: string
| | | | | | | |-- name_EN: string
| | | | | | | |-- id: string
| | |-- ...
|-- SK: string
|-- PK: string
The code I am running:
case_logs_temp = case_logs.resolveChoice(
    specs=[
        ("logs.target.details.reviews.generalReason", "project:struct")
    ],
    transformation_ctx="case_logs_resolveChoice",
)
# case_logs.printSchema()
case_logs_temp.printSchema()
The printSchema output looks exactly like the one above.
Any ideas how to resolve the issue?
The following code works:
case_logs_temp = case_logs.resolveChoice(
    specs=[
        ("logs[].target.details.reviews.generalReason", "project:struct")
    ],
    transformation_ctx="case_logs_resolveChoice",
)
Thanks to the question "In AWS Glue, how do I apply resolveChoice to a struct element within an array in a DynamicFrame?"

Flutter Dependency Audit

So I have a fully built app that uses a few plugins. When the app is compiled for either iOS or Android, I would like to audit and list which external libraries belong to which plugin.
I noticed some undesired libraries in my builds (the specific libraries do not matter), but tracking down which plugin pulls them in is slow and time-consuming (looking at platform code, plugin YAML files, etc.).
Is there a way to list the external dependencies related to each plugin on the console?
Thanks
In your command line, please run:
flutter pub deps
Output:
Dart SDK 2.7.0
Flutter SDK 1.12.13+hotfix.5
flutter_news 1.0.0+1
|-- build_runner 1.7.2
| |-- args...
| |-- async...
| |-- build 1.2.2
| | |-- analyzer...
| | |-- async...
| | |-- convert...
| | |-- crypto...
| | |-- glob...
| | |-- logging...
| | |-- meta...
| | '-- path...
| |-- build_config 0.4.1+1
| | |-- checked_yaml 1.0.2
| | | |-- json_annotation...
| | | |-- source_span...
| | | '-- yaml...
| | |-- json_annotation...
| | |-- meta...
| | |-- path...
| | |-- pubspec_parse...
| | '-- yaml...
| |-- build_daemon 2.1.2
| | |-- built_collection 4.3.0
| | | |-- collection...
| | | '-- quiver...
| | |-- built_value 7.0.0
| | | |-- built_collection...
| | | |-- collection...
| | | |-- fixnum 0.10.11
| | | '-- quiver...
| | |-- http_multi_server...
| | |-- logging...
...
For platform-specific audits, you really have to review each plugin you're adding (at least the third-party ones).
Android: How do I show dependencies tree in Android Studio?
Android: Look for the plugin's android/app/build.gradle file.
iOS: Look for the plugin's ios/Podfile.
More on:
https://dart.dev/tools/pub/cmd/pub-deps
Do you mind sharing your current pubspec.yaml file so we can help further?

Scala subproject getting code coverage for outer project

I have a project structure similar to the diagram below. The Bar project contains my test code. I was wondering if I could get code coverage for the outer project's src/main code from the Bar project code?
|-- Bar
| |-- build.sbt
| +-- src
| |-- main
| | |-- java
| | |-- resources
| | +-- scala
| | +-- Bar.scala
| +-- test
| |-- java
| +-- resources
|-- build.sbt
|-- project
| |-- Build.scala
|
+-- src
|-- main
| |-- java
| |-- resources
| +-- scala
| +-- Hello.scala
+-- test
|-- java
|-- resources
+-- scala
+-- HelloTest.scala
If the Bar project relies on methods in the root src/main, your Bar tests will cover both.
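As an illustration only (not taken from the post), a simplified multi-project build.sbt together with the sbt-scoverage plugin can report that coverage; the project names and plugin version here are assumptions:

// project/plugins.sbt (assumed plugin version)
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "2.0.9")

// build.sbt at the repository root (simplified: both projects declared here)
lazy val root = (project in file("."))
  .settings(name := "root")

// Bar depends on root, so Bar's tests can exercise code in the root src/main
lazy val bar = (project in file("Bar"))
  .dependsOn(root)
  .settings(name := "Bar")

With that in place, something like sbt clean coverage bar/test coverageReport coverageAggregate produces per-project reports plus an aggregated one that includes the root classes hit by Bar's tests.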

Spark Data frame throwing error when trying to query nested column

I have a DataFrame with the schema below. It is basically an XML file which I have converted to a DataFrame for further processing. I am trying to extract the _Date column, but it looks like some type mismatch is happening.
df1.printSchema
|-- PlayWeek: struct (nullable = true)
| |-- TicketSales: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- PlayDate: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- BoxOfficeDetail: array (nullable = true)
| | | | | | |-- element: struct (containsNull = true)
| | | | | | | |-- VisualFormatCd: struct (nullable = true)
| | | | | | | | |-- Code: struct (nullable = true)
| | | | | | | | | |-- _SequenceId: long (nullable = true)
| | | | | | | | | |-- _VALUE: double (nullable = true)
| | | | | | | |-- _SessionTypeCd: string (nullable = true)
| | | | | | | |-- _TicketPrice: double (nullable = true)
| | | | | | | |-- _TicketQuantity: long (nullable = true)
| | | | | | | |-- _TicketTax: double (nullable = true)
| | | | | | | |-- _TicketTypeCd: string (nullable = true)
| | | | | |-- _Date: string (nullable = true)
| | | |-- _FilmId: long (nullable = true)
| | | |-- _Screen: long (nullable = true)
| | | |-- _TheatreId: long (nullable = true)
| |-- _BusinessEndDate: string (nullable = true)
| |-- _BusinessStartDate: string (nullable = true)
I need to extract the _Date column, but it is throwing the error below:
scala> df1.select(df1.col("PlayWeek.TicketSales.PlayDate._Date")).show()
org.apache.spark.sql.AnalysisException: cannot resolve 'PlayWeek.TicketSales.PlayDate[_Date]' due to data type mismatch: argument 2 requires integral type, however, '_Date' is of string type.;
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:65)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:57)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:334)
Any help would be appreciated.
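For context, in the schema above PlayDate is an array nested inside the TicketSales array, so Spark ends up treating the trailing ._Date access as an array index, which is why the error asks for an integral type. A sketch of one common workaround (not from the original post) is to explode the arrays before selecting the field; column names follow the schema above:

import org.apache.spark.sql.functions.{col, explode}

// Flatten the two nested arrays, then select the string field.
val dates = df1
  .select(explode(col("PlayWeek.TicketSales")).as("sale"))   // one row per TicketSales element
  .select(explode(col("sale.PlayDate")).as("playDate"))      // one row per PlayDate element
  .select(col("playDate._Date"))

dates.show()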

Org-mode: Verbatim Environments

Say I would like to have some text in a verbatim environment in org-mode where table shortcuts are disabled.
For example, consider the following text:
|-- 05102013
| |-- 1826
| |-- 6500
| |-- 6501
| |-- 6502
| |-- 6503
| `-- readme
If I put it within an EXAMPLE block:
#+BEGIN_EXAMPLE
|-- 05102013
| |-- 1826
| |-- 6500
| |-- 6501
| |-- 6502
| |-- 6503
| `-- readme
#+END_EXAMPLE
and I accidentally press <TAB> on any line in the text above, org-mode automatically re-organizes the text to make it look like a table:
|------------+---------|
| | -- 1826 |
| | -- 6500 |
| | -- 6501 |
| | -- 6502 |
| | -- 6503 |
| `-- readme | |
which I don't want. Does org-mode provide any environments or blocks in which the automatic table-creation mechanism is disabled?
You can wrap your text in a source block like this:
#+begin_src text
|-- 05102013
| |-- 1826
| |-- 6500
| |-- 6501
| |-- 6502
| |-- 6503
| `-- readme
#+end_src
TAB inside the block will not reformat your text as a table, but will insert spaces to the next tab stop.
If this still annoys you, you may try c instead of text, where TAB will try (and fail) to auto-indent instead of adding spaces.
I was going to propose the same thing as Juancho, except that the specified language would be "fundamental" (instead of "text"), so (almost) nothing would happen.
You can use what either Juancho or fniessen suggests; however, you can also use example environments if you use C-c ' first to edit the content of the block rather than editing it directly within the org buffer. Example environments are opened as fundamental-mode buffers as well.