Say I would like to have some text in a verbatim environment in org-mode where table shortcuts are disabled.
For example, consider the following text:
|-- 05102013
| |-- 1826
| |-- 6500
| |-- 6501
| |-- 6502
| |-- 6503
| `-- readme
If I put it within an EXAMPLE literal block:
#+BEGIN_EXAMPLE
|-- 05102013
| |-- 1826
| |-- 6500
| |-- 6501
| |-- 6502
| |-- 6503
| `-- readme
#+END_EXAMPLE
and accidentally press <TAB> on any line of the text above, org-mode automatically re-organizes the text to make it look like a table:
|------------+---------|
| | -- 1826 |
| | -- 6500 |
| | -- 6501 |
| | -- 6502 |
| | -- 6503 |
| `-- readme | |
which I don't want. Does org-mode provide any environments or blocks in which the automatic table-creation mechanism is disabled?
You can wrap your text in a source block like this:
#+begin_src text
|-- 05102013
| |-- 1826
| |-- 6500
| |-- 6501
| |-- 6502
| |-- 6503
| `-- readme
#+end_src
TAB inside the block will not reformat your text as a table; it will just insert spaces up to the next tab stop.
If this still annoys you, you may try c instead of text as the block's language, in which case TAB will try (and fail) to auto-indent instead of adding spaces.
I was going to propose the same thing as Juancho, except that the specified language would be "fundamental" (instead of "text"), so (almost) nothing would happen.
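For reference, that variant would look like the following (a minimal sketch: the same tree as above, with only the language name changed):
#+begin_src fundamental
|-- 05102013
| |-- 1826
| |-- 6500
| |-- 6501
| |-- 6502
| |-- 6503
| `-- readme
#+end_src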
You can use what either Juancho or fniessen suggests; however, you can also stick with EXAMPLE blocks if you use C-c ' first, to edit the content of the block in a dedicated buffer rather than directly within the org buffer. Example blocks are opened in fundamental-mode buffers as well.
I have a table consisting of a main_id, a sub_id and two timestamps (start and end).
+-------+---------------------+---------------------+---------------------+
|main_id|sub_id |start_timestamp |end_timestamp |
+-------+---------------------+---------------------+---------------------+
| 1| 1 | 2021/06/01 19:00 | 2021/06/01 19:10 |
| 1| 2 | 2021/06/01 19:01 | 2021/06/01 19:10 |
| 1| 3 | 2021/06/01 19:01 | 2021/06/01 19:05 |
| 1| 3 | 2021/06/01 19:07 | 2021/06/01 19:09 |
My goal is to pick all the records with the same main_id (but different sub_ids) and perform a logical AND on the timestamp columns (i.e. to find the periods during which all the sub_ids were active).
+-------+---------------------------+---------------------------+
|main_id| global_start | global_stop |
+-------+---------------------------+---------------------------+
| 1| 2021/06/01 19:01 | 2021/06/01 19:05 |
| 1| 2021/06/01 19:07 | 2021/06/01 19:09 |
Thanks!
Partial solution
This kind of logic is probably really difficult to implement in pure Spark; the built-in functions are not enough for it.
Also, the expected output has 2 lines, while a simple group by main_id would output only one line, so the underlying logic is not trivial.
I would advise you to group your data by main_id and process the groups in Python, using a UDF.
from pyspark.sql import functions as F

# Aggregate your data by main_id
df2 = (
    df.groupby("main_id", "sub_id")
    .agg(
        F.collect_list(F.struct("start_timestamp", "end_timestamp")).alias("timestamps")
    )
    .groupby("main_id")
    .agg(F.collect_list(F.struct("sub_id", "timestamps")).alias("data"))
)
df2.show(truncate=False)
+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|main_id|data |
+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|1 |[[3, [[2021/06/01 19:01, 2021/06/01 19:05], [2021/06/01 19:07, 2021/06/01 19:09]]], [1, [[2021/06/01 19:00, 2021/06/01 19:10]]], [2, [[2021/06/01 19:01, 2021/06/01 19:10]]]]|
+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
df2.printSchema()
root
|-- main_id: long (nullable = true)
|-- data: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- sub_id: long (nullable = true)
| | |-- timestamps: array (nullable = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- start_timestamp: string (nullable = true)
| | | | |-- end_timestamp: string (nullable = true)
Once this step is done, you can process the data column in Python and perform your logical AND.
@F.udf  # add the required returnType here, depending on what the function returns
def process(data):
    """
    data is a complex type (see df2.printSchema()):
    list(dict(
        "sub_id": value_of_sub_id,
        "timestamps": list(dict(
            "start_timestamp": value_of_start_timestamp,
            "end_timestamp": value_of_end_timestamp,
        ))
    ))
    """
    ...  # implement the "logical AND" here.

df2.select(
    "main_id",
    process(F.col("data")),
)
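To make the missing step more concrete, here is one possible sketch of the interval intersection (my own illustration, not part of the original answer). It assumes the timestamps compare correctly as strings (the zero-padded "yyyy/MM/dd HH:mm" format sorts lexicographically in chronological order) and that the intervals of a single sub_id do not overlap each other:
def intersect_all(data):
    """Return the intervals during which every sub_id was active."""
    # `data` is the list of (sub_id, timestamps) structs built above;
    # Spark passes structs to the UDF as Row objects, which support
    # item access by field name, just like the dicts created below.
    result = list(data[0]["timestamps"])
    for entry in data[1:]:
        merged = []
        for a in result:
            for b in entry["timestamps"]:
                # The overlap of two intervals runs from the latest
                # start to the earliest end; keep it if it is non-empty.
                start = max(a["start_timestamp"], b["start_timestamp"])
                end = min(a["end_timestamp"], b["end_timestamp"])
                if start < end:
                    merged.append({"start_timestamp": start, "end_timestamp": end})
        result = merged
    return result
Plugged into process() above (with an array-of-struct returnType on the UDF), this yields the two expected rows for main_id = 1.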
I hope this can help you or others to build a final solution.
I've run out of ideas on how to solve the following issue. A table in the Glue Data Catalog has this schema:
root
|-- _id: string
|-- _field: struct
| |-- ref: choice
| | |-- array
| | | |-- element: struct
| | | | |-- value: null
| | | | |-- key: string
| | | | |-- name: string
| | |-- struct
| | | |-- value: null
| | | |-- key: choice
| | | | |-- int
| | | | |-- string
| | | |-- name: string
If I try to resolve the ref choice using
resolved = (
    df.resolveChoice(
        specs=[("_field.ref", "cast:array")]
    )
)
I lose records.
Any ideas on how I could:
- filter the DataFrame on whether _field.ref is an array or struct
- convert struct records into an array or vice-versa
I was able to solve my own problem by using
resolved_df = ResolveChoice.apply(df, choice = "make_cols")
This will save the array values in a new ref_array column and the struct values in a new ref_struct column.
This allowed me to split the DataFrame by
from pyspark.sql.functions import col

resolved_df1 = resolved_df.filter(col("ref_array").isNotNull()).select(col("ref_array").alias("ref"))
resolved_df2 = resolved_df.filter(col("ref_struct").isNotNull()).select(col("ref_struct").alias("ref"))
After either converting the array values into individual structs (using explode()) or wrapping the struct values in an array (using array()), recombine the two frames, as sketched below.
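A minimal sketch of that recombination (my illustration, not from the original answer), assuming the struct fields on both sides line up after the make_cols split:

from pyspark.sql.functions import col, explode

# Explode the array side so each row carries a single struct,
# matching the shape of the struct side.
exploded = resolved_df1.select(explode(col("ref")).alias("ref"))

# Recombine both halves; unionByName requires matching schemas.
recombined = exploded.unionByName(resolved_df2)

Going the other way, array(col("ref")) would wrap each struct in a one-element array so the struct side matches the array side instead; either direction works as long as both halves end up with the same schema.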
So I have this fully built app that uses a few plugins. When the app is compiled for either iOS or Android, I would like to audit and list which external libraries belong to which plugin.
I noticed some undesired libraries in my builds (the specific libraries do not matter), but tracking down which plugin pulls them in is slow and time-consuming (looking at platform code, plugin yaml files, etc.).
Is there a way to list the external dependencies related to each plugin on the console?
Thanks
In your command line, please run:
flutter pub deps
Output:
Dart SDK 2.7.0
Flutter SDK 1.12.13+hotfix.5
flutter_news 1.0.0+1
|-- build_runner 1.7.2
| |-- args...
| |-- async...
| |-- build 1.2.2
| | |-- analyzer...
| | |-- async...
| | |-- convert...
| | |-- crypto...
| | |-- glob...
| | |-- logging...
| | |-- meta...
| | '-- path...
| |-- build_config 0.4.1+1
| | |-- checked_yaml 1.0.2
| | | |-- json_annotation...
| | | |-- source_span...
| | | '-- yaml...
| | |-- json_annotation...
| | |-- meta...
| | |-- path...
| | |-- pubspec_parse...
| | '-- yaml...
| |-- build_daemon 2.1.2
| | |-- built_collection 4.3.0
| | | |-- collection...
| | | '-- quiver...
| | |-- built_value 7.0.0
| | | |-- built_collection...
| | | |-- collection...
| | | |-- fixnum 0.10.11
| | | '-- quiver...
| | |-- http_multi_server...
| | |-- logging...
...
For platform-specific audits, you really have to review each plugin you're adding (at least the 3rd-party ones).
Android: How do I show dependencies tree in Android Studio?
Android: Look for the plugin's android/app/build.gradle file.
iOS: Look for the plugin's ios/Podfile.
More on:
https://dart.dev/tools/pub/cmd/pub-deps
Do you mind sharing your current pubspec.yaml file, so we can help further?
I'm a beginner learning Scala/sbt. For a multi-project build like this example:
|-- Bar
| |-- build.sbt
| +-- src
| |-- main
| | |-- java
| | |-- resources
| | | +-- config
| | | +-- app.properties
| | +-- scala
| | +-- Bar.scala
| +-- test
| |-- java
| +-- resources
|-- Foo
| |-- build.sbt
| +-- src
| |-- main
| | |-- java
| | |-- resources
| | +-- scala
| | +-- Foo.scala
| +-- test
| |-- java
| +-- resources
|-- build.sbt
|-- project
| |-- Build.scala
In Bar.scala, I tried to load a file from resources, but it's not finding it:
val resourcesPath = getClass.getResource("config/app.properties")
println(resourcesPath.getPath)
>> Exception in thread "main" java.lang.NullPointerException
I have a project structure similar to the diagram below. The Bar project contains my test code. I was wondering if I could get code coverage of the src/main code FROM the Bar project's code?
|-- Bar
| |-- build.sbt
| +-- src
| |-- main
| | |-- java
| | |-- resources
| | +-- scala
| | +-- Bar.scala
| +-- test
| |-- java
| +-- resources
|-- build.sbt
|-- project
| |-- Build.scala
|
+-- src
|-- main
| |-- java
| |-- resources
| +-- scala
| +-- Hello.scala
+-- test
|-- java
|-- resources
+-- scala
+-- HelloTest.scala
If the Bar project relies on methods in src/main, your Bar tests will cover them both.