Wyam - how do I order content in a pipeline?

I'm trying to generate a static HTML file from several .md files with Wyam, but my pipeline doesn't produce the content in the right order.
I set the OrderBy and ThenBy modules on the file attributes so that the parts of my HTML output follow the order of my input folder structure.
My input folder has the following structure:
Input
│   help.md
│
├───1_Intro
│       1_general.md
│
├───2_examples
│       1_general.md
│       2_examples.md
│
└───3_appendix
        1_glossar.md
        2_sources.md
So here is my wyam.conf file with my pipeline:
// Load Wyam Modules
#n Wyam.Html
#n Wyam.Markdown
#n Wyam.Yaml
// Setting culture
System.Globalization.CultureInfo.DefaultThreadCurrentCulture = System.Globalization.CultureInfo.CreateSpecificCulture("en-EN");
Pipelines.Add("Help",
ReadFiles("help.md"),
Append(
ReadFiles("*/{*,!help}.md"),
OrderBy((d, c) => d["SourceFileBase"]).ThenBy((d, c) => d["SourceFileName"]),
FrontMatter(Yaml()),
Markdown()
),
FrontMatter(Yaml()),
Markdown(),
Combine(),
WriteFiles(".html")
).WithProcessDocumentsOnce();
I was expecting to get an HTML file where the content of my .md files appears in this order:
1.1 General
Lorem ipsum...
2.1 About my Examples
...
2.2 Examples
...
3.1 Glossar
...
3.2 Sources
...
But after running Wyam with my pipeline I get results like this:
3.1 Glossar
Lorem ipsum...
2.1 About my Examples
...
2.2 Examples
...
3.2 Sources
...
1.1 General
...
Every time I run Wyam with my pipeline I get different results; some are in the order I wanted, but most of them look utterly random to me.
Could someone tell me what I'm doing wrong?

Related

Preserve existing code for arbitrary scalafmt settings

I'm trying to gently introduce scalafmt to a large existing codebase and I want it to make virtually no changes except for a handful of noncontroversial settings the whole team can agree on.
With some settings, like maxColumn, I can override the default of 80 with something absurd like 5000 so that nothing changes. But with other settings, like continuationIndent.callSite, I have to make choices that will modify the existing code: the setting requires a number, which would aggressively introduce changes on the first run over our codebase.
Is there anything I can do in my scalafmt config to preserve all my code except for a few specific settings?
EDIT: I will also accept suggestions of other tools that solve the same issue.
Consider project.includeFilters:
Configure which source files should be formatted in this project.
# manually include files to format.
project.includeFilters = [
  regex1
  regex2
]
For example, say we have a project structure with foo, bar, baz, etc. packages, like so:
someProject/src/main/scala/com/example/foo/*.scala
someProject/src/main/scala/com/example/bar/*.scala
someProject/src/main/scala/com/example/baz/qux/*.scala
...
Then the following .scalafmt.conf
project.includeFilters = [
  "foo/.*"
]
continuationIndent.callSite = 2
...
will format only the files in the foo package. Now we can proceed to gradually introduce formatting to the codebase, package by package:
project.includeFilters = [
  "foo/.*"
  "bar/.*"
]
continuationIndent.callSite = 2
...
or even file by file:
project.includeFilters = [
  "foo/FooA\.scala"
  "foo/FooB\.scala"
]
continuationIndent.callSite = 2
...

How to specify Scala style rules for a specific file/directory?

I have some Cucumber stepDef steps which are more than 120 characters in length, and I want to exclude all stepDef files from the Scalastyle warning.
Is there a way to exclude specific files/directories, using an XML tag, only for the FileLineLengthChecker condition?
Wrapping the entire file in the following comment filter in effect excludes the file from the FileLineLengthChecker rule:
// scalastyle:off line.size.limit
val foobar = 134
// scalastyle:on line.size.limit
line.size.limit is the ID of the FileLineLengthChecker rule.
Multiple rules can be switched off simultaneously like so:
// scalastyle:off line.size.limit
// scalastyle:off line.contains.tab
// ...
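Since the goal is to exempt whole stepDef files rather than individual lines, note that (if I remember correctly) the comment filter can also be used without a rule ID, in which case it switches off all checks. A minimal sketch of excluding an entire file this way (the class name is made up):

```scala
// Placing the filter at the very top of the file, with no rule ID, disables
// every Scalastyle check for everything that follows, so the whole file is
// effectively excluded.
// scalastyle:off

class LoginStepDefs {
  // step definitions longer than 120 characters are no longer flagged here
}

// scalastyle:on  (optional; without it the checks simply stay off until end of file)
```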

How to embed Markdown in Scala code?

I know that in Scala we have sbt plugins, like tut and sbt-site, which allow us to execute code embedded in Markdown. But how can we do the opposite? I would like to embed Markdown in Scala code as part of comments.
E.g.:
// number.sc file
// # Markdown code
// some text
1 + 1
//res0: Int = 2
And the code would then be converted to Markdown:
Here is how you add numbers:
```scala
scala> 1 + 1
res0: Int = 2
```
Does someone know if something similar already exists?
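To make the intent concrete, here is a minimal, hypothetical sketch of the kind of conversion I have in mind (the object name and the simple line-based format are just illustrations; unlike tut it does not evaluate the code, it only rearranges the file):

```scala
import scala.io.Source

// Turns "// ..." comment lines into plain Markdown text and wraps every other
// non-blank line in a ```scala fence.
object CommentsToMarkdown {
  def convert(lines: Seq[String]): String = {
    val out = new StringBuilder
    var inCode = false
    def closeFence(): Unit = if (inCode) { out.append("```\n"); inCode = false }

    lines.foreach { line =>
      if (line.startsWith("// ")) {        // comment line -> Markdown text
        closeFence()
        out.append(line.stripPrefix("// ")).append('\n')
      } else if (line.trim.nonEmpty) {     // code line -> inside the fence
        if (!inCode) { out.append("```scala\n"); inCode = true }
        out.append(line).append('\n')
      } else {
        out.append('\n')                   // keep blank lines
      }
    }
    closeFence()
    out.toString
  }

  def main(args: Array[String]): Unit =
    println(convert(Source.fromFile(args(0)).getLines().toSeq))
}
```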

Configuring ScalaDoc task in Gradle to generate aggregated documentation

I have a multi-module Scala project in Gradle. Everything works great with it, except the ScalaDoc. I would like to generate a single 'uber-scaladoc' with all of the libraries cross-linked. I'm still very new to Groovy/Gradle, so this is probably a 'me' problem. Any assistance getting this set up would be greatly appreciated.
build.gradle (in the root directory)
// ...
task doScaladoc(type: ScalaDoc) {
    subprojects.each { p ->
        // do something here? include the project's src/main/scala/*?
        // it looks like I would want to call 'include' in here to include each project's
        // source directory, but I'm not familiar enough with the Project type to get at
        // that info.
        //
        // http://www.gradle.org/docs/current/dsl/org.gradle.api.tasks.scala.ScalaDoc.html
    }
}
The goal here is to be able to just run 'gradle doScaladoc' at the command line and have the aggregated documentation show up.
Thanks in advance.

Extend multiple sources / indexes

I have many web pages that are clones of each other. They have the exact same database structure, just different data in different databases (each clone is for a different country, so everything is separated).
I would like to clean up my Sphinx config file so that I don't duplicate the same queries for every site.
I'd like to define a main source (with DB auth info) for every clone, a common source for every table I'd like to search, and then sources & indexes for every table and every clone.
But I'm not sure how exactly I should go about doing that.
I was thinking of something along these lines:
index common_index
{
    # charset_type, stopwords, etc.
}

source common_clone1
{
    # sql_host, sql_user, ...
}

source common_clone2
{
    # sql_host, sql_user, ...
}

# ...

source table1
{
    # sql_query, sql_attr_*, ...
}

source clone1_table1 : ???
{
    # ???
}

# ...

index clone1_table1 : common_index
{
    source: clone1_table1
    # path, ...
}

# ...
So you can see where I'm confused :)
I thought I could do something like this:
source clone1_table1 : table1, common_clone1 {}
but obviously it's not working.
Basically what I'm asking is; is there any way to extend two sources/indexes?
If this isn't possible I'll be "forced" to write a script that will generate my sphinx config file to ease maintenance.
Apparently this isn't possible (don't know if it's in the pipeline for the future). I'll have to resort to generating the config file with some sort of script.
I've created such a script; you can find it on GitHub: sphinx generate config php.
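For reference, the idea behind such a generator is simple: keep each table's query in one place and stamp out a source/index pair for every clone with that clone's connection settings. The linked script is PHP; below is a minimal sketch of the same idea in Scala, with made-up clone names, credentials, paths, and queries:

```scala
// Hypothetical generator: emits one source + index block per (clone, table)
// pair so each query only has to be written once.
object GenerateSphinxConf {
  case class Clone(name: String, host: String, db: String, user: String, pass: String)

  val clones = Seq(
    Clone("clone1", "localhost", "site_clone1", "sphinx", "secret"),
    Clone("clone2", "localhost", "site_clone2", "sphinx", "secret")
  )

  // One shared query per table, reused for every clone.
  val tables = Map(
    "table1" -> "SELECT id, title, body FROM table1"
  )

  def main(args: Array[String]): Unit = {
    val blocks = for {
      c <- clones
      (table, query) <- tables
    } yield
      s"""source ${c.name}_$table
         |{
         |    sql_host  = ${c.host}
         |    sql_db    = ${c.db}
         |    sql_user  = ${c.user}
         |    sql_pass  = ${c.pass}
         |    sql_query = $query
         |}
         |
         |index ${c.name}_$table : common_index
         |{
         |    source = ${c.name}_$table
         |    path   = /var/data/sphinx/${c.name}_$table
         |}
         |""".stripMargin

    // Print to stdout; redirect into sphinx.conf next to the shared blocks.
    println(blocks.mkString("\n"))
  }
}
```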