Include non-Racket source in Scribble files

I'm writing documentation, and part of it includes small programs written in languages other than Racket. I can of course include them inline (using #verbatim), but I'd like to be able to at least minimally test/run them, so it'd be much more convenient to store them in separate files and just include the source.
What's the easiest way to do that? I.e., I'd like to write something like:
#verbatim|{#include-file{path/to/file.ext}}| (though of course that doesn't quite work) and have the file's contents included literally. I thought that Ben Greenman's https://gitlab.com/bengreenman/scribble-include-text would do this, but it's behaving oddly, probably because some character sequences in the file don't play well with it.

Related

Alternative for doing complex operations in text files using Notepad++?

I'm looking for a practical way to perform complex operations on text files.
From time to time I need to develop a whole application (usually in C++ or C#.Net) just to tweak textual configuration files (such as .ini, .xml, .txt, etc.).
Today I need to modify a .txt file with a well-known pattern of assigning values to variables. I need to change the value of one specific variable (which appears many times in the file) by multiplying it by a constant (I first thought of using Notepad++ + regex backreferences, but as I found in this thread: How to do a calculation using regex backreference in notepadpp?, it seems to be impossible).
Just as I was about to start developing yet another heavyweight desktop tool for this trivial task, I wondered whether this is really how everyone smarter than me does this kind of thing. I thought there might be a Notepad++ plugin that allows complex operations on text using some kind of scripting language, but I couldn't find any.
Thanks in advance.
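For what it's worth, the task described above (multiplying one variable's value by a constant throughout a file) needs only a few lines in any scripting language rather than a whole application. A minimal PHP sketch, assuming a hypothetical settings file with name = value lines and a variable called timeout:

<?php
// Multiply every "timeout = <number>" assignment by a constant factor.
// The variable name, factor, and file path are placeholders.
$factor = 2.5;
$text = file_get_contents('settings.txt');

$text = preg_replace_callback(
    '/^(timeout\s*=\s*)([0-9.]+)/m',      // match each assignment line
    function ($m) use ($factor) {
        return $m[1] . ($m[2] * $factor); // recompute the value
    },
    $text
);

file_put_contents('settings.txt', $text);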

Why was the word "refactor" chosen for changing a program or part of it?

My question is partly linguistic, but closely related to programming (of almost anything, web pages or otherwise).
I would like to know why the word "refactor" was chosen for changing a program or part of it, when another word would probably be more exact and describe the change better.
IDEs (for example NetBeans or Eclipse) use this word mainly for renaming some part of a chosen program (project), including moving a file to another place (which, from the point of view of any OS, is probably just renaming).
But renaming is not about changing a factor (a factor is precisely what does not change when something is renamed).
Closer to the meaning of the word "refactor" (as changing a factor) is manually rewriting some part, where the rewritten part has changed behaviour internally (but not in what the program does from the outside, as discussed in the topic "What is refactoring and what is only modifying code?").
The word "refactoring" is derived from mathematics, where you find an equivalent expression by applying factoring again: for example, x² + 2x + 1 and (x + 1)² denote the same value, but the factored form is often easier to work with. The equivalent expression does not change the final outcome, but it is much easier to understand, use, or reuse.
There are many refactoring techniques, and renaming is one of them. Other techniques include extract method, extract class, move method, move class, pull/push method to super-/sub-class, and many more.
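To make one of these techniques concrete, here is a hypothetical "extract method" refactoring in PHP (the function names and the discount rule are invented for illustration); the observable behaviour is identical before and after:

<?php
// Before: the discount rule is buried inline in the checkout flow.
function checkoutBefore(array $prices): float
{
    $total = array_sum($prices);
    if ($total > 100) {
        $total *= 0.9;   // 10% bulk discount
    }
    return $total;
}

// After "extract method": same behaviour, but the rule now has a name
// and can be tested and reused on its own.
function applyBulkDiscount(float $total): float
{
    return $total > 100 ? $total * 0.9 : $total;
}

function checkoutAfter(array $prices): float
{
    return applyBulkDiscount(array_sum($prices));
}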

Elisp strict mode

I am new to Emacs Lisp, and my impression is that it lacks strictness (and namespaces, and more...).
To be more comfortable with it, I need a way to make the interpreter/byte compiler complain loudly if I use a deprecated or obsolete function or variable (even better, hide them altogether). It is obvious why this is not so simple and why removing the corresponding .el packages will not work: they may be needed by some legacy code.
Also, if possible, turning off all aliases would be nice. In my opinion they exist only for backwards compatibility, which I do not need. Because setting this globally could break something, I hope there is something like "use strict" in JavaScript that can be applied to an inner body, so the effect is neatly localized.
Don't get me wrong: I think the global namespace of Elisp is like a dump, and if it could be any cleaner, why not?
To put it in one sentence: how do I make the Elisp global namespace free of obsolete and deprecated names, as slim as possible?
I don't know of an Emacs Lisp linter that is built into Emacs.
I do two things for my own code to try to ensure some level of cleanliness.
First, I make sure that byte-compiling the code doesn't give any errors or warnings. The byte compiler does a certain amount of checking.
Second, I enable lexical binding (by putting ;; -*- lexical-binding: t; -*- on the first line of the file). This lets the byte compiler detect a few more possible warnings.
This is about the best you can do with the built-in tools. If you want to go further you could write your own tree walker to perform whatever other tests you like.

Translating longer texts (view and email templates) with gettext

I'm developing a multilingual PHP web application, and I've got long(-ish) texts that I need to translate with gettext. These are email templates (usually short, but still several lines) and parts of view templates (longer descriptive blocks of text). These texts would include some simple HTML (things like bold/italic for emphasis, probably a link here or there). The templates are PHP scripts whose output is captured.
The problem is that gettext seems very clumsy for handling longer texts. Longer texts generally change more over time than short texts, which leaves me two options: I can change the msgid and make sure to update it in all translations (which could be lots of work and very error-prone when the msgid is long), or I can keep the msgid unchanged and modify only the translations (which would leave misleading outdated texts in the templates). Also, I've seen advice against including HTML in gettext strings, but avoiding it would break a single natural piece of text into lots of chunks, which would be an even bigger nightmare to translate and reassemble; then again, I've also seen advice against unnecessarily splitting gettext strings into separate msgids.
The other approach I see is to ignore gettext altogether for these longer texts, separate those blocks into external subtemplates for each locale, and just include the one for the current locale. The disadvantage is that the translation effort gets split between gettext .po files and separate templates located in a completely different place.
Since this application will be used as a starting point for other applications in the future, I'm trying to come up with the best approach for the long term. I need some advice on best practices in such scenarios. How have you implemented similar cases? What turned out to work, and what turned out to be a bad idea?
Here's the workflow I used, on a very heavily trafficked site that had several dozen long-ish blocks of styled textual content, translated into six languages:
- Pick a text-based markup language (we used Markdown).
- For long strings, use fixed message IDs like "About_page_intro_markdown" that:
  - describe the intent of the text
  - make clear that it will be interpreted in Markdown format
- Have our app render "*_markdown" strings appropriately, making sure to allow only a few safe HTML tags (a sketch of this step follows the list).
- Build a tool for translators that:
  - shows them their Markdown rendered in realtime (sort of like the Markdown dingus)
  - makes it easy for them to see the now-authoritative base-language version of the text (since that's no longer in the msgid)
- Teach translators how to use the new workflow.
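A minimal sketch of the rendering step, assuming the Parsedown library for the Markdown conversion (this helper and its tag whitelist are a reconstruction for illustration, not the project's actual code):

<?php
// Render a gettext message whose ID ends in "_markdown": translate,
// convert the Markdown to HTML, then keep only a few safe tags.
// Note: strip_tags() does not sanitize attributes; a real app would
// run a proper HTML sanitizer on top of this.
require 'vendor/autoload.php'; // provides Parsedown

function renderMarkdownMessage(string $msgid): string
{
    $markdown = gettext($msgid); // the translator-edited Markdown
    $html = (new Parsedown())->text($markdown);
    return strip_tags($html, '<p><a><strong><em><ul><ol><li>');
}

// Usage in a view template:
echo renderMarkdownMessage('About_page_intro_markdown');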
Pros of this workflow:
- Message IDs don't change all the time.
- Because translators edit in a safe, higher-level syntax, it's hard to mess up the HTML.
- Non-technical translators found it very easy to write in Markdown, vs. HTML.
Cons of this workflow:
- Having static, unchanging message IDs means changes in the text need to be transmitted out of band (which we'd do anyway, as long texts can raise questions about tone or emphasis).
I'm very happy with the way this workflow operated for our website, and would absolutely recommend it and use it again. It took a couple of days to get started, but it was easy to build, train, and launch.
Hope this helps, and good luck with your project.
I just had this particular problem, and I believe I solved it in an elegant way.
The problem: We wanted to use Gettext in PHP, with primary-language strings as translation keys. However, for large blocks of HTML (with h1, h2, p, a, etc...) I'd either have to:
Create a translation for each tag with content.
or
Put the entire block with tags in one translation.
Neither of those options appealed to me, so this is what I did:
- Keep simple strings ("OK", "Add", "Confirm", "My Awesome App") as regular Gettext .po entries, with the original text as the key.
- Write content (large text blocks) in Markdown, and keep each in its own file. Example files would be /homepage/content.md (primary / source text), /homepage/content.da-DK.md, /homepage/content.de-DE.md.
- Write a class that fetches the content file for the current locale and parses it. I then used it like:
<?=Template::getContent("homepage/content")?>
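A minimal sketch of what such a class might look like, assuming Parsedown for the Markdown step and the locale-suffix naming above (a hypothetical reconstruction; the actual project code isn't available, as noted below):

<?php
// Hypothetical sketch of the content loader described above: look for
// "<path>.<locale>.md", fall back to the source-language "<path>.md",
// and render the Markdown.
class Template
{
    public static function getContent(string $path): string
    {
        $locale = 'da-DK'; // in a real app, taken from the user's session
        $base = __DIR__ . '/content/' . $path;

        $file = "$base.$locale.md";
        if (!is_file($file)) {
            $file = "$base.md"; // fall back to the primary text
        }

        return (new Parsedown())->text(file_get_contents($file));
    }
}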
However, what about dynamic large text? Simple. Use a templating engine. I decided on Smarty, and used it in my Template class.
I could now use templating logic... within Markdown! How awesome is that?!
Then came the tricky part...
For content to look good, at times you need to structure your HTML differently. Consider a campaign area with 3 "feature boxes" beneath it. The easy solution: Have a file for the campaign area, and one for each of the 3 boxes.
But I could do better than that.
I wrote a quick block parser, so I could write all the content in one file and then render each block separately.
Example file:
[block campaign]
Buy this now!
=============
Blaaaah... And a smarty tag: {$cool}
[/block]
[block feature 1]
Feature 1
---------
asdasd you get it..
[/block]
[block feature 2] ...
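The parser itself isn't shown, but a minimal sketch of one, assuming the [block name]...[/block] delimiters used above (the Smarty and Markdown rendering steps are left out for brevity):

<?php
// Hypothetical block parser: split a content file into named blocks
// so that each block can be rendered separately.
function parseBlocks(string $source): array
{
    $blocks = [];
    preg_match_all(
        '/\[block ([^\]]+)\](.*?)\[\/block\]/s', // capture name, then body
        $source,
        $matches,
        PREG_SET_ORDER
    );
    foreach ($matches as $m) {
        $blocks[trim($m[1])] = trim($m[2]);      // e.g. "feature 1" => body
    }
    return $blocks;
}

// Each block's body would then be run through Smarty and the Markdown
// renderer before renderBlock() returns it.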
And this is how I would render them in the markup:
<?php
// At the top of the document...
// Class handles locale. :)
$template = Template::getContent("homepage/content", [
    "cool" => "Smarty variable! AWESOME!"
]);
?>
...
<title><?=_("My Awesome App")?></title>
...
<div class="hero">
    <!-- Template data already processed! :) -->
    <?=$template->renderBlock("campaign")?>
</div>
<div class="featurebox">
    <?=$template->renderBlock("feature 1")?>
</div>
<div class="featurebox">
    <?=$template->renderBlock("feature 2")?>
</div>
I'm afraid I can't provide any source code, as this was for a company project, but I hope you get the idea.
gettext wasn't really designed for translating large pieces of text.
FWIW, I've included basic HTML (strong, a, etc.) in gettext strings, as I was confident our translators knew what they were doing (mostly right) and that the translations would be well tested.
I've also tried the approach of breaking the text up into one string per paragraph. The downside: since one paragraph of English in the middle of a translated text looks odd, whenever one of those strings changed we had to wait for translations before releasing a new version, which slowed us down. On the plus side, it's easy for translators to see which part of the text has changed. This approach worked well for the one application I tried it with.
Splitting some text out into external locations also worked, but it caused management overhead: rather than just a .po file or two, there was a whole bunch of other text that had to be manually compared to the English version and updated accordingly. This is doable if you remember to provide notes to your translators explaining where the English version changed and what the difference was.
I'm still not sold on either approach myself.

How can I combine Catalyst and ngettext?

I'm trying to get my head around i18n with Catalyst. As far as I understand it, there are two ways to do translations with Perl: Maketext and Gettext. However, I have a requirement to support gettext's .po format, so basically I'm going with gettext.
Now, I've found Catalyst::Plugin::I18n and thus Locale::Maketext::Lexicon, which does what I want most of the time. However, it doesn't generate proper pluralization forms, i.e. it doesn't properly write msgid_plural and msgstr[x] into the .pot file. This probably happens because Maketext depends on its bracket notation [quant,_1...] and thus has to have the same notation in the translation.
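For reference, this is the kind of plural-aware entry the extraction should produce in the .pot file (the message text here is just an example):

msgid "One file removed"
msgid_plural "%d files removed"
msgstr[0] ""
msgstr[1] ""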
Yet another solution might be to use a direct gettext port like Locale::Messages; however, this would mean rewriting C::P::I18n.
Does anybody have a proper solution for this problem apart from rewriting several modules? Anything that combines proper gettext with all its features and Catalyst will do.
You will probably get a better answer on the mailing list:
http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
I assume you've also read this:
http://www.catalystframework.org/calendar/2006/18