Change "and" between Authors in Citation Style Files (csl) - ms-word

I have tried repeatedly to change the word "and" between two authors using a citation program (Papers2, Mac) and a specific citation style file (.csl), but my efforts haven't worked.
What I want to do is change the "and" to the German "und", both in the inline citation and in the reference list:
[Shaw and Riha, 2012]
Shaw, S. B., and S. J. Riha (2012), Title, J. Hydrol., 434–435(C), 46–54, doi:10.1016/j.jhydrol.2012.02.034.
Does anybody know how I can configure this delimiter word in the style file?
Thanks in advance!
Micha

Probably the best and easiest way to do this is to set the "default-locale" of this style to "de-DE" (for German), which should automatically result in the use of "und" instead of "and". See http://citationstyles.org/downloads/specification.html#the-root-element-cs-style .
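In the style's XML, that is a single attribute on the root cs:style element, something like this (the other attributes stay as they are in the original style):

<style xmlns="http://purl.org/net/xbiblio/csl" class="in-text" version="1.0" default-locale="de-DE">

With default-locale="de-DE", terms such as "and" are taken from the German locale, so both the inline citation and the bibliography should render "und".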
Which style are you using?

It is likely that the CSL file was not properly loaded into Papers. As @RintzeZelle suggested, please make sure to change both the ID and the title in your new style. To override a built-in style in Papers (i.e., one coming from the official repository), you need to keep both the title and the ID; otherwise, change both to create a separate style. In your case, it makes sense to have a separate style for the German version. I suggest using the id http://www.zotero.org/styles/american-geophysical-union-german and the title American Geophysical Union (German).
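In the style file itself, those two fields live in the cs:info section, something like:

<info>
  <title>American Geophysical Union (German)</title>
  <id>http://www.zotero.org/styles/american-geophysical-union-german</id>
  <!-- the remaining info fields can stay as in the original style -->
</info>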

Related

How to style DITA xrefs in structured FrameMaker

I'm in the middle of a conversion project from unstructured FrameMaker to DITA-compliant, structured FrameMaker. The customer wants xrefs to be underlined in the output. It seems straightforward enough, but I've been all over the documentation and all over the internet and can't find what I need. The EDD file shows that we should be using the "link.external" style, which makes perfect sense, but for the life of me I can't figure out where link.external is defined. In all my searching I've found one piece of documentation that sort of comes close to what I need, but the process for styling an xref, according to that document, is long and laborious. I just can't believe that applying a simple style to an element is so hard. Where would I look for the definition of the "link.external" style (or any other style, for that matter)? What obvious point am I missing?
You apply the style in the Cross-Reference panel using building blocks in the cross-reference format(s).
For example:
Section 2.3.4, Volcanoes.
would be styled using the x-ref format below:
Section <$paranumonly>, <Emphasis><$paratext>.
Therefore, to underline all of the x-refs, create an underline character format such as Underline, and use it in a building block within every x-ref format that you have.
<Underline>“<$paratext>” on page\ <$pagenum>
The change only applies to the x-ref, not to the following text.

Is there a way to embed an editable buffer within a buffer in Emacs?

Let's say I have a file, who.txt, and in that file is just the word "joe".
And I have another file, won.txt, with the text "won the lottery".
Now, I want to insert-file who.txt into the beginning of won.txt and the result should be:
"joe won the lottery."
Is there a way to replace joe in that sentence with sam and have it auto update who.txt?
The short answer: no.
The long answer: yes. When inserting the text from who.txt (using insert-file-contents), you could add a modification-hooks text property on it with a function that updates who.txt on modifications of the propertized region.
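A rough, untested sketch of that idea (the function name and file path are made up, and edge cases such as the region growing or the buffer being killed are ignored):

;;; -*- lexical-binding: t; -*-
(defun my-insert-who ()
  "Insert who.txt at point; write edits in that region back to the file."
  (interactive)
  (let* ((buf (current-buffer))
         (beg (point-marker))
         (len (cadr (insert-file-contents "~/who.txt")))
         (end (copy-marker (+ beg len) t)))  ; END grows with insertions
    (put-text-property
     beg end 'modification-hooks
     (list (lambda (_b _e)
             ;; The hook runs before the change is applied, so defer
             ;; the write until after the edit has happened.
             (run-at-time 0 nil
                          (lambda ()
                            (with-current-buffer buf
                              (write-region beg end "~/who.txt")))))))))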
But that doesn't feel like a good idea. What's the problem you are trying to solve? There's probably a better way.
From your question, it is not clear why you need this functionality. If you want to embed an editable file, the freex minor mode may be useful. https://code.google.com/p/emacs-freex/ You can watch the screencast and see if it fits your needs.

Translating longer texts (view and email templates) with gettext

I'm developing a multilingual PHP web application, and I've got long(-ish) texts that I need to translate with gettext. These are email templates (usually short, but still several lines) and parts of view templates (longer descriptive blocks of text). These texts would include some simple HTML (things like bold/italic for emphasis, probably a link here or there). The templates are PHP scripts whose output is captured.
The problem is that gettext seems very clumsy for handling longer texts. Longer texts generally see more changes over time than short texts, so I can either change the msgid and make sure to update it in all translations (which could be lots of work and very error-prone when the msgid is long), or I can keep the msgid unchanged and modify only the translations (which would leave misleading, outdated texts in the templates). Also, I've seen advice against including HTML in gettext strings, but avoiding it would break a single natural piece of text into lots of chunks, which would be an even bigger nightmare to translate and reassemble; I've also seen advice against unnecessarily splitting gettext strings into separate msgids.
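For illustration, here is what such an entry looks like in a .po file; the English source text itself is the key, so any edit to it orphans every existing translation (the strings below are made up):

# The msgid doubles as the source text.
msgid ""
"Our service has been available since 2005 and offers "
"<strong>free</strong> shipping on all orders."
msgstr ""
"Unser Service ist seit 2005 verfügbar und bietet "
"<strong>kostenlosen</strong> Versand für alle Bestellungen."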
The other approach I see is to ignore gettext altogether for these longer texts, and to separate those blocks in external subtemplates for each locale, and just include the one for the current locale. The disadvantage is that I'm separating the translation effort between gettext .po files and separate templates located in a completely different location.
Since this application will be used as a starting point for other applications in the future, I'm trying to come up with the best approach for the long term. I need some advice on best practices in such scenarios. How have you implemented similar cases? What turned out to work, and what turned out to be a bad idea?
Here's the workflow I used on a very heavily trafficked site that had several dozen long-ish blocks of styled textual content, translated into six languages:
Pick a text-based markup language (we used Markdown)
For long strings, use fixed message IDs like "About_page_intro_markdown" that:
describe the intent of the text
make clear that it will be interpreted in Markdown format
Have our app render "*_markdown" strings appropriately, making sure to allow only a few safe HTML tags (a sketch of this step follows the list)
Build a tool for translators that:
shows them their Markdown rendered in real time (sort of like the Markdown dingus)
makes it easy for them to see the now-authoritative base-language version of the text (since that's no longer in the msgid)
Teach translators how to use the new workflow
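For step 3, the rendering might look roughly like this in PHP (a sketch assuming the Parsedown library; the function name and the tag whitelist are illustrative):

<?php
function render_markdown_string(string $msgid): string
{
    $markdown = gettext($msgid);       // e.g. "About_page_intro_markdown"
    $html = (new Parsedown())->text($markdown);
    // Allow only a few safe tags; a real implementation would also
    // sanitize attributes (href etc.) on the tags it keeps.
    return strip_tags($html, '<p><em><strong><a><ul><ol><li>');
}

echo render_markdown_string('About_page_intro_markdown');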
Pros of this workflow:
Message IDs don't change all the time
Because translators edit in a safe, higher-level syntax, it's hard to mess up the HTML
Non-technical translators found it very easy to write in Markdown, vs. HTML
Cons of this workflow:
Having static unchanging message IDs means changes in the text need to be transmitted out of band (which we'd do anyway, as long text can raise questions about tone or emphasis)
I'm very happy with the way this workflow operated for our website, and would absolutely recommend it, and use it again. It took a couple of days to get started, but it was easy to build, train, and launch.
Hope this helps, and good luck with your project.
I just had this particular problem, and I believe I solved it in an elegant way.
The problem: we wanted to use gettext in PHP, using primary-language strings as the keys for translations. However, for large blocks of HTML (with h1, h2, p, a, etc.), I'd either have to:
Create a translation for each tag with content.
or
Put the entire block with tags in one translation.
Neither of those options appealed to me, so this is what I did:
Keep simple strings ("OK","Add","Confirm","My Awesome App") as regular Gettext .po entries, with the original text as the key
Write content (large text blocks) in markdown, and keep them in files.
Example files would be /homepage/content.md (primary / source text), /homepage/content.da-DK.md, /homepage/content.de-DE.md
Write a class that fetches the content files (for the current locale) and parses them. I then used it like:
<?=Template::getContent("homepage/content")?>
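The locale fallback inside such a class might look something like this (illustrative only; the names and paths are made up):

<?php
function resolve_content_file(string $name, string $locale): string
{
    $localized = __DIR__ . "/content/{$name}.{$locale}.md"; // e.g. content.da-DK.md
    $fallback  = __DIR__ . "/content/{$name}.md";           // primary / source text
    return is_file($localized) ? $localized : $fallback;
}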
However, what about dynamic large text? Simple. Use a templating engine. I decided on Smarty, and used it in my Template class.
I could now use templating logic... within Markdown! How awesome is that?!
Then came the tricky part..
For content to look good, at times you need to structure your HTML differently. Consider a campaign area with 3 "feature boxes" beneath it. The easy solution: Have a file for the campaign area, and one for each of the 3 boxes.
But I could do better than that.
I wrote a quick block parser, so I could write all the content in one file and then render each block separately.
Example file:
[block campaign]
Buy this now!
=============
Blaaaah... And a smarty tag: {$cool}
[/block]
[block feature 1]
Feature 1
---------
asdasd you get it..
[/block]
[block feature 2] ...
And this is how I would render them in the markup:
<?php
// At the top of the document...
// The class handles the locale. :)
$template = Template::getContent("homepage/content", [
    "cool" => "Smarty variable! AWESOME!"
]);
?>
...
<title><?=_("My Awesome App")?></title>
...
<div class="hero">
    <!-- Template data already processed! :) -->
    <?=$template->renderBlock("campaign")?>
</div>
<div class="featurebox">
    <?=$template->renderBlock("feature 1")?>
</div>
<div class="featurebox">
    <?=$template->renderBlock("feature 2")?>
</div>
I'm afraid I can't provide any source code, as this was for a company project, but I hope you get the idea.
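Still, a rough guess at what the block parser could look like (purely illustrative, not the actual company code):

<?php
// Split a content file on [block NAME] ... [/block] markers.
function parse_blocks(string $raw): array
{
    preg_match_all('/\[block\s+([^\]]+)\](.*?)\[\/block\]/s', $raw, $m, PREG_SET_ORDER);
    $blocks = [];
    foreach ($m as $match) {
        $blocks[trim($match[1])] = trim($match[2]);
    }
    return $blocks; // e.g. ['campaign' => '...', 'feature 1' => '...']
}
// renderBlock("campaign") would then run that block through Smarty
// and a Markdown parser before returning the HTML.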
gettext wasn't really designed for translating large pieces of text.
FWIW, I've included basic HTML (strong, a, etc.) in gettext strings, as I was confident our translators knew what they were doing (mostly right) and that the translations would be well tested.
I've tried the approach of breaking the text up into roughly one string per paragraph. It looks odd if there's one paragraph of English in the middle of the translated text, so where one of those strings had changed, we had to wait for the new translations before releasing a new version, which slowed us down. On the plus side, it's easy for translators to see which part of the text has changed. This approach worked well for the one application I've tried it with.
Splitting some text out into external locations also worked, but it added management overhead: rather than just a .po file or two, there was a whole bunch of other text that had to be manually compared to the English version and updated accordingly. This is doable if you remember to provide notes to your translators explaining where the English version changed and how.
I'm still not sold on either approach myself.

Adding words to spellchecker dictionary in NetBeans?

My NetBeans dictionary is kind of... illiterate? It's flagging words like "website" and the "doesn" part of doesn't. I right-clicked expecting to see your standard Add to dictionary... option but found none. I browsed the menus and also found nothing.
How do I educate my NetBeans spellchecker?
It looks like the spell checker is a relatively recent addition. There are basic instructions on how to change the dictionary here.
Adding an unknown word to the dictionary requires pressing Alt+Enter while the cursor is on the 'misspelled' word. This might take care of the most glaring omissions.
If it highlights just 'doesn', then it probably isn't aware of English-style contractions (i.e., it doesn't know that words can span across an apostrophe). Until that is fixed, I would recommend just adding 'doesn' as a separate word using the above method.

Text editor with autocomplete while typing normal text

There are a lot of text editors that support autocomplete for programming, but I want one that can autocomplete while typing normal text, as I see a lot of repetition in the words I type. Any Emacs fans who have implemented this?
Try the built-in dabbrev-expand; it's bound to M-/.
Also see Predictive Mode if you fancy the more flashy stuff.
pabbrev-mode (predictive abbreviation) works by examining previously written text. Unlike dynamic abbreviation, the text is analyzed during idle time (which enables quick lookup of potential abbreviations). Pabbrev looks at word frequency to suggest the most common expression.
From the documentation, this is what it might look like as you type the keys p, r, e, d:
p[oint]
pr[ogn]
pre[-command-hook]
pred[ictive]
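Enabling it might look like this, assuming pabbrev.el is installed and on your load-path:

(require 'pabbrev)
(global-pabbrev-mode)   ; or per mode: (add-hook 'text-mode-hook #'pabbrev-mode)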
I love hippie-expand!
The Zeus editor has a non-programming autocomplete feature (Alt+Space) that takes the current user input, searches the current file for words starting with that input, and displays them in a drop-down list.
Vim has such a feature: http://vim.wikia.com/wiki/VimTip4
I think UltraEdit has a similar feature.
You could set up auto-complete mode with a dictionary as your source. I have been very tempted to do this myself.
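A possible configuration, assuming the auto-complete package is installed (ac-source-dictionary is its built-in dictionary source):

(require 'auto-complete-config)
(ac-config-default)
;; Complete from the dictionary plus words already seen in buffers.
(setq-default ac-sources
              '(ac-source-dictionary ac-source-words-in-same-mode-buffers))
(add-hook 'text-mode-hook #'auto-complete-mode)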
predictive-mode auto-completes from a dictionary of words. It learns which words you use most often, and can also automatically learn new words as you type if desired.
It's very fast, fast enough that turning on auto-completion doesn't cause any noticeable lag when typing, even with a large dictionary.
(Disclaimer: I'm the author of predictive-mode)