Is it possible to control the lengths of answer boxes in Moodle from R/exams? The image below shows that the boxes have different lengths, which probably correspond to those of the hidden answers. My answers contain a mixture of numeric and string answers. The boxes for the string answers appear to be smaller than those of the numeric answers. I would like to standardise these boxes so that they all have equal lengths. Thank you!
Short answer:
Good question! Yes, it is possible: set the extra metainformation tag numwidth to TRUE, and all cells are adjusted to the same width. In R/Markdown exercises you just add
exextra[numwidth,logical]: TRUE
And in R/LaTeX exercises:
\exextra[numwidth,logical]{TRUE}
A worked example is provided at: http://www.R-exams.org/templates/fourfold2/.
Details:
This used to be an undocumented feature for numeric sub-items in cloze exercises but has been officially documented since R/exams 2.4-0, where the analogous stringwidth specification was also added.
In addition to the logical specification, you can also use a numeric value for the maximum width (e.g., exextra[numwidth,numeric]: 8) or a character value of the desired width (e.g., exextra[numwidth,character]: 99999999).
Example:
A simple demo R/Markdown exercise using a numeric width specification with a mix of numeric and string questions is:
Question
========
List the first or first six numbers and letters, respectively.
| | Numbers | Letters |
|:--------|:------------|:------------|
| First | ##ANSWER1## | ##ANSWER3## |
| First 6 | ##ANSWER2## | ##ANSWER4## |
Answerlist
----------
*
*
*
*
Meta-information
================
exname: fixedwidth
extype: cloze
exclozetype: num|num|string|string
exsolution: 1|123456|A|ABCDEF
exextra[numwidth,numeric]: 9
exextra[stringwidth,numeric]: 9
Importing the XML output from exams2moodle("fixedwidth.Rmd") into Moodle yields blank cells with equal width:
Only when providing feedback after the responses have been filled in does Moodle still render the cells somewhat differently. But I do not think this can be controlled through modifications in the Moodle XML code.
Please, I am working on a PoC for real-time person identification, and one of its critical aspects is to support both minor misspellings and phonetic variations of first, middle, and last names, like HarinGton == HarrinBton or RaphEAl == RafAEl. It's working for longer names, but it's a bit more imprecise for names like Lee and John.
I am using Double Metaphone through dmetaphone() and dmetaphone_alt() in PostgreSQL 13.3 (Supabase.io). Although I appreciate Double Metaphone, it produces a (too?) short string as its outcome. metaphone() has parameters to make the resulting phonetic representation longer; I investigated dmetaphone() and couldn't find anything other than the default function.
Is there a way of making dmetaphone() and dmetaphone_alt() return a longer phonetic representation similar to metaphone()'s, but with an ALT variation?
Any help would be much appreciated.
Thanks
Looking at the Postgres docs for these functions, you don't have parametric control over the length of the encoded string for Double Metaphone. In the case of single Metaphone, you can only truncate the output string:
max_output_length sets the maximum length of the output metaphone code; if longer, the output is truncated to this length.
However, you may get much better results by applying trigram similarity or Levenshtein distance to the encoded output of either metaphone method; this can be a more powerful way to handle phonetic permutations.
Example
Consider all the spelling permutations possible for the artist Cyndi Lauper. Using Double Metaphone with trigram similarity, we can achieve 100% similarity between the incorrect string cindy lorper and the correct spelling:
SELECT similarity(dmetaphone('cindy lorper'), dmetaphone('cyndi lauper'));
yields: similarity real: 1 (i.e., 100% similarity)
This means the encodings are identical for both input strings under Double Metaphone. Under single Metaphone, they're slightly different. All of the following yield SNTLRPR:
SELECT metaphone('cyndy lorper',10);
SELECT metaphone('sinday lorper', 10);
SELECT metaphone('cinday laurper', 10);
whereas
SELECT metaphone('cyndi lauper',10);
yields: SNTLPR, which is only one character away from SNTLRPR.
You can also use Levenshtein distance, which gives you a filterable parameter to work with:
SELECT levenshtein(metaphone('sinday lorper', 10), metaphone('cyndi lauper', 10));
yields: levenshtein integer: 1
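For readers without a database handy, the same distance check can be sketched in plain Python. This is an illustration of the edit-distance idea only, not of Postgres' implementation:

```python
# Minimal Levenshtein edit distance (pure Python), illustrating the
# "filterable parameter" idea used with levenshtein() above.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Comparing the Metaphone encodings from the example above:
print(levenshtein("SNTLRPR", "SNTLPR"))  # → 1
```

Any distance of 1 or 2 between encodings is usually a good candidate match, and the threshold is tunable per use case.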
It's working for longer names, but it's a bit more imprecise for names
like Lee and John.
Without a more complete reprex, it's a bit difficult to see exactly what you're having trouble with.
SELECT similarity(dmetaphone('lee'), dmetaphone('leigh'));
SELECT similarity(dmetaphone('jon'), dmetaphone('john'));
both yield: similarity real: 1 (i.e., 100% similarity)
Edit: here's an easy-to-follow guide to fuzzy matching with Postgres.
I have a pretty complex markdown table with plenty of columns.
I want to keep my linter (in my case markdownlint) happy and keep lines within the 80-character limit. But the header data is complex, so my table looks like this:
| fooooooooooooo | baaaaaaaaar | foooooooooooo | baaaaaaaaar | fooooooooooo |
|----------------|-------------|---------------|-------------|--------------|
|1|2|3|4|5|
The rendered result of that table is what I need, and it looks OK on GitHub.
I'm not sure this is a great idea, but is there any way to split table cells across lines in the source while keeping the rendered output the same?
Something like this:
| fooooooooooooo |\
| baaaaaaaaar \
| foooooooooooo \
| baaaaaaaaar \
| fooooooooooo |
In short: No.
GitHub's spec does not provide for breaking a row across lines. Of note is the description of rows:
Each row consists of cells containing arbitrary text, in which inlines
are parsed, separated by pipes (|). A leading and trailing pipe is
also recommended for clarity of reading, and if there’s otherwise
parsing ambiguity. Spaces between pipes and cell content are trimmed.
Block-level elements cannot be inserted in a table.
Of course, while that doesn't specifically support it, it also doesn't explicitly exclude breaking a row across multiple lines. However, notice that the syntax does not offer any way (outside of a line break) to define when one row ends and another row begins (unlike the header row, which requires a "delimiter row" to divide it from the body of the table). As you cannot define the division between rows, the only way the parser can determine when one row ends and another begins is with a line break.
And then we have this issue:
The remainder of the table’s rows may vary in the number of cells. If
there are a number of cells fewer than the number of cells in the
header row, empty cells are inserted. If there are greater, the excess
is ignored:
In other words, the parser cannot count columns to determine whether the next line is a continuation of the previous row or a new row.
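The row rules quoted above (one source line per row, missing cells inserted empty, excess cells ignored) can be sketched in a few lines of Python. This is a toy illustration, not GitHub's actual parser, and it skips details such as escaped pipes:

```python
# Toy sketch of the per-line row rules: each source line is exactly one row,
# cells are split on pipes and trimmed, missing cells become empty, and any
# excess cells are ignored. (Not GitHub's parser; escaped pipes are ignored.)
def parse_row(line: str, ncols: int) -> list:
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    cells = cells[:ncols]                 # excess cells are ignored
    cells += [""] * (ncols - len(cells))  # missing cells are inserted empty
    return cells

print(parse_row("| 1 | 2 | 3 | 4 | 5 |", 5))  # → ['1', '2', '3', '4', '5']
print(parse_row("|1|2|", 5))                  # → ['1', '2', '', '', '']
```

Note that nothing in this per-line logic could tell a "continuation line" apart from a new row, which is exactly the problem described above.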
Finally, elsewhere the spec states that:
A backslash at the end of the line is a hard line break:
There are some exceptions for specific types of content, but tables are not mentioned at all in the backslash escapes section of the spec and therefore do not fit any of those exceptions. Thus, using a backslash escape at the end of the line only reinforces the fact that the line ends a row, which is the opposite of your desired effect.
So, no, there is no way to wrap a table row across multiple lines.
For comparison consider MultiMarkdown, which had support for the same table syntax long before GitHub offered it. MultiMarkdown's documentation plainly states:
Cell content must be on one line only
This behavior matches PHP Markdown Extra, which first introduced the syntax. In fact, I'm not aware of any implementation of this specific table syntax which supports any way for one row to be defined on multiple lines.
Sorry for this rather simple question; there is still too little documentation on the usage of Microsoft's open-source AI library CNTK.
I keep seeing people set the reader's features start to 1 while setting the labels start to 0. But shouldn't both of them always be 0, since information in computer science always starts from the zero point? Wouldn't we lose one piece of information this way?
Example from CIFAR10 02_BatchNormConv:
features=[
#dimension = 3 (rgb) * 32 (width) * 32(length)
dim=3072
start=1
]
labels=[
dim=1
start=0
labelDim=10
labelMappingFile=$DataDir$/labelsmap.txt
]
Update: New format
Microsoft has recently updated this in order to get rid of this confusion and make the CNTK definition language more readable.
Instead of having to define the start of the values within the line, you can now simply define the type of data in the dataset itself:
|labels <tab separated values> | features <tab separated values> [EndOfLine/EOL]
If you want to reverse the order of features and labels, you can simply use:
|features <tab separated values> | labels <tab separated values> [EndOfLine/EOL]
You still only have to define the dim value, in order to specify the number of values you want to input.
Note: There's no | at the end of the row. EOL indicates the end of the row.
For more information visit the CNTK Wiki on this topic.
You are misunderstanding how the reader works. The UCIFastReader works on a file which contains tab-separated feature vectors. Each line in this file corresponds to one entry (an image in this case), as well as its classification.
So, dim tells the reader how many columns to read, while start tells the reader which column to start reading from.
So, if you had an image of size 2x2, with 2 labels for each, your file could be of the form <image_pixel_columns><label_columns>:
0 0 0 0 0 0
0 0 1 0 1 0
...
So the first 4 entries in the line are your features (image pixel values), and the last two are your labels. Your reader would be of the form:
reader=[
readerType=UCIFastReader
file=$DataDir$/Train.txt
randomize=None
features=[
dim=4
start=0
]
labels=[
dim=2
start=4
labelDim=10
labelMappingFile=$DataDir$/labelsmap.txt
]
]
It's just that all examples given have the label placed in the first column.
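The dim/start behaviour can be sketched in a few lines of Python. This is an illustration of the column selection only, not CNTK code:

```python
# Rough sketch of what dim/start select from one line of the
# UCIFastReader input file described above (illustration only).
def read_columns(line: str, start: int, dim: int) -> list:
    values = line.split()  # whitespace/tab-separated values
    return [float(v) for v in values[start:start + dim]]

line = "0 0 1 0 1 0"
print(read_columns(line, start=0, dim=4))  # features → [0.0, 0.0, 1.0, 0.0]
print(read_columns(line, start=4, dim=2))  # labels   → [1.0, 0.0]
```

So start is a zero-based column offset into each line, not a data value; setting features to start=1 simply means the features begin in the second column (because the label occupies the first one), and no information is lost.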
I read an interesting article about English and phonetics and would like to see if my newfound knowledge can be applied in T-SQL to generate a fuzzy result set. In one of my applications there is a table containing words, which I extracted from a word list. It is literally a one-column table:
Word
------
A
An
Apple
...
their
there
Is there a built-in function in SQL Server to select a word that sounds the same, even though it is spelled differently? (The globalization setting is en-ZA, as of the last time I checked.)
SELECT Word FROM WordTable WHERE Word = <word that sounds similar>
SoundEx()
SOUNDEX converts an alphanumeric string to a four-character code that is based on how the string sounds when spoken.
Difference()
Returns an integer value that indicates the difference between the SOUNDEX values of two character expressions.
SELECT word
     , SoundEx(word) As soundex_word
     , SoundEx(word_that_sounds_similar) As soundex_similar
     , Difference(word, word_that_sounds_similar) As how_similar
FROM wordtable
WHERE Difference(word, word_that_sounds_similar) >= 3 /* quite close! */
The value returned by Difference() indicates how similar the SOUNDEX values of the two words are.
A value of 4 indicates a strong match and a value of 0 means slim-to-no match. (Note that Difference() takes the raw character expressions and applies SOUNDEX internally, so there is no need to wrap its arguments in SoundEx().)
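For illustration, here is a rough pure-Python sketch of the classic (American) Soundex algorithm behind SOUNDEX(). SQL Server's exact output may differ in edge cases such as non-alphabetic input:

```python
# Rough sketch of classic American Soundex: keep the first letter, encode the
# rest as digits, drop vowels, collapse adjacent duplicate codes, pad to 4.
def soundex(word: str) -> str:
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    word = word.upper()
    if not word:
        return ""
    result = word[0]
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        if ch in "HW":               # H and W do not separate duplicate codes
            continue
        code = codes.get(ch, "")     # vowels get no code and reset prev
        if code and code != prev:
            result += code
        prev = code
    return (result + "000")[:4]      # pad/truncate to a four-character code

# Words from the question's table that sound the same:
print(soundex("their"), soundex("there"))  # → T600 T600
```

Because "their" and "there" map to the same four-character code, a simple equality test on the codes (or Difference() = 4 in T-SQL) matches them.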
I'm trying to add a new row to an existing table within an MS Word document. I use POI 3.10, the HWPF library. But after executing this program the file is corrupted: MS Word raises a warning message, and all the content looks strange and unformatted.
A sample is below:
import java.io.*;
import org.apache.poi.hwpf.HWPFDocument;
import org.apache.poi.hwpf.usermodel.Range;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;

InputStream fin = new FileInputStream(args[0]);
POIFSFileSystem fs = new POIFSFileSystem(fin);
HWPFDocument doc = new HWPFDocument(fs);
Range range = doc.getRange();
range.getParagraph(269).insertAfter("TEST");
doc.write(new FileOutputStream("SOME PATH"));
Maybe something additional should be updated (e.g., SI or DSI), because a new CharacterRun is added?
Word 97/2000 format DOC files are only supported in simple variants by HWPF, and tables are already critical. If Word rejects a file which you created or modified with the library, you are probably out of luck.
I developed a custom library based on Apache's HWPF codebase some time ago for a customer. That custom library added support for many features and could reliably write Word files. Doing all these things right was a lot of work. So just fixing little things is not possible. You would have to spend several man months.
Would it be an option for you to create empty rows in Word and just fill them with HWPF?
EDIT: A likely functional workaround:
Pre-fill the table with markers:
+--------+-------------+------------------------------------+
| Rev 1 | 2014-01-01 | Created document |
+--------+-------------+------------------------------------+
| Rev 2 | 2014-01-02 | Corrected flow chart |
+--------+-------------+------------------------------------+
| $REVMRK| $REVDATE## | $REVTEXT########################## |
+--------+-------------+------------------------------------+
| $REVMRK| $REVDATE## | $REVTEXT########################## |
+--------+-------------+------------------------------------+
| $REVMRK| $REVDATE## | $REVTEXT########################## |
+--------+-------------+------------------------------------+
| $REVMRK| $REVDATE## | $REVTEXT########################## |
+--------+-------------+------------------------------------+
Make sure the markers are long enough. (Adding text in HWPF so that the addresses of paragraphs change in tables may cause trouble.)
To fill a row do this:
Find the set of markers for one row
Each marker must include all subsequent #-characters
Prepare your text to be filled in for each marker
Make sure your text only uses ASCII characters (see below)
Make sure your text is not longer than the revision markers
Fill the revision marks with the new content and fill the remaining #-characters with spaces
If the software does not find a new row, someone must add new row templates in Word.
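The fixed-width fill step above can be sketched outside of HWPF as plain string handling. The marker names ($REVMRK etc.) come from the table template; the helper function itself is illustrative, not part of HWPF:

```python
# Plain-string sketch of the fixed-width marker fill described above:
# replace a marker with new text padded to exactly the marker's length,
# so the overall text length (and thus all internal addresses) is unchanged.
def fill_marker(text: str, marker: str, value: str) -> str:
    if len(value) > len(marker):
        raise ValueError("replacement must not be longer than the marker")
    if not value.isascii():
        raise ValueError("replacement must stay in the ASCII range")
    return text.replace(marker, value.ljust(len(marker)), 1)  # pad with spaces

row = "| $REVMRK| $REVDATE## | $REVTEXT########## |"
row = fill_marker(row, "$REVMRK", "Rev 3")
row = fill_marker(row, "$REVDATE##", "2014-01-03")
row = fill_marker(row, "$REVTEXT##########", "Added example")
print(row)
```

The two ValueError checks correspond to the length and ASCII restrictions explained below: the replacement text must never change the byte layout of the stored document.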
The reason for some restrictions:
no insert: A lot of things are stored with addresses into the text stream (lots of internal extra tables which contain address references over the text content). This applies to paragraph borders, character formatting, table marks, bookmarks, graphics references etc. Some things are covered by HWPF, a lot are not. If you insert text, the addresses may shift and the Word file may get corrupted.
ASCII range: Sequences of text are stored as 1 byte per character or 2 bytes per character. When doing it right, inserting a non-ASCII character in a 1-byte range requires converting that range to a 2-byte range. This does not always work well in HWPF and it leads to address shifting (see "no insert" above).
You may get away with some restrictions if your document is "simpler". For instance: No textboxes is better. No embedded drawings is better. No nested tables is better. However, usually the restrictions are so tight, that you could as well use a plain text document and give it a .doc extension.
Let me know, if you need more details.