I am trying to use the exams package to create my Moodle exams. I want to create a cloze question with three numeric and one string sub-questions, but I am having problems with exams2moodle().
Here is a simplification of my code:
```{r data generation, echo = FALSE, results = "hide"}
## DATA GENERATION
options(scipen = 999)
# Here in my real version I generated the data and solutions, but I simplified the code for better understanding
cambio_delta <- 20.1
r2 <- 0.97
y0_1 <- 19.56
sol_str <- "Not possible"
```
Question
========
Here goes the question speech
Answerlist
----------
* Question 1 (this is numeric)
* Question 2 (this is numeric)
* Question 3 (this is numeric)
* Question 4 (this is STRING; the answer is supposed to be "Not possible")
Meta-information
================
extype: cloze
exclozetype: num|num|num|string
exsolution: `r 100*r2`|`r cambio_delta`|`r y0_1`|`r sol_str`
extol: 0.05|0.05|0.05
exname: regresion
When I knit this R/Markdown file it works well, but not with exams2moodle():
exams2moodle("regresion.Rmd", n = 8, name = "Exam reg")
I get the error message:
Error in split.default(solutionlist, gr) :
first argument must be a vector
I would appreciate any suggestions! Thank you!
I put the R/Markdown exercise into a file regresion.Rmd and then ran your code with both exams 2.3-6 (the current CRAN release version at the time of writing) and 2.4-0 (the current R-Forge devel version). Everything worked fine without error and the exercises worked as intended after import into Moodle.
I suggest that you update your version of the exams package and, if necessary, R itself. Then you should be fine.
Related
I have solutions to a cloze question, sol4=FALSE and sol5=TRUE, but when I export the exercise to Moodle with exsolution: `r mchoice2moodle(sol4)`|`r mchoice2moodle(sol5)`, both solutions come out TRUE (corresponding to c and d)! Do you know if this is some kind of Moodle bug, or what am I doing wrong?
You cannot represent logical questions/answers by multiple-choice questions of length 1 in Moodle. In R/exams, mchoice questions are allowed to have this format and hence no error is generated. But in Moodle, the corresponding MULTIRESPONSE type needs to have at least two choices, and at least one of these needs to be true.
Therefore, instead of using a multiple-choice question of length 1, I would use a single-choice question of length 2 like this:
Question
========
The p-value is: ##ANSWER1##
The null hypothesis is: ##ANSWER2##
Answerlist
----------
*
* not rejected
* rejected
Meta-information
================
exname: Hypothesis test
extype: cloze
exclozetype: num|schoice
exsolution: 0.123|10
Then you get a simple drop-down menu with both choices rather than just a single check box. This also has the advantage that you can clearly distinguish between not answering at all and answering "not true".
I am working with Tensorflow Extended, preprocessing data, and among this data are date values (e.g. values of the form 16-04-2019). I need to apply some preprocessing to these, like taking the difference between two dates and extracting the day, month and year from them.
For example, I could need to have the difference in days between 01-04-2019 and 16-04-2019, but this difference could also span days, months or years.
Now, just using Python scripts this is easy to do, but I am wondering if it is also possible to do this with Tensorflow? It's important for my use case to do this within Tensorflow, because the transform needs to be done in the graph format so that I can serve the model with the transformations inside the pipeline.
I am using Tensorflow 1.13.1, Tensorflow Extended and Python 2.7 for this.
Posting from a similar issue on the tft GitHub.
Here's a way to do it:
import tensorflow_addons as tfa
import tensorflow as tf

@tf.function(experimental_follow_type_hints=True)
def fn_seconds_since_1970(date_time: tf.string, date_format: str = "%Y-%m-%d %H:%M:%S %Z"):
    seconds_since_1970 = tfa.text.parse_time(date_time, date_format, output_unit='SECOND')
    seconds_since_1970 = tf.cast(seconds_since_1970, dtype=tf.int64)
    return seconds_since_1970
string_date_tensor = tf.constant("2022-04-01 11:12:13 UTC")
seconds_since_1970 = fn_seconds_since_1970(string_date_tensor)
seconds_in_hour, hours_in_day = tf.constant(3600, dtype=tf.int64), tf.constant(24, dtype=tf.int64)
hours_since_1970 = seconds_since_1970 / seconds_in_hour
hours_since_1970 = tf.cast(hours_since_1970, tf.int64)
hour_of_day = hours_since_1970 % hours_in_day
days_since_1970 = seconds_since_1970 / (seconds_in_hour * hours_in_day)
days_since_1970 = tf.cast(days_since_1970, tf.int64)
day_of_week = (days_since_1970 + 4) % 7 #Jan 1st 1970 was a Thursday, a 4, Sunday is a 0
print(f"On {string_date_tensor.numpy().decode('utf-8')}, {seconds_since_1970} seconds had elapsed since 1970.")
My two cents on the broader underlying issue: the question here is computing time differences, and we want to do these computations on tensors. The first question is then "What are the units of these tensors?", which is a question of granularity. The next question is "What data types are involved?": we likely start with a string and end with a numeric. And then: is there a "native" TensorFlow function that can do this? Enter TensorFlow Addons!
Just like we try to optimize training by doing everything as tensor operations within the graph, we also need to optimize "getting to the graph". I have seen how datetime handling works with Python functions here, and I would do everything I could to avoid going into Python-function land: the code becomes complex and performance suffers as well. It's a lose-lose in my opinion.
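For contrast, a hedged sketch of the tf.py_function escape hatch I would avoid (the function names and the date format here are made up for illustration); it runs, but the Python body is opaque to graph optimizations and serialization:

```python
import datetime
import tensorflow as tf

def _py_parse(s):
    # Plain-Python parsing: invisible to TensorFlow's graph tooling.
    return datetime.datetime.strptime(s.numpy().decode(), "%d-%m-%Y").toordinal()

def ordinal_from_string(t):
    return tf.py_function(_py_parse, [t], Tout=tf.int64)

print(ordinal_from_string(tf.constant("16-04-2019")))  # scalar int64 tensor
```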
PS - This op is not yet implemented on Windows, as per this; maybe because it only returns Unix timestamps :)
I had a similar problem. The issue is caused by an if-check within TFX that doesn't take date types into account. As far as I've been able to figure out, there are two options:
* Preprocess the date column and cast it to an int field (e.g. by calling toordinal() on each element) before reading it into TFX.
* Edit the TFX function that checks types so that it accounts for date-like types and casts them to ordinals on the fly.
You can navigate to venv/lib/python3.7/site-packages/tfx/components/example_gen/utils.py and look for the function dict_to_example. You can add a datetime check there like so:
def dict_to_example(instance: Dict[Text, Any]) -> tf.train.Example:
  """Converts dict to tf example."""
  feature = {}
  for key, value in instance.items():
    # TODO(jyzhao): support more types.
    if isinstance(value, datetime.datetime):  # <---- Check here
      value = value.toordinal()
    if value is None:
      feature[key] = tf.train.Feature()
    ...
value will become an int, and the int will be handled and cast to a Tensorflow type later on in the function.
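To make option 1 above concrete, a small sketch of what toordinal() gives you (the date is just an example):

```python
import datetime

d = datetime.datetime(2019, 4, 16)
ordinal = d.toordinal()  # 737165, the proleptic Gregorian ordinal, a plain int
# Lossless at day granularity:
assert datetime.datetime.fromordinal(ordinal) == d
```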
Probably a duplicate of Ashley's post (but I can't comment yet ;)).
I have the same issue when trying to add a column to a sub-selection/sample of my initial FITS_rec (based on numpy's recarray): all the rows reappear (AND the filling of this new column doesn't seem to be respected...). The "hdu_sliced._get_raw_data()" workaround proposed by Vlas Sokolov works very well for me, but I was wondering:
1) What are "the better ways" suggested by Iguananaut? I certainly need someone to just google it for me; the newbie in me is feeling stuck :$ (Staying in a FITS_rec would be required.)
2) Is that expected behaviour? Meaning, do we really want to work on a "masked array" which would be a copy of our original array? What worries me the most is the "collapse" of the values in the new computed column. See below:
# A nice FITS_rec
a1 = np.array(['NGC1001', 'NGC1002', 'NGC1003'])
a2 = np.array([11.1, 12.3, 15.2])
col1 = fits.Column(name='target', format='20A', array=a1)
col2 = fits.Column(name='V_mag', format='E', array=a2)
cols = fits.ColDefs([col1, col2])
hdu = fits.BinTableHDU.from_columns(cols)
ori_rec=hdu.data
ori_rec
FITS_rec([('NGC1001', 11.1), ('NGC1002', 12.3), ('NGC1003', 15.2)],
         dtype=(numpy.record, [('target', 'S20'), ('V_mag', '<f4')]))
# Sub-selection
bug=ori_rec[ori_rec["V_mag"]>12.]
bug
FITS_rec([('NGC1002', 12.3), ('NGC1003', 15.2)],
         dtype=(numpy.record, [('target', 'S20'), ('V_mag', '<f4')]))
So far so good...
# Let's add a new column
col0 = bug.columns
col1 = fits.ColDefs([fits.Column(name='new', format='D', array=bug.field("V_mag")+1.)])
newbug = fits.BinTableHDU.from_columns(col0 + col1).data
newbug
FITS_rec([('NGC1001', 11.1, 13.30000019), ('NGC1002', 12.3, 16.20000076),
          ('NGC1003', 15.2, 0.        )],
         dtype=(numpy.record, [('target', 'S20'), ('V_mag', '<f4'), ('new', '<f8')]))
...AND... the values of the new column for NGC1002 and NGC1003 are correct, but they land in the rows of NGC1001 and NGC1002 respectively... :|
Any enlightenment will be welcomed :)
This is a confusing problem, and it stems from the fact that there are many layers of legacy classes and data structures in astropy.io.fits (dating back to earlier versions of PyFITS). For example, you can see in your example that hdu.data is a FITS_rec object, which is like a Numpy recarray (itself a soft-deprecated legacy class), but it also has a .columns attribute (as you've noted):
>>> bug.columns
ColDefs(
name = 'target'; format = '20A'
name = 'V_mag'; format = 'E'
)
This in turn actually holds references back to the original arrays from which you described the columns. For example:
>>> bug.columns['target'].array
chararray(['NGC1001', 'NGC1002', 'NGC1003'],
dtype='|S20')
You can see here that even though bug is a "slice" of your original table, the arrays referenced through bug.columns still contain the original, unsliced array data. So when you do something like in your original post
>>> col0 = bug.columns
>>> col1 = fits.ColDefs([fits.Column(name='new',format='D',array=bug.field("V_mag")+1.)])
it's doing its best to figure out the intent, but col0 no longer has any idea that bug is a slice of the original table; it only has the original "coldefs", with the full columns, to rely on.
Most of these classes, including FITS_rec, Column, and especially ColDefs almost never need to be used directly anymore. Unfortunately not all of the documentation has been updated to reflect this fact, and there are a lot of older tutorials and example code that show usage of these classes. Nobody with the requisite expertise has been able to take the time to update the docs and clarify this point.
On occasion Column is useful if you already have columnar data with each column in a separate array, and you want to build a table from it and give some specific FITS attributes to the table columns. But I have redesigned much of the API so that you can take native Python data structures like Numpy arrays and save them to FITS files without worrying about the details of how FITS is implemented or annoying things like FITS data format codes in many cases.
This work is slightly incomplete, because if you want to define a FITS table from some columnar arrays, it seems you still need to use the Column class and specify a FITS format at a minimum (but you never need to use ColDefs directly):
>>> hdu = fits.BinTableHDU.from_columns([fits.Column(name='target', format='20A', array=a1), fits.Column(name='V_mag', format='E', array=a2)])
>>> hdu.data
FITS_rec([('NGC1001', 11.1), ('NGC1002', 12.3), ('NGC1003', 15.2)],
dtype=(numpy.record, [('target', 'S20'), ('V_mag', '<f4')]))
However, you can also work with Numpy structured arrays directly, and I usually find that simpler personally, as it allows you to ignore most FITS-isms and just focus on your data, for those cases where it's not important to finely tweak the FITS-specific stuff. For example, to define a structured array for your data, there are several ways to go about that, but you might try:
>>> nrows = 3
>>> data = np.empty(nrows, dtype=[('target', 'S20'), ('V_mag', np.float32)])
>>> data['target'] = a1
>>> data['V_mag'] = a2
>>> data
array([('NGC1001', 11.100000381469727), ('NGC1002', 12.300000190734863),
('NGC1003', 15.199999809265137)],
dtype=[('target', 'S20'), ('V_mag', '<f4')])
and then you can instantiate a BinTableHDU directly from this array:
>>> hdu = fits.BinTableHDU(data)
>>> hdu.data
FITS_rec([('NGC1001', 11.1), ('NGC1002', 12.3), ('NGC1003', 15.2)],
dtype=(numpy.record, [('target', 'S20'), ('V_mag', '<f4')]))
>>> hdu.header
XTENSION= 'BINTABLE' / binary table extension
BITPIX = 8 / array data type
NAXIS = 2 / number of array dimensions
NAXIS1 = 24 / length of dimension 1
NAXIS2 = 3 / length of dimension 2
PCOUNT = 0 / number of group parameters
GCOUNT = 1 / number of groups
TFIELDS = 2 / number of table fields
TTYPE1 = 'target '
TFORM1 = '20A '
TTYPE2 = 'V_mag '
TFORM2 = 'E '
Likewise, when it comes to things like masking, slicing, and adding new columns, working directly with the native Numpy data structures is best.
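For the original problem, a minimal sketch of that native-Numpy route (the copy loop and the variable names are my own, not an astropy API):

```python
import numpy as np
from astropy.io import fits

data = np.array([('NGC1001', 11.1), ('NGC1002', 12.3), ('NGC1003', 15.2)],
                dtype=[('target', 'S20'), ('V_mag', np.float32)])
subset = data[data['V_mag'] > 12.0]  # boolean indexing returns a real copy

# Build a new array with the extra column; no stale full-length references.
out = np.empty(len(subset), dtype=subset.dtype.descr + [('new', np.float64)])
for name in subset.dtype.names:
    out[name] = subset[name]
out['new'] = subset['V_mag'] + 1.0   # stays aligned with the selected rows

hdu = fits.BinTableHDU(out)
```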
Or, as suggested in the answers to the other question, use the Astropy Table API and don't mess with the low-level FITS stuff at all if you can help it. Because, as I discussed, it contains several layers of legacy interfaces that make things confusing (and that should probably be cleaned up in the long term, but that's hard to do because code that uses them in one way or another is pervasive). The Table API was designed from the ground up to make table manipulations, including things like masking rows and adding columns, relatively easy. The old PyFITS APIs never quite worked well for many simple cases.
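And a similarly hedged sketch of the Table route for the same task (the out.fits filename is illustrative):

```python
import numpy as np
from astropy.table import Table

t = Table({'target': ['NGC1001', 'NGC1002', 'NGC1003'],
           'V_mag': np.float32([11.1, 12.3, 15.2])})
subset = t[t['V_mag'] > 12.0]            # an independent copy, not a view
subset['new'] = subset['V_mag'] + 1.0    # aligned with the selected rows
subset.write('out.fits', overwrite=True)
```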
I hope this answer was edifying; I know it's maybe a bit long and confusing. If there is anything specific I can clear up, let me know.
I come from PHP/JS/AS3 and that kind of language. Now I'm learning Basic for LibreOffice, and I'm struggling to find something similar to the associative arrays I used to use in those other languages.
What I'm trying to do is to have this kind structure:
2016 => October => afilename.csv
2016 => April => anotherfilename.csv
with the year as the main key, then the month, and then some data.
The more I try to find information, the more confused I get, so if someone could tell me a little bit about how to organise my data, I would be very pleased.
Thanks!
As @Chrono Kitsune said, Python and Java have such features but Basic does not. Here is a Python-UNO example for LibreOffice Writer:
def dict_example():
    files_by_year = {
        2016: {'October': 'afilename.csv',
               'November': 'bfilename.csv'},
        2017: {'April': 'anotherfilename.csv'},
    }
    doc = XSCRIPTCONTEXT.getDocument()
    oVC = doc.getCurrentController().getViewCursor()
    for year in files_by_year:
        for month in files_by_year[year]:
            filename = files_by_year[year][month]
            oVC.getText().insertString(
                oVC, "%s %d: %s\n" % (month, year, filename), False)

g_exportedScripts = dict_example,
Create a file with the code above using a text editor such as Notepad or GEdit. Then place it here.
To run it, open Writer and go to Tools -> Macros -> Run Macro, and find the file under My Macros.
I'm not familiar with LibreOffice (or OpenOffice.org) Basic or VBA, but I haven't found anything in the documentation about any sort of associative array, hash, or whatever else someone might call it.
However, many modern BASIC dialects allow you to define your own type as a series of fields. Then it's just a matter of using something like
Dim SomeArray({count|range}) As New MyType
I think that's as close as you'll get without leveraging outside libraries. Maybe the Python-UNO bridge would help, since Python has such a feature (dictionaries), but I don't know for certain. I also don't know how it would impact performance. You might prefer Java instead of Python for interfacing with UNO, and that's okay too: there's the java.util.HashMap type. Sorry I can't help more, but the important thing to remember is that any BASIC code tends to live up to the meaning of the word "basic" in English without external assistance.
This question was asked a long time ago, but the existing answers are only half correct.
It is true that LibreOffice Basic does not have a native associative array type. But the LibreOffice API provides services. The com.sun.star.container.EnumerableMap service will meet your needs.
Here's an example:
' Create the map
map = com.sun.star.container.EnumerableMap.create("long", "any")

' The values
values = Array( _
    Array(2016, "October", "afilename.csv"), _
    Array(2016, "April", "anotherfilename.csv") _
)

' Fill the map
For i = LBound(values) To UBound(values)
    value = values(i)
    theYear = value(0)
    theMonth = value(1)
    theFile = value(2)
    If map.containsKey(theYear) Then
        map2 = map.get(theYear)
    Else
        map2 = com.sun.star.container.EnumerableMap.create("string", "string")
        map.put(theYear, map2)
    End If
    map2.put(theMonth, theFile)
Next

' Access an element
map.get(2016).get("April") ' anotherfilename.csv
As you see, the methods are similar to what you can find in more usual languages.
Beware: if you run into an IllegalTypeException, you might have to use CreateUNOValue("<type>", ...) to cast a value into the declared type. (See this very old issue https://bz.apache.org/ooo/show_bug.cgi?id=121192 for an example.)
I am using the tm and wordcloud packages in R 2.15.1.
I am trying to make a word cloud from a DTM. Here is the code:
library(wordcloud)
thedtmsparse = inspect(sparse)
trymatrix = t(thedtmsparse)
colnames(trymatrix) = c()
comparison.cloud(trymatrix, max.words=15, random.order=FALSE)
I get the following error from the last command:
Error in strwidth(words[i], cex = size[i], ...) : invalid 'cex' value
I have used the same code on another DTM where it worked fine and I got the word cloud.
Can somebody tell me a fix for the error?
Ravi
You haven't provided reproducible code (probably a big reason no one answered your question), so I can only venture a guess at what the problem might be.
I faced this same error, so I'll share my experience. The problem was that I had NAs instead of 0s in my term-document matrix. Simply replacing the NAs with 0s fixed the problem.