R/exams: Open-ended questions with exams2moodle

My goal is to create a question using R/exams and Moodle including a few plots generated in the Rmd exercise file. The students should describe the plots verbally and then the exercise is graded manually.
Is it possible to use exams2moodle to create such an open-ended free text question for Moodle? There is no extype for it. In the documentation the only hint is:
"In order to generate free text questions in moodle one may specify extra parameters via \exextra. Currently the following options are supported:".
I have tried to add \exextra parameters to the metainformation, but it did not change anything.

A worked example can be found in the essayreg exercise shipped within the package, see: http://www.R-exams.org/templates/essayreg/
And you are right that this is not very well documented. The reason is that we have used somewhat different exextra tags for the Moodle export and for the QTI 2.1 export. We have to improve and unify that in one of the next R/exams versions.
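For reference, the essay behaviour is switched on through exextra tags in the exercise's metainformation, roughly along the lines of the essayreg template (see the link above for the authoritative version):

    extype: string
    exsolution: nil
    exname: essayreg
    exextra[essay,logical]: TRUE
    exextra[essay_format,character]: editor
    exextra[essay_required,logical]: FALSE
    exextra[essay_fieldlines,numeric]: 5
    exextra[essay_attachments,numeric]: 1
    exextra[essay_attachmentsrequired,logical]: FALSE

Exporting such an exercise with exams2moodle() then yields an essay question that Moodle renders as a free-text editor field (optionally with file attachments) and that is graded manually.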
Also, another pointer in case it is useful to anyone reading the question: another useful strategy for asking about the interpretation of (statistical) graphics is the multiple-choice format. Let the participants judge statements about the graphic that are either approximately correct or clearly wrong. Of course, with open-ended questions you can catch more nuances, but with multiple-choice questions you can automatically assess a much larger number of participants, and participants can self-assess in practice quizzes. Examples for this are the boxplots and scatterplot exercises:
http://www.R-exams.org/templates/boxplots/
http://www.R-exams.org/templates/scatterplot/

Approach for extracting relevant text using Azure Cognitive Search

Context:
I have a set of documents in SharePoint. I have set up Azure Cognitive Search (Standard tier) with a data source (SharePoint), an index, and indexers. I have also added a semantic configuration.
Outcome:
Ask a question, and have the search find and return relevant sections from the documents. I will use these sections to feed into OpenAI to construct a cohesive result.
I would like to replicate this Microsoft demo: https://www.youtube.com/watch?v=3t3qZu1Dy1k&t=572s. It seems to me that in this demo each document's content is very small, so the documents could easily be combined and passed into OpenAI.
My experience so far:
The results return the documents and rank them, which seems OK. However, each result contains only a short 'caption' plus the full text; the caption is not necessarily related to my question and therefore cannot be used for the next step, while the full document is far too big to pass to OpenAI.
I have managed to get semantic answers; however, the question has to be phrased very precisely to get a result, and the associated text is limited.
What I would like:
I would like the search to return sub-sections of the documents where the answer to my question may be found. If that is not supported, I feel I need an entirely new approach.
Any ideas? Thanks in advance for your time.
The demo you refer to works by feeding documents to Azure Cognitive Search. A query is then formulated as a question that uses the Semantic Search functionality to return a set of potential semantic answers extracted from the content in the index.
These potential semantic answers are then fed as a prompt to OpenAI's text completion service: https://beta.openai.com/docs/guides/completion
First, you must ensure you can get good semantic answers. Inspect the content you have indexed and verify that it contains material that could semantically be an answer to the questions you test with. Good content should contain declarations of facts, that is, statements that could be used verbatim as an answer to a question. Examples:
The capital of France is Paris.
Forecast for 2022 is expected to be 22%.
The semantic functionality in Azure Search will only respond with a text section containing a potential answer to your question. If you can't get this step to work, you have to improve it, either via the semantic configuration, the choice of content, or by making sure you process your content so that the items in your index contain the relevant content in the correct properties. In short:
1. Ensure your content is indexed and mapped to properties in a sensible way.
2. Work with the semantic configuration until you get sensible results.
3. Once the previous two steps are OK, submit the semantic answers to OpenAI (a minimal sketch follows below).
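Here is a minimal sketch of that last step in Python against the plain REST APIs. The service name, index name, keys, semantic configuration name, and API version are all placeholder assumptions to replace with your own values:

    import requests

    # Assumptions: replace service, index, semantic configuration, and keys
    # with your own; the API version is a semantic-search preview version.
    SEARCH_ENDPOINT = "https://<your-service>.search.windows.net"
    SEARCH_INDEX = "<your-index>"
    SEARCH_KEY = "<your-query-key>"
    OPENAI_KEY = "<your-openai-key>"

    def get_semantic_answers(question):
        """Ask Azure Cognitive Search for extractive semantic answers."""
        url = (f"{SEARCH_ENDPOINT}/indexes/{SEARCH_INDEX}/docs/search"
               "?api-version=2021-04-30-Preview")
        body = {
            "search": question,
            "queryType": "semantic",
            "semanticConfiguration": "<your-semantic-config>",
            "queryLanguage": "en-us",
            "answers": "extractive|count-3",  # up to 3 short answer passages
            "captions": "extractive",
        }
        resp = requests.post(url, json=body, headers={"api-key": SEARCH_KEY})
        resp.raise_for_status()
        return [a["text"] for a in resp.json().get("@search.answers", [])]

    def answer_with_openai(question, snippets):
        """Feed the extracted passages to OpenAI's completion endpoint as a prompt."""
        prompt = ("Answer the question using only the context below.\n\n"
                  "Context:\n" + "\n".join(snippets) +
                  f"\n\nQuestion: {question}\nAnswer:")
        resp = requests.post(
            "https://api.openai.com/v1/completions",
            headers={"Authorization": f"Bearer {OPENAI_KEY}"},
            json={"model": "text-davinci-003", "prompt": prompt,
                  "max_tokens": 200, "temperature": 0},
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["text"].strip()

    question = "What is the forecast for 2022?"
    print(answer_with_openai(question, get_semantic_answers(question)))

The answers=extractive parameter is what makes the search return short passages (the semantic answers) instead of whole documents, which is exactly the sub-section behaviour asked for above.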
I have tested the semantic setup on two different data sets. Both were a combination of website content, PDF and Word documents, etc., and the topic and volume of content were essentially the same. From one data set I could get excellent semantic answers; the other was disappointing.
My conclusion was that the content in the good data set was formulated and structured in a way that fits the semantic scenario. The other data set often presented its logic and meaning in tables and layouts. A human reading the content on paper would understand it, but semantically it did not make as much sense.

Security of smart contract data not returned by a view function

I've been looking through some of the NEAR demos and came across the one regarding posting quizzes that can be answered for a reward.
Source code here: https://github.com/Learn-NEAR/NCD-02--riddles
Video here: https://www.youtube.com/watch?v=u4jP2a2mbiI
My question relates to how secure the answer hash is. In the current implementation, the answer hash is returned with the quizzes, but I imagine it would be better if that weren't the case. Even then, if the hash were stored on the NEAR network without being returned by any view function, how secure would that be? If the contract only allowed a certain number of guesses per account before denying additional attempts, could someone obtain the hash through some other means and then have as many chances as they want by locally hashing candidate answers with sha256 and checking whether one matches?
Thanks,
Christopher
For sure, all data on chain is public, so storing anything means sharing it with the world.
One reasonable way to handle something like this would be to store the hash but accept the raw string, then hash the submission and compare the two for a possible win.
If you choose a secure hashing algorithm, it would be nearly impossible to guess the required input string based on seeing the hash alone.
Update: it was pointed out to me that this answer is incomplete or misleading: if the set of possible answers is small, this is still a bad design, because you could just hash all the possible answers (e.g., in a multiple-choice question) and compare those hashes with the stored answer hash. A sketch of that attack follows below.
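To make that concrete, a minimal sketch (Python, with a made-up answer list; nothing here is specific to NEAR):

    import hashlib

    # On-chain state: only the SHA-256 hash of the correct answer is stored.
    stored_hash = hashlib.sha256(b"paris").hexdigest()

    # All on-chain state is public, so an attacker can read the hash even if
    # no view function returns it, then test candidates offline. A per-account
    # guess limit in the contract does not help, because none of this ever
    # touches the contract.
    candidates = ["london", "berlin", "paris", "madrid"]  # e.g., multiple choice
    for guess in candidates:
        if hashlib.sha256(guess.encode()).hexdigest() == stored_hash:
            print("recovered the answer offline:", guess)
            break

So a per-account guess limit only raises the cost of on-chain guessing; it cannot prevent offline guessing once the hash is readable, and with a small answer space the hash is as good as plaintext.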
Heads up! Everything in that GitHub org that starts with NCD is a student project submitted after just a week of learning about NEAR, so there is a huge pile of mistakes there just waiting to be refactored and commented on by experts in the community.
The projects that are presented for study all start with the prefix "sample"; those are the ones we generated to help students explore the possibilities of contracts on the NEAR platform, along with all our core contracts, Sputnik contracts, and others.
Sign up to learn more about the NEAR Certified Developer programs here: https://near.training

How to break up large document into smaller answer units on Retrieve and Rank?

I am still very new to the Retrieve and Rank and Document Conversion services, so I have been playing around with them lately.
I noticed that when I upload a large document (100+ pages), it is automatically broken up into answer units, which is great and helpful.
However, some questions are answered by just ONE short line inside a big answer unit. Is there a way to manually break down the answer units that the service has given me even further?
I heard that you can do it through JavaScript, but is there a way to do it through the UI?
I am contemplating manually breaking up the huge document into multiple smaller documents, but that could potentially lead to hundreds of them, which is probably the last option I'd resort to.
Any help or suggestions are greatly appreciated!
Thank you all!
First off, one clarification:
Retrieve and Rank does not break up your documents into answer units. That is something that the Document Conversion Service does when your conversion target is ANSWER_UNITS.
Regarding your question:
I don't fully understand exactly what you're trying to do, but if the answer units that are produced by default don't meet your requirements, you can customize different steps of the conversion process to adjust the produced answer units. Take a look at the documentation here.
Specifically, you want to make sure that the heading levels (for Word, PDF, or HTML, depending on your document type) are defined in a way that detects the start of each answer unit. Then make sure that the heading levels you defined (h1, h2, h3, etc.) are included in the selector_tags list within the answer_units section; a minimal example follows below.
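A custom configuration along those lines might look like the following (the selector tags and PDF font sizes are illustrative assumptions; the exact schema is in the Document Conversion documentation):

    {
      "conversion_target": "ANSWER_UNITS",
      "answer_units": {
        "selector_tags": ["h1", "h2", "h3", "h4"]
      },
      "pdf": {
        "heading": {
          "fonts": [
            { "level": 1, "min_size": 24 },
            { "level": 2, "min_size": 16, "max_size": 24 }
          ]
        }
      }
    }

Every detected heading that matches one of the selector tags starts a new answer unit, so defining more (or lower-level) headings yields smaller, finer-grained units.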
Once your custom Document Conversion Service configuration produces the answer units you are looking for, you will be ready to send them to Retrieve and Rank to be indexed.

Drawing a chart over a sum of two columns in Dynamics CRM

I'd like to draw a line chart based on the sum of two columns; let's call the fields in question cats and dogs. I know I could create a third field called animals and populate it, but that strikes me as an ugly workaround.
I'm pretty sure there's no way to achieve this via the GUI, so I'm hoping that editing the exported XML will open up that possibility. As far as I could understand from this discussion, it's not possible, but since that discussion is old, I'm hoping it has become possible since then.
Any luck on this one?
It's a bit unclear exactly what you are going for with this question. This is an excellent blog post on how to modify your chart's FetchXML to extend the CRM chart capabilities (it may or may not answer your question, depending on exactly what you mean by "draw a line based on two columns"): https://crmchartguy.wordpress.com/2015/06/21/compare-this-year-to-last-year-with-a-dynamics-crm-chart/
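To illustrate what editing the XML can and cannot do: as far as I know, the chart's data description has no computed columns, so a true cats + dogs line still needs the extra field, but you can plot both sums as separate series in one chart. A sketch along these lines (entity and attribute names are placeholders):

    <datadescription>
      <datadefinition>
        <fetchcollection>
          <fetch mapping="logical" aggregate="true">
            <entity name="new_animalcount">
              <attribute name="new_cats" alias="sum_cats" aggregate="sum" />
              <attribute name="new_dogs" alias="sum_dogs" aggregate="sum" />
              <attribute name="createdon" alias="groupby_month" groupby="true" dategrouping="month" />
            </entity>
          </fetch>
        </fetchcollection>
        <categorycollection>
          <category>
            <measurecollection>
              <measure alias="sum_cats" />
            </measurecollection>
            <measurecollection>
              <measure alias="sum_dogs" />
            </measurecollection>
          </category>
        </categorycollection>
      </datadefinition>
    </datadescription>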

Core Data Structure - avoiding circular reference?

I just wanted to validate my data structure.
It seems a bit convoluted to me, maybe it can be simplified?
Questions are grouped into chapters.
For each question, only one answer per session is possible.
The purpose is to be able to compare / analyze answers to the same questions (by different users or by the same users at different times, i.e. with different sessions).
A template, being a collection of chapters & questions, should not have to be replicated, if chapters and questions are the same.
(That would be necessary if Answer did not have a relationship to Session.)
Is the relationship from Answer back to Session the right strategy?
What else would you improve to simplify the model?
Thank you!
EDIT
Follow-up clarification:
The Answer is not static (e.g., a "right" answer or a "solution") but some text the user inputs. It is more like a questionnaire than a quiz. The answer has quantitative attributes that can be analyzed.
As stated, one question can have only one answer within a session. Because questions can indirectly belong to more than one session (via (NSSet *) question.chapter.template.sessions), they can have more than one answer and thus need a to-many relationship to Answer.
The typical scenario: User starts a new session with a certain template and fills out the answers. Then he can look at the analysis of the results and compare those with the results of other sessions that use the same template.
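For readers trying to picture the loop being discussed, here is a minimal sketch of the entities as plain Python classes (attribute names are assumptions based on the description above; the actual model uses Core Data NSManagedObject subclasses):

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Template:  # aka Questionnaire: a fixed collection of chapters
        chapters: list[Chapter] = field(default_factory=list)
        sessions: list[Session] = field(default_factory=list)  # reused, not replicated

    @dataclass
    class Chapter:
        template: Template | None = None
        questions: list[Question] = field(default_factory=list)

    @dataclass
    class Question:
        chapter: Chapter | None = None
        answers: list[Answer] = field(default_factory=list)  # to-many: one per session

    @dataclass
    class Session:  # one user filling out one template at one time
        template: Template | None = None
        answers: list[Answer] = field(default_factory=list)

    @dataclass
    class Answer:
        question: Question | None = None
        session: Session | None = None  # the back-reference that closes the loop

Walking question.chapter.template.sessions shows why Answer needs the session back-reference: without it, you could not tell which of the template's sessions a given answer belongs to.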
EDIT 2
[Snapshot of the data model, including attributes]
Honestly, this is what I would do instead of your structure, though I don't know the purpose of each entity, since I can't work that out from their short names alone.
This is just an idea to resolve the loop.
You can still reach all templates and all answers from the session; not directly, but that does not make your life much harder.
UPDATE:
At first and second sight, it seems to me that the Session entity is just an extra entity here. Honestly, you would not need it if you merged it into the Template (aka Questionnaire) entity.
You would have to add a many-to-many relationship between Template and User (you can do that, don't worry). This way you can reach all answers from each template as well, and you won't have any loop.
Despite the really helpful effort on the part of @holex, the best way still seems to be to stick with my design. The simplifications I had hoped for have not materialized.