SCORM 2004 3rd Ed - Limit Attempts

I am trying to limit the number of attempts a user gets to pass a quiz, regardless of how many times the content is launched.
I found a precondition rule, but I think that only applies to the current launch attempt.
Has anyone successfully done this? The SCO is built in Captivate.

Which LMS are you using?
Depending on the LMS, you may be able to limit the number of attempts.

Typically with SCORM 2004 you can use imsss:sequencing with imsss:limitConditions. Example:
<imsss:sequencing>
  <!-- Optional: limit attempts (managed by the LMS); attemptAbsoluteDurationLimit equates to cmi.max_time_allowed -->
  <imsss:limitConditions attemptLimit="1" />
  <imsss:objectives>
    <imsss:primaryObjective satisfiedByMeasure="true">
      <!-- equates to a cmi.scaled_passing_score of 60% -->
      <imsss:minNormalizedMeasure>0.6</imsss:minNormalizedMeasure>
    </imsss:primaryObjective>
  </imsss:objectives>
</imsss:sequencing>
If the LMS doesn't support this, you'd have to track the data within the student attempt yourself, but this gets dicey too. Here is why:
If the student or SCO ever sets 'cmi.exit' to '', 'normal', 'logout' or 'time-out', that is effectively the end of the attempt. In SCORM 2004 the LMS has a responsibility to create a new attempt, and your current student data will be reset. That means you'd have to set a cookie or localStorage value to keep track of this, and even that won't work reliably if the student switches to another device. So, as I said, it's dicey.
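To illustrate that fallback (a rough sketch only, with all the caveats above: it assumes the SCORM 2004 API object API_1484_11 has already been located and initialized, MAX_ATTEMPTS is a made-up cap, and a different device or browser defeats it):

declare const API_1484_11: {
  GetValue(element: string): string;
};

const MAX_ATTEMPTS = 3; // illustrative cap

// Key the counter per learner so a shared machine doesn't cross-count users.
function attemptKey(): string {
  return 'quizAttempts_' + API_1484_11.GetValue('cmi.learner_id');
}

function quizAttemptsRemaining(): number {
  const used = Number(localStorage.getItem(attemptKey()) ?? '0');
  return MAX_ATTEMPTS - used;
}

function recordQuizAttempt(): void {
  const used = Number(localStorage.getItem(attemptKey()) ?? '0');
  localStorage.setItem(attemptKey(), String(used + 1));
}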
In SCORM 1.2 this got handled differently by LMS vendors. The spec didn't make it very clear how the LMS should manage modes and statuses, so many run in a combination of ways, which makes it harder to rely on, for example, detecting 'review' mode if the LMS lets you keep setting values after you've been graded. Most LMS vendors will tell you to manage your own attempts and may even keep you in an eternal attempt. I could go on about this for pages, but if you need anything more detailed here let me know.

Related

H5P sending results to Moodle

I am very much puzzled about using H5P in Moodle.
The idea is great, obviously, yet I cannot make it work as I expected.
My principles/idea:
There is a bunch of activities in each course.
Each activity can build up several of the Student's skills, say Creative Thinking or Problem Solving.
After finishing each activity the Student, based on the result, can go to the next activity or re-do it if failed.
For testing purposes I set up 3 outcomes (0-30 > no pass, 30-70 > 1 point, 71-100 > 2 points) in the H5P module - this one is working fine.
The outcome should be passed to Moodle, so the course can then decide what to do: pass with 1 or 2 points, or fail and request to do the activity again.
This outcome will then be added to the Student's skillset.
Say I have this basic crossword. After finishing it the Student can achieve the two Skills mentioned above, yet this still depends on the outcome, e.g. a result of 1 means 0 in Creative Thinking and +1 in Problem Solving, and a 2 means +1 in Creative Thinking and +2 in Problem Solving.
The activity itself works as expected, as I mentioned above (in the screenshots, ONE point is circled), but then nothing happens.
The student is NOT taken to the next activity; all s/he can do is retry the same activity over and over again.
Is it possible to force Moodle/H5P to act as described above?
For testing purposes I used two 'activities': one being 'h5p' itself and the other being a 'lesson' with the same H5P modules added inside.
I run this all on WAMP.
I tried to follow the xAPI documentation (https://h5p.org/documentation/x-api), which resulted in a JS error.
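For reference, the listener pattern from that page looks roughly like this (a sketch, not my exact code; it assumes the H5P global has loaded on the page, which is one common source of JS errors here):

declare const H5P: any; // global provided by the H5P plugin

document.addEventListener('DOMContentLoaded', () => {
  if (typeof H5P === 'undefined' || !H5P.externalDispatcher) {
    console.warn('H5P has not loaded (yet)');
    return;
  }
  // Fires for every xAPI statement the content emits.
  H5P.externalDispatcher.on('xAPI', (event: any) => {
    console.log('score:', event.getScore(), 'of', event.getMaxScore());
  });
});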
Sorry for the long post - tried to cover everything.
If anybody knows the answers, a reply will be much appreciated.
Cheers,
Greg

How to stop timeout in service block

I am modeling a ticket system with various SLAs. The model must contain several service blocks with different reaction times (from 2 to 32 hours). In the service blocks, only working hours should be taken into account, so the timeout should stop during non-working hours and on weekends. Could you please kindly tell me how I can realize this?
Thank you very much in advance!
I can think of two answers: one simplified that works in many cases, the other more advanced and probably more accurate.
Simplified approach: I would set the model in hours and keep everything running as is, without any stops. Then, at the end of the simulation, if the total time is 100 hours and you know that you have 8 hours/day and 5 days/week, you'd know the total duration is 2.5 weeks (see the small conversion sketch after this answer). Of course, this might have limitations or become more complex later on if you want day-specific actions (e.g. you want to differentiate between Monday, Tuesday, etc.).
Advanced, more accurate approach: Create resources whose capacities are defined by a schedule and assign them to your services. Create a schedule and specify the working hours in it. Check the link below to learn more about schedules. I call this the more advanced approach because you need to make sure the schedule is defined correctly and that all elements in the model are properly controlled (e.g. non-service blocks such as sources, delays, etc.).
https://help.anylogic.com/topic/com.anylogic.help/html/data/schedule.html?resultof=%22%73%63%68%65%64%75%6c%65%73%22%20%22%73%63%68%65%64%75%6c%22%20
I personally would use the first approach if the model is rather simple and modeling working hours is enough for analysis. Otherwise, I'd go for option 2.
Finally, another option I'd like to highlight is the "suspend/resume" functions. I am only adding this because you asked "how to stop timeout": these functions specifically stop and resume the timeout, but you'll need to define the times at which they are executed (through an event, for example).
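To make the simplified approach concrete, the post-processing arithmetic is just this (TypeScript purely for illustration; the 8 hours/day and 5 days/week figures are the ones from the example above):

// Convert total working hours from the model into calendar weeks,
// given a fixed working-hours schedule.
function workingHoursToCalendarWeeks(
  totalWorkingHours: number,
  hoursPerDay = 8,
  daysPerWeek = 5
): number {
  return totalWorkingHours / (hoursPerDay * daysPerWeek);
}

console.log(workingHoursToCalendarWeeks(100)); // 2.5 weeks, matching the example above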

Data syncing with pouchdb-based systems client-side: is there a workaround to the 'deleted' flag?

I'm planning on using rxdb + hasura/postgresql in the backend. I'm reading this rxdb page for example, which off the bat requires sync-able entities to have a deleted flag.
Q1 (main question)
Is there ANY point at which I can finally hard-delete these entities? What conditions would have to be met - e.g. could I simply use "older than X months" and then force my app to only ever display data less than X months old?
Is such a hard-delete, if possible, best carried out directly in the central DB, since it will be the source of truth? Would there be any repercussions client-side that I'm not foreseeing/understanding?
I foresee the number of deleted entities growing rapidly in my app, and I don't want to have to store all this extra data forever.
Q2 (bonus / just curious)
What is the (algorithmic) basis for needing a 'deleted' flag? Is it that it's just faster to check a flag than to check for the omission of an object from, say, a very large list? I apologize if it's kind of a stupid question :(
Ultimately it comes down to a decision that's informed by your particular business/product with regards to how long you want to keep deleted entities in your system. For some applications it's important to always keep a history of deleted things or even individual revisions to records stored as a kind of ledger or history. You'll have to make a judgement call as to how long you want to keep your deleted entities.
I'd recommend that you also add a deleted_at column if you haven't already; then you could easily leverage something like Hasura's new Scheduled Triggers functionality to run a recurring job that fully deletes records older than whatever your threshold is.
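A sketch of what that recurring job's webhook could look like (the endpoint, table and column names are all made-up for illustration; Hasura's scheduled trigger would just POST to it on a schedule):

import express from 'express';
import { Pool } from 'pg';

const app = express();
const pool = new Pool(); // PG* environment variables supply the connection

// Hard-delete rows that were soft-deleted more than 6 months ago.
app.post('/cleanup-deleted', async (_req, res) => {
  const result = await pool.query(
    `DELETE FROM items
     WHERE deleted = true
       AND deleted_at < now() - interval '6 months'`
  );
  res.json({ purged: result.rowCount });
});

app.listen(3000);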
You could also leverage Hasura's permissions system to ensure that rows that have been deleted aren't returned to the client. There are documentation and examples for ways to work with soft deletes in Hasura.
For your second question: it is definitely much faster to check the deleted flag on records than to diff the entire dataset looking for things that are now missing.

Best practices for editing data using forms in MS Access

So I've started learning Access out of necessity, as the person who was in charge of it passed away and someone had to keep it going.
I noticed a very bad (at least IMO) behavior in all the databases he created: every single form was bound directly to a table or saved query. This way, if the user opened a form, he had to complete all the steps he was supposed to, because if he closed the form mid-process (or the computer froze, or anything of the sort), the actual data would be compromised, as it would be half complete. This often broke everything in the process chain, rendering subsequent steps impossible to perform and forcing me to correct data manually, directly in the tables.
As I've started upgrading his stuff and developing my own, I've been trying to learn ways to have the data edited in the form only, making it possible to cancel the process at any time or save all the changes at once at the end.
If the edits were simple, I discovered that I could create a recordset, copy the relevant data to unbound fields in the form and, at the end, if the user chose to, copy the data from the form fields back to the recordset.
Other times more complex solutions were required, as I would need to edit several pieces of data at once in continuous forms, "save" them, run more code, maybe add fields to hold the information originating from that processing, and so on. I then learned about using temporary tables, but didn't like it, since it tended to bloat the DB. I even went as far as creating temporary databases during code execution that would host the temporary tables and be destroyed at the end, but that added too much unnecessary complexity.
Nowadays I'm using disconnected ADO recordsets to hold the temporary data and fields. It works, but has its limitations.
So I'm wondering: what is the best way you guys - much more experienced than I am - use to approach this kind of scenario? Is using in-memory ADO recordsets really the best way around it?
I think you are mixing up two things a form does that have completely different requirements: editing existing records (bound forms are great for that) and creating new records (where a straight bound form can result in incomplete records). How to approach it depends on many things, but mainly on how much data is necessary for a new record to be considered "complete".
I usually do one of the following things:
Create an unbound popup modal form for adding new records where only the necessary fields are present. Once complete, it loads the new record into the main form for further editing.
Use the above method, except the form is not a popup but a set of unbound fields in the footer or header of the main form.
Let the user create new records but bind validation to the form's OnClose event (and/or another event appropriate to your situation) that deletes the half-filled record if it does not validate.
Let users create new records in the bound form but have a 'cleanup' routine called either on a schedule or based on user actions.
Ultimately if your business process requires the majority of fields to be manually added/edited every time a new record is added or edited, you are better off using an unbound form.
"This way, if the user opened a form, he had to complete all the steps he was supposed to do, because if he closed the form mid process (or the computer froze, or anything of the sort), the actual data would be compromised as it would be half complete."
No: if the computer freezes, then no data is saved to the table. This is the same as if you had used a disconnected recordset and an unbound form.
If you use the form's BeforeUpdate event with some verification code and a simple Cancel = True, then the form's data is not saved, nor is the table updated. Again, if you used a disconnected recordset and the user closes the form, you have to test the data - and again you can choose to write out the data or not - so there is ZERO difference between a form bound to a table and a disconnected form.
"If the edits were simple, I discovered that I could create a recordset, copy relevant data to unbound fields in the form and, in the end, if the user chose to, copy the data from the form fields back to the recordset."
No, you don't need to do the above. It achieves nothing, only racks up additional development hours, and increases the cost of the application. In nearly all cases un-bound forms increase development costs over a simple form bound to a table. So the original developer had the correct idea. You can control the update of the underlying table in nearly all cases to achieve the required verification. Forms only save and write out the data if the developer allows it.
So bound Access forms write incomplete data to a table no more and no less than un-bound ones, provided you place verification code in the form's BeforeUpdate event. A half-filled bound form and a half-filled un-bound form with a disconnected recordset BOTH will not write their data if the computer freezes.
And BOTH types of forms will not write data out to the table until your verification code has completed.
Access is not designed for un-bound forms; tools like VB.NET, or even VB6, had a whole bunch of cool wizards and support for un-bound forms. In Access we don't have those wizards, and when you use UN-bound forms you lose tons of form events. You in effect get the worst of both worlds, since you lose the form events and have no wizards or support for un-bound. Even just the several delete-record events we have are rather amazing.
You lose the use of Me.Dirty, On Insert, Me.NewRecord, the form's After Update event - the list of features you toss out and lose is huge. And if you want a button to write the data to the table (such as a save button on the form), then just go:
If Me.Dirty = True Then
    Me.Dirty = False ' forces the form's verification (BeforeUpdate) code to run
End If
There are FEW use cases in which un-bound forms will benefit you, and they will cost you rather a lot in terms of development time.

Last Updated Date: Antipattern?

I keep seeing questions floating through that make reference to a column in a database table named something like DateLastUpdated. I don't get it.
The only companion field I've ever seen is LastUpdateUserId or such. There's never an indicator about why the update took place; or even what the update was.
On top of that, this field is sometimes written from within a trigger, where even less context is available.
It certainly doesn't come close to being an audit trail, so that can't be the justification. And if there is an audit trail somewhere in a log or whatever, this field would be redundant.
What am I missing? Why is this pattern so popular?
Such a field can be used to detect whether there are conflicting edits made by different processes. When you retrieve a record from the database, you get the previous DateLastUpdated field. After making changes to other fields, you submit the record back to the database layer. The database layer checks that the DateLastUpdated you submit matches the one still in the database. If it matches, then the update is performed (and DateLastUpdated is updated to the current time). However, if it does not match, then some other process has changed the record in the meantime and the current update can be aborted.
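As a minimal sketch of that check-and-set (node-postgres used purely for illustration; the records table and column names are assumptions):

import { Pool } from 'pg';

const pool = new Pool(); // connection settings come from the PG* environment variables

// Returns true if the update won; false means another process updated the row first.
async function updateRecord(id: number, name: string, lastSeenUpdate: Date): Promise<boolean> {
  const result = await pool.query(
    `UPDATE records
        SET name = $2, date_last_updated = now()
      WHERE id = $1
        -- only matches if the row is unchanged since we read it
        AND date_last_updated = $3`,
    [id, name, lastSeenUpdate]
  );
  return result.rowCount === 1;
}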
It depends on the exact circumstance, but a timestamp like that can be very useful for autogenerated data - you can figure out whether something needs to be recalculated if a dependency has changed later on (this is how build systems decide which files need to be recompiled).
Also, many websites will have data marking "Last changed" on a page, particularly news sites that may edit content. The exact reason isn't necessary (and there likely exist backups in case an audit trail is really necessary), but this data needs to be visible to the end user.
These sorts of things are typically used for business applications where user action is required to initiate the update. Typically, there will be some kind of business app (eg a CRM desktop application) and for most updates there tends to be only one way of making the update.
If you're looking at address data, that was done through the "Maintain Address" screen, etc.
Such database auditing is there to augment business-level auditing, not to replace it. Call centres will sometimes (or always in the case of financial services providers in Australia, as one example) record phone calls. That's part of the audit trail too but doesn't tend to be part of the IT solution as far as the desktop application (and related infrastructure) goes, although that is by no means a hard and fast rule.
Call centre staff will also typically have some sort of "Notes" or "Log" functionality where they can type freeform text as to why the customer called and what action was taken so the next operator can pick up where they left off when the customer rings back.
Triggers will often be used to record exactly what was changed (eg writing the old record to an audit table). The purpose of all this is that with all the information (the notes, recorded call, database audit trail and logs) the previous state of the data can be reconstructed as can the resulting action. This may be to find/resolve bugs in the system or simply as a conflict resolution process with the customer.
It is certainly popular - Rails, for example, has a shorthand for it, along with a creation timestamp (:timestamps).
At the application level it's very useful, as the same pattern is very common in views - look at the questions here, for example ("answered 56 secs ago", etc.).
It can also be used retrospectively in reporting to generate stats (e.g. what is the growth curve of the number of records in the DB).
There are a couple of scenarios. Let's say you have an address table for your customers:
In your CRM app, the customer calls to say his address changed a month ago; with the LastUpdate column you can see that the row for this customer hasn't been touched in 4 months.
Usually you use triggers to populate a history table so that you can see all the other history; if you see that the creation date and updated date are the same, there is no point hitting the history table, since you won't find anything.
If you calculate indexes (stock market), you can easily see that one was recalculated just by looking at this column.
If there are 2 DB servers, comparing the date column lets you find out whether all the changes have been replicated, etc.
This is also very useful if you have to send delta feeds out to clients, that is, only the records that have been changed or inserted since the date of the last feed are sent.
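As a quick sketch of such a delta feed (again node-postgres just for illustration; the table and column names are assumptions):

import { Pool } from 'pg';

const pool = new Pool();

// Fetch only the rows inserted or changed since the last feed ran.
async function fetchDelta(lastFeedDate: Date) {
  const { rows } = await pool.query(
    'SELECT * FROM records WHERE date_last_updated > $1',
    [lastFeedDate]
  );
  return rows;
}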