In Moodle, I can see the default course progress for courses on the front end. But I want to show progress per chapter, e.g. 10% completed when chapter 1 is finished, 20% completed when chapter 2 is finished, and so on, and I could not find any module for this or figure out how to modify the code.
In other words:
1. How can I track course progress based on the completion of course subsections? The default tracking is based on whole courses only.
2. Is it possible to track courses without ticking the course completion checkbox (refer https://i.stack.imgur.com/GUqwT.png)?
3. Is it possible to track course progress based on views of the course section URLs?
Thanks in advance.
You can sometimes track specific page views and interactions via the mdl_logstore_standard_log table. Different modules/activities in Moodle log different types/amounts of data, but views of typical course topics/sections are usually logged regardless of completion.
For example, imagine a course with id=10 where you visit section/topic 3. The URL usually looks something like this: <yourdomain>/course/view.php?id=10&section=3
In this case, the view should be logged in mdl_logstore_standard_log with an eventname value of \core\event\course_viewed. The course id should be in the courseid column and the section viewed should be in the "other" column. Note that "other" holds a PHP-serialized array, so it helps to run it through unserialize() and standard array functions to pull out the "3" when needed.
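As a rough sketch of what you could do with those rows, here is a small Python example that turns log entries into a per-section progress percentage. The table and columns come from mdl_logstore_standard_log as described above, but the way I pull the section number out of the serialized "other" value is a shortcut, so treat it as an assumption to verify against your own log data:

import re

COURSE_VIEWED = r"\core\event\course_viewed"  # eventname logged for section views

def sections_viewed(rows):
    # rows: (eventname, other) tuples for one user and course, e.g. fetched with
    #   SELECT eventname, other FROM mdl_logstore_standard_log
    #   WHERE userid = %s AND courseid = %s
    seen = set()
    for eventname, other in rows:
        if eventname != COURSE_VIEWED or not other:
            continue
        # "other" is a PHP-serialized array; rather than pull in a full
        # unserializer, grab the first integer value (assumed here to be
        # the section number - check this against your own data).
        match = re.search(r"i:(\d+);", other)
        if match:
            seen.add(int(match.group(1)))
    return seen

def progress_percent(rows, total_sections):
    # e.g. 10 sections, 3 distinct sections viewed -> 30.0
    return 100.0 * len(sections_viewed(rows)) / total_sections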
Again, keep in mind that different activities/modules log data differently (an assignment activity, for example, produces different entries), but I hope this helps you find what you need. Good luck!
As an amateur coder for some years, I have generally used subforms when dealing with linked tables to make the transfer of the ID field from primary to sub nice and simple...
However, in my latest project the main form is a continuous form with a list of delivery runs (Date, RunName, RunCompleted, etc.). Linked to this primary table is a delivery list containing (SKU of product, Qty, etc.). I use a simple relationship between the two tables.
Now, on the main (RUNS) form, at the end of each row is a button that opens the DELIVERIES form and displays all records with a matching RUNID.
This is fine for displaying pre-existing data but when I want to add new records I have been using the following code attached to the OnCurrent event:
Me.RunID = DLookup("[RunID]", "tbl_BCCRuns", "RunID = " & Forms![frm_BCC_Runs_list]![RunID])
I have also used:
Forms![frm_BCC_Deliveries].Controls![RunID] = Forms![tbl_BCCRuns].Controls![RunID]
(Note: above done from memory and exact code may be incorrect but that's not the problem at hand)
Now... Both these options give me what I need however...
I find that, as I am working on the database, or if you open certain forms in the right order (clearly a bug I need to identify and fix), you can open the DELIVERIES form without the filter (to view all deliveries, for argument's sake) and the top entry (usually the oldest record) suddenly adopts the RUNID of the record selected back in the main form.
Now, my question is this, and the answer may be a simple "no" and that's fine, I'll move on...
Is there a better way, one I am not familiar with or just don't know about due to my inconsistent Access progress, to transfer IDs to a form without risking contamination from improper use? Or do I just have to bite the bullet and make sure there is simply no possible way for that to happen?
In an effort to alleviate the issue, I have created a display-only form for viewing the deliveries, but there are still times when I need to access the live historical data to modify other fields without wanting to modify the RUNID.
Any pointers greatly appreciated...
Since you only want to pull the RunID if the form is on a new record row, do a check to verify this is a new record.
If Me.NewRecord Then
    ' Only seed RunID for a brand-new row; existing records keep theirs
    Me.RunID = DLookup("[RunID]", "tbl_BCCRuns", "RunID = " & Forms![frm_BCC_Runs_list]![RunID])
End If
You could also consider a technique to synchronize parent and child forms when both are subforms on a main form (the main form does not have to be bound): https://www.fmsinc.com/MicrosoftAccess/Forms/Synchronize/LinkedSubforms.asp
So I've started learning Access out of necessity, as the person who was in charge of it passed away and someone had to keep it going.
I noticed a very bad (at least IMO) behavior in all the databases he created: every single form was bound directly to a table or saved query. This way, if the user opened a form, he had to complete all the steps he was supposed to do, because if he closed the form mid-process (or the computer froze, or anything of the sort), the actual data would be compromised, as it would be half complete. This oftentimes broke everything in the process chain, rendering subsequent steps impossible to perform and forcing me to correct data manually, directly in the tables.
As I've started upgrading his stuff and developing my own, I've been trying to learn ways to allow the data to be edited in the form only, making it possible to cancel the process at any time or save all the changes at once at the end.
For simple edits, I discovered that I could create a recordset, copy the relevant data to unbound fields on the form and, at the end, if the user chose to, copy the data from the form fields back to the recordset.
Other times more complex solutions were required, as I would need to edit several pieces of data at once in continuous forms, "save" them, run more code, maybe add fields to hold the information originating from that processing, and so on. I then learned about using temporary tables, but did not like it, since it tended to bloat the db. I even went as far as creating temporary databases during code execution that would host the temporary tables and be destroyed at the end, but that added too much unnecessary complexity.
Nowadays I'm using disconnected ADO recordsets to hold the temporary data and fields. It works but has its limitations.
So I'm wondering: how do you guys, much more experienced than me, approach this kind of scenario? Is using in-memory ADO recordsets really the best way around it?
I think you are mixing up two things a form does that have completely different requirements: editing existing records (bound forms are great for that) and creating new records (where a straight bound form can result in incomplete records). How to approach it depends on many things, but mainly on how much data is necessary for a new record to be considered "complete".
I usually do one of the following things:
Create an unbound popup modal form for adding new records where only the necessary fields are present. Once complete it loads the new record to the main form for further editing.
Use the above method except the form is not a popup one but a set of unbound fields in the footer or header of the main form.
Let the user create new records, but bind validation to the form's OnClose event (and/or other events appropriate to your situation) that deletes the half-filled record if it does not validate.
Let users create new records in the bound form but have a 'cleanup' routine called either on a schedule or based on user actions.
Ultimately, if your business process requires the majority of fields to be manually added/edited every time a record is added or edited, you are better off using an unbound form.
"This way, if the user opened a form, he had to complete all the steps he was supposed to do, because if he closed the form mid process (or the computer froze, or anything of the sort), the actual data would be compromised as it would be half complete."
No, if the computer freezes, then no data is saved to the table. This is the same as if you had used a disconnected recordset and an unbound form.
If you use the form's BeforeUpdate event with some verification code that does a simple Cancel = True, then the form's data is not saved and the table is not updated. Likewise, if you used a disconnected recordset and the user closes the form, you have to test the data, and again you can choose whether or not to write it out; in this respect there is ZERO difference between a form bound to a table and a disconnected form.
"If the editions were simple, I discovered that I could create a recordset, copy relevant data to unbound fields in the form and, in the end, if the user chose to, copy the data from the form fields back to the recordset."
No, you don't need to do the above. It achieves nothing; it only racks up additional development hours and increases the cost of the application. In nearly all cases unbound forms increase development costs over those of a simple form bound to a table. So the original developer had the correct idea. You can control the update of the underlying table in nearly all cases to achieve the required verification. Forms only save and write the data out if the developer allows it.
So bound Access forms are no more and no less likely to write incomplete data to a table, provided you place verification code in the form's BeforeUpdate event. A half-filled bound form and a half-filled unbound form with a disconnected recordset BOTH will not write their data if the computer freezes.
And BOTH types of forms will not write data out to the table until your verification code has completed.
Access is not designed for unbound forms; tools like VB.NET, or even VB6, had a whole bunch of cool wizards and support for unbound forms. In Access, we don't have those wizards. And when you use UN-bound forms you lose tons of form events. You in effect get the worst of both worlds, since you lose the form events and have no wizards or support for unbound work. Even just the several delete-record events we have are rather amazing.
You lose the use of Me.Dirty, OnInsert, Me.NewRecord, the form's AfterUpdate event - the list of features you toss out and lose is huge. And if you want a button to write data to the table (such as a save button on the form), then just go:
If Me.Dirty = True Then
    Me.Dirty = False   ' this forces your verification code to run
End If
There are FEW use cases in which unbound forms will benefit you, and they will cost you rather a lot in terms of development time.
I am creating a REST application that returns a collection of items (a topic with a collection of posts), sorted from newest to oldest.
If HATEOAS principles are followed and all content is chunked into pages, the client will get, for example, the current page id, offset, data limits, and links to the first, current, and next pages.
There is no problem getting data from the next page, but if somebody has added content while the client is reading the current page, the new data will be pushed onto the start of the collection and the last item of the current page will be moved onto the next page.
If you just skip posts that have already been loaded, you will get fewer items on the next page. One way around this is to get a count of the items pushed onto the start of the list and increment the offset accordingly.
What are the best practices for this?
Not using offset indexes, but instead skip tokens that indicate the first value not to include (or the first value to include), is a good technique, provided the value is unique for every item in your result set and is an orderable field under the current sort. But it's not flawless. Usually this just doesn't matter.
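To make that concrete, here is a minimal keyset ("skip token") pagination sketch in Python over an in-memory, newest-first list of posts. The token is the (created_at, id) pair of the last item the client saw; the field names are just assumptions for the example:

from typing import List, NamedTuple, Optional, Tuple

class Post(NamedTuple):
    id: int
    created_at: int  # epoch seconds; (created_at, id) is unique and orderable

def page_after(posts: List[Post], token: Optional[Tuple[int, int]], limit: int):
    # posts are sorted newest-first; token=None requests the first page
    if token is not None:
        # Keep only items strictly older than the token. New posts arriving
        # at the head of the list no longer shift this boundary, which is
        # the whole point versus numeric offsets.
        posts = [p for p in posts if (p.created_at, p.id) < token]
    page = posts[:limit]
    next_token = (page[-1].created_at, page[-1].id) if page else None
    return page, next_token

The server would hand next_token back as an opaque link (which fits the HATEOAS style nicely), and the client simply follows it instead of computing offsets.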
If it really does matter, you have to put the IDs of everything on the first page into the call for the second page, and so on, again and again. HATEOAS helps you do stuff like this... but it can get very messy, and things can still pop up on page 1 under the current sorting by the time you request page 5... what do you do with that?
Another trick to avoid dupes in a UI is to use the self or canonical link relationships to uniquely identify resources in a page and compare those to existing resources in the UI. Updating the UI with the latest matching resources is usually a simple task. This of course puts some burden on the client.
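A sketch of that client-side approach, keeping UI state keyed by each resource's self link so a re-fetched resource overwrites its stale copy instead of duplicating it (the JSON shape here is an assumption):

def merge_page(ui_items: dict, page: list) -> dict:
    # ui_items maps self-link -> resource already shown in the UI
    for resource in page:
        key = resource["links"]["self"]  # assumed canonical identifier
        ui_items[key] = resource         # insert new, or refresh existing
    return ui_items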
There is no one-size-fits-all solution to this problem. You have to design for the UX you intend to deliver.
I have a task to create a database to track student results in a school. I came up with a set of relationships between the tables according to the three forms of normalisation (I hope I got it right; if not, please enlighten me).
One feature that I want to put in the FileMaker app is that when a teacher wants to enter assignment marks, he will just need to create a new submission record and all the student names in the class will appear.
I cannot work out how this feature can be done in FileMaker. I can only create a new submission record and key in one student's score, then create another new record to do the same thing for the next student.
Can someone help? I am a teacher, not a FileMaker developer, so please correct me if my database tables are done wrongly.
Update:
I would like the output to be like this.
A spreadsheet is not suitable because it can't be searched/sorted easily.
I have a quick sample file here. It's an old sample and it uses a different (but similar) model. Basically the idea is this: you have a calculated field (I use a repeating field) to display the data. You also have a global repeating field that serves as an editing widget. Each time you go to a record, you fill this field's repetitions with data from related records (using an OnRecordLoad trigger). This doesn't mean the field shows the same data for all records, because its conditional formatting rules are set to hide all data; it only shows a piece of data when you actually enter one of its repetitions. This is the data that can be edited. And finally, there's a trigger that fires each time you exit the field and posts your changes to the related table (adds, updates, or deletes).
The sample isn't quite complete, because if there are fewer data columns than repetitions, you'd probably want to somehow lock the remaining repetitions; this part isn't done. Otherwise it works fairly well. In FM 12, however, it tends to freeze the app; I reported this to FMI and they acknowledged it, but I don't think it has been fixed yet.
I keep seeing questions floating through that make reference to a column in a database table named something like DateLastUpdated. I don't get it.
The only companion field I've ever seen is LastUpdateUserId or such. There's never an indicator of why the update took place, or even what the update was.
On top of that, this field is sometimes written from within a trigger, where even less context is available.
It certainly doesn't come close to being an audit trail, so that can't be the justification. And if there is an audit trail somewhere in a log or whatever, this field would be redundant.
What am I missing? Why is this pattern so popular?
Such a field can be used to detect whether there are conflicting edits made by different processes. When you retrieve a record from the database, you get the previous DateLastUpdated field. After making changes to other fields, you submit the record back to the database layer. The database layer checks that the DateLastUpdated you submit matches the one still in the database. If it matches, then the update is performed (and DateLastUpdated is updated to the current time). However, if it does not match, then some other process has changed the record in the meantime and the current update can be aborted.
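A minimal sketch of that check, using Python's built-in sqlite3 so it is self-contained; the table and column names are illustrative:

import sqlite3

def update_if_unchanged(conn, record_id, new_name, expected_ts):
    # The WHERE clause only matches if nobody has updated the row since
    # we read it; otherwise zero rows are affected and we report a conflict.
    cur = conn.execute(
        "UPDATE customer SET name = ?, DateLastUpdated = CURRENT_TIMESTAMP"
        " WHERE id = ? AND DateLastUpdated = ?",
        (new_name, record_id, expected_ts),
    )
    conn.commit()
    return cur.rowcount == 1  # False -> conflicting edit; abort or retry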
It depends on the exact circumstance, but a timestamp like that can be very useful for autogenerated data: you can figure out whether something needs to be recalculated when a dependency has changed later on (this is how build systems work out which files need to be recompiled).
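For example, a make-style staleness check boils down to comparing timestamps; a hypothetical sketch, not any particular build tool's code:

import os

def needs_rebuild(target: str, deps: list) -> bool:
    # Rebuild if the target is missing or any dependency is newer than it.
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(d) > target_mtime for d in deps)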
Also, many websites display a "Last changed" date on a page, particularly news sites that may edit content. The exact reason for the change isn't necessary (and backups likely exist in case an audit trail is really needed), but this data needs to be visible to the end user.
These sorts of fields are typically used in business applications where user action is required to initiate the update. Typically there will be some kind of business app (e.g. a CRM desktop application), and for most updates there tends to be only one way of making the update.
If you're looking at address data, that was done through the "Maintain Address" screen, etc.
Such database auditing is there to augment business-level auditing, not to replace it. Call centres will sometimes (or always in the case of financial services providers in Australia, as one example) record phone calls. That's part of the audit trail too but doesn't tend to be part of the IT solution as far as the desktop application (and related infrastructure) goes, although that is by no means a hard and fast rule.
Call centre staff will also typically have some sort of "Notes" or "Log" functionality where they can type freeform text as to why the customer called and what action was taken so the next operator can pick up where they left off when the customer rings back.
Triggers will often be used to record exactly what was changed (e.g. writing the old record to an audit table). The purpose of all this is that, with all the information (the notes, the recorded call, the database audit trail, and the logs), the previous state of the data can be reconstructed, as can the resulting action. This may be to find/resolve bugs in the system, or simply as part of a conflict-resolution process with the customer.
It is certainly popular; Rails, for example, has a shorthand for it, along with a creation timestamp (:timestamps).
At the application level it's very useful, as the same pattern is very common in views - look at the questions here for example (answered 56 secs ago, etc).
It can also be used retrospectively in reporting to generate stats (e.g. what is the growth curve of the number of records in the DB).
There are a couple of scenarios. Let's say you have an address table for your customers:
1. In your CRM app, a customer calls to say his address changed a month ago; with the LastUpdate column you can see that this customer's row hasn't been touched in four months.
2. You usually use triggers to populate a history table so that you can see all the other history; if you see that the creation date and the updated date are the same, there is no point hitting the history table, since you won't find anything.
3. If you calculate indexes (stock market), you can easily see that one was recalculated just by looking at this column.
4. If there are two DB servers, by comparing the date column you can find out whether all the changes have been replicated, and so on.
This is also very useful if you have to send out delta feeds to clients, that is, feeds where only the records that have been changed or inserted since the date of the last feed are sent.
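A delta feed then reduces to a query against that column. A sketch, again with illustrative table/column names, assuming you persist the timestamp of the previous run somewhere:

import sqlite3

def fetch_delta(conn, last_feed_time):
    # conn: a sqlite3-style connection, e.g. sqlite3.connect("crm.db").
    # Returns everything changed or inserted since the previous feed.
    return conn.execute(
        "SELECT id, name, DateLastUpdated FROM customer"
        " WHERE DateLastUpdated > ? ORDER BY DateLastUpdated",
        (last_feed_time,),
    ).fetchall()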