How to integrate a SCORM player with the Moodle database

I am building a website that uses Moodle APIs, basically a Moodle wrapper. I have gotten to the point where I need to play SCORM files. How do I integrate the SCORM files so that they send their tracked data to the Moodle database, using the original Moodle source code?
I have tried writing some code that tracks the course, but it is not sufficient or robust enough, so it does not capture the course progress very well.
Which specific Moodle source files do I need to point to from my front end to capture the course progress?

Here are a couple options:
Option 1 - A quick way to commit SCORM-compliant data to Moodle, if the SCO is already loaded in Moodle, is the mod_scorm_insert_scorm_tracks web service. Here's info from the Moodle Tracker: Detail on mod_scorm_insert_tracks
The SCORM course/content must already be loaded into Moodle so that it has a SCO ID in the Moodle DB; the data you pass is tied to a specific SCO ID.
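For illustration, here is a minimal sketch of calling that web service over Moodle's REST protocol from JavaScript. The site URL, token, SCO ID, attempt number, and track element below are placeholders, and the exact parameter names (scoid, attempt, tracks[n][element]/[value]) should be verified against the web service definition in your Moodle version:

// Hypothetical values - replace with your site, a web service token with
// the right mod/scorm capabilities, and the SCO ID / attempt you want to write to.
const MOODLE_URL = 'https://your-moodle.example.com';
const WS_TOKEN = 'YOUR_WS_TOKEN';

async function insertScormTracks(scoId, attempt, tracks) {
  // Moodle's REST protocol expects array parameters flattened as
  // tracks[0][element], tracks[0][value], etc.
  const params = new URLSearchParams({
    wstoken: WS_TOKEN,
    wsfunction: 'mod_scorm_insert_scorm_tracks',
    moodlewsrestformat: 'json',
    scoid: String(scoId),
    attempt: String(attempt),
  });
  tracks.forEach((t, i) => {
    params.append(`tracks[${i}][element]`, t.element);
    params.append(`tracks[${i}][value]`, t.value);
  });

  const res = await fetch(`${MOODLE_URL}/webservice/rest/server.php`, {
    method: 'POST',
    body: params,
  });
  return res.json();
}

// Example: report a completed lesson status (SCORM 1.2 element) for SCO 42, attempt 1.
insertScormTracks(42, 1, [{ element: 'cmi.core.lesson_status', value: 'completed' }])
  .then(console.log);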
Option 2: If needing to build a SCORM-compliant course/content experience from scratch:
At a high level, the content/course experience is typically developed as a SCORM-compliant content package and programmed to communicate SCORM-compliant course progress data to the LMS via the SCORM API. The SCORM API is provided by the LMS (Moodle in this case) and is implemented in JavaScript.
Your course/content needs to include standard SCORM files like imsmanifest.xml and must be uploaded to Moodle as a SCORM package. Once it is loaded into Moodle, you'll have a course ID/SCO ID/etc. that you can use when passing data for it.
The data sent from the content experience/course must conform to the SCORM data model for whatever version of SCORM you choose to use (e.g. 1.2, 2004, etc.)
Here's a good summary of the data models by version, which also gives a good sense of what you can track: SCORM data models
Moodle supports 1.2 and most of 2004: Moodle SCORM support FAQ
The content/course must include code to correctly locate and use the SCORM API JS object(s) that are used to pass/receive data.
The SCORM JS objects are provided by the page in the LMS used to play back the course/content (sometimes called "the player" or "player window"). In Moodle, this is handled by mod/scorm/player.php.
The player.php file in Moodle loads the SCORM JS API when the page is rendered, making it available to the course. The course file then calls the objects/functions/etc. that pass the data.
The exact API object your content will use to send/receive data depends on the SCORM data model version you are using. Content/courses can pass data that is compliant with any version of SCORM, but must use the API object for that version in order to ensure the data is handled correctly.
In Moodle, the objects are available in mod/scorm/datamodels/ - there are also PHP files in there which get/set data and are called by player.php depending on the version of SCORM used by your content.
Here's additional detail on the runtime API: SCORM runtime detail
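To make the SCORM 1.2 case concrete: content typically locates the API object by walking up the frame hierarchy until it finds the window where the player (mod/scorm/player.php in Moodle) exposed it, then calls the standard LMS* functions. Here is a minimal sketch of that pattern (SCORM 2004 content instead looks for window.API_1484_11 and calls Initialize/SetValue/Commit/Terminate):

// Standard SCORM 1.2 API discovery: walk up parent frames (and fall back to
// the opener window) until a window exposing the API object is found.
function findAPI(win) {
  let attempts = 0;
  while (!win.API && win.parent && win.parent !== win && attempts < 10) {
    win = win.parent;
    attempts++;
  }
  return win.API || (win.opener ? findAPI(win.opener) : null);
}

const API = findAPI(window);
if (API) {
  API.LMSInitialize('');                                  // start the session
  API.LMSSetValue('cmi.core.lesson_status', 'completed'); // SCORM 1.2 data model element
  API.LMSSetValue('cmi.core.score.raw', '85');
  API.LMSCommit('');                                      // persist the data
  API.LMSFinish('');                                      // end the session
}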
Alternatively, there are course authoring tools like Storyline, Captivate, Elucidat, Lectora, etc., which package content to automatically communicate SCORM data. These are great for many types of content.
Good luck with it!

Related

How to load SCORM file from CDN in Moodle

I am working on an LMS application using Moodle 3.8. We deliver training with SCORM files, and the SCORM files are around 500 MB. We are facing loading-time issues when 30k+ users access the application. To improve performance, we are planning to move the SCORM files to a CDN. Is there any way to access the SCORM content from a CDN? How do we configure the course file to use the CDN?
Regards
Girija
SCORM does not work across different domains because it tries to invoke methods on the parent window/frame. You would need a wrapper frame residing on the CDN's domain that forwards API calls, e.g. using window.postMessage, but this is not supported by Moodle as far as I know.
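Just to illustrate what such a (hypothetical, unsupported) wrapper would have to do, rather than to recommend it: a frame served from the CDN's domain would expose a stand-in API object and relay each call to the Moodle window via window.postMessage; the main obstacle is that SCORM API calls are synchronous while postMessage is asynchronous:

// Hypothetical wrapper frame served from the CDN's domain. It exposes a
// stand-in SCORM 1.2 API to the content and relays calls to the Moodle
// window, which would need a matching message listener on its side.
const MOODLE_ORIGIN = 'https://your-moodle.example.com'; // placeholder

window.API = {
  LMSInitialize: (arg) => relay('LMSInitialize', [arg]),
  LMSSetValue: (el, val) => relay('LMSSetValue', [el, val]),
  LMSCommit: (arg) => relay('LMSCommit', [arg]),
  LMSFinish: (arg) => relay('LMSFinish', [arg]),
  LMSGetValue: () => '',      // synchronous reads cannot be relayed this way
  LMSGetLastError: () => '0',
};

function relay(method, args) {
  // Fire-and-forget: postMessage is asynchronous, but the SCORM API contract
  // expects a synchronous return value ("true"/"false").
  window.parent.postMessage({ type: 'scorm-api-call', method, args }, MOODLE_ORIGIN);
  return 'true';
}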

Fetch all metadata of Salesforce

I've been trying to implement a way to download all the changes made by a particular user in Salesforce using a PowerShell script and create a package from them. The changes could be anything, added or modified: Apex classes, profiles, Accounts, etc., filtered by who modified them, the component ID, the timestamp, and so on. Below is the URL that exposes the API; it does not explain any way to do this with a script.
https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_listmetadata.htm
Does anyone know how I can implement this?
Regards,
Kramer
Salesforce orgs other than scratch orgs do not currently provide source tracking, which is what makes it possible to pinpoint user changes in metadata and extract only those changes. That extraction is done by an SFDX/Metadata API client, like Salesforce DX or CumulusCI (disclaimer: I'm on the CumulusCI team).
I would not try to implement a Metadata API client in PowerShell; instead, harness one of the existing tools to do so.
Salesforce orgs other than scratch orgs don't provide source tracking at present. To identify user changes, you can either:
- Attempt to extract all metadata and diff it against your version control, which is considerably harder than it sounds and is implemented by a variety of commercial DevOps tools for Salesforce (GearSet, Copado, etc.), or
- Have the user manually add components to a Change Set or Unmanaged Package, and use a Metadata API client as above to retrieve the contents of that package. (Little-known fact: a Change Set can be retrieved as a package!)
To emphasize: DevOps on Salesforce does not work like other platforms. Working with the Metadata API requires a fair amount of time investment and specialization. Harness the existing work of the Salesforce community where you can, but be aware that the task you are laying out may be rather more involved than you think, and it's not necessarily something you can just throw together from off-the-shelf components.

Change CKAN API Interface - are there limitations on the API?

I've looked around the site to see if anyone has changed the CKAN API interface so that, instead of uploading documents and databases, people can type data directly into the site, but I haven't found any use cases.
Currently, we have a page where people upload data sets through Excel forms that they've filled out, but we want to make it a bit more user-friendly by changing the API so that they can fill out a form on the page rather than downloading the template, filling it out, and then uploading it.
Does CKAN have the ability to support this? If so, are there any examples or use cases of websites that use forms rather than uploads?
This is certainly possible.
I'm not aware of any existing extensions that provide that functionality, but you can check the official list of CKAN extensions to see if there's anything that fulfills your needs.
If there is no existing extension that suits you then you could write your own, see the extension guide for details on how to do that.
Adding an API function to CKAN's API is possible, but probably not what you want in this case: the web UI usually does not interact with CKAN via the API but via Flask/Pylons controllers. Hence, you would add your own controller, which first serves your form and then processes the submitted inputs.
You can take a look at the ckanext-pages extension, which does exactly that (for editing static pages instead of datasets, but your code would be similar).

How to integrate localization (i18n) so that it scales with a React application?

I am currently looking at various i18n npm packages, and most seem to insist that the translations are stored in a flat file, e.g. a .json-formatted file. My question is whether this has a performance overhead that would be greater than storing the languages in a database, e.g. MongoDB.
For example, if I have 10,000 translations (we will assume that in this particular application only one language file will be needed at a time, i.e. most users will be using the application in English and some may want to set the application to use a different language), then this equates to approximately 200 kB of data to download before the application can even start being used.
In a React application, a suggested design pattern is to load data using container components that then pass data to 'dumb' child components. So, would it not make sense to also load translations in the same manner, i.e. group the translations by usage or by component, so that the data is sent down the wire only when needed, say, from a call to MongoDB?
I would integrate it into your API. That means you can create, e.g., a REST or GraphQL API which handles this for you. In i18n, it is often reasonable to store the data in a hierarchy. This means you can split your translations into different categories (like pages) and request only those translations which you really need.
I really like the way it is done in the react-starter-kit. In this example, you can see how they handle it with a GraphQL API and only request those translations which are really required for rendering the page. Hope this helps.
Important files of the i18n implementation of the react-starter-kit:
GraphQL Query: https://github.com/kriasoft/react-starter-kit/blob/feature/react-intl/src/data/queries/intl.js
Example component implementation: https://github.com/kriasoft/react-starter-kit/blob/feature/react-intl/src/components/Header/Header.js
Of course, if you have this number of translations, I would use a database for better resource usage (in the react-starter-kit they use simple file storage, which is not really workable with so many translations). MongoDB would be my first choice there, but maybe that is just my own preference, based on flexibility and my own knowledge.
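As a rough sketch of the "only request what you need" idea, independent of GraphQL: a container could ask an API endpoint (backed by MongoDB or whatever store you choose) for a single locale/namespace pair. The endpoint path and response shape below are made up purely for illustration:

// Hypothetical endpoint that returns one namespace of one locale as flat JSON,
// e.g. GET /api/translations/de/checkout -> { "checkout.title": "Kasse", ... }
async function loadTranslations(locale, namespace) {
  const res = await fetch(`/api/translations/${locale}/${namespace}`);
  if (!res.ok) throw new Error(`Failed to load ${locale}/${namespace} translations`);
  return res.json();
}

// Usage from a container component, so only the messages needed for the
// current page travel over the wire:
// const messages = await loadTranslations('de', 'checkout');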
Obviously, you don't want each and every language to be loaded on the client. My understanding of the pattern you described is to use a container component to load the relevant language for the whole app on startup.
When a user switches language, your container will load the relevant language file from the server.
This should work just fine for a small/medium app, but it has a drawback: you'll need another request to the server, after the JS code has loaded, to load the i18n data.
Another way to solve this is to use code splitting (and possibly server-side rendering) techniques, which could allow this workflow:
- The server builds a small bundle containing a portion of the i18n data.
- The client loads the rest of your app code and the associated i18n data on demand, as the user navigates through your app.
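As a small sketch of the on-demand part, assuming per-locale JSON files and a bundler that supports dynamic import() (webpack, for instance, splits each locale into its own chunk); the locales/ path and the updateIntl helper are illustrative only:

// Assumes per-locale files such as src/locales/en.json and src/locales/fr.json.
// With dynamic import(), the bundler emits a separate chunk per locale, so the
// client only downloads the language it actually needs.
async function loadLocale(locale) {
  const messages = await import(`./locales/${locale}.json`);
  return messages.default;
}

// e.g. when the user switches language:
// loadLocale('fr').then((messages) => updateIntl(messages)); // updateIntl is app-specific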
If you have not done so yet, having a look at https://react.i18next.com/ might be good advice. It is based on i18next: learn once - translate everywhere.
Your code will look something like:
<div>{t('simpleContent')}</div>
<Trans i18nKey="userMessagesUnread" count={count}>
Hello <strong title={t('nameTitle')}>{{name}}</strong>, you have {{count}} unread message. <Link to="/msgs">Go to messages</Link>.
</Trans>
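For context, here is a minimal setup that makes t and Trans available; this sketch assumes a recent react-i18next version with the initReactI18next plugin and uses inline resources purely for illustration, not any project's exact configuration:

import i18n from 'i18next';
import { initReactI18next } from 'react-i18next';

i18n
  .use(initReactI18next) // wires i18next into React (t, Trans, useTranslation)
  .init({
    lng: 'en',
    fallbackLng: 'en',
    // Inline resources just for the sketch; in practice you would load these
    // from files, an i18next backend plugin, or your own API.
    resources: {
      en: {
        translation: {
          simpleContent: 'Some simple content',
          nameTitle: 'Your name',
          // Numbered tags are how the Trans component maps nested markup.
          userMessagesUnread:
            'Hello <1>{{name}}</1>, you have {{count}} unread message. <5>Go to messages</5>.',
        },
      },
    },
    interpolation: { escapeValue: false }, // React already escapes values
  });

export default i18n;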
Comes with samples for:
- webpack
- cra
- expo.js
- next.js
- storybook integration
- razzle
- dat
- ...
https://github.com/i18next/react-i18next/tree/master/example
Besides that, you should also consider the workflow during development and, later on, for your translators -> https://www.youtube.com/watch?v=9NOzJhgmyQE

Download SCORM from Moodle

I am a newbie to SCORM.
We need to crawl e-learning portals and index data found in SCORM 1.2 objects. Is there a way to download these SCORM objects from Moodle and subsequently read them?
If crawling is out of the question and we can acquire the SCORM objects some other way, is it possible to read the contents of the SCORM objects? E.g., we'd like to extract text from these objects.
Nope, SCORM content is e-learning content; it is not downloadable.
It was developed to be used on web pages only, so there's no way to extract the data.
Unless you pen it down on paper!