If RDFlib is not available for MicroPython, is there any other library available for MicroPython to work with RDF?
No, it appears there is no RDF library for MicroPython.
There are links at the top of the Awesome MicroPython page to search all the usual places, and none came up with a result for MicroPython + RDF. :-(
Since RDF is a set of rules to follow and can be represented as JSON-LD, manually implementing RDF in JSON would be my suggestion as a path to explore. You would then use the json library built into MicroPython.
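To make that concrete, here is a minimal sketch (not an existing library) of how a couple of statements might be hand-rolled as a JSON-LD document and serialized with the built-in json module; the vocabulary URL, subject IRI, and values are all made up for the example.

```python
# Minimal hand-rolled JSON-LD sketch for MicroPython.
# The context URL, subject IRI, and property values below are
# illustrative only; substitute your own vocabulary.
import json  # built into MicroPython (ujson on some ports)

doc = {
    "@context": {"temperature": "http://example.org/vocab#temperature"},
    "@id": "http://example.org/device/42",
    "temperature": 21.5,
}

payload = json.dumps(doc)       # serialize the "graph" for storage or sending
restored = json.loads(payload)  # parse it back into a dict
print(restored["@id"], restored["temperature"])
```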
Since you mentioned Pycom, keep in mind that you're using their port of MicroPython, so you should look to their documentation and forums for additional help.
I am trying to reference the use of PyEphem in my code. Their website shows that the data sources are listed below "Documentation", but that section isn't there anymore. Does anyone know where they get their data from?
Thanks
There were several sources involved in creating PyEphem’s little star catalog, some providing the detailed positions and others supplying the popular names. They are detailed in the stars.py module’s docstring, which you can view on your own system by running pydoc ephem.stars, or view online by looking at the source code on GitHub:
https://github.com/brandon-rhodes/pyephem/blob/fc11ddf3f2748db2fa4c8b0d491e169ed3d09bfc/ephem/stars.py#L8
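For example, assuming PyEphem is installed, the same docstring can be printed from a Python session (the data sources are listed at the top of the module):

```python
# Print the stars module's docstring, which lists the catalog's data sources.
import ephem.stars

print(ephem.stars.__doc__)
```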
I also plan to add it to the documentation on the web, so that it’s more easily discoverable.
My team works with Scala.js and I want to use Material-UI, but the two code styles look very different. Most of the Material-UI examples seem to be written in plain JavaScript.
I've tried searching for information about this, but what's out there is very limited and I couldn't find anything useful. Is anyone using Material-UI with Scala.js?
When using most JS libraries from Scala.js, you need what is called a "facade" -- a strongly-typed Scala wrapper that describes how to use the weakly-typed JavaScript library.
There appear to be several Material UI facades, but most of them look a bit half-baked. I'd guess that the most mature is the one in Chandu's scalajs-react-components project -- in general, Chandu has done more with React in Scala.js than most folks. The general top-level page for the project can be found here.
I've recently started studying OpenEars speech recognition and it's great! But I also need to support speech recognition and dictation in other languages such as Russian, French, and German. I've found that various acoustic and language models are available here.
But I can't quite tell: is that all I need in order to integrate extra language support into my application?
The question is: what steps should I take to successfully integrate, for example, Russian into OpenEars?
As far as I understand, all the acoustic and language model files for English in the OpenEars demo are located in the hub4wsj_sc_8k folder. The same files can be found in the Voxforge language archives, so I just replaced them in the demo. One thing is different: the English demo also includes a 2 MB sendump file, which is not in the Voxforge language archives. There are two other files used in the OpenEars demo:
OpenEars1.languagemodel
OpenEars1.dic
These I replaced with:
msu_ru_nsh.lm.dmp
msu_ru_nsh.dic
since .dmp is similar to .languagemodel. But the application crashes without any error.
What am I doing wrong? Thank you.
From my comments, reposted as an answer:
[....] Step 1 for issues like this is to turn on OpenEarsLogging and verbosePocketsphinx, which will give you very fine-grained info on what is going wrong (search your console output for the words error and warning to save time). Instructions on doing this can be found in the docs. Feel free to bring questions to the OpenEars forums [....]: http://politepix.com/forums/openears You might also want to check out this thread: http://politepix.com/forums/topic/other-languages
The solution:
To follow up for later readers, after turning on logging we got this working by using the mixture_weights file as a substitute for sendump and by making sure that the phonetic dictionary used the phonemes that were present in the acoustic model rather than the English-language phonemes.
The full discussion in which we accomplished this troubleshooting can be read here: http://www.politepix.com/forums/topic/using-russian-acoustic-model/
UPDATE: Since OpenEars 1.5 was released this week, it is possible to pass the path to any acoustic model as an argument to the main listening method, and there is a much more standardized method for packaging and referencing any acoustic model so you can have many acoustic models in the same app. The info in this forum post supersedes the info in the discussion I linked to in this answer: http://www.politepix.com/forums/topic/creating-an-acoustic-model-bundle-for-openears-1-5-and-up/ I left the rest of the answer for historical reasons and because there may be details in that discussion that are still useful, but it can be skipped in favor of the new link.
Here's the deal. We've got a bunch of test questions that have been exported from another system ... and they aren't in a SCORM-compliant format.
Even if they were, we really need to get all of this data into a real learning content authoring tool.
The incumbent tool is Articulate, and as any search of the Articulate support site shows, there's no way to actually import a test question into Articulate.
Since we've got a lot of data that we'd prefer not to re-key, my question is: what's a good course authoring tool that can generate a SCORM 2004 assessment and has a good flat-file import function for its question data?
Googling isn't really getting me too far.
Thanks!
SCORM is used to create SCOs (shareable content objects, aka 'lessons' or 'courses') which may optionally contain questions, but SCORM isn't a quiz/assessment framework. Because it isn't an assessment framework, there is no importer for turning an XML file into a SCORM assessment.
If you can't get Articulate to work for you, then you'll probably need to roll your own SCORM SCO and build a quiz system for it (with the ability to import your custom XML files). Ideally, each quiz question would be set up as an interaction (using cmi.interactions) in SCORM.
You may want to look at some open-source SCORM SCO building tools, such as eXe and Reload, though I'm not sure how helpful they'll be for you.
Sorry I don't know of any easier solutions.
EDIT:
BTW there's a workaround for importing XML into Articulate: import the XML containing the questions into Quizmaker 2, then import your Quizmaker 2 presentation into Quizmaker '09. Not the easiest, but still easier than building your own SCO. See http://www.articulate.com/forums/articulate-quizmaker/3239-securing-quizmaker-xml-file.html
Disclaimer - I haven't worked with IMS-QTI personally, I just know of it.
You may want to take a look at IMS-QTI, and see if that format would work for you. IMS-QTI stands for IMS Question and Test Interoperability. There may be other formats, but IMS-QTI is the only one I'm aware of, and I'm sure there would be tools out there which support it.
That would change your search to finding a tool which supports IMS-QTI, and you may have better luck with that. :-)
There are some general examples of what kinds of questions it supports in the implementation guide.
Hope that helps!
I don't think Captivate or Articulate supports question import in any easy workflow. Honestly, your fastest route might be to author your own SCORM package format that imports questions from XML or JSON, and then write a converter to put your CSV content into XML or JSON. There are lots of SCORM API wrappers out there to use, and you'll have more control over any issues you find with LMS-versus-authorware interpretations of SCORM if you just build your own player.
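As a rough illustration of that converter idea (not a real tool), something like the following could read an exported flat file and emit JSON for a home-grown quiz player; the column names and output shape here are hypothetical and would need to match your actual export.

```python
# Hypothetical flat-file-to-JSON converter sketch; the CSV columns
# (prompt, choice_a..choice_d, answer) and the JSON shape are made up
# and would need to match the real export and your own quiz player.
import csv
import json

def csv_to_questions(csv_path, json_path):
    questions = []
    with open(csv_path, newline="") as src:
        for row in csv.DictReader(src):
            questions.append({
                "prompt": row["prompt"],
                "choices": [row["choice_a"], row["choice_b"],
                            row["choice_c"], row["choice_d"]],
                "answer": row["answer"],  # e.g. "choice_b"
            })
    with open(json_path, "w") as dst:
        json.dump({"questions": questions}, dst, indent=2)

csv_to_questions("exported_questions.csv", "questions.json")
```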
This feature is now available in Claro:
http://feedback.dominknow.com/knowledgebase/articles/312552-how-do-i-import-test-questions-from-excel
Some modules on CPAN are excellently documented, others... not so much, but it's usually easy to discern how to use a module via prior art (e.g. modules or tests that use the module you're looking at). I'm wondering what the best way is to find code that uses the code you're looking to use.
Example
I want to use (maybe?) Dist::Zilla::App::Tester for something, but the author has elected not to write any documentation on how to use it, so I'm wondering what the path of least resistance is to find code that already uses it.
Please don't answer for this module.
Give a man a fish; you have fed him for today. Teach a man to fish, and you have fed him for a lifetime.
Try Google Code Search, searching for strings like "use Dist::Zilla::App::Tester" (the quotes are important).
Use CPANTS - The CPAN Testing Service web site.
Search for the distribution
Click Other dists requiring this
Here is the page for Dist-Zilla
As an aside, you can always read the source by hitting the Source button at the top of the page on search.cpan.org. In this case, the package doesn't have much code to begin with. Also, many big modules these days have ::Cookbooks, ::Manuals, or ::Tutorials; Dist-Zilla has one too.
My guess is ::Tester just supplies the dzil test command through its test_dzil sub.
One option is to use Google Code Search (Google for that phrase for a link :) ), combined with plain googling. Search for the string "use my::module::name".
If the module name is not something well-searchable (e.g. too many hits), maybe combine with "
For searches over CPAN, I suggest CPAN Grep over Google Code Search.
For more complex searches, I'd write a very small program using CPAN::Visitor and a minicpan.
For quick dependency checking, I'd use the not-perfect-but-very-good CPANDB.