Zephyr porting to an SoC based on an already supported CPU - porting

Is there a video or PDF tutorial that gives a step-by-step guide on how to port Zephyr to a new SoC? I know there is a page on the Zephyr website, https://docs.zephyrproject.org/latest/hardware/porting/arch.html#architecture-porting-guide,
but it does not give me a detailed view of which files to create, where to put them, what their content should be, and so on.
Any reference to such a guide would be very nice. Thanks a lot in advance.

Video or PDF - probably not. The Zephyr sources speak for themselves.
According to your tags you have a Cortex-M based SoC, so the Architecture Porting Guide is not what you need.
First of all, you should understand what Devicetree is. There is a whole folder of devicetree sources for ARM-based SoCs; take several of them as references.
Next step: are there drivers for the peripherals of your SoC? If not, you will have to write them. For reference, take a look at the drivers folder. While developing a driver you should think in terms of the devicetree and use the DT-related macros. If you already have a HAL, you just need to write glue code.
You then "connect" the driver code and the SoC's devicetree by writing bindings.
The SoC needs a linker script and possibly SoC-specific initialization code. For references, look at the soc folder.
Now you have Zephyr running on your SoC!
In the end you get to the board level: write a dts for a board featuring your SoC. There are examples in the boards folder.
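As a hedged sketch of what such a binding might look like (the compatible string, filename, and property names here are hypothetical; real bindings live under dts/bindings in the Zephyr tree and follow the same layout):

```yaml
# dts/bindings/serial/myvendor,my-uart.yaml -- hypothetical example
description: MyVendor UART controller

compatible: "myvendor,my-uart"

include: uart-controller.yaml

properties:
  reg:
    required: true
  interrupts:
    required: true
```

The `compatible` string is what ties a devicetree node to this binding, and through the DT macros, to the driver code.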
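To sketch what the board-level dts might look like (the include path, node labels, and compatible strings below are hypothetical; take real ones from your SoC's .dtsi and the existing boards):

```dts
/* my_board.dts -- hypothetical board example */
/dts-v1/;
#include <myvendor/my_soc.dtsi>   /* hypothetical SoC devicetree include */

/ {
	model = "My Custom Board";
	compatible = "myvendor,my-board";

	chosen {
		zephyr,console = &uart0;
		zephyr,sram = &sram0;
		zephyr,flash = &flash0;
	};
};

&uart0 {
	status = "okay";
	current-speed = <115200>;
};
```

The `chosen` node tells Zephyr which devices to use for console, RAM and flash, and `status = "okay"` enables a peripheral that the SoC .dtsi typically declares as disabled.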

Related

Data Source for PyEphem?

I am trying to reference the data used by PyEphem in my code. Their website shows that data sources are listed below "Documentation", but that section isn't there anymore. Does anyone know where they take their data from?
Thanks
There were several sources involved in creating PyEphem's little star catalog, some providing the detailed positions and others supplying the popular names. They are detailed in the stars.py module's docstring, which you can view on your own system by running pydoc ephem.stars, or view online by looking at the source code on GitHub:
https://github.com/brandon-rhodes/pyephem/blob/fc11ddf3f2748db2fa4c8b0d491e169ed3d09bfc/ephem/stars.py#L8
I will also plan to add it to the documentation on the web, so that it’s more easily discoverable.

I do not see a Speech Input Source script in my mrtk (mixed reality toolkit) in the project search bar in unity

I am trying to implement voice commands into my unity project to eventually be used in the HoloLens. At the moment, I am simply trying to make a cube change colors using the speech input handler script and speech input source script. I have the handler script but I can't find the source script anywhere. How do I obtain the source script? Why do I not have it? I am using Unity 2018.4.12f1 and I am using the Mixed Reality Toolkit. If you need additional info to help me please ask!
In versions after MRTK v2, SpeechInputSource is no longer needed. Instead, a keyword service (e.g., Windows Speech Input Manager) must be added to the input system's data providers. Please check out the SpeechInputExample scene to see how to use speech input.
The guide you are reading may be outdated; please read the official documentation to learn how to use the Speech function in the latest version of MRTK.

Add a new language to OpenEars

I've recently started studying OpenEars speech recognition and it's great! But I also need to support speech recognition and dictation in other languages such as Russian, French and German. I've found that various acoustic and language models are available here.
But I cannot really understand: is that enough for what I need to integrate extra language support into an application?
The question is: what steps should I take to successfully integrate, for example, Russian into OpenEars?
As far as I understood, all acoustic and language models for the English language in the OpenEars demo are located in the folder hub4wsj_sc_8k. The same files can be found in the voxforge language archives, so I just replaced them in the demo. One thing is different: the demo's English language also had a sendump file, 2 MB large, which is not present in the voxforge language archives. There are two other files used in the OpenEars demo:
OpenEars1.languagemodel
OpenEars1.dic
These I replaced with:
msu_ru_nsh.lm.dmp
msu_ru_nsh.dic
as .dmp is similar to .languagemodel. But the application crashes without any error.
What am I doing wrong? Thank you.
From my comments, reposted as an answer:
[....] Step 1 for issues like this is to turn on OpenEarsLogging and verbosePocketsphinx, which will give you very fine-grained info on what is going wrong (search your console output for the words error and warning to save time). Instructions on doing this can be found in the docs. Feel free to bring questions to the OpenEars forums [....]: http://politepix.com/forums/openears You might also want to check out this thread: http://politepix.com/forums/topic/other-languages
The solution:
To follow up for later readers, after turning on logging we got this working by using the mixture_weights file as a substitute for sendump and by making sure that the phonetic dictionary used the phonemes that were present in the acoustic model rather than the English-language phonemes.
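For illustration, each line of a phonetic dictionary maps one word to the phoneme symbols the acoustic model knows. Using English CMU-style entries as an example (a Russian model such as msu_ru_nsh defines its own phoneme inventory, which its .dic entries must match):

```
HELLO HH AH L OW
WORLD W ER L D
```

If a .dic file uses phonemes that the acoustic model does not define, recognition fails or the engine crashes, which is exactly the mismatch described above.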
The full discussion in which we accomplished this troubleshooting can be read here: http://www.politepix.com/forums/topic/using-russian-acoustic-model/
UPDATE: Since OpenEars 1.5 was released this week, it is possible to pass the path to any acoustic model as an argument to the main listening method, and there is a much more standardized method for packaging and referencing any acoustic model so you can have many acoustic models in the same app. The info in this forum post supersedes the info in the discussion I linked to in this answer: http://www.politepix.com/forums/topic/creating-an-acoustic-model-bundle-for-openears-1-5-and-up/ I left the rest of the answer for historical reasons and because there may be details in that discussion that are still useful, but it can be skipped in favor of the new link.

Parse data from image? [duplicate]

I am doing an app in which I require a business card reader. I googled a lot, but ABBYY is the only solution I was able to find. Can anybody help me out with an open-source library which can be tweaked or used directly as a business card reader?
Please enlighten me on this.
You can look into the Tesseract open source engine; it's pretty good for image processing. It will extract the text out of the image, but then you will have to process that text to extract the name, phone numbers and other details.
This post explains how to use it in iOS: http://tinsuke.wordpress.com/2011/11/01/how-to-compile-and-use-tesseract-3-01-on-ios-sdk-5/
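To illustrate the post-processing step that comes after OCR, here is a minimal sketch in Python (the regexes and field names are my own heuristics, not part of Tesseract; real business cards need more robust rules):

```python
import re

# Tesseract only returns raw text, so structured fields such as phone
# numbers and email addresses have to be pulled out with heuristics.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def extract_fields(ocr_text: str) -> dict:
    """Extract phone numbers and email addresses from OCR'd card text."""
    return {
        "phones": [m.group().strip() for m in PHONE_RE.finditer(ocr_text)],
        "emails": EMAIL_RE.findall(ocr_text),
    }

# Hypothetical OCR output for one business card:
card = """John Doe
Acme Corp
+1 (555) 123-4567
john.doe@acme.com"""

print(extract_fields(card))
# → {'phones': ['+1 (555) 123-4567'], 'emails': ['john.doe@acme.com']}
```

Names and company lines are harder, since they have no fixed pattern; that is where heuristics (line position, capitalization, known-company lists) come in.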
We started an open source project to build a JavaScript library (based on the tesseract.js OCR engine for the OCR part) that extracts the relevant data from a business card based on heuristic criteria.
The library (BCR Library, available on GitHub) is usable in any HTML project (including mobile Cordova, PhoneGap or Ionic projects) just by including it via a script tag.
The library doesn't make any external API calls and works fully offline.
I think you should give the Covve Business Card Scan API a try. The quality of the result is great in various languages. You can check a comparison analysis of similar services here.
[Disclosure] I'm part of the team developing the service.
