I have a more or less lossless recording of a modem transmission in an Ogg file. The main problem is that I don't know how to translate the data into some understandable form. Is there any modem tool capable of converting the data to some meaningful format?
If not, what's the best practice for brute-forcing the original pre-modulation content? I mean something like generating candidate data, modulating it, and comparing until the generated signal matches the recorded one.
Thank you
I would ask the author of minimodem: http://www.whence.com/minimodem/
He has implemented the early protocols in minimodem, but he might be aware of other tools for more recent protocols.
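If the capture does turn out to be one of the classic FSK protocols minimodem handles (Bell 103/202, RTTY, and so on), decoding may be as simple as pointing the tool at the file. A hypothetical invocation, assuming a 300 baud Bell 103 signal and made-up file names; the Ogg file may need converting to WAV first if your minimodem build can't read Vorbis:

# convert the capture to WAV, in case minimodem can't read Ogg directly
ffmpeg -i capture.ogg capture.wav
# attempt to decode as 300 baud; try 1200, rtty, etc. if this yields garbage
minimodem --rx --file capture.wav 300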
I'm looking for a way to represent chords in a MIDI file.
Note that I'm not looking to represent chord voicings. That can be trivially done with multiple note-on messages. But if I do that, then I have to do some sort of note-on to chord analysis every time I read the MIDI file back in, and that's a major nuisance especially since I already know the chord structures when I write the file.
Rather, I'm looking for something more akin to guitar tablature or fake books. That is, I want to record "C" or "Cm" or "I" or "iii7" at a particular point in time.
So my questions...
Is there a standard way to do this? (I'm not finding one, but I don't know the current spec thoroughly.)
Is there a non-standard way of doing this?
I'm considering using the "tag" facility of the lyric/display meta event. It appears as though I can invent {#chord=Cm} and that should be transparent to any reader, past, present, or future, who doesn't understand this usage. Am I reading the standard right? Would this be a reasonable, essentially private, non-standard extension?
The MIDI specification provides for values such as "note on" and "pitch value" (as seen here) which are only represented as integers.
Depending on the MIDI file type (there are three: 0, 1, and 2), you should be able to save the chord values similarly to the way you suggested. Karaoke files are created this way.
If you are using Windows, you could try something like Noteworthy Composer. The link also contains a suggestion for playback.
You are absolutely right: you can implement a custom meta event and place such events before the groups of NoteOn/NoteOff messages that represent a chord. I don't know what programming language you use, but for C# you can take a look at DryWetMIDI. It lets you create custom meta events and read and write them. This article of the library docs shows how to do this.
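If you end up in Java instead, the same trick can be sketched with the standard javax.sound.midi API: a lyric meta event (type 0x05) carries the {#chord=...} tag, and readers that don't know the convention will simply display or skip it. A minimal sketch (the file name and tick positions are made up):

import javax.sound.midi.*;
import java.io.File;
import java.nio.charset.StandardCharsets;

public class ChordTagDemo {
    public static void main(String[] args) throws Exception {
        Sequence sequence = new Sequence(Sequence.PPQ, 480);
        Track track = sequence.createTrack();

        // Meta event type 0x05 is "lyric"; the {#chord=Cm} tag convention
        // should be transparent to readers that don't understand it.
        byte[] tag = "{#chord=Cm}".getBytes(StandardCharsets.US_ASCII);
        MetaMessage chord = new MetaMessage(0x05, tag, tag.length);
        track.add(new MidiEvent(chord, 0)); // tick 0: placed just before the chord

        // The chord voicing itself, as ordinary note-on/note-off messages.
        int[] pitches = {60, 63, 67}; // C, Eb, G = Cm
        for (int p : pitches) {
            track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_ON, 0, p, 96), 0));
            track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_OFF, 0, p, 0), 480));
        }

        MidiSystem.write(sequence, 0, new File("chords.mid"));
    }
}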
I would like, for educational purposes, to read a PLC's symbol table using libnodave (or an equivalent open-source library such as Snap7).
Currently, when I read data from flag memory (Merker), I must know in advance what kind of variable will be present in the DB, partly because libnodave reads raw bytes in sequence.
I'm searching for a way to know in advance what data types the PLC programmer chose when storing the data, so that when I read the raw bytes I can easily monitor variables and adapt my reading and visualization routine.
Thanks in advance.
A program in an S7-3xx/4xx PLC has no symbolic addressing downloaded to it, so libnodave and Snap7 can't reference a symbol.
TIA and the S7-12xx/15xx PLCs are different: they do have symbols downloaded. But as far as I know, neither libnodave nor Snap7 can use these symbols yet.
One solution may be to export the symbol table from Step7/TIA to an Excel or .csv file and read the symbol, with its format and address information, from there.
(Libnodave does not support S7-12xx/15xx, use Snap7 instead.)
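To illustrate the CSV route, here is a minimal Java sketch that loads such an export and decodes a raw byte buffer according to the declared type. The name;address;type column layout and the symbol name are assumptions for illustration — check what your Step7/TIA export actually produces. (S7 data is big-endian, which happens to be ByteBuffer's default.)

import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

public class SymbolTableDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical export format: one "name;address;type" row per symbol.
        Map<String, String> typeBySymbol = new HashMap<>();
        try (BufferedReader r = new BufferedReader(new FileReader("symbols.csv"))) {
            String line;
            while ((line = r.readLine()) != null) {
                String[] cols = line.split(";");
                typeBySymbol.put(cols[0].trim(), cols[2].trim());
            }
        }

        // Raw bytes as returned by libnodave/Snap7 reads.
        byte[] raw = {0x42, 0x28, 0x00, 0x00}; // example: REAL 42.0
        ByteBuffer buf = ByteBuffer.wrap(raw);

        switch (typeBySymbol.getOrDefault("Motor_Speed", "REAL")) {
            case "INT":  System.out.println(buf.getShort()); break;
            case "DINT": System.out.println(buf.getInt());   break;
            case "REAL": System.out.println(buf.getFloat()); break;
            default:     System.out.println("unhandled type");
        }
    }
}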
I'm looking for a tool to convert an SBML model into a MATLAB function. I've tried the SBMLTranslate() function from libSBML, but this returns a MATLAB struct, not a function. Does anybody know if such a tool exists? Thanks
There are at least three efforts in this direction:
Frank Bergmann offers an online service for SBML translation where you can upload an SBML file and it will generate a MATLAB file. The comments at the top of the generated MATLAB file explain how to use the results. The C++ source code is available on SourceForge.
Bergmann's code referenced above was used by Stanley Gu to create sbml2matlab, a Windows standalone program. Off-hand, I don't know whether Gu's version changed or enhanced the algorithm used by the Bergmann version, but it seems likely. (Note: Gu now works at Google and does not maintain this code anymore, as far as I know.)
The Systems Biology Format Converter (SBFC) is a framework written principally by Nicolas Rodriguez; it includes a collection of converters, one of which is an SBML-to-MATLAB converter. This converter is written in Java.
I have not compared the results of the translators myself yet, so cannot speak to the differences or quality of output. If you try them and have any feedback to relate, please let the authors know. Knowing what has or hasn't worked for real users will help improve things in the future.
A final caveat is that all of these have been research projects, so make sure to set your expectations accordingly. (This is not a criticism of the authors; the authors are very good – I know most of them personally – but the reality of academic development work is that we all lack the time and resources to make these systems comprehensive, hardened, polished, and documented to the degree that we wish we could.)
I've managed to finally build and run pocketsphinx (pocketsphinx_continuous). The problem I'm running into is how to improve accuracy. From what I understand, you can specify a dictionary file (-dict test.dic). So I took the default dictionary file and added some more pronunciations of the same words, for example:
pencil P EH N S AH L
pencil(2) P EH N S IH L
spaghetti S P AH G EH T IY
spaghetti(2) S P UH G EH T IY
Yet pocketsphinx still does not recognize either word at all. I know there is a JSGF file you can specify as well, but that seems to be more for phrases and grammar. How can I get pocketsphinx to recognize common words such as "pencil" and "spaghetti"?
thanks
-Mike
With something like this, you can't be certain, but I can offer the following suggestions:
Perhaps the language model somehow has low probabilities for "spaghetti" and "pencil". As you suggested, you could use a JSGF to test out how it does for recognition if it doesn't use the N-gram models, but instead does a simple grammar (give it like twenty words, including spaghetti and pencil). This way you can see if it is perhaps the language model which makes it difficult to recognize these words, and it can do okay if it considers all the words to have equal probability.
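For reference, such a test grammar might look like this (the word list is invented; pass the file to pocketsphinx_continuous with the -jsgf option):

#JSGF V1.0;
grammar testwords;
public <word> = spaghetti | pencil | apple | table | window | coffee | paper | banana;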
Perhaps you simply pronounce these words poorly, even with the alternative dictionary entries. Try either (a) testing other people's voices, or (b) adapting the acoustic model to your voice (see http://cmusphinx.sourceforge.net/wiki/tutorialam).
Also, what is it recognizing them as when it fails? If possible, remove from the dictionary the words it misrecognizes them as.
Again, for overall accuracy, only three things are really going to help you: restricting the grammar, adapting the acoustic model, and perhaps getting higher-quality recording input.
To improve accuracy you may want to try adapting the acoustic model to your voice.
http://cmusphinx.sourceforge.net/wiki/tutorialadapt
To learn how to add new words: http://ghatage.com/tech/2012/12/13/Make-Pocketsphinx-recognize-new-words/
Make sure you put a tab (not a space) after the word and before the start of the pronunciation.
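For example, with → standing in for the literal tab character, entries would look like:

pencil→P EH N S AH L
spaghetti(2)→S P UH G EH T IY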
Maybe the problem is with pocketsphinx. I too was not getting good results with pocketsphinx, but I was getting very good accuracy with Sphinx4 (for a US speaker with a noise-cancelling microphone), so I did a comparison between the two using the same audio recordings. For pocketsphinx I used pocketsphinx_batch with the WSJ audio model and a small-vocabulary language model and dictionary (created online with the CMU-Cambridge language modelling toolkit). For Sphinx4 I wrote a small Java program using the Sphinx4 library. The result was that Sphinx4 was much more accurate. All the gory details are at http://www.jaivox.com/pocketsphinx.html.
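For comparison, a minimal Sphinx4 program of the kind described might look like the sketch below. This assumes the newer Sphinx4 API with the bundled en-us model on the classpath; older releases configure everything through XML instead, and the test file name is made up:

import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;
import java.io.FileInputStream;

public class Sphinx4Demo {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        config.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        config.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        config.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.bin");

        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(config);
        recognizer.startRecognition(new FileInputStream("test.wav")); // 16 kHz mono WAV
        SpeechResult result;
        while ((result = recognizer.getResult()) != null) {
            System.out.println(result.getHypothesis());
        }
        recognizer.stopRecognition();
    }
}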
To achieve good accuracy with pocketsphinx:
Important! Check that your mic, audio device, and files support 16 kHz, since the general model is trained on 16 kHz acoustic examples (see the resampling example after this list).
You should create your own limited dictionary; you cannot use cmusphinx-voxforge-de.dic, or accuracy drops dramatically.
You should create your own language model.
You can search for the Jasper project on GitLab to see how it's implemented.
Also, please check the documentation.
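On the first point: if your recordings are at 44.1 kHz, resample them before recognition. A hypothetical sox invocation (file names made up):

sox input.wav -r 16000 -c 1 -b 16 resampled.wav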
This is from the CMUSphinx website:
"There are various phonesets to represent phones, such as IPA or SAMPA. CMUSphinx does not yet require you to use any well-known phoneset, moreover, it prefers to use letter-only phone names without special symbols. This requirement simplifies some processing algorithms, for example, you can create files with phone names as part of the filenames without any violating of the OS filename requirements.
A dictionary should contain all the words you are interested in, otherwise the recognizer will not be able to recognize them. However, it is not sufficient to have the words in the dictionary. The recognizer looks for a word in both the dictionary and the language model. Without the language model, a word will not be recognized, even if it is present in the dictionary."
https://cmusphinx.github.io/wiki/tutorialdict/
Not Java-specific, but when I write OutputStream os = sock.getOutputStream();
is there a way to determine the stream's character encoding? Or do I have to know the encoding ahead of time to properly read it? This is for an arbitrary socket connection.
There are ways to detect text encoding; web browsers do it, for example.
Here is an implementation in Python (Universal Encoding Detector) that might give you some help.
Edit:
Here is one for java: http://jchardet.sourceforge.net/
Edit2:
Here is another SO question: How can I detect the encoding/codepage of a text file
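For what it's worth, jchardet's successor juniversalchardet (another Java port of Mozilla's detector) has a compact API. A sketch, assuming the juniversalchardet jar is on the classpath and a made-up input file:

import org.mozilla.universalchardet.UniversalDetector;
import java.io.FileInputStream;
import java.io.InputStream;

public class DetectCharset {
    public static void main(String[] args) throws Exception {
        UniversalDetector detector = new UniversalDetector(null);
        try (InputStream in = new FileInputStream("unknown.txt")) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0 && !detector.isDone()) {
                detector.handleData(buf, 0, n);
            }
        }
        detector.dataEnd();
        // null means the detector couldn't decide; detection is a guess, not a guarantee.
        System.out.println(detector.getDetectedCharset());
    }
}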
Streams do not have associated charsets. They just pass around arbitrary data. You have to know the data's charset ahead of time in order to interpret the data.
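In practice that means agreeing on a charset as part of your protocol and declaring it explicitly on both ends. A minimal sketch (host and port are made up):

import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ExplicitCharset {
    public static void main(String[] args) throws Exception {
        // Both ends must agree on the charset by convention;
        // the stream itself carries no such metadata.
        try (Socket sock = new Socket("example.com", 7000);
             Writer out = new OutputStreamWriter(sock.getOutputStream(), StandardCharsets.UTF_8)) {
            out.write("hello\n");
        }
    }
}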