I'm looking for a way to represent chords in a MIDI file.
Note that I'm not looking to represent chord voicings. That can be trivially done with multiple note-on messages. But if I do that, then I have to do some sort of note-on-to-chord analysis every time I read the MIDI file back in, and that's a major nuisance, especially since I already know the chord structures when I write the file.
Rather, I'm looking for something more akin to guitar tablature or fake books. That is, I want to record "C" or "Cm" or "I" or "iii7" at a particular point in time.
So my questions...
Is there a standard way to do this? (I'm not finding one, but I don't know the current spec thoroughly.)
Is there a non-standard way of doing this?
I'm considering using the "tag" facility of the lyric/display meta event. It appears as though I can invent {#chord=Cm} and that should be transparent to any reader, past, present, or future, who doesn't understand this usage. Am I reading the standard right? Would this be a reasonable, essentially private, non-standard extension?
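To make the idea concrete, here is a rough sketch of what I have in mind, using the Python mido library (just one convenient way to write meta events; the {#chord=...} tag is my own invented convention, not anything from the MIDI spec):

    import mido

    mid = mido.MidiFile(type=1)
    track = mido.MidiTrack()
    mid.tracks.append(track)

    # Invented, private tag carried in a lyric meta event; readers that don't
    # understand it should simply display or ignore the text.
    track.append(mido.MetaMessage('lyrics', text='{#chord=Cm}', time=0))

    # The voicing itself is still written as ordinary note-on/note-off messages.
    for note in (60, 63, 67):                     # C, Eb, G
        track.append(mido.Message('note_on', note=note, velocity=64, time=0))
    track.append(mido.Message('note_off', note=60, velocity=64, time=480))
    for note in (63, 67):
        track.append(mido.Message('note_off', note=note, velocity=64, time=0))

    mid.save('chords.mid')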
The MIDI specification provides for values such as "note on" and "pitch value" (as seen here), which are represented only as integers.
Depending on the MIDI file type (there are three), you should be able to save the chord values similarly to the way you suggested. Karaoke files are created this way.
If you are using Windows, you could try something like Noteworthy Composer. The link also contains a suggestion for playback.
You are absolutely right: you can implement a custom meta event and place such events before the groups of NoteOn/NoteOff messages that represent a chord. I don't know what programming language you use, but for C# you can take a look at DryWetMIDI. It allows you to create custom meta events and to read and write them. This article in the library docs shows how to do this.
The title pretty much says it. What I really want is a layer mode that takes the alpha channel of the one below it and in all other respects behaves the same. The general question seems worth asking.
I'm skimming the docs, and it seems like layer modes are a fixed enum, but I want to be sure there isn't something I'm overlooking. I'll also take any alternative suggestions.
Thanks.
No, it is not possible to add new layer modes, short of adding your own modes to the GIMP source code.
However, layer modes are a bit more generic now, since they can be written as a GEGL operation. I'd have to check the source, but all that is needed is probably to write the proper GEGL operation (which is easy to derive from the other layer modes) and add the new operation to the enums. The big drawback of this approach, as compared to plug-ins, is that you can't share the layer mode with other GIMP users, and even worse: the XCF files you create with your custom mode will only be "readable" in your modified copy of GIMP.
A workaround is to write a plug-in that creates a new layer from two underlying layers, combining them as you like. You'd have to invoke it manually each time you updated either layer. You'd have to use Python-fu instead of Script-fu, as the latter does not give you access to pixel values.
For the simple case you describe, though, it seems like a sequence of "alpha-to-selection", "selection-to-channel", "copy", "add-layer-mask", "paste" can do what you want without needing to copy pixels around in a high-level language.
I've read quite a few articles giving a bit of background information on how Facebook implemented their Graph Search, all of which seem to gloss over the actual implementation details of the parser they are using.
Such as https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-graph-search-beta/10151240856103920
From that page:
We combined various parsing techniques to build a substring parser: suppose a user inputs, say, "friends New York" and that we have defined a comprehensive set of all the potential page titles our system can handle. Our parser could then generate exactly the Graph Search titles that contain the user's input, including things like "friends who live in New York" and "friends who have visited New York." If we could find a way to appropriately rank those suggested titles for the Graph Search typeahead, we would have a good start.
I'm really interested in learning about the methods one would use to tackle this problem. What algorithms/techniques would be used to write such a system?
Any links would be much appreciated too.
I was thinking about implementing something similar, wanted to ask the question here on SO, and found that it had already been asked.
Here is what I have been thinking of as a starting point:
1. Assume the Facebook search engine "knows" about the underlying data store (a complex graph), so that it understands keywords like "friends", "relatives", and other such relationships and does not treat them as trivial English words.
2. In that case, a good idea could be to parse the user input (using client-side JavaScript) into JSON and send that to the search engine. A couple of benefits: the parsing can be done on the client side, network bandwidth is saved by not sending unwanted data, and server-side handling of the parsed input as JSON is way better, etc.
3. Let's call this JSON fbJSON, because apart from being JSON it adheres to a certain format. You can create a spec for your format, so that the JSON sent to the search engine necessarily contains certain information; this can make life a bit easier, just as geoJSON does (a sketch of what such a payload might look like follows below).
4. Use an NLP program to parse the user input into fbJSON [I still have to think about this].
This is the broad approach I am embarking upon; the only bottleneck is point #4, because I do not have much experience with NLP.
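Purely to illustrate the idea, here is a sketch of what an fbJSON payload for "friends who live in New York" might look like; every field name here is invented for the example, since fbJSON is only a format we'd be defining ourselves:

    import json

    fb_query = {
        "version": "0.1",
        "raw_input": "friends New York",
        "subject": {"entity": "user", "relationship": "friend"},
        "filters": [
            {"predicate": "lives_in",
             "object": {"type": "place", "name": "New York"}}
        ]
    }

    # This is what the client-side parser would send to the search engine.
    print(json.dumps(fb_query, indent=2))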
I'm struggling with the size of output files for large Modelica models. Of course, I can protect some objects in order to remove them completely from the result file. However, that gives rise to two problems:
it's not possible to redeclare protected objects
if I want to test my model in detail (e.g. for a short time period), I need to declare those objects publicly again in order to see their variables
I wonder if there's a trick to set the 'verbosity' of a Modelica model. Maybe what I would like is a third keyword next to public and protected, e.g. transparent. Then, when setting up a simulation, I would want to be able to set the verbosity level to 1 or 2, with the following effect:
1 --> consider all transparent elements as protected
2 --> consider all transparent elements as public
This effect would propagate to all models and submodels.
I don't think this already exists. But is there an easy workaround?
Thanks,
Roel
As Michael Tiller wrote above, this is not handled the same way in all Modelica tools and there is no definitive answer. To give an OpenModelica-specific answer: it's possible to use simulate(ModelName, outputFilter="regex") to store only the variables that fully match the given regex (the default is .*, matching any variable).
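If you drive OpenModelica from a script, the same call can be issued through OMPython. A rough sketch (the model name and filter pattern are placeholders, and the exact name of the filter argument may differ between OpenModelica versions, so check the scripting documentation for your release):

    from OMPython import OMCSessionZMQ

    omc = OMCSessionZMQ()
    omc.sendExpression('loadFile("MyModel.mo")')
    # Store only variables whose full names match the regex (here: anything
    # containing "speed") instead of every variable in the model.
    result = omc.sendExpression('simulate(MyModel, stopTime=10.0, outputFilter=".*speed.*")')
    print(result)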
Roel,
I know several people wrestling with this issue. At the moment, all of this depends on the tool being used. I don't know how other tools handle filtering of results, but in Dymola you control it (as you point out) by giving the signals special qualifiers (e.g. protected).
One thing I've done in the past is to extend from a model and then add a bunch of output signals for things I'm interested in. Then you can select "Outputs" in Dymola to make sure those get in the results file. This is far from perfect because a) listing everything you want can get tedious and b) referencing protected variables is not strictly allowed (although Dymola lets you get away with it but issues a warning).
At Dassault, we are actively discussing this idea and hope to provide some better functionality along these lines. It isn't clear whether such functionality will be strictly tool specific or whether it will involve the language somehow. But if it is language related, we will (of course) work with the design group to formulate a specification that other tool vendors can support as well.
In SystemModeler, you go to the Settings tab in the Experiment Browser in Simulation Center. Click on Output at the bottom and select which variables to store.
(The options are state variables, derivatives, algebraic variables, parameters, and protected variables. If you mark the "Store simulation log" option, you'll also get some interesting statistics on events over time and function evaluations, opening another possibility to track down the parts of the simulation and model that create more evaluations.)
I am not sure if this helps you, but in Dymola you can go to Simulation->Setup->Output and mark a checkbox saying "Store Protected variables". That way it is possible to declare most variables as protected: during normal simulation they are not stored, but when debugging your model, you just mark that checkbox and they are stored.
Of course that is not the same as your suggested keyword transparent, but maybe it helps a little...
A bit late, but in Dymola 2013 FD01 and later you can select which variables to store based on names (and model names) using the annotation __Dymola_selections, and even filter on user-defined tags - so you could create a tag name "transparent" in the model. See "Matching and variable selections" in the manual.
I've managed to finally build and run pocketsphinx (pocketsphinx_continuous). The problem I'm running into is how to improve accuracy. From what I understand, you can specify a dictionary file (-dict test.dic). So I took the default dictionary file and added some more pronunciations of the same words, for example:
pencil P EH N S AH L
pencil(2) P EH N S IH L
spaghetti S P AH G EH T IY
spaghetti(2) S P UH G EH T IY
Yet pocketsphinx still does not recognize either word at all. I know there is a JSGF file you can specify as well, but that seems more for phrases and grammar. How can I get pocketsphinx to recognize common words such as pencil and spaghetti?
thanks
-Mike
With something like this, you can't be certain, but I can offer the following suggestions:
Perhaps the language model somehow has low probabilities for "spaghetti" and "pencil". As you suggested, you could use a JSGF grammar to test how recognition does when it doesn't use the N-gram model but instead a simple grammar (give it twenty words or so, including spaghetti and pencil; a sketch of such a test follows this answer). That way you can see whether it is the language model that makes these words hard to recognize, and whether it does okay when all the words are considered equally probable.
Perhaps you simply pronounce these words poorly, even with the alternative dictionary entries. Try either A. testing other people's voices, or B. adapting the acoustic model to your voice (see http://cmusphinx.sourceforge.net/wiki/tutorialam).
Also, what is it recognizing them as when it fails? If possible, remove the words it misrecognizes them as from the dictionary.
Again, for overall accuracy, only three things are really going to help you: restricting the grammar, adapting the acoustic model, and perhaps getting higher-quality recording input.
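To make the grammar-based test from the first suggestion concrete, here is a sketch using the pocketsphinx Python bindings (the paths, the word list, and the binding import path are assumptions and may vary with your pocketsphinx version; pocketsphinx_continuous with -jsgf on the command line works the same way):

    from pocketsphinx.pocketsphinx import Decoder

    # A tiny grammar: a handful of isolated words, all equally likely.
    with open('words.gram', 'w') as f:
        f.write('#JSGF V1.0;\n'
                'grammar words;\n'
                'public <word> = pencil | spaghetti | table | window | coffee ;\n')

    config = Decoder.default_config()
    config.set_string('-hmm', '/path/to/en-us')   # acoustic model directory
    config.set_string('-dict', 'test.dic')        # dictionary with the added pronunciations
    config.set_string('-jsgf', 'words.gram')      # grammar instead of an N-gram language model
    decoder = Decoder(config)

    # Decode a 16 kHz, 16-bit mono raw recording of one of the words.
    decoder.start_utt()
    with open('pencil.raw', 'rb') as f:
        while True:
            buf = f.read(1024)
            if not buf:
                break
            decoder.process_raw(buf, False, False)
    decoder.end_utt()
    print(decoder.hyp().hypstr if decoder.hyp() else '(no hypothesis)')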
To improve accuracy you may want to try adapting the acoustic model to your voice.
http://cmusphinx.sourceforge.net/wiki/tutorialadapt
To learn how to add new words: http://ghatage.com/tech/2012/12/13/Make-Pocketsphinx-recognize-new-words/
Make sure you put a tab (not a space) after the word and before the start of the pronunciation.
Maybe the problem is with Pocketsphinx. I too was not getting good results with Pocketsphinx, but I was getting very good accuracy with Sphinx4 (for a US speaker with a noise-cancelling microphone). Therefore I did a comparison between the two using the same audio recordings. For pocketsphinx I used pocketsphinx_batch with the WSJ audio model and a small-vocabulary language model and dictionary (created online with the CMU-Cambridge language modelling toolkit). For Sphinx4 I wrote a small Java program using the Sphinx4 library. The result was that Sphinx4 was much more accurate. All the gory details are at http://www.jaivox.com/pocketsphinx.html.
To achieve good accuracy with pocketsphinx:
Important! Check that your mic, audio device, and file support 16 kHz, since the general model is trained with 16 kHz acoustic examples.
You should create your own limited dictionary; you cannot use cmusphinx-voxforge-de.dic, or accuracy will drop dramatically.
You should create your own language model (a minimal decoder setup using a custom dictionary and language model is sketched after this list).
You can search for Jasper project on GitLab to see how it's implemented.
Also, please check the documentation
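As a sketch of the dictionary and language model points above, using the Python bindings (all paths are placeholders, and the dictionary and language model themselves still have to be built, e.g. with the CMU lmtool):

    from pocketsphinx.pocketsphinx import Decoder

    config = Decoder.default_config()
    config.set_string('-hmm', '/path/to/en-us')   # 16 kHz acoustic model
    config.set_string('-lm', 'limited.lm')        # language model built from your own corpus
    config.set_string('-dict', 'limited.dic')     # dictionary with only the words you need
    config.set_float('-samprate', 16000)
    decoder = Decoder(config)

    # Both files must agree: every word in the language model needs an entry
    # in the dictionary, otherwise the recognizer cannot hypothesize it.
    decoder.start_utt()
    with open('utterance.raw', 'rb') as f:
        decoder.process_raw(f.read(), False, True)   # full utterance at once
    decoder.end_utt()
    if decoder.hyp() is not None:
        print(decoder.hyp().hypstr)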
This is on the CMUSphinx website
"There are various phonesets to represent phones, such as IPA or SAMPA. CMUSphinx does not yet require you to use any well-known phoneset, moreover, it prefers to use letter-only phone names without special symbols. This requirement simplifies some processing algorithms, for example, you can create files with phone names as part of the filenames without any violating of the OS filename requirements.
A dictionary should contain all the words you are interested in, otherwise the recognizer will not be able to recognize them. However, it is not sufficient to have the words in the dictionary. The recognizer looks for a word in both the dictionary and the language model. Without the language model, a word will not be recognized, even if it is present in the dictionary."
https://cmusphinx.github.io/wiki/tutorialdict/
Is there a built-in way to get at POST/GET parameters in Racket? extract-binding and friends do what I want, but there's a dire note attached about potential security risks related to file uploads, which concludes:
Therefore, we recommend against their use, but they are provided for compatibility with old code.
The best I can figure is (and forgive me in advance)
(bytes->string/utf-8
 (binding:form-value
  (bindings-assq (string->bytes/utf-8 "[field_name_here]")
                 (request-bindings/raw req))))
but that seems unnecessarily complicated (and it seems like it would suffer from some of the same bugs documented in the Bindings section).
Is there a more-or-less standard, non-buggy way to get the value of a POST/GET-variable, given a field name and request? Or better yet, a way of getting back a collection of the POST/GET values as a list/hash/a-list? Barring either of those, is there a function that would do the same, but only for POST variables, ignoring GETs?
extract-binding is bad because it is case-insensitive, is very messy for inputs that return multiple times, doesn't have a way of dealing with file uploads, and automatically assumes everything is UTF-8, which isn't necessarily true. If you can accept those problems, feel free to use it.
The snippet you wrote works when the data is UTF-8 and when there is only one field returned. You can define it as a function and avoid writing it many times.
In general, I recommend using formlets to deal with forms and their values.
Now your questions...
"Is there a more-or-less standard, non-buggy way to get the value of a POST/GET-variable, given a field name and request?"
What you have is the standard thing, although you wrongly assume that there is only one value. When there are multiple, you'll want to filter the bindings on the field name. Similarly, you don't need to turn the value into a string, you can leave it as bytes just fine.
"Or better yet, a way of getting back a collection of the POST/GET values as a list/hash/a-list?"
That's what request-bindings/raw does. It is a list of binding? objects. It doesn't make sense to turn it into a hash due to multiple value returns.
"Barring either of those, is there a function that would do the same, but only for POST variables, ignoring GETs?"
The Web server hides the difference between POSTs and GETs from you. You can inspect the URI and the raw POST data to recover them, but you'd have to parse them yourself. I don't recommend it.
Jay