How can I automate many inlet/outlet relations? - max-msp-jitter

I'm trying to program a buffer-to-list conversion in a Max/MSP patch.
My current solution uses manual connections, but I really need to automate the process for many more patch cords.
I need a way to connect a [gate N] to a [buddy N] and then to a [join N].

Have a look at the scripting possibilities of the [thispatcher] object. That should definitely do it.
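For example, [thispatcher] understands script messages along these lines (a rough sketch only; the varnames gate1/buddy1/join1 are names you would assign yourself, and the exact argument order is worth checking against the [thispatcher] reference):
script newobject newobj @text "gate 2" @varname gate1 @patching_position 30 90
script newobject newobj @text "buddy 2" @varname buddy1 @patching_position 30 130
script connect gate1 0 buddy1 0
script connect buddy1 0 join1 0
Messages like these can be generated in a loop (e.g. with [uzi] and [sprintf]) and sent to [thispatcher], which is what makes the approach scale to many patch cords.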
I'm trying to program a buffer to list
If you just need a 'buffer to list' kind of thing, have a look at the patch below. Not sure this is a good idea, though.
<pre><code>
----------begin_max5_patcher----------
990.3oc2XtrbhiCEFdM7TnxKmhgVWrElYUO6m2fo5pKYPPTGegxVNIMc08y9
HcrMPR.4KwP5LavXgP9nO8et4eLchWT1SxBOzeg9WzjI+X5jIvP1AlTe+DuD
wSqhEEvz7RkOlE8MuYU+jV9jFFVihPQMitSnWcmJc6WykqzUKNkQmimgH9g1
KLe6mT7bL5K0+G0ZXcLq8eR4MKzlrTcpHQB+zemqDwM+RZYhJMVpAihbbvrR
cynzSVjB0dXQHVqnZ3pYp+9NYkE5EIR25Mq9J5K1Y8yoSseLqirYUVRhLU+J
3jHtWh9GUg1EfBV.LIjXuvoNATvn.Hra.sIeaDLs43AvhKnSTN0HL6t1mwAM
BcdfCDv5IBnmEAjdpQTlS2AIMRJi0phX0ZYtKBvgS+fkvExBvYIbw4Q.YwwU
J2..sL+qxTQTr7zC2lcEE2eOmWs8stGCb+KKJDakuROrOdUrT3hI9bLvDJ3R
3G.jI7BHg+tnJFFRtfGx9Xz17rxcnOuO1Le6yG4iWxcJapPTUTCR.wUvCh+n
vn9Fcc3RmKvox8cIVRvR3B2IQFmXIr2R9lYugXKW.PeBQvX7bGPhQf7vrPP9
3uvIjnuKtVahyDiKV9CDYdGDNfQ0RRHB4+ILo8p2.nvH71qdif+fV81kpXIc
S1uPZo6p2pAjODtgPc5JsbT.zxAoZLDJFJD8jA5xWdSAnhJ05rz1wWcwuTrK
5E1CFQbpVFSUxNo791UI04oIvtk6LoTeqjg8aZnEyBttx2r0dCIfyC24o+3z
4CYHAWFDUdT7fbSVdxubVgaXU49Pssz.PWv8OO.NT8VT4lMx7CH3To2yHPvY
I.u0vDN9RSDjwsUfMp3XSt4YHwtcweGcuPU3rYo5pWHg.uHXtqNCXe7aLn5.
uJHC5yEhjcEF2j1ADsJqMgE5Ls86RV6CRpW6gAFoQpk9x2OErSsi+bDVjUlu
pYcadMRni600FroREZkISzwIE9r4bmZ8ZY5osOuVUX6nF3z4eMH8xbHsXNrV
LmD05cYl7vEMuzHts9TxB3BixNb2wmynsAB5.NeAyup7j0A6gvtY1isYf14y
Mzd76BeVb6rGbGz+Ddec.V.Je+pNX4KOb2UvAXYW.J41ATZWsGR2AZcLDe7o
0h.2cE.JHIZaGr71Aztjv3EtUsBzZMYP8Kk6j6tF.k0EeL+9mjw10R0qdsYG
3ecjDixQvHCTZK1ysKmm0Wf2Fcn2LyIrKGV3al4zoBBFn0TUMpoyjGj4E0KI
XHlR2+VV9gVpLNOoU2BEH6kKeP0L+PXDQtoNcsoH8x7pZreJj6M09b94z+CC
9ZL7
-----------end_max5_patcher-----------
</code></pre>

PATCH endpoint naming and realisation

We have a REST resource
/tasks/{task-type}
with only GET methods available:
GET /tasks/{task-type}
GET /tasks/{task-type}/{id}
The Task entity contains meta info such as created, finished, status, ref key, and try count for scheduled tasks.
Now we are facing a problem: a task may contain incorrect data, so its execution always fails.
Because the scheduler invokes tasks every 5 minutes, there are a lot of errors in the logs, and the largest try counts are around 500k. The solution I found is to limit try_count to five (for example). Now we need a way to manually reset the try count to zero. So I found two solutions:
1.
PATCH /tasks/{task-type}/{id}/discard-try-count - no response body
This solution looks pretty simple, but it violates the REST convention because we use an action (verb) in the naming. And if we need to change other fields, we will end up with a lot of endpoints in this style.
2a.
PATCH /tasks/{task-type}/{id}
body:
{
"tryCounts": int
}
This looks like what REST wants to see, and we can easily add new fields to modify, but now the client can set any value for tryCounts.
2b.
PATCH /tasks/{task-type}/{id}
body:
{
"tryCounts": int // validate that try count can be only zero
}
Differs from the previous one by the presence of validation.
This looks like the most reliable solution. Is it really the best fit?
The non-verb convention is not a standard; you can violate it if you want to. It can also be worked around with something very simple: just convert the verb into a noun and you will be OK, something like:
POST /tasks/{task-type}/{id}/try-count-discarding
Another way is setting the try count to zero:
PUT /tasks/{task-type}/{id}/try-count 0
Yet another solution is combining the two, which I like the most:
PATCH /tasks/{task-type}/{id}/try-count {"op": "reset"}
Or another variant:
PATCH /tasks/{task-type}/{id} {"op": "discard-try-count"}

Grok filter for a time counter HH:MM

I'm quite new to ELK and Grok-filtering, and I'm struggling with parsing this particular pattern in my grok filter.
I've used the grok debugger to try and solve this, but although I like the tool, I just get confused by the custom patterns.
Eventually, I hope to parse lots of log files sent by filebeat to logstash, then send the parsed logs to elasticsearch and display with kibana or some similar visualization tool.
The lines that I need to parse follow the following pattern:
1310 2017-01-01 16:48:54 [325:51] [326:49] [359:57] Some log info text
The first four digits are a log type identifier and will be used for grouping. I've called the field "LogLineID".
The date is formatted YYYY-MM-DD HH:MM:SS, and is parsed ok. I called the field "LogDate".
But now the problem begins. Within the square brackets, I have counters, formatted as MM:SS if you like. I cannot for the life of me find a way to sort these out, but I need to compare these times, hence I want to store them as minutes and seconds, not just numbers.
The first is a counter "TimeSpent",
the second is a counter "TimeStarted" and
the third is a counter "TimeSinceDown".
Then, last, comes the info text, which I've managed to grok with simply applying %{GREEDYDATA:LogInfo}.
I notice that the amount of minutes could be far higher than the standard 60 minutes within an hour, so I may be barking up the wrong tree here trying to parse it with date patterns such as TIMESTAMP_ISO8601, but then, I don't really know how else to do this.
So, I came this far:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate}
and was, as mentioned, able (by cutting away the square bracket parts) to parse the log info text with
%{GREEDYDATA:LogInfo}
to create a field LogInfo.
But that's where I'm stuck. Could someone please help me figure out the rest?
Massive thanks in advance.
PS! I also found %{NUMBER:duration}, but as far as I can tell it only parses timestamps with a dot, not a colon.
A grok regex expression can help you solve the problem.
But first I want to make sure: do you mean that [325:51] [326:49] [359:57] are the three components you want to fetch, and that the result should look like this:
TimeSpent: 325:51
TimeStarted: 326:49
TimeSinceDown: 359:57
If I have understood you correctly, you can go one of two ways:
define your own custom pattern file and add the pattern there, or
just use the expression directly in the filter part of your Logstash conf file.
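For instance, roughly like this (a sketch only: the MINSEC pattern name, the patterns directory, and the expression itself are my own illustration, not the original answer's code):
# custom pattern file, e.g. ./patterns/extra (first suggestion)
MINSEC %{NUMBER}:%{NUMBER}
# filter part of the Logstash conf file (second suggestion)
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{MINSEC:TimeSpent}\] \[%{MINSEC:TimeStarted}\] \[%{MINSEC:TimeSinceDown}\] %{GREEDYDATA:LogInfo}" }
  }
}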
Hope it helps.
Ah, there was a space. Actually, I was misleading myself and everybody in my question, as it was not actually that log line that was causing problems. I just took the first one, not realizing where the problem really was; the line causing problems had a space within the brackets, like this: [ 42:31]. There are also some parts where there are two spaces, so the way I managed to solve this was to include a %{SPACE} between the \[ and the %{NUMBER}:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}
I still haven't solved the merging of minutes and seconds, but this I can also handle in a later stage.
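One way to handle that merging later is a small ruby filter that turns each minutes/seconds pair into total seconds; a rough, untested sketch (it uses the Logstash 5.x event API and the field names captured by the pattern above):
filter {
  ruby {
    code => "
      event.set('TimeSpentTotalSeconds', event.get('TimeSpentMinutes').to_i * 60 + event.get('TimeSpentSeconds').to_i)
      event.set('TimeStartedTotalSeconds', event.get('TimeStartedMinutes').to_i * 60 + event.get('TimeStartedSeconds').to_i)
      event.set('TimeSinceDownTotalSeconds', event.get('TimeSinceDownMinutes').to_i * 60 + event.get('TimeSinceDownSeconds').to_i)
    "
  }
}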
Thanks to Lin Don for showing an interest in my problem, and sorry for not replying sooner.
Hope the solution will help others (or even myself) if they're stuck on the same kind of problem.
Note to myself: Read the logs more carefully before grok'ing.. :)

Remove Matlab r2014b Plot Browser limit

Among the many distressing graphics changes in r2014b, the Plot Browser now only displays a certain number of lines per plot (the limit looks like 50). Lines above this limit are not displayed in the Plot Browser - it just says "and 78 more..."
Is there any way to remove the limit? I want to see all my lines in the Plot Browser.
Unfortunately the answer is currently:
No you cannot remove this limit
This was already reported to MathWorks a while ago, and here is the reply:
I am writing in reference to your Technical Support Case #01143663
regarding 'plot browser with "and xxxx more...." indication'.
Really interesting question (and even a bit surprising). Basically
this limitation has been introduced with MATLAB 2014b!
Our developers are aware of that, and they are working to solve it. An
enhancement/bug request has been already submitted, and I am going to
add this case to the list. However, I cannot guarantee you a release
date.
If you think this limitation is crucial for your work, I would
strongly suggest you to contact your account manager, which will have
a bit more influence on the developers than a simple engineer :).
Of course, if there is anything else I can do for you, please let me
know
So it seems that you will have to deal with this limitation if you keep using 2014b.
This was also an issue for me; in particular I wanted to see the DisplayName property of the graphics object in the list. I used a workaround where I created a callback function, so that when clicking on a data point, the DisplayName would be shown. This can be helpful if you have a plot of many lines and want to see the DisplayName of a particular one. You would first need to set the DisplayName property of the graphics objects for this to work, as it's empty by default. You could also use this to show other properties, such as Color or LineStyle, that are shown in the plot browser:
%Based on
%http://www.mathworks.com/help/matlab/ref/datacursormode.html
%'fig_h' is the figure handle
dcm_obj = datacursormode(fig_h);
set(dcm_obj,'UpdateFcn',@myupdatefcn)
Then include this function as a separate file on the MATLAB path, or paste it into the function you're currently writing and add an extra 'end' at the end of that function:
function name = myupdatefcn(empt,event_obj)
% Customizes text of data tips
tar = get(event_obj,'Target');
name = get(tar,'DisplayName');
end
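If you also want properties such as Color or LineStyle in the same tip, the update function may return a cell array of strings, which produces a multi-line data tip; a rough sketch (the formatting is my own illustration):
function txt = myupdatefcn(~,event_obj)
% Multi-line data tip: DisplayName plus a couple of other line properties
tar = get(event_obj,'Target');
txt = {get(tar,'DisplayName'), ...
       ['Color: ' mat2str(get(tar,'Color'))], ...
       ['LineStyle: ' get(tar,'LineStyle')]};
end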

Unicode character usage statistics [closed]

I am looking for some statistical data on the usage of Unicode characters in textual documents (with any markup). Googling brought no results.
Background: I am currently developing a finite-state-machine-based text processing tool. Statistical data on characters might help in searching for the right transitions. For instance, Latin characters are probably the most used, so it might make sense to check for those first.
Did anyone by chance gather or see such statistics?
(I'm not focused on specific languages or locales. Think general-purpose parser like an XML parser.)
To sum up current findings and ideas:
Tom Christiansen gathered such statistics for the PubMed Open Access Corpus (see this question). I have asked if he could share these statistics and am waiting for the answer.
As @Boldewyn and @nwellnhof suggested, I could run the analysis on the complete Wikipedia dump or on CommonCrawl data. I think these are good suggestions; I'll probably go with CommonCrawl.
So sorry, this is not an answer, but a good research direction.
UPDATE: I wrote a small Hadoop job and ran it on one of the CommonCrawl segments. I have posted my results in a spreadsheet here. Below are the first 50 characters:
0x000020 14627262
0x000065 7492745 e
0x000061 5144406 a
0x000069 4791953 i
0x00006f 4717551 o
0x000074 4566615 t
0x00006e 4296796 n
0x000072 4293069 r
0x000073 4025542 s
0x00000a 3140215
0x00006c 2841723 l
0x000064 2132449 d
0x000063 2026755 c
0x000075 1927266 u
0x000068 1793540 h
0x00006d 1628606 m
0x00fffd 1579150
0x000067 1279990 g
0x000070 1277983 p
0x000066 997775 f
0x000079 949434 y
0x000062 851830 b
0x00002e 844102 .
0x000030 822410 0
0x0000a0 797309
0x000053 718313 S
0x000076 691534 v
0x000077 682472 w
0x000031 648470 1
0x000041 624279 A
0x00006b 555419 k
0x000032 548220 2
0x00002c 513342 ,
0x00002d 510054 -
0x000043 498244 C
0x000054 495323 T
0x000045 455061 E
0x00004d 426545 M
0x000050 423790 P
0x000049 405276 I
0x000052 393218 R
0x000044 381975 D
0x00004c 365834 L
0x000042 353770 B
0x000033 334689 3
0x00004e 325299 N
0x000029 302497 )
0x000028 301057 (
0x000035 298087 5
0x000046 295148 F
To be honest, I have no idea if these results are representative. As I said, I only analysed one segment. It looks quite plausible to me. One can also easily spot that the markup has already been stripped off - so the distribution is not directly suitable for my XML parser. But it gives valuable hints about which character ranges to check first.
I personally think the link to http://emojitracker.com/ in the near-duplicate question is the most promising resource for this. I have not examined the sources (I don't speak Ruby), but from a real-time Twitter feed of character frequencies I would expect quite a different result than from static web pages, and probably a radically different language distribution (I see lots more Arabic and Turkish on Twitter than in my otherwise ordinary life). It's probably not exactly what you are looking for, but if we just look at the title of your question (which probably most visitors will have followed to get here), then that is what I would suggest as the answer.
Of course, this raises the question of what kind of usage you are attempting to model. For static XML, which you seem to be after, maybe the Common Crawl set is a better starting point after all. Text coming out of an editorial process (however informal) looks quite different from spontaneous text.
Of the options suggested so far, Wikipedia (and/or Wiktionary) is probably the easiest, since it is small enough for local download, far better standardized than a random web dump (all UTF-8, properly tagged, most of it tagged by language and proofread for markup errors, orthography, and occasionally facts), and yet large enough (and probably already overkill by an order of magnitude or more) to give you credible statistics. But again, if the domain is different from the domain you actually want to model, the statistics will probably be wrong nevertheless.

Quartz Get trigger description

Is there a function in the API that returns a natural-language description of a trigger? For example,
cron trigger expression="0 0 0 L * ?"
The description I'm looking for is "Fires at 0:00:00 on the last day of every month" (or something like that).
I want to use this to give feedback to the user about what schedules have been created. I'm hoping I don't have to parse the expression myself. I searched Google but came up empty; maybe I'm not using the right search terms.
I'm using the .NET version, but if there is a Java function out there I could probably figure out how to port it.
Thanks in advance
There is a question similar to this here. I think you're going to have to write something yourself for this.
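A minimal sketch of the "write something yourself" route, in Java (the class name and the canned descriptions are my own illustration and not part of Quartz; a .NET port would be straightforward):
// Look up human-readable descriptions for the handful of cron expressions
// the application actually creates.
import java.util.HashMap;
import java.util.Map;

public class CronDescriber {
    private static final Map<String, String> KNOWN = new HashMap<>();
    static {
        KNOWN.put("0 0 0 L * ?", "Fires at 0:00:00 on the last day of every month");
        KNOWN.put("0 0 0 * * ?", "Fires at 0:00:00 every day");
        KNOWN.put("0 0/5 * * * ?", "Fires every 5 minutes");
    }

    public static String describe(String cronExpression) {
        return KNOWN.getOrDefault(cronExpression.trim(),
                "Unrecognized expression: " + cronExpression);
    }

    public static void main(String[] args) {
        System.out.println(describe("0 0 0 L * ?"));
    }
}
A fuller version would split the expression on whitespace into the seven Quartz fields (seconds, minutes, hours, day-of-month, month, day-of-week, optional year) and build the sentence field by field.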