How to get a MIDI instrument model from its device ID code?

Is there any public database that lets you look up the model name from the device ID code (returned in reply to the f0 7e 7f 06 01 f7 SysEx)?

The MIDI Manufacturers Association (MMA) maintains a list of IDs on their site; it seems you'll have to scrape it to get it into a database you can query. There is no authoritative database of devices, and making your own would likely take quite a bit of time. Also, the list on the MMA's website is not updated terribly often...
Keep in mind that not all manufacturers bother to register their IDs, but at least this list is better than nothing.
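If you want to issue the Identity Request and pull out the manufacturer ID yourself, here's a minimal sketch using Python's mido library; the port selection is an assumption, and the reply offsets follow the standard Identity Reply layout (7E <device> 06 02 <manufacturer> ...), so verify them against your device:
import mido

port_name = mido.get_output_names()[0]  # assumption: first port is the device

with mido.open_output(port_name) as out, mido.open_input(port_name) as inp:
    # Identity Request: F0 7E 7F 06 01 F7 (mido adds the F0/F7 framing)
    out.send(mido.Message('sysex', data=[0x7E, 0x7F, 0x06, 0x01]))
    for msg in inp:
        if msg.type == 'sysex' and len(msg.data) > 4 and msg.data[2:4] == (0x06, 0x02):
            d = msg.data
            # Manufacturer ID is 1 byte, or 3 bytes when the first byte is 0x00
            man_id = d[4:7] if d[4] == 0x00 else d[4:5]
            print('manufacturer:', ' '.join('%02X' % b for b in man_id))
            # ...look man_id up in your scraped copy of the MMA list here
            break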

Related

Does the ShoreTel database track who has silently monitored calls?

Is it possible to track in the ShoreTel database who has silently monitored others' calls? If so, where is the data stored, and in which tables?
Here is a basic sample query that will get you a list of calls that were silent monitor calls. There is obviously a lot of refining to do based on exactly what details you are looking for. Feel free to PM me if you want help with something more specific.
SELECT `call`.SIPCallId AS `GUID`
, `call`.StartTime AS `StartTime`
, `call`.Extension AS `DN`
, `call`.DialedNumber
FROM `call`
LEFT JOIN connect ON (`call`.ID = connect.CallTableID)
WHERE connect.connectreason = 21
ORDER BY `call`.Extension, `call`.StartTime
The WHERE clause here limits your rows to only those with a reason code of 21, silent monitor. Look at the values in the connectreason table for more details on which reason codes are tracked.
PLEASE NOTE that this is in the CDR database (port 4309, username 'st_cdrreport', read-only password 'passwordcdrreport'); you don't want to accidentally write to the CDR database...
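If you'd rather run this from a script than a MySQL client, a sketch along these lines should work with Python's pymysql; the database name shorewarecdr is an assumption to verify against your install:
import pymysql

conn = pymysql.connect(host='your-hq-server', port=4309,
                       user='st_cdrreport', password='passwordcdrreport',
                       database='shorewarecdr')  # verify the database name locally
with conn.cursor() as cur:
    cur.execute("""
        SELECT `call`.SIPCallId, `call`.StartTime, `call`.Extension
        FROM `call`
        LEFT JOIN connect ON (`call`.ID = connect.CallTableID)
        WHERE connect.connectreason = 21
    """)
    for row in cur.fetchall():
        print(row)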

How does OPC order items?

I'm working on an OPC (DA) server that creates a collection of server items and sends them to an OPC client. Each item has a "name" value that determines the order in which the items are displayed. The name of each item is structured as:
Sites.<SiteID>.CurrentValue
So the data might look something like:
Sites.0001.CurrentValue
Sites.0002.CurrentValue
Sites.0003.CurrentValue
Etc.
Or in a tree format:
Sites:
    0001:
        CurrentValue
    0002:
        CurrentValue
    0003:
        CurrentValue
    Etc.
Since the items are ordered by name and the only variable part of the item name is the site ID, the items are effectively ordered by site ID. The problem occurs when the OPC client displays the items. The order that they're displayed in is totally different:
Sites:
    6219
    13501
    13502
    4000
    4001
    626262
    4002
    4003
    4004
    4005
    4006
    4007
    4008
    0030
    4009
    0200
    79791
    Etc.
I've been trying to infer some kind of logical ordering system that would give this result, but I'm just not seeing anything. I have tried this with several OPC clients (Matrikon, dOPC, KEP) and they are all consistently presenting items in the above order, which leads me to believe there is some kind of universal OPC ordering system, but I've not been able to find anything.
My hope is that, if I can find out how OPC is ordering these items, I can order the items in the OPC server, such that they will get displayed in a logical order in the OPC client.
My server is Advosol-based (I don't have enough reputation to create a new tag).

How to avoid leaks when paging in a RESTful web service?

We have a RESTful web service that returns a collection of tickets. Because it's possible for the collection returned to be too large to be processed in a single gulp, we've added offset and limit query parameters. The idea is that we run the query, then skip the first offset records, then return the next limit records.
The problem is that this can leak tickets.
Suppose, for example, that there are eight tickets that need work at the time the client first queries:
ID STATUS
00 needs work
01 needs work
02 needs work
03 needs work
04 needs work
05 needs work
06 needs work
07 needs work
If the client requests the tickets that need work, with an offset of 0 and a limit of 4, we'll return:
ID STATUS
00 needs work
01 needs work
02 needs work
03 needs work
If someone else then does some work, changing some tickets to:
01 doesn't need work
02 doesn't need work
If the client then requests the tickets that need work, with an offset of 4 and a limit of 4, the results of the query will be:
ID STATUS
00 needs work
03 needs work
04 needs work
05 needs work
06 needs work
07 needs work
And after we skip the first four records, we'll return:
06 needs work
07 needs work
And tickets 04 and 05 will have been skipped.
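The same leak, condensed into a runnable sketch (plain Python lists standing in for the ticket table and query):
tickets = [('%02d' % i, 'needs work') for i in range(8)]

def page(offset, limit):
    # Naive offset/limit paging against the live data
    open_tickets = [t for t in tickets if t[1] == 'needs work']
    return open_tickets[offset:offset + limit]

print(page(0, 4))            # tickets 00-03
tickets[1] = ('01', 'done')  # someone else works tickets 01 and 02
tickets[2] = ('02', 'done')
print(page(4, 4))            # tickets 06 and 07 -- 04 and 05 were skipped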
If we go back to the ticket table on every subsequent paging request, we'll leak tickets whenever tickets on earlier pages have changed so that they fall out of the query results.
Part of me is wondering how important this is.
The client is going to request the needs work tickets on some sort of schedule. When there are more tickets than the limit, it will then page through the rest in multiple calls, incrementing offset on each call. If we do nothing, we will sometimes leak needs work tickets, but they will be picked up the next time the client requests new needs work tickets.
That is, the leaked tickets will only be leaked on this pass, they'll show up on the next.
But if it is important that we not leak tickets, I don't see any way of resolving it other than saving the identifiers of all of the needs work tickets during the first call, and then paging through the collection of identifiers, rather than through the tickets themselves.
We could, for example, when the client requests needs work tickets with an offset of zero, populate a second table with the ids of all of the tickets that need work, then return the first limit tickets that are in the second table. The next call, we use offset and limit against the second table, to determine which tickets to return.
The problem with this is we need to deal with multiple clients running simultaneously. So we need a primary key on the second table that we can match against a specific client, based on what is in the request.
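A minimal sketch of that second-table idea, keyed by a per-client collection value (sqlite3 purely for illustration; the table and column names are hypothetical):
import sqlite3, uuid

db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE ticket (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE snapshot (collection TEXT, seq INTEGER, ticket_id INTEGER,
                           PRIMARY KEY (collection, seq));
""")

def first_page(limit):
    # Freeze the current result set under a fresh collection key
    collection = uuid.uuid4().hex
    ids = [r[0] for r in db.execute(
        "SELECT id FROM ticket WHERE status = 'needs work' ORDER BY id")]
    db.executemany("INSERT INTO snapshot VALUES (?, ?, ?)",
                   [(collection, seq, tid) for seq, tid in enumerate(ids)])
    return collection, ids[:limit]

def next_page(collection, offset, limit):
    # Page through the frozen id list, not the live ticket table
    return [r[0] for r in db.execute(
        "SELECT ticket_id FROM snapshot WHERE collection = ? "
        "ORDER BY seq LIMIT ? OFFSET ?", (collection, limit, offset))]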
I'd like to be able to manage this without putting additional burden on the client programmers. But I don't see how.
Is there any way for me to tell, by examining a request and its headers, that it came from the same client as an earlier request? I've not been able to find one.
We're currently returning paging information in the response headers:
Paging-offset: 0
Paging-limit: 4
Paging-returnedCount: 4
Paging-totalRecordCount: 54
What I'm thinking is that we might return a Paging-collection value, when we're paging, which would provide a key value into the second table. We could then require the client to provide the collection value when they make a request with offset != 0.
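Concretely (the header and parameter names here are hypothetical), the offset-0 response might carry:
Paging-offset: 0
Paging-limit: 4
Paging-totalRecordCount: 54
Paging-collection: 0f8fad5bd9cb469fa16570867728950e
and the client would echo that value back when requesting the next page:
GET /tickets?status=needs-work&offset=4&limit=4&collection=0f8fad5bd9cb469fa16570867728950e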
Does this seem reasonable? Do you think that this would put too great a burden on the client programmers?
How have other people solved this problem? Or do they just ignore it?
Is there any way for me to tell, by examining a request and its headers, that it came from the same client as an earlier request? I've not been able to find one.
You're not supposed to be able to - stateless protocol. In particular, if you are trying to do REST, you want the request to have all of the necessary information so that a new server can answer the request when the original server is busy.
But possibilities include giving each client its own resource to work with. There are a number of different ways you can match the request to the unique resource.
The problem with this is we need to deal with multiple clients running simultaneously.
As a rule, REST works much better if you can provide multiple clients with a common understanding of resources, rather than trying to tailor your representations to each.
Consider: Alice queries for a pile of work, Bob changes something, then Charlie queries for a pile of work. Can you live with it if Charlie gets a representation of the pile that was cached by Alice's query (ie, before Bob's change)? Cuz that's kind of how the web is designed to work....
(It doesn't have to - you can have each response set a bunch of no-cache headers. But it's something you should be thinking about, because it may be trying to tell you that the REST architectural constraints are not a great fit for your problem.)
How have other people solved this problem? Or do they just ignore it?
Well, it's a concurrent modification that's invalidating your iterator, right? Maybe you just pitch a fit and force the client to start over....
You might look into AtomSyndication, and how some services use it.
For your case, I'd probably look at turning the problem around; instead of asking the server for N tickets that have some property, I'd look into asking the server for all tickets in some range that have the property. The client can just keep navigating through ranges until it fills its own bucket.
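A sketch of that range-based (keyset) style, assuming an orderable id column and a DB-API connection; the client sends back the last id it saw instead of an offset:
def fetch_after(db, last_id, page_size):
    # "Give me needs-work tickets that come after the last one I saw."
    # A moving lower bound never skips rows when earlier rows change
    # status, since nothing before last_id affects what comes next.
    return db.execute(
        "SELECT id, status FROM ticket"
        " WHERE status = 'needs work' AND id > ?"
        " ORDER BY id LIMIT ?", (last_id, page_size)).fetchall()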
Another way of describing your problem is that you are trying to page through a mutable collection.
If you drop the paging constraint -- each request always fetches N unworked tickets starting from the first one -- that's pretty straightforward.
If you drop the mutable constraint -- paging through an immutable list is straightforward. Making a mutable list immutable may be easy: instead of asking for the latest version of the list, you ask for the version of the list as of some particular point in time. This is a very happy problem to have when using event sourcing.
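With event sourcing, the "as of some point in time" version can be a simple fold over the log that stops at the snapshot instant (the event names here are made up):
def needs_work_as_of(events, as_of):
    # Rebuild the needs-work set from an append-only, time-ordered log,
    # ignoring anything recorded after the snapshot instant.
    state = set()
    for ev in events:
        if ev['at'] > as_of:
            break
        if ev['type'] == 'TicketOpened':
            state.add(ev['id'])
        elif ev['type'] == 'TicketWorked':
            state.discard(ev['id'])
    return sorted(state)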
One thing we've discussed is having one query return a list of ticket IDs, which would be small enough to return all of them, all of the time, then having a second query that returns a single ticket given an ID.
Another good answer; that's fundamentally the way web pages work -- a (relatively) small payload of HTML, with hyperlinks to the JavaScript, media, and so on that extends the representation.

Am I exposing sensitive data if I put a BSON ID in a URL?

Say I have a Products array in my MongoDB. I'd like users to be able to see each product on their own page: http://www.mysite.com/product/12345/Widget-Wodget. Since each Product doesn't have an incremental integer ID (12345) but instead has a BSON ID (5063a36bdeb13f7505000630), I'd need to either add an integer ID or use the BSON ID.
Since BSON IDs are composed of a:
4-byte timestamp,
3-byte machine identifier,
2-byte process id,
3-byte counter.
Am I exposing secure information to the outside world if I use the BSON ID in my url?
I can't think of any way to use them to gain privileges on your machines; however, using ObjectIds everywhere discloses a lot of information nonetheless.
By crawling your website, one could:
find out about some hidden objects: for instance, if the counter part goes from 0x....b1 to 0x....b9 between times t1 and t2, one can guess ObjectIds within these intervals. However, guessing IDs is most likely useless if you enforce access permissions
know the signup date of each user (not very sensitive info but better than nothing)
deduce actual (as opposed to publicly available) business hours from the timestamps of objects created by the staff
deduce in which timezones your audience lives from the timestamps of user-generated objects: if your website is one which people use mostly at lunchtime, then one could measure peaks of ObjectIds and deduce that a peak at 8 PM UTC means the audience was on the US West coast
and more generally, by crawling most of your website, one can build a timeline of the success of your service, having for any given time knowledge of: your user count, levels of user engagement, how many servers you've got, how often your servers are restarted. PID changes occurring on weekends are more likely crashes, whereas those on business days are more likely crashes + software revisions
and probably find other info specific to your business processes and domain
To be fair, even with random ids one can infer a lot. The main issue is that you need to prevent anyone from scraping a statistically significant part of your site. But if someone is determined, they'll succeed eventually, which is why providing them with all of this extra, timestamped info seems wrong.
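To see how much a single id gives away, here's a sketch that splits one into the fields listed in the question, using the bson package that ships with pymongo:
from bson import ObjectId

oid = ObjectId('5063a36bdeb13f7505000630')
raw = oid.binary  # the 12 raw bytes

print(oid.generation_time)              # 4-byte timestamp, as a datetime
print(raw[4:7].hex())                   # 3-byte machine identifier
print(int.from_bytes(raw[7:9], 'big'))  # 2-byte process id
print(int.from_bytes(raw[9:12], 'big')) # 3-byte counter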
Sharing the information in the ObjectID will not compromise your security. Someone could infer minor details such as when the ObjectID was created (timestamp), but none of the ObjectID components should be tied to authentication or authorization.
If you are building an e-commerce site, SEO is typically a strong consideration for public URLs. In this case you normally want to use a friendlier URL with shorter and more semantic path components than an ObjectID.
Note that you do not have to use the default ObjectID for your _id field, so you could always generate something more relevant for your application. The default ObjectID does provide a reasonable guarantee of uniqueness, so if you implement your own _id allocation you will have to take this into consideration.
See also:
Create an Auto-Incrementing Sequence Field
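The linked tutorial boils down to an atomic counter document; here's a pymongo sketch of the same idea (the database and collection names are hypothetical):
from pymongo import MongoClient, ReturnDocument

db = MongoClient().shop  # hypothetical database

def next_product_id():
    # Atomically bump the counter document and return the new value
    counter = db.counters.find_one_and_update(
        {'_id': 'productid'},
        {'$inc': {'seq': 1}},
        upsert=True,
        return_document=ReturnDocument.AFTER)
    return counter['seq']

db.products.insert_one({'_id': next_product_id(), 'name': 'Widget Wodget'})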
As @Stennie said, not really.
Let's start with the PID. Most hackers wouldn't bother looking for a PID; on, say, Linux, they would instead just do:
ps aux | grep mongod
or something similar. Of course, this requires the hacker to have actually hacked your server; I know of no public hack based on the PID alone. Considering the PID will change when you restart the machine or mongod, this information is utterly useless to anyone trying to spy.
The machine ID is another bit of data that is quite useless publicly and, to be honest, they would get a better understanding of your network using ping or dig than they would through the machine ID alone.
So to answer the question: no, there is no real security threat, and the information you are displaying is of no use to anyone except MongoDB, really.
I also agree with @Stennie on using SEO-friendly URLs. An example I commonly use for e-commerce is /product/product_title_ with a smaller random ID (maybe base64-encode the _id) or an auto-incrementing ID, with .html on the end.
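For instance (just a sketch of one possible slug format), base64-encoding the 12 ObjectId bytes gets the id down to 16 URL-safe characters:
import base64
from bson import ObjectId

oid = ObjectId('5063a36bdeb13f7505000630')
slug = base64.urlsafe_b64encode(oid.binary).decode()  # 16 chars, no padding
print('/product/widget-wodget-%s.html' % slug)
# ...and back again when routing the request:
assert ObjectId(base64.urlsafe_b64decode(slug)) == oid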

Creation Concurrency with CQRS and EventStore

Baseline info:
I'm using an external OAuth provider for login. If the user logs in with the external OAuth provider, they are OK to enter my system. However, this user may not yet exist in my system. It's not really a technology issue, but I'm using JOliver EventStore for what it's worth.
Logic:
I'm not given a GUID for new users; I just have an email address.
I check my read model before sending a command: if the user email exists, I issue a Login command with the ID; if not, I issue a CreateUser command with a generated ID (see the sketch below). My issue is in the case of a new user.
A save occurs in the event store with the new ID.
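That decision point, as a sketch (the command shapes and read-model API are hypothetical):
import uuid
from dataclasses import dataclass

@dataclass
class Login:
    user_id: uuid.UUID

@dataclass
class CreateUser:
    user_id: uuid.UUID
    email: str

def on_oauth_success(email, read_model, bus):
    # The read-model lookup decides between Login and CreateUser
    user = read_model.find_by_email(email)
    if user is not None:
        bus.send(Login(user_id=user.id))
    else:
        bus.send(CreateUser(user_id=uuid.uuid4(), email=email))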
Issue:
Assume two create commands are somehow issued before the read model is updated, due to a browser refresh or some other anomaly that occurs before consistency with the read model is achieved. That's OK; that's not my problem.
What Happens:
Because the new ID is a COMB GUID, there's no chance the event store will know that these two CreateUser commands represent the same user. By the time they get to the read model, the read model will know (because they have the same email) and can merge the two records or take some other compensating action. But now my read model is out of sync with the event store, which still thinks these are two separate entities.
Perhaps it doesn't matter because:
Replaying the events will have the same effect on the read model, so that should be OK.
Because both commands are duplicate "Create" commands, they should contain identical information, so it's not like I'm losing anything in the event store.
Can anybody illuminate how they handled similar issues? If some compensating action needs to occur, does the read model service issue some kind of compensation command when it realizes it's got a duplicate entry? Is there a simpler methodology I'm not considering?
You're very close to what I'd consider a proper possible solution. The scenario, if I may summarize, is somewhat like this:
Perform the OAuth-entication.
Using the read model decide between a recurring visitor and a new visitor, based on the email address.
In the case of a new visitor, send a RegisterNewVisitor command message that gets handled and stored in the event store.
Assume there is some concurrency going on that, for the same email address, causes two RegisterNewVisitor messages, each containing what the system thinks is the key associated with the email address. These keys (GUIDs) are different.
Detect this duplicate key issue in the read model and merge both read model records into one record.
Now, instead of merging the records in the read model, why not send a ResolveDuplicateVisitorEmailAddress { Key1, Key2 } command to your domain model, leaving it up to the domain model (the codified form of the business decision to be taken) to resolve this issue? You could even have a dedicated read model to deal with these kinds of issues; the other read model will just get a kind of DuplicateVisitorEmailAddressResolved event and project it into the proper records.
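Sketched out (the message shapes and store API here are made up; adapt them to your bus and JOliver streams):
from dataclasses import dataclass

@dataclass
class ResolveDuplicateVisitorEmailAddress:
    surviving_key: str
    duplicate_key: str

@dataclass
class DuplicateVisitorEmailAddressResolved:
    surviving_key: str
    duplicate_key: str

def handle(cmd, event_store):
    # The domain records the business decision; read models merely
    # project the resulting event into the proper records.
    event = DuplicateVisitorEmailAddressResolved(
        cmd.surviving_key, cmd.duplicate_key)
    event_store.append(stream_id=cmd.surviving_key, event=event)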
Word of warning: you've asked a technical question and I gave you a technical, possible solution. In general, I would not apply this technique unless I had some business indicator that it's worth investing in (what's the frequency of users logging in concurrently for the first time? Maybe solving it this way is just ignoring the root cause: flaky OAuth, no register-new-visitor process in place, etc.). There are other technical solutions to this problem, but I wanted to give you the one closest to what you already have in place. They range from registering new visitors sequentially to keeping an in-memory projection of the visitors not yet in the read model.