There are two types of messages in the FIX protocol for pricing: MarketData and Quotes.
What are the differences between them? What are the use cases?
Thanks
MarketData is a general message for market data on specific securities, forex quotes, etc. It carries the real-time quote, order, trade, trade volume, open interest, and/or other price information that you subscribed to in your request message. It is primarily data coming from an exchange in real time.
Quote is primarily information provided to the exchange by a broker. The exchange may ask a broker for its bid and ask prices on one or more specific securities. The broker provides that information to the exchange in a Quote message, and it ultimately percolates down into the MarketData that is provided.
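To make the consumer side concrete, here is a minimal sketch of subscribing to streaming prices with QuickFIX/J (FIX 4.4). The session is assumed to be established already, and the request ID and symbol are placeholders.

import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionNotFound;
import quickfix.field.MDEntryType;
import quickfix.field.MDReqID;
import quickfix.field.MarketDepth;
import quickfix.field.SubscriptionRequestType;
import quickfix.field.Symbol;
import quickfix.fix44.MarketDataRequest;

public class MarketDataSubscriber {
    // Sends 35=V (MarketDataRequest): snapshot plus live updates, top of book only.
    public static void subscribeTopOfBook(SessionID sessionID) throws SessionNotFound {
        MarketDataRequest request = new MarketDataRequest(
                new MDReqID("req-1"),
                new SubscriptionRequestType(SubscriptionRequestType.SNAPSHOT_PLUS_UPDATES),
                new MarketDepth(1));

        // Ask for both sides of the market.
        MarketDataRequest.NoMDEntryTypes side = new MarketDataRequest.NoMDEntryTypes();
        side.set(new MDEntryType(MDEntryType.BID));
        request.addGroup(side);
        side.set(new MDEntryType(MDEntryType.OFFER));
        request.addGroup(side);

        // One instrument; repeat the group to subscribe to more.
        MarketDataRequest.NoRelatedSym instrument = new MarketDataRequest.NoRelatedSym();
        instrument.set(new Symbol("EUR/USD"));
        request.addGroup(instrument);

        Session.sendToTarget(request, sessionID);
    }
}

The counterparty then answers with MarketDataSnapshotFullRefresh (35=W) or MarketDataIncrementalRefresh (35=X) messages, which is the MarketData side of the distinction above.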
Are you referring to Fiximate or some other website? The descriptions are there on the website.
I read the following books and links before posting this question, and since this question is about best practices it might be closed. However, I am expecting some expert views.
https://www.restapitutorial.com/resources.html
the REST API Design Rulebook from O'Reilly
other blog posts and Stack Overflow questions.
For example, to get information about an employee by id, we are using a URI like the one below:
http://myapp-name.myorganization.com/employees/employeeid/123456
But all of the above resources tell me to do it this way:
http://myapp-name.myorganization.com/employees/123456
Similarly, if I want to get information about the employee with id 12345 in a specific country, my URI is as below:
http://myapp-name.myorganization.com/countries/country/US/employeeid/12345
as opposed to
http://myapp-name.myorganization.com/countries/US/12345
Does that mean my URIs are not standard?
They are just guidelines. REST documentation can't cover every possibility of your business and its needs.
Talking about your examples,
http://myapp-name.myorganization.com/employees/employeeid/123456
and
http://myapp-name.myorganization.com/employees/123456
are both correct, but they could be better (shorter).
Usually I prefer the second one, and use the first style for the alternatives. For example, if I would like to find an employee by id (the "default" way to find employees) or by their unique internal company code, I prefer to use, respectively:
/employees/123456 # by id
/employees/code/A899123A # by code
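If it helps to see the routing in code, both lookups can live side by side in a single resource class. A sketch with JAX-RS (Java); Employee and EmployeeRepository are hypothetical types used only to illustrate the mapping.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("employees")
public class EmployeeResource {

    private final EmployeeRepository repository = new EmployeeRepository(); // hypothetical

    // Default lookup: GET /employees/123456
    @GET
    @Path("{id}")
    public Employee byId(@PathParam("id") String id) {
        return repository.findById(id);
    }

    // Alternative lookup: GET /employees/code/A899123A
    @GET
    @Path("code/{code}")
    public Employee byCode(@PathParam("code") String code) {
        return repository.findByCode(code);
    }
}

JAX-RS prefers the literal "code" segment over the {id} template when matching, so the two paths coexist without ambiguity.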
Similarly if I want to get information about employee with id 12345, my URI is as below
http://myapp-name.myorganization.com/countries/country/US/employeeid/12345
This URL means to me that you are trying to find an employee with id 12345 in the US. But it could be shorter too, if the country code is the default way to look up countries in your API:
/countries/US/employees/12345
as opposed to http://myapp-name.myorganization.com/countries/US/12345
This one seems confusing. What are you trying to find with id 12345? It's hard to answer just by looking at the URL. So /countries/US/employees/12345 is more consistent.
If the idea is to find an employee in some country by some code, the URL can follow the same pattern: /countries/US/employees/code/A899123A
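The nested variant maps the same way; again a JAX-RS sketch with hypothetical types:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("countries/{country}/employees")
public class CountryEmployeeResource {

    private final EmployeeRepository repository = new EmployeeRepository(); // hypothetical

    // GET /countries/US/employees/12345
    @GET
    @Path("{id}")
    public Employee byId(@PathParam("country") String country,
                         @PathParam("id") String id) {
        return repository.findByCountryAndId(country, id);
    }

    // GET /countries/US/employees/code/A899123A
    @GET
    @Path("code/{code}")
    public Employee byCode(@PathParam("country") String country,
                           @PathParam("code") String code) {
        return repository.findByCountryAndCode(country, code);
    }
}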
Does that mean my URIs are not standard?
No, your URIs are fine. REST doesn't care what spellings you use for your identifiers, so long as they are consistent with RFC 3986. There's also RFC 7320, which describes "Best Practices" -- but you will probably find that those best practices still leave you with a lot of freedom.
Think "variable names" - various communities will have their own conventions for how variable names should be spelled, but there isn't any standard.
The same holds for identifiers in REST -- they are opaque strings that neither the API consumer nor the client actually needs to parse. (Example: when's the last time you actually looked at the URI used when you submit a search to Google?)
Some routing frameworks will be easier to use if you adhere to a particular convention, but that's purely an implementation detail on the server, the client doesn't care.
The scenario is: a user sending messages to a group of people.
I was thinking of creating one ROW for each specific conversation in one CLASS, where that ROW contains information such as "sender name" and "receiver", plus a column (PFRelation) which connects this specific row to another class where all messages from the user to the receiver (and vice versa) would be saved.
So this action will happen every time the user starts a new conversation.
The benefit of this approach:
Privacy, because the only conversations being saved are those between the user and the receiver group.
Downside of this approach:
We all know that Parse only provides 30 req/s on the free tier, which means 1 min = 1,800 reqs. So every time I create a new class to keep track of a conversation, am I using a lot of requests?
I am looking for suggestions and thoughts on the ideal way to do this before I implement this messenger library.
It sounds like you have come up with something that is similar to what I have used before to implement messaging in an app with Parse as a backend. It's also important to think about how your UI will be querying for data. In general, it's most important to ensure that it is very easy and fast to read data. For most social apps, the following quote from Facebook's engineering team on Haystack is particularly relevant.
Haystack is an object store that we designed for sharing photos on
Facebook where data is written once, read often, never modified, and
rarely deleted.
The crucial piece of information here is written once, read often, never modified, and rarely deleted. No matter what approach you decide to take, keep that in mind while engineering your solution. The approach that I have used before to implement a messaging system using Parse is described below.
Overview
Each row (object) of the Message class corresponds to an individual text, picture, or video message that was posted. Each Message belongs to a Group. A Group can be as small as 2 Users (a private conversation) or grow as large as you like.
The RecentMessage class is the solution I came up with to deal with quickly and easily populating the UI. Each RecentMessage object corresponds to a Group that a given User belongs to. Each User in a Group will have their own RecentMessage object, which is kept up to date using beforeSave/afterSave Cloud Code triggers. Whenever a new Message is created, in the afterSave trigger we want to update all of the RecentMessage objects that belong to the Group.
You will most likely have a table in your app which displays all of the conversations that the user is part of. This is easily achieved by querying for all of that user's RecentMessage objects, which already contain all of the Group information needed to load the rest of the messages when selected, and also contain the most recent message's data (hence the name) to display in the table. Alternatively, RecentMessage could contain a pointer to the most recent Message; however, I decided that copying the data was a beneficial tradeoff since it streamlines future queries.
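For illustration, populating that conversation table with the Parse Android SDK (Java) could look roughly like this; the class and column names follow the schema below.

import com.parse.FindCallback;
import com.parse.ParseException;
import com.parse.ParseObject;
import com.parse.ParseQuery;
import com.parse.ParseUser;
import java.util.List;

public void loadConversations() {
    ParseQuery<ParseObject> query = ParseQuery.getQuery("RecentMessage");
    query.whereEqualTo("user", ParseUser.getCurrentUser()); // only my conversations
    query.include("group");                 // fetch the Group in the same request
    query.orderByDescending("updatedAt");   // most recently active first
    query.findInBackground(new FindCallback<ParseObject>() {
        @Override
        public void done(List<ParseObject> recents, ParseException e) {
            if (e == null) {
                // Bind recents to the conversation table's adapter.
            }
        }
    });
}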
Message
group (pointer to group which message is part of)
user (pointer to user who created it)
text (string)
picture (optional file)
video (optional file)
RecentMessage
group (group pointer)
user (user pointer)
lastMessage (string containing the text of most recent Message)
lastUser (pointer to the User who posted the most recent Message)
Group
members (array of user pointers)
name or whatever other info you want
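Creating a new post then maps directly onto this schema. A sketch with the Parse Android SDK; the afterSave trigger that fans the message out to each member's RecentMessage lives in Cloud Code on the server and is not shown here.

// "group" is the Group ParseObject for the conversation.
ParseObject message = new ParseObject("Message");
message.put("group", group);                     // pointer to the Group
message.put("user", ParseUser.getCurrentUser()); // the author
message.put("text", "Hello everyone!");
message.saveInBackground();                      // afterSave then updates each RecentMessage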
Security/Privacy
Security and privacy are imperative when creating messaging functionality in your app. Make sure to read through the Parse Engineering security blog posts, and take your time to let it all soak in: Part I, Part II, Part III, Part IV, Part V.
Most important in our case is Part III, which describes ACLs, or Access Control Lists. Group objects will have an ACL which grants access to all of their member Users. RecentMessage objects will have a read/write ACL restricted to their owner User. Message objects will inherit the ACL of the Group to which they belong, allowing all of the Group members to read. I recommend disabling the write ACL in the afterSave trigger so messages cannot be modified.
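Roughly, with the Parse Android SDK (the variable names are illustrative):

// Group: readable and writable by every member.
ParseACL groupAcl = new ParseACL();
for (ParseUser member : members) {
    groupAcl.setReadAccess(member, true);
    groupAcl.setWriteAccess(member, true);
}
group.setACL(groupAcl);

// RecentMessage: private to its owner.
recentMessage.setACL(new ParseACL(owner)); // read/write for the owner only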
General Remarks
With regards to Parse and the request limit, you need to accept the fact that you will very quickly surpass the 30 req/s free tier. As a general rule of thumb, it's much better to focus on building the best possible user experience than to focus too much on scalability. By and large, issues of scalability rarely come into play because most apps fail. Not saying that to be discouraging; it's just something to keep in mind to prevent you from falling into the trap of over-engineering at the cost of time :)
The EWS Managed API has two properties: ConversationId and ConversationIndex.
What is the difference between them? I guess ConversationId is the ConversationIndex of the first mail in the conversation, which is essentially 22 bytes, while ConversationIndex is the index of that particular reply in the conversation thread, essentially 22 bytes plus a multiple of 5 bytes for each reply in the conversation. Is it like that?
Also, ConversationId is accessible only from Exchange Server 2010 onwards. So can't we access ConversationId in Exchange Server 2007?
Correct, you can't access ConversationId in Exchange 2007.
The ConversationId identifies the conversation. The ConversationIndex represents the message’s position relative to the original message. ConversationId is not the ConversationIndex of the first mail. Here are some sample values I just grabbed off a new message.
<t:ConversationId Id="AAQkADIwM2ZlM2ZlLWMwYjctNDg2Ny04MDU0LTVkMTFmM2IxY2ZjZQAQACkRMjewk3RHldv8l7aTV2s="/>
<t:ConversationIndex>AQHPkWCfKREyN7CTdEeV2/yXtpNXaw==</t:ConversationIndex>
<t:ConversationTopic>test message</t:ConversationTopic>
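You can check the 22-byte-header-plus-5-bytes-per-reply layout mentioned in the question by base64-decoding the index; the sample ConversationIndex above decodes to exactly 22 bytes, i.e. the original message of the thread. A small plain-Java check:

import java.util.Base64;

public class ConversationIndexDepth {
    public static void main(String[] args) {
        byte[] index = Base64.getDecoder()
                .decode("AQHPkWCfKREyN7CTdEeV2/yXtpNXaw==");
        // 22-byte header for the original message; each reply appends 5 bytes.
        int replies = (index.length - 22) / 5;
        System.out.println(index.length + " bytes -> " + replies + " replies deep");
    }
}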
It should be noted that ConversationId does not appear to be unique to a single conversation thread.
Meaning, while you can be assured that two conversations that don't share the same ConversationId are definitely not related, the converse does not appear to hold: the same ConversationId does not guarantee the same "email thread" as popularly understood (people answering each other in a chain).
I have discovered multiple instances of the same ConversationId on emails with the same Subject (every now and then) even though the chain does not descend from the original message.
So for example if HR sends out a "Thought of the Day" email freshly each day to a given group X, this may have the same ConversationId even if they are new chains.
This is problematic if one is sorting emails on a website from payroll by, say, "RE: your 401k" and two distinct conversations are conflated.
My requirement in DDS is this: I have many subscribers but a single publisher. Each subscriber reads data from DDS and checks whether the message is for that particular subscriber. Only if that check succeeds does it take the data and remove it from DDS. The message must remain in DDS until the authenticated subscriber takes its data. How can I achieve this using DDS (in a Java environment)?
First of all, you should be aware that with DDS, a Subscriber is never able to remove data from the global data space. Every Subscriber has its own cached copy of the distributed data and can only act on that copy. If one Subscriber takes data, then other Subscribers for the same Topic will not be influenced by that in any way. Only Publishers can remove data globally for every Subscriber. From your question, it is not clear whether you know this.
Independent of that, it seems like the use of a ContentFilteredTopic (CFT) is suitable here. According to the description, the Subscriber knows the file name that it is looking for. With a CFT, the Subscriber can indicate that it is only interested in samples that have a particular value for the file_name attribute. The infrastructure will take care of the filtering process and will ensure that the Subscriber will not receive any data with a different value for the attribute file_name. As a consequence, any take() action done on the DataReader will contain relevant information and there is no need to check the data first and then take it.
The API documentation should contain more detailed information about how to use a ContentFilteredTopic.
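As a rough sketch in Java (the package and method names below follow the classic DDS Java API as shipped by, for example, RTI Connext; other vendors differ slightly, and the DomainParticipant, Subscriber, and Topic are assumed to exist already):

import com.rti.dds.domain.DomainParticipant;
import com.rti.dds.infrastructure.StatusKind;
import com.rti.dds.infrastructure.StringSeq;
import com.rti.dds.subscription.DataReader;
import com.rti.dds.subscription.Subscriber;
import com.rti.dds.topic.ContentFilteredTopic;
import com.rti.dds.topic.Topic;

public static DataReader createFilteredReader(DomainParticipant participant,
                                              Subscriber subscriber,
                                              Topic fileTopic) {
    StringSeq parameters = new StringSeq();
    parameters.add("'report-2014-06.txt'"); // the one file_name this Subscriber wants

    // The middleware filters on file_name; this reader never sees other samples.
    ContentFilteredTopic filteredTopic = participant.create_contentfilteredtopic(
            "MyFilteredFiles",   // name of the filtered topic
            fileTopic,           // the related, unfiltered Topic
            "file_name = %0",    // SQL-like filter expression
            parameters);

    // Every take() on this reader returns only relevant data.
    return subscriber.create_datareader(
            filteredTopic,
            Subscriber.DATAREADER_QOS_DEFAULT,
            null,                // no listener in this sketch
            StatusKind.STATUS_MASK_NONE);
}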
Baseline info:
I'm using an external OAuth provider for login. If the user logs in with the external OAuth provider, they are allowed to enter my system. However, this user may not yet exist in my system. It's not really a technology issue, but I'm using JOliver EventStore for what it's worth.
Logic:
I'm not given a guid for new users. I just have an email address.
I check my read model before sending a command: if the user email exists, I issue a Login command with the ID; if not, I issue a CreateUser command with a generated ID. My issue is in the case of a new user.
A save occurs in the event store with the new ID.
Issue:
Assume two create commands are somehow issued before the read model is updated, due to a browser refresh or some other anomaly that occurs before consistency with the read model is achieved. That's OK; that's not my problem.
What Happens:
Because the new ID is a Guid comb, there's no chance the event store will know that these two CreateUser commands represent the same user. By the time they get to the read model, the read model will know (because they have the same email) and can merge the two records or take some other compensating action. But now my read model is out of sync with the event store, which still thinks these are two separate entities.
Perhaps it doesn't matter because:
Replaying the events will have the same effect on the read model, so that should be OK.
Because both commands are duplicate "Create" commands, they should contain identical information, so it's not like I'm losing anything in the event store.
Can anybody illuminate how they handled similar issues? If some compensating action needs to occur does the read model service issue some kind of compensation command when it realizes it's got a duplicate entry? Is there a simpler methodology I'm not considering?
You're very close to what I'd consider a proper possible solution. The scenario, if I may summarize, is somewhat like this:
Perform the OAuth-entication.
Using the read model, decide between a recurring visitor and a new visitor, based on the email address.
In the case of a new visitor, send a RegisterNewVisitor command message that gets handled and stored in the event store.
Assume there is some concurrency going on that, for the same email address, causes two RegisterNewVisitor messages, each containing what the system thinks is the key associated with the email address. These keys (guids) are different.
Detect this duplicate key issue in the read model and merge both read model records into one record.
Now instead of merging the records in the read model, why not send a ResolveDuplicateVisitorEmailAddress { Key1, Key2 } towards your domain model, leaving it up to the domain model (the codified form of the business decision to be taken) to resolve this issue. You could even have a dedicated read model to deal with these kind of issues, the other read model will just get a kind of DuplicateVisitorEmailAddressResolved event, and project it into the proper records.
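Sketched in Java, with all type and member names hypothetical (mirroring the message names above) and the event-store plumbing elided:

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Command sent once the read model spots two keys for one email address.
final class ResolveDuplicateVisitorEmailAddress {
    final UUID key1; // the visitor record to keep
    final UUID key2; // the duplicate to fold into it

    ResolveDuplicateVisitorEmailAddress(UUID key1, UUID key2) {
        this.key1 = key1;
        this.key2 = key2;
    }
}

// Event recorded once the domain has taken the business decision.
final class DuplicateVisitorEmailAddressResolved {
    final UUID survivingKey;
    final UUID duplicateKey;

    DuplicateVisitorEmailAddressResolved(UUID survivingKey, UUID duplicateKey) {
        this.survivingKey = survivingKey;
        this.duplicateKey = duplicateKey;
    }
}

// The aggregate codifies how duplicates are resolved; read models later
// project the resulting event into the proper records.
class Visitor {
    private final List<Object> uncommittedEvents = new ArrayList<>();

    void resolveDuplicate(ResolveDuplicateVisitorEmailAddress command) {
        // The business rule lives here, e.g. the earlier registration wins.
        uncommittedEvents.add(new DuplicateVisitorEmailAddressResolved(
                command.key1, command.key2));
    }

    List<Object> uncommittedEvents() {
        return uncommittedEvents;
    }
}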
Word of warning: You've asked a technical question and I gave you a technical, possible solution. In general, I would not apply this technique unless I had some business indicator that it is worth investing in (what's the frequency of a user logging in concurrently for the first time? maybe solving it this way is just a way of ignoring the root cause: flaky OAuth, no register-new-visitor process in place, etc.). There are other technical solutions to this problem, but I wanted to give you the one closest to what you already have in place. They range from registering new visitors sequentially to keeping an in-memory projection of the visitors not yet in the read model.