If, for whatever reason, my application loses track of what sequence number I was on, what is the recommended way of re-establishing the session to continue trading?
From FIX 4.1 onwards, the Logon message (A) contains a tag ResetSeqNumFlag (141). Setting it to 'Y' "indicates that both sides of the FIX session should reset sequence numbers."
If the acceptor receives a message out of sequence, it will send a reject message with the reason in the Text field. That text usually mentions the sequence number it expected, so parse that part of the message, take the expected sequence number, and start again from there.
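For illustration, a Logon that requests a reset carries 141=Y (SOH delimiters shown as '|'; the CompIDs are placeholders, and BodyLength/CheckSum are left as '...' rather than computed):

8=FIX.4.2|9=...|35=A|34=1|49=INITIATOR|56=ACCEPTOR|52=20240102-08:00:00.000|98=0|108=30|141=Y|10=...|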
Computer System Architecture - Morris Mano, chapter 5, section 7, figure 5-13:
When IEN is set, the hardware checks whether FGI or FGO is 1, and if so an interrupt cycle happens. But as I understand it, FGI = 1 means the information in INPR cannot be changed, and FGO is the reverse: when FGO is 1, information in AC can be transferred to OUTR (OUTR can be changed). So shouldn't the interrupt cycle be triggered when FGI = 0 or FGO = 1, since INPR or OUTR can be changed under those conditions, which would make sense for executing an interrupt?
Either flag being 1 logically means the device is "ready", but what "ready" means differs for input and output devices. In either case, a flag being 1 means the processor can or should now take action.
FGI=1 means the input device is ready, but that really means a new input is available (e.g. the user typed a key on the keyboard) and the processor should accept it. FGI=1 also prevents the input device(s) from overwriting the prior input held in INPR that the processor hasn't accepted yet. When the processor accepts the input, FGI goes to 0, unlocking the INPR register; that allows the input device to write again, which it will eventually do when the user presses another key (sending FGI back to 1 to signal the processor).
FGO=1 means ready for output, which really means the last output has been fully accepted by the device, so the OUTR register is unlocked for the processor to write new data (a character, for the console). FGO=0 prevents the processor from writing OUTR, as the output device hasn't accepted the last one yet.
The interrupt service routine should check each flag, and if FGI=1 then accept an input (INPR->AC) and move it into a buffer for the user program to read when it is ready. Whereas if FGO=1, then move an output character from memory buffer into the AC, and then do AC->OUTR, also lowering FGO to 0, which will preclude the processor writing until that data has been accepted by the device.
So, FGI=0 means that the processor has accepted the prior INPR value provided by the input device, and there is no new character as yet but the register is unlocked so the device can write at will.
FGO=0 means that the processor has written a value to the OUTR register, but the output device hasn't accepted that yet, so the register should be considered locked.
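As a rough sketch of that flag logic in C (plain variables stand in for the registers and flags, and the software buffers are assumptions; this is not Mano's actual microprogram):

/* Registers and flags modeled as plain variables for illustration */
static unsigned char INPR, OUTR;            /* input / output registers */
static int FGI = 0, FGO = 1;                /* input / output flags */
static unsigned char in_buf[256], out_buf[256];
static int in_len = 0, out_pos = 0, out_len = 0;

void interrupt_service(void)
{
    if (FGI == 1) {                          /* device placed a new character in INPR */
        in_buf[in_len++] = INPR;             /* INPR -> AC -> memory buffer */
        FGI = 0;                             /* unlock INPR: the device may write again */
    }
    if (FGO == 1 && out_pos < out_len) {     /* device has accepted the previous output */
        OUTR = out_buf[out_pos++];           /* memory buffer -> AC -> OUTR */
        FGO = 0;                             /* locked until the device accepts OUTR */
    }
}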
I am trying to make a patch that plays audio when a bang is pressed. I have added a symbol so that I don't need to keep re-importing the file. However, it works only some of the time.
A warning in the Pd console reads: Start requested with no prior open
However, I have imported an audio file.
Is there something that I have done wrong?
Use [trigger] to get the order-of-execution correct.
One problem is that whenever you send a [1( to [readsf~], you must have sent an [open ...( message directly beforehand.
Even if you have just successfully opened a file but then stopped it (with [0() or played it through (so it has been closed automatically), you have to send the [open ...( message again.
The real problem is that your messages are out of order: you should never have a fan-out (that is, connecting one message outlet to multiple inlets), as the order in which those connections fire is undefined.
Use [trigger] to get the order-of-execution correct.
(Mastering [trigger] is probably the single most important step in learning to program Pd)
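A minimal wiring sketch (the filename is just a placeholder): [t b b] fires its right outlet first, so the [open ...( message always reaches [readsf~] immediately before the [1( message.

[bng]
 |
[t b b]
 |    \
 |     [open mysound.wav(
[1(     |
 |      |
[readsf~]
 |
[dac~]

A [0( message to stop playback can still be sent on its own; only starting needs a fresh [open ...( each time.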
Is there any way to set a request time-out when sending a message from the initiator?
We had an issue where a late reply from the acceptor left the application unresponsive. The cause could be network delay or something else, but it would be good if we could set a time-out option here.
I looked through the Application callbacks and didn't find anything.
I want to set a time-out option with the SendToTarget API.
Any suggestions?
Did you add CheckLatency and MaxLatency to your config file and confirm the behavior?
CheckLatency - If set to Y, messages must be received from the counterparty within a defined number of seconds (see MaxLatency). It is useful to turn this off if a system uses local time for its timestamps instead of GMT.
MaxLatency - If CheckLatency is set to Y, this defines the number of seconds of latency allowed for a message to be processed. Valid values: positive integer. Default: 120.
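For example, both settings go in the [DEFAULT] (or a [SESSION]) section of the QuickFIX settings file; the value 30 below is only illustrative:

[DEFAULT]
CheckLatency=Y
MaxLatency=30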
I'm experiencing the same problem using QuickFIX/n.
Looking at the source code for version 1.4, the section that reads those settings from the configuration file is commented out and replaced with hard-coded default values:
// FIXME to get from config if available
session.MaxLatency = 120;
session.CheckLatency = true;
When the QuickFIX initiator reconnects at StartTime (defined in the config), it deletes the sequence number files but does not set ResetSeqNumFlag to Y, and the server replies with a Logout message with the text "seq msg number to low ...".
Is there a way to set ResetSeqNumFlag = Y only for this case? I don't want to reset the sequence on every logon.
This appears to be a QuickFIX/J quirk (some might consider it a bug). If ResetOnLogon=N, then no ResetSeqNumFlag=Y is sent when the session start time triggers a logon. If ResetOnLogon=Y, then ResetSeqNumFlag=Y is sent on every logon. I believe this is not a big problem in practice, because participants in a FIX session typically reset their sequence numbers locally after a session ends (it logically ends at the end time, not at a connection disconnect).
If you want to slightly modify the source code to implement this behavior, you'd modify the quickfix.Session next() method. You could add a local flag that indicates a session has restarted (per the schedule as determined by checkSessionTime()). Pass that flag to generateLogon() and that method would use it to determine when to send ResetSeqNumFlag=Y regardless of the ResetOnLogon configuration.
I don't want to reset the sequence on every log-on.
Then don't do it! Set ResetOnLogon=N.
At StartTime, the session will always reset sequence numbers. If ResetOnLogon=N, they won't reset again until the next StartTime.
The initiator and acceptor should always have matching ResetOnXXX settings.
What you are asking cannot, and should not, be done. You would start your engine with some config and then change the config while it is running; if something goes wrong, it will be very difficult to pinpoint what started the issue.
Instead of setting ResetSeqNumFlag = Y, try adding ResetOnLogon=Y in the config on the acceptor side (that is, if you have control over it), or ResetOnLogout=Y / ResetOnDisconnect=Y in your initiator config file. That would be much easier; changing config while running is probably not the best solution.
Your logout will happen at EndTime anyway (a disconnect can happen at any time), and that should be easier for your application.
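As a sketch, the relevant initiator settings might look like this (session details are placeholders; pick the ResetOn* combination that matches what your acceptor expects):

[SESSION]
BeginString=FIX.4.4
SenderCompID=MYCOMP
TargetCompID=THEIRCOMP
StartTime=08:00:00
EndTime=17:00:00
ResetOnLogon=N
ResetOnLogout=Y
ResetOnDisconnect=N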
Good morning,
I am retrieving a stream of bytes from a serial device that connects to the iPad. Once connected, the supplied SDK calls a delegate method with the bytes that have been forwarded.
The readings forwarded by the serial device via the SDK are in the following format:
!X1:000.0;
Once connected to the serial device, the delegate methods start receiving data immediately; this could be in various states of completion, e.g.
:000.00;
What I need to do is establish a concrete way of splitting the readings returned from the serial device so that I can manipulate the data.
Some of the options I have tried are:
Simply concatenate the received strings for a fixed period and then split the NSString on the ";" character. This is a little inefficient, though, and does not allow me to manipulate the data dynamically:
- (void)receivingDelegateMethod:(NSString *)aString {
    if (counter < 60) {
        // stringByAppendingString: returns a new string, so assign it back
        self.PropertyString = [self.PropertyString stringByAppendingString:aString];
        counter++;
    } else {
        NSArray *readings = [self.PropertyString componentsSeparatedByString:@";"];
    }
}
Determine a starting point by looking for the "!" character and then appending the resulting substring to an NSString property. Subsequent calls to the delegate method append to this property and then remove the first 10 characters.
I know there are further options, such as NSScanner and regular expressions, but I wanted to get the opinion of the community before spending more time on different approaches.
Thanks
Make a BOOL flag that indicates that the stream has been initialized, and set it to false. When you receive the next chunk of data, check the flag first. If it is not set, skip all characters until you see an exclamation point '!'. Once you see it, discard everything in front of it, and copy the rest of the string into the buffer. If the "is initialized" flag is set, append the entire string to the buffer without skipping characters.
Once you finish the append, scan the buffer for ! and ; delimited sections. For each occurrence of that pattern, call a designated method with a complete portion of the pattern. You can get fancy, and define your own "secondary" delegate for processing pre-validated strings.
You may need to detect disconnections, and set the "is initialized" flag back to NO.
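A minimal sketch of that approach in Objective-C; the buffer/flag properties and the handleCompleteReading: method are assumptions for illustration, not part of the SDK:

// Assumed elsewhere in the class:
//   @property (nonatomic, strong) NSMutableString *buffer;    // created as [NSMutableString string]
//   @property (nonatomic, assign) BOOL streamInitialized;

- (void)receivingDelegateMethod:(NSString *)aString {
    NSString *chunk = aString;

    // Not yet synchronised: discard everything before the first '!'
    if (!self.streamInitialized) {
        NSRange start = [chunk rangeOfString:@"!"];
        if (start.location == NSNotFound) {
            return; // no start marker seen yet
        }
        chunk = [chunk substringFromIndex:start.location];
        self.streamInitialized = YES;
    }

    [self.buffer appendString:chunk];

    // Hand off every complete "!...;" reading currently in the buffer
    NSRange end = [self.buffer rangeOfString:@";"];
    while (end.location != NSNotFound) {
        NSString *reading = [self.buffer substringToIndex:end.location];
        [self.buffer deleteCharactersInRange:NSMakeRange(0, end.location + 1)];
        [self handleCompleteReading:reading]; // e.g. "!X1:000.0"
        end = [self.buffer rangeOfString:@";"];
    }
}

// On disconnect, reset so the next chunk is re-synchronised:
//   self.streamInitialized = NO;
//   [self.buffer setString:@""];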