Our customer (the acceptor for this connection) has specified that sequence numbers should be reset after each disconnect or logout, but also that the Logon message must NOT contain the ResetSeqNumFlag field.
I've removed the field from the Logon message in FIX44.xml but it's still being populated. From what I understand, as long as ResetOnLogon, ResetOnLogout or ResetOnDisconnect is set, the field will be populated.
How can I prevent ResetSeqNumFlag from being set regardless of other settings?
FIX version: 4.4
quickfixj version: 1.5.3
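For context, the reset behaviour described above is driven by the following session settings in the QuickFIX/J configuration (a minimal sketch; the CompIDs are placeholders, and the values shown only illustrate the combination our customer is asking for):

[session]
BeginString=FIX.4.4
SenderCompID=INITIATOR
TargetCompID=ACCEPTOR
ResetOnLogon=N
ResetOnLogout=Y
ResetOnDisconnect=Y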
I have a metaplex candy machine and collection that I set up several weeks back. Minting worked initially but is now failing.
The error reported is
custom program error: 0x3f
This appears to come from the nested instruction to the metadata program, which should be
set_and_verify_collection
readonly code: number = 0x3f;
readonly name: string = 'DataTypeMismatch';
It can be thrown from metadata deserialization:
https://github.com/metaplex-foundation/metaplex-program-library/blob/master/token-metadata/program/src/state/mod.rs
which is called for both the token metadata and the collection metadata.
I believe those are the only two places it would be thrown from in this method. AccountInfo is resolved for several accounts but it's only deserialized into a typed entity, with size and type considerations for those two entities.
Checking the metadata on the collection: it is present, and the length looks normal for Metaplex metadata accounts at 679 bytes.
Now, the metadata for the token being minted is not present because the tx failed. However, if I attempt a transaction without the 'SetCollectionDuringMint' instruction added, the tx succeeds.
Interesting. The metadata account for the token has zero bytes allocated.
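For anyone who wants to reproduce that check, here is a rough sketch against the devnet RPC endpoint (the metadata account address is a placeholder; in the raw getAccountInfo response, "value": null means the account does not exist, otherwise the base64 "data" field can be decoded to see how many bytes are allocated):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CheckMetadataAccount {
    public static void main(String[] args) throws Exception {
        // Placeholder metadata account address (PDA) and devnet RPC URL.
        String metadataAccount = "REPLACE_WITH_METADATA_PDA";
        String rpcUrl = "https://api.devnet.solana.com";

        // Standard Solana JSON-RPC getAccountInfo request.
        String body = "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"getAccountInfo\","
                + "\"params\":[\"" + metadataAccount + "\",{\"encoding\":\"base64\"}]}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(rpcUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // "value": null  -> account does not exist (zero bytes allocated).
        System.out.println(response.body());
    }
}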
I don't recall this changing. In fact, if I go through my source history to older revisions, I've not been explicitly requesting to create the metadata account. I've simply been pre-allocating the account and calling mint nft on the candy machine.
Did the candy machine change to no longer automatically create the metadata account for the minted NFT?
Almost as soon as I finished typing up the question, it occurred to me what the likely cause was.
It came to my attention a few weeks back that this older v2 version of the candy machine does not actually halt transaction execution on constraint violations, but rather charges the client a fee for executing the transaction incorrectly.
It's likely this 'bot tax' protocol is causing the real error, which may be occurring earlier, to be suppressed.
v3 of the candy machine makes this behaviour something you can disable, but we are a bit coupled to v2 at the moment.
Anyhow, what I think has happened here is that the bot-taxing version of the candy machine allowed the NFT to mint but didn't actually finish setting it up. Then the next instruction, set collection during mint, was unable to complete.
The real failure is earlier in the transaction, somewhere during the mint, where we no longer meet the mint criteria, and the old version of the candy machine is just charging us and failing silently.
Unfortunately, the root cause is still not clear. One other change that occurred between then and now is that the collection is now 'live', having passed its go-live date. I'll have to dig through the validation constraints and see whether there are any bot-tax-related short circuits tied to this go-live transition.
UPDATE: It looks like there were some changes specific to devnet's token metadata program, and my machine was affected. I'll need some new devnet machines.
We have a WebService connector with multiple operations set up (GetObject for an account, Enable, Disable, ... Create, Update, ...).
These operations work as expected.
When we try to disable or enable a user account (Manage Access > Manage Accounts > User selection > Enable/Disable), IIQ performs an Update Account operation with a correct payload, and "Action Status" is set to "Pending Enable...". Immediately after that, IIQ performs a GetObject operation through the WebService connector on the user account and retrieves the new version of the user with updated values. "Status" is correctly set to "Active", but "Action Status" stays on "Pending Enable..." and we don't know why.
Even if we refresh the user info, IIQ performs a GetObject operation on the WebService, but it still does not remove the "Pending..." action status.
Also, after running "Perform Identity Request Maintenance", the issue is still there.
Check whether you have set the schema attribute for the account enable status in the configuration as <AttributeName>=<Active value>; setting only the attribute name does not make the account show as Disabled/Active. For example:
accountStatus=Active
In this case, all values other than Active will be treated as inactive.
I have migrated Oracle Forms 10g to 12c, and unusual issues are occurring in 12c. Here is one of the issues reported by users.
I have a form that has certain required fields. When users leave an item blank, the error message "Field are required" is displayed on the form status bar. As expected, users cannot go to the next field until they put something into the required item.
In 10g, users could tab backwards out of a required field, leaving it blank, without getting an error, but not in 12c anymore.
I came up with a method that goes something like this (I can't say it is a real solution):
Step 1. Initially set Required to "Yes" in the item property palette.
Step 2. Create a KEY-PREV-ITEM trigger on the required items and put in the following code:
-- Turn off Required so the user can tab backwards out of the empty item
IF Get_Item_Property(:SYSTEM.CURSOR_ITEM, REQUIRED) = 'TRUE' THEN
  Set_Item_Property(:SYSTEM.CURSOR_ITEM, REQUIRED, PROPERTY_FALSE);
END IF;
Step 3. Create a KEY-NEXT-ITEM trigger on the required items and reset the Required property to true.
It looks silly and unreliable, since the more code you write, the more bugs will come out.
Is there any built-in feature in Oracle Forms 12c to handle such a case?
Many many thanks
I think it is better to use DEFER_REQUIRED_ENFORCEMENT here.
We do it like this: we set it to true when we navigate out of the item, and back to false after the navigation.
The usage notes from the Oracle Forms Builder help explain the difference between the Yes and 4.5 options:
This property applies only when item-level validation is in effect. By default, when an item has Required set to true, Oracle Forms will not allow navigation out of the item until a valid value is entered. This behavior will be in effect if you set Defer Required Enforcement to No. (An exception is made when the item instance does not allow end-user update; in this unusual case, a Defer Required Enforcement setting of No is ignored and item-level validation does not take place.)
If you set Defer Required Enforcement to Yes (PROPERTY_TRUE for runtime) or to 4.5 (PROPERTY_4_5 for runtime), you allow the end user to move freely among the items in the record, even if they are null, postponing enforcement of the Required attribute until validation occurs at the record level.
When Defer Required Enforcement is set to Yes, null-valued Required items are not validated when navigated out of. That is, the WHEN-VALIDATE-ITEM trigger (if any) does not fire, and the item's Item Is Valid property is unchanged. If the item value is still null when record-level validation occurs later, Oracle Forms will issue an error.
When Defer Required Enforcement is set to 4.5, null-valued Required items are not validated when navigated out of, and the item's Item Is Valid property is unchanged. However, the WHEN-VALIDATE-ITEM trigger (if any) does fire. If it fails (raises Form_Trigger_Failure), the item is considered to have failed validation and Oracle Forms will issue an error. If the trigger ends normally, processing continues normally. If the item value is still null when record-level validation occurs later, Oracle Forms will issue an error at that time.
Setting a value of 4.5 for Defer Required Enforcement allows you to code logic in a WHEN-VALIDATE-ITEM trigger that will be executed immediately whenever the end-user changes the item's value (even to null) and then navigates out. Such logic might, for example, update the values of other items. (The name "4.5" for this setting reflects the fact that in Release 4.5, and subsequent releases running in 4.5 mode, the WHEN-VALIDATE-ITEM trigger always fired during item-level validation.)
Migration note: If your Forms application used "4.5" as the Runtime Compatibility Mode property setting, the Oracle Forms Migration Assistant will automatically set the Defer Required Enforcement property to "4.5" because the Runtime Compatibility Mode property is obsolete in Oracle Forms.
I'm getting familiar with the Cloud SQL API (v1beta1). I'm trying to update authorizedNetworks (sql.instances.update) using the API Explorer. I think my request body is alright except for 'settingsVersion'. According to the docs:
The version of instance settings. This is a required field for update
method to make sure concurrent updates are handled properly. During
update, use the most recent settingsVersion value for this instance
and do not try to update this value.
Source: https://developers.google.com/cloud-sql/docs/admin-api/v1beta3/instances/update
I have not found anything useful related to settingsVersion. When I try different strings, instead of receiving 200 and the response, I get 400 and:
"message": "Invalid value for: Expected a signed long, got '' (class
java.lang.String)"
If I insert a random number, I get 412 (Precondition Failed) and:
"message": "Condition does not match."
Where do I obtain settingsVersion, and what is a "signed long" string?
You should do a GET operation on your instance and fetch the current settings; those settings contain the current settingsVersion, and that is the value you should use. It is a numeric 64-bit value, which is why passing a quoted string produces the "Expected a signed long" error.
This is done to avoid unintentional settings overwrites.
For example, if two people get the current instance status which has version 1, and they both try to change something different (for example, one wants to change the tier and the other wants to change the pricingPlan) by doing an Update operation, the second one to send the request would undo the change of the first one if the operation was permitted. However, since the version number is increased every time an update operation is performed, once the first person updates the instance, the second person's request will fail because the version number does not match anymore.
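As a rough illustration of that read-then-update flow (this is only a sketch: the project and instance names are placeholders, the v1beta4-style endpoint and request-body shape are assumptions rather than what API Explorer sends, and AUTH_TOKEN stands in for a real OAuth access token):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdateAuthorizedNetworks {
    public static void main(String[] args) throws Exception {
        String base = "https://sqladmin.googleapis.com/sql/v1beta4/projects/my-project/instances/my-instance";
        String token = "AUTH_TOKEN"; // placeholder OAuth 2.0 access token
        HttpClient client = HttpClient.newHttpClient();

        // 1. GET the instance; the response JSON contains settings.settingsVersion.
        HttpRequest get = HttpRequest.newBuilder(URI.create(base))
                .header("Authorization", "Bearer " + token)
                .GET().build();
        String instanceJson = client.send(get, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println(instanceJson); // read settings.settingsVersion from here

        // 2. Send the update with the exact settingsVersion just read,
        //    as a number (a signed 64-bit integer), not a quoted string.
        long settingsVersion = 1; // replace with the value from step 1
        String updateBody = "{\"settings\":{\"settingsVersion\":" + settingsVersion + ","
                + "\"ipConfiguration\":{\"authorizedNetworks\":[{\"value\":\"203.0.113.0/24\"}]}}}";
        HttpRequest patch = HttpRequest.newBuilder(URI.create(base))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(updateBody))
                .build();
        System.out.println(client.send(patch, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}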
I am trying to validate one field through a Postgres trigger.
If the targeted field has a decimal value, I need to throw a warning but still allow the user to save the record.
I tried the options
RAISE EXCEPTION, RAISE ... USING
but they throw an error in the UI and the transaction is aborted.
I also tried
RAISE NOTICE, RAISE WARNING
but then no warning is shown and the record is simply saved.
It would be great if anyone could help with this.
Thanks in advance.
You need to set client_min_messages to a level that'll show NOTICEs and WARNINGs. You can do this:
At the transaction level with SET LOCAL
At the session level with SET
At the user level with ALTER USER
At the database level with ALTER DATABASE
Globally in postgresql.conf
You must then check for messages from the server after running queries and display them to the user or otherwise handle them. How to do that depends on the database driver you're using, which you haven't specified. PgJDBC? libpq? other?
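With PgJDBC, for example, a RAISE WARNING or RAISE NOTICE from the trigger arrives as an SQLWarning on the statement, which you can read after the query and show to the user (a minimal sketch; the JDBC URL, credentials, and table/column names are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLWarning;
import java.sql.Statement;

public class ShowTriggerWarnings {
    public static void main(String[] args) throws Exception {
        // Requires the PostgreSQL JDBC driver on the classpath; connection details are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "myuser", "mypassword");
             Statement stmt = conn.createStatement()) {

            // Make sure NOTICE/WARNING messages are sent to the client for this session.
            stmt.execute("SET client_min_messages TO NOTICE");

            // Run the statement that fires the trigger (placeholder table and values).
            stmt.executeUpdate("UPDATE my_table SET amount = 12.5 WHERE id = 1");

            // Any RAISE WARNING / RAISE NOTICE from the trigger shows up here.
            SQLWarning warning = stmt.getWarnings();
            while (warning != null) {
                System.out.println("Server said: " + warning.getMessage());
                warning = warning.getNextWarning();
            }
        }
    }
}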
Note that raising a notice or warning will not cause the transaction to pause and wait for user input. You really don't want to do that. Instead RAISE an EXCEPTION that aborts the transaction. Tell the user about the problem, and re-run the transaction if they approve it, possibly with a flag set to indicate that an exception should not be raised again.
It would be technically possible to have a PL/Perlu, PL/Pythonu, or PL/Java trigger pause execution while it asked the client via a side-channel (like a TCP socket) to approve an action. It'd be a really bad idea, though.