UaSerializationException: request exceeds remote max message size: 2434140 > 2097152 - opc-ua

I am a rookie. I tried to use the following code for a bulk subscription, but something went wrong. How can I solve this problem?
OpcUaSubscriptionManager subscriptionManager = opcUaClient.getSubscriptionManager();
UaSubscription subscription = subscriptionManager.createSubscription(publishInterval).get();
List<MonitoredItemCreateRequest> itemsToCreate = new ArrayList<>();
for (Tag tag : tagList) {
    NodeId nodeId = new NodeId(nameSpace, tag.getPath());
    ReadValueId readValueId = new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);
    MonitoringParameters parameters = new MonitoringParameters(
        subscription.nextClientHandle(), // client handle
        publishInterval,                 // sampling interval
        null,                            // filter, null means use default
        UInteger.valueOf(queueSize),     // queue size
        true                             // discard oldest
    );
    MonitoredItemCreateRequest request = new MonitoredItemCreateRequest(readValueId,
        MonitoringMode.Reporting, parameters);
    itemsToCreate.add(request);
}
BiConsumer<UaMonitoredItem, Integer> consumer = (item, id) ->
    item.setValueConsumer(this::onSubscriptionValue);
List<UaMonitoredItem> items = subscription.createMonitoredItems(
    TimestampsToReturn.Both,
    itemsToCreate,
    consumer
).get();
for (UaMonitoredItem item : items) {
    if (!item.getStatusCode().isGood()) {
        log.error("failed to create item for nodeId={} (status={})", item.getReadValueId().getNodeId(), item.getStatusCode());
    }
}

How many items are you trying to create?
It seems that the resulting message exceeds the limits set by the server you are connecting to. You may need to break your list up and create the items in smaller chunks.
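For example, with the same Milo calls already used in the question, the single createMonitoredItems call can be replaced by one call per batch. This is only a sketch: the batch size of 500 is an arbitrary example value, and it assumes the subscription, itemsToCreate, consumer and log variables from the snippet above.

int batchSize = 500; // example value; shrink it further if a batch still exceeds the server limit
List<UaMonitoredItem> allItems = new ArrayList<>();
for (int from = 0; from < itemsToCreate.size(); from += batchSize) {
    int to = Math.min(from + batchSize, itemsToCreate.size());
    // one CreateMonitoredItems service call per slice of the original list
    List<UaMonitoredItem> created = subscription.createMonitoredItems(
        TimestampsToReturn.Both,
        itemsToCreate.subList(from, to),
        consumer
    ).get();
    allItems.addAll(created);
}
for (UaMonitoredItem item : allItems) {
    if (!item.getStatusCode().isGood()) {
        log.error("failed to create item for nodeId={} (status={})", item.getReadValueId().getNodeId(), item.getStatusCode());
    }
}

The per-item status check stays the same; only the request is split so no single message exceeds the negotiated limit.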

I do not know the library you are using, but one of the earlier steps for an OPC UA client connecting to a server is to negotiate the maximum buffer size, the maximum total message size, and the maximum number of chunks a message can be split into; the OPC UA documentation calls this process the "Handshake".
If your request is too long, it should be split and sent in several chunks according to the limits previously negotiated with the server.
The server will probably also reply in several chunks; all of that has to be taken into account when programming an OPC UA client.

Related

Running Mirth Channel with API Requests to external server very slow to process

In the question "Mirth HTTP POST request with Parameters using Javascript" I used a semblance of the first answer. The code is shown below.
I'm running this code for a file that has nearly 46,000 rows, which equates to about 46,000 requests hitting our external server. I'm noting that Mirth is making requests to our API endpoint about 1.6 times per second. This is unusually slow, and I would like some help to understand whether this is something related to Mirth or related to the code below. Can repeated imports in a for loop cause slowdowns? Or is there a specific Mirth setting that limits the number of requests sent?
The version of Mirth is 3.12.0.
I started the process at 2:27 PM and it's expected to finish at almost 8:41 PM tonight; that's ridiculously slow.
//Skip the first header row
for (i = 1; i < msg['row'].length(); i++) {
    col1 = msg['row'][i]['column1'].toString();
    col2...
    ...
    //Insert into results if the file and sample aren't already present
    InsertIntoDatabase()
}
function InsertIntoDatabase() {
    with (JavaImporter(
        org.apache.commons.io.IOUtils,
        org.apache.http.client.methods.HttpPost,
        org.apache.http.client.entity.UrlEncodedFormEntity,
        org.apache.http.impl.client.HttpClients,
        org.apache.http.message.BasicNameValuePair,
        com.google.common.io.Closer)) {
        var closer = Closer.create();
        try {
            var httpclient = closer.register(HttpClients.createDefault());
            var httpPost = new HttpPost('http://<server_name>/InsertNewCorrection');
            var postParameters = [
                new BasicNameValuePair("col1", col1),
                new BasicNameValuePair(...
                ...
            ];
            httpPost.setEntity(new UrlEncodedFormEntity(postParameters, "UTF-8"));
            httpPost.setHeader('Content-Type', 'application/x-www-form-urlencoded');
            var response = closer.register(httpclient.execute(httpPost));
            var is = closer.register(response.entity.content);
            result = IOUtils.toString(is, 'UTF-8');
        } finally {
            closer.close();
        }
    }
    return result;
}

OPC UA Client capture the lost item values from the UA server after a disconnect/connection error?

I am building an OPC UA client using the OPC Foundation SDK. I am able to create a subscription containing some MonitoredItems.
On the OPC UA server these monitored items change value constantly (every second or so).
I want to disconnect the client (to simulate a broken connection), keep the subscription alive, and wait for a while. Then I reconnect and get my subscriptions back, but I also want all the monitored item values that were queued up during the disconnect. Right now I only get the last server value on reconnect.
I am setting a queue size:
monitoredItem.QueueSize = 100;
To roughly simulate a connection error I have set the "delete subscriptions" flag to false on CloseSession:
m_session.CloseSession(new RequestHeader(), false);
My question is: how do I capture the content of the queue after a disconnect/connection error?
Should the 'lost values' arrive as new MonitoredItem_Notification events automatically when the client reconnects?
Should the SubscriptionId be the same as before the connection was broken?
Should the SessionId be the same, or will a new SessionId let me keep the existing subscriptions? What is the best way to simulate a connection error?
Many questions :-)
A sample from the code where I create the subscription containing some MonitoredItems, and the MonitoredItem_Notification event method.
Any OPC UA guru out there?
if (node.Displayname == "node to monitor")
{
    MonitoredItem mon = CreateMonitoredItem((NodeId)node.reference.NodeId, node.Displayname);
    m_subscription.AddItem(mon);
    m_subscription.ApplyChanges();
}

private MonitoredItem CreateMonitoredItem(NodeId nodeId, string displayName)
{
    if (m_subscription == null)
    {
        m_subscription = new Subscription(m_session.DefaultSubscription);
        m_subscription.PublishingEnabled = true;
        m_subscription.PublishingInterval = 3000;//1000;
        m_subscription.KeepAliveCount = 10;
        m_subscription.LifetimeCount = 10;
        m_subscription.MaxNotificationsPerPublish = 1000;
        m_subscription.Priority = 100;
        bool cache = m_subscription.DisableMonitoredItemCache;
        m_session.AddSubscription(m_subscription);
        m_subscription.Create();
    }
    // add the new monitored item.
    MonitoredItem monitoredItem = new MonitoredItem(m_subscription.DefaultItem);
    //Each time a monitored item is sampled, the server evaluates the sample using a filter defined for each monitored item.
    //The server uses the filter to determine if the sample should be reported. The type of filter depends on the type of item:
    //DataChangeFilter for Variables, EventFilter when monitoring Events, etc.
    //MonitoringFilter f = new MonitoringFilter();
    //DataChangeFilter f = new DataChangeFilter();
    //f.DeadbandValue
    monitoredItem.StartNodeId = nodeId;
    monitoredItem.AttributeId = Attributes.Value;
    monitoredItem.DisplayName = displayName;
    //Disabled, Sampling, Reporting (includes sampling)
    monitoredItem.MonitoringMode = MonitoringMode.Reporting;
    //How often the client wishes the server to check for new values. Must be 0 if the item is an event.
    //If negative, the SamplingInterval is set equal to the PublishingInterval (inherited).
    //The subscription's KeepAliveCount should always be longer than the SamplingInterval/PublishingInterval.
    monitoredItem.SamplingInterval = 500;
    //Number of samples stored on the server between each reporting
    monitoredItem.QueueSize = 100;
    monitoredItem.DiscardOldest = true;//Discard oldest values when full
    monitoredItem.CacheQueueSize = 100;
    monitoredItem.Notification += MonitoredItem_Notification;
    if (ServiceResult.IsBad(monitoredItem.Status.Error))
    {
        return null;
    }
    return monitoredItem;
}
private void MonitoredItem_Notification(MonitoredItem monitoredItem, MonitoredItemNotificationEventArgs e)
{
    if (this.InvokeRequired)
    {
        this.BeginInvoke(new MonitoredItemNotificationEventHandler(MonitoredItem_Notification), monitoredItem, e);
        return;
    }
    try
    {
        if (m_session == null)
        {
            return;
        }
        MonitoredItemNotification notification = e.NotificationValue as MonitoredItemNotification;
        if (notification == null)
        {
            return;
        }
        string sess = m_session.SessionId.Identifier.ToString();
        string s = string.Format(" MonitoredItem: {0}\t Value: {1}\t Status: {2}\t SourceTimeStamp: {3}",
            monitoredItem.DisplayName,
            notification.Value.WrappedValue.ToString(),
            notification.Value.StatusCode.ToString(),
            notification.Value.SourceTimestamp.ToLocalTime().ToString("HH:mm:ss.fff"));
        richTextBox1.AppendText(s + " SessionId: " + sess);
    }
    catch (Exception exception)
    {
        ClientUtils.HandleException(this.Text, exception);
    }
}
I don't know how much of this, if any, the SDK you're using does for you, but the approach when reconnecting is generally:
1. Try to resume (re-activate) your old session. If this is successful your subscriptions will already exist and all you need to do is send more PublishRequests. Since you're trying to test by closing the session, this probably won't work.
2. Create a new session and then call the TransferSubscriptions service to transfer the previous subscriptions to your new session. You can then start sending PublishRequests and you'll get the queued notifications.
Again, depending on the stack/SDK/toolkit you're using some or none of this may be handled for you.

PubNub to server data transfer

I am building an IoT application. I am using PubNub to communicate between the hardware and the user.
Now I need to store all the messages and data coming from the hardware and from the user in a central server. We want to do a bit of machine learning.
Is there a way to do this other than having the server subscribe to all the output channels (There will be a LOT of them)?
I was hoping for some kind of once-a-day data dump involving the storage and playback module in PubNub
Thanks in advance
PubNub to Server Data Transfer
Yes, you can perform once-a-day data dumps using the Storage and Playback feature.
But first, check this out! You can subscribe to wildcard channels like a.* and a.b.* to receive all messages in the hierarchy below them. That way you can receive messages on all channels if you prefix each channel with a root channel, like root.chan_1 and root.chan_2. Now you can subscribe to root.* and receive all messages under root.
To enable once-a-day data dumps with Storage and Playback, first enable Storage and Playback on your account. PubNub will store all your messages on disk across multiple data centers for reliability and a read-latency performance boost. Lastly, you can use the History API on your server to fetch all stored data, going as far back as forever, as long as you know which channels to fetch from.
Here is a JavaScript function that will fetch all messages from a channel.
Get All Messages Usage
get_all_history({
    limit    : 1000,
    channel  : "my_channel_here",
    error    : function(e) { },
    callback : function(messages) {
        console.log(messages);
    }
});
Get All Messages Code
function get_all_history(args) {
    // Assumes an initialized PubNub v3 instance named `pubnub` is in scope.
    var channel  = args['channel']
      , callback = args['callback']
      , limit    = +args['limit'] || 5000
      , start    = 0
      , count    = 100
      , history  = []
      , params   = {
            channel  : channel,
            count    : count,
            callback : function(messages) {
                var msgs = messages[0];
                start = messages[1];
                params.start = start;
                PUBNUB.each( msgs.reverse(), function(m) { history.push(m) } );
                callback(history);
                if (history.length >= limit) return;
                if (msgs.length < count) return;
                add_messages();
            },
            error : function(e) {
                console.error('HISTORY ERROR', e);
            }
        };
    add_messages();
    function add_messages() { pubnub.history(params) }
}

Why are repeating groups required when requesting market data over FIX?

Can anyone tell me why we need to use repeating groups in a market data request? And what response/reply should we receive from the acceptor for a market data request? Also, how can we receive a market data request on the acceptor side?
Sending Market Data request
public void sendMarketDataRequest(SessionID sessionId, String request, int ord) { // request new or old
    String bankName = "HBL";
    String mdReqCcyPair = "EURUSD";
    String mkdreqId = "010qwerty";
    SubscriptionRequestType type = new SubscriptionRequestType('1');
    if (request.equals("new")) {
        reqId.put(mkdreqId, mkdreqId);
    } else {
        type.setValue('2');
    }
    quickfix.fix44.MarketDataRequest mdRequest = new quickfix.fix44.MarketDataRequest(new MDReqID(mkdreqId), type, new MarketDepth(1));
    mdRequest.setField(new quickfix.field.Symbol(mdReqCcyPair));
    mdRequest.setField(new Product(2));
    mdRequest.setField(new NoRelatedSym(1));
    mdRequest.setField(new MDUpdateType(0));
    mdRequest.setField(new NoMDEntryTypes(3));
    mdRequest.setField(new StringField(582, "1"));
    quickfix.fix44.MarketDataSnapshotFullRefresh.NoMDEntries group = new quickfix.fix44.MarketDataSnapshotFullRefresh.NoMDEntries();
    group.set(new MDEntryType('0'));
    group.set(new MDEntryPx(12.32));
    group.set(new MDEntrySize(10));
    group.set(new OrderID("OrderId"));
    mdRequest.addGroup(group);
    group.set(new MDEntryType('1'));
    group.set(new MDEntryPx(12.32));
    group.set(new MDEntrySize(10));
    group.set(new OrderID("OrderId"));
    mdRequest.addGroup(group);
    String mdReqDealtCcy = mdReqCcyPair.substring(0, 3);
    mdRequest.setField(new Currency(mdReqDealtCcy));
    mdRequest.setField(new NoPartyIDs(1));
    mdRequest.setField(new PartyID(bankName));
    try {
        boolean re = Session.sendToTarget(mdRequest, sessionId);
        System.out.println(mdRequest);
        System.out.println(re);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Receiving End Code
public void onMessage(quickfix.fix44.MarketDataRequest message, SessionID sessionID)
        throws FieldNotFound, UnsupportedMessageType, IncorrectTagValue {
    System.out.println("On Message: " + message);
}
Market data requests are not normally used for a single instrument; you normally want market data for a set of instruments. Each group in the repeating group set represents an instrument you want data for. The response will depend on your counterparty and on when you last had a full market data refresh (usually daily). On your initial request, and then on a fixed schedule thereafter, you will receive a full market data refresh message. If your counterparty supports an intraday update model you will then receive snapshot refresh messages, which are partial data refreshes. A snapshot message provides an update on just the market data that has changed since the last refresh (full or partial) and is intended to be a smaller message and therefore, hopefully, lower latency. Not all counterparties support partial refresh. If you are on the acceptor side, where you are receiving market data requests (normally as the sell side), you should provide a full market data refresh first, covering all of the requested instrument details. Whether you support incremental updates is a business decision.
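For illustration, here is a sketch (QuickFIX/J, FIX 4.4, using the same generated classes as the question) of a request built with the MarketDataRequest's own repeating groups: one NoMDEntryTypes group per entry type and one NoRelatedSym group per instrument, plus an acceptor-side loop that reads the instrument groups back out. The class name, symbols, request ID and field values are example data only, not taken from the question.

import quickfix.FieldNotFound;
import quickfix.Session;
import quickfix.SessionID;
import quickfix.field.*;

public class MarketDataRequestExample {

    public static void send(SessionID sessionId) throws Exception {
        quickfix.fix44.MarketDataRequest mdRequest = new quickfix.fix44.MarketDataRequest(
                new MDReqID("010qwerty"),          // example request id
                new SubscriptionRequestType('1'),  // snapshot + updates
                new MarketDepth(1));
        mdRequest.setField(new MDUpdateType(0));   // full refresh

        // One NoMDEntryTypes group per entry type we want (bid and offer here).
        quickfix.fix44.MarketDataRequest.NoMDEntryTypes entryType =
                new quickfix.fix44.MarketDataRequest.NoMDEntryTypes();
        entryType.set(new MDEntryType(MDEntryType.BID));
        mdRequest.addGroup(entryType);
        entryType.set(new MDEntryType(MDEntryType.OFFER));
        mdRequest.addGroup(entryType);

        // One NoRelatedSym group per instrument we want data for.
        quickfix.fix44.MarketDataRequest.NoRelatedSym relatedSym =
                new quickfix.fix44.MarketDataRequest.NoRelatedSym();
        relatedSym.set(new Symbol("EURUSD"));
        mdRequest.addGroup(relatedSym);
        relatedSym.set(new Symbol("GBPUSD"));
        mdRequest.addGroup(relatedSym);

        Session.sendToTarget(mdRequest, sessionId);
    }

    // Acceptor side: iterate the repeating groups of the incoming request.
    public static void onMessage(quickfix.fix44.MarketDataRequest message, SessionID sessionID)
            throws FieldNotFound {
        int count = message.getGroupCount(NoRelatedSym.FIELD);
        quickfix.fix44.MarketDataRequest.NoRelatedSym group =
                new quickfix.fix44.MarketDataRequest.NoRelatedSym();
        for (int i = 1; i <= count; i++) {
            message.getGroup(i, group);
            String symbol = group.getSymbol().getValue();
            // build and send a full market data refresh for 'symbol' here
        }
    }
}

Note that addGroup copies the group, so the same group object can be reused and reset for each entry, and QuickFIX/J maintains the group count fields for you.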

Spymemcached - Memcache/Membase Failover

Platform: 64 Bit windows OS, spymemcached-2.7.3.jar, J2EE
We want to use two memcache/membase servers for our caching solution. We want to allocate 1 GB of memory to each memcache/membase server, so in total we can cache 2 GB of data.
We are using the spymemcached Java client for setting and getting data from memcache. We are not using any replication between the two membase servers.
We load the MemcachedClient object at the time of our J2EE application startup.
URI server1 = new URI("http://192.168.100.111:8091/pools");
URI server2 = new URI("http://127.0.0.1:8091/pools");
ArrayList<URI> serverList = new ArrayList<URI>();
serverList.add(server1);
serverList.add(server2);
client = new MemcachedClient(serverList, "default", "");
After that we are using memcacheClient to get and set value in memcache/membase server.
Object obj = client.get("spoon");
client.set("spoon", 50, "Hello World!");
It looks like the MemcachedClient is setting and getting values only from server1.
If we stop server1, it fails to get/set values. Should it not use server2 when server1 is down? Please let me know if we are doing anything wrong here...
The spymemcached Java client does not handle membase failover for a particular node.
Ref: https://blog.serverdensity.com/handling-memcached-failover/
We need to handle it manually (in our code).
We can do this by using a ConnectionObserver.
Here is my code:
public static void main(String a[]) throws InterruptedException {
    try {
        URI server1 = new URI("http://192.168.100.111:8091/pools");
        URI server2 = new URI("http://127.0.0.1:8091/pools");
        final ArrayList<URI> serverList = new ArrayList<URI>();
        serverList.add(server1);
        serverList.add(server2);
        final MemcachedClient client = new MemcachedClient(serverList, "bucketName", "");
        client.addObserver(new ConnectionObserver() {
            @Override
            public void connectionLost(SocketAddress arg0) {
                // method called when a connection is lost
                for (MemcachedNode node : client.getNodeLocator().getAll()) {
                    if (!node.isActive()) {
                        client.shutdown();
                        // re-init your client here; after re-init it will connect to your secondary node
                        break;
                    }
                }
            }

            @Override
            public void connectionEstablished(SocketAddress arg0, int arg1) {
                // method called when a connection is established
            }
        });
        Object obj = client.get("spoon");
        client.set("spoon", 50, "Hello World!");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
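As an illustration of the re-init step mentioned in the comment above, here is a minimal sketch that swaps in a fresh client when a connection is lost. It reuses the same MemcachedClient constructor and ConnectionObserver callbacks as the code above; the wrapper class and method names, the bucket name, and the absence of retry/back-off logic are assumptions made for the example, not part of the original answer.

import java.io.IOException;
import java.net.SocketAddress;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

import net.spy.memcached.ConnectionObserver;
import net.spy.memcached.MemcachedClient;

// Hypothetical wrapper: re-creates the client when a node drops so that
// subsequent get/set calls go to whichever node is still reachable.
public class FailoverAwareCache {
    private final List<URI> serverList;      // same base URIs used at startup
    private volatile MemcachedClient client; // replaced on failover

    public FailoverAwareCache(List<URI> serverList) throws IOException {
        this.serverList = serverList;
        this.client = connect();
    }

    private MemcachedClient connect() throws IOException {
        MemcachedClient c = new MemcachedClient(new ArrayList<URI>(serverList), "bucketName", "");
        c.addObserver(new ConnectionObserver() {
            @Override
            public void connectionLost(SocketAddress sa) {
                reinit(); // shut down the old client and bootstrap a new one
            }

            @Override
            public void connectionEstablished(SocketAddress sa, int reconnectCount) {
                // nothing to do here for this example
            }
        });
        return c;
    }

    private synchronized void reinit() {
        try {
            if (client != null) {
                client.shutdown();
            }
            client = connect();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public MemcachedClient client() {
        return client;
    }
}

Callers would then go through client() for every get/set so they always see the most recently bootstrapped client.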
client.get() would use the first available node, and therefore your value would be stored/updated on one node only.
You seem to be contradicting yourself a bit in your requirements - first you say that "we want to allocate 1 GB memory to each memcache/membase server so total we can cache 2 GB data", which implies a distributed cache model (a particular key is stored on one node in the cache farm), and then you expect to fetch it if that node is down, which obviously won't happen.
If you need your cache farm to survive a node failure without losing the data cached on that node, you should use replication, which is available in Membase - but obviously you would pay the price of storing the same values multiple times, so your desire of "1 GB per server... total 2 GB of cache" won't be possible.