Write service in OPC UA is returning BadWriteNotSupported

I am using the OPC UA .NET client and server SDKs. I created a node on the server from the client using the AddNodes service. The node is not attached to any model in the server. Then I tried to write a value (e.g. 121) to the node. The write returned [BadWriteNotSupported]. Is there something I am doing wrong?

Probably you created a read-only node. Download UA Expert and inspect the node; it is a very handy tool for a second check.

It is because that node's AccessLevel or UserAccessLevel is read-only.
Make sure to set both the AccessLevel and the UserAccessLevel for that node to read and write,
so that you can read and write a value to that node.
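As an illustration only (the question uses the .NET SDKs, but the same attributes are involved), here is a minimal server-side sketch with the python-opcua library; the endpoint URL, namespace URI, and node names are all placeholders:

from opcua import Server

# Minimal sketch: a server variable whose access level allows client writes.
# Endpoint URL, namespace URI and node names are placeholders.
server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/demo/server/")
idx = server.register_namespace("http://example.org/demo")

objects = server.get_objects_node()
demo = objects.add_object(idx, "Demo")
value = demo.add_variable(idx, "MyValue", 0)

# Without this call the variable is read-only and a client Write is rejected;
# set_writable() marks the variable as writable by clients.
value.set_writable()

server.start()

With the .NET SDKs the idea is the same: when you build the AddNodes request for the variable, set its AccessLevel and UserAccessLevel attributes so that they include CurrentWrite as well as CurrentRead.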

In Chef, how is it possible to generate a variable in one recipe and use that variable in another recipe?

I want a recipe which will run on a client to create a variable that stores the FQDN of the client, and another recipe which will run on another server and use that variable. How is that possible in Chef?
Looks like you are looking for service discovery; Chef might not be the best tool for this job. However, if your client is running Chef, its FQDN is already stored on the Chef server. You can pull it in various ways. For example:
client_node = search(:node, "recipes:client_cookbook::client_recipe").first
Then you can access the client's FQDN from the node mash: client_node["fqdn"].

Use an instance of Orion Context Broker (FIWARE)

It is my first time with FIWARE technologies and I want to test an instance of the FI-PPP Testbed for Orion Context Broker. I have the service endpoint (http://catalogue.fi-ware.org/enablers/configuration-manager-orion-context-broker/instances) but I don't know how to use this information. I'm calling the service through the REST Console Chrome extension and I don't get any useful response.
What are the steps to test Orion Context Broker through the instance from http://catalogue.fi-ware.org/enablers?
UPDATE:
I'm reading https://forge.fi-ware.org/plugins/mediawiki/wiki/fiware/index.php/Publish/Subscribe_Broker_-_Orion_Context_Broker_-_Quick_Start_for_Programmers and it is not clear to me whether I need to install a Linux machine or use a virtual machine from FI-LAB.
Could anybody help me?
Thanks in advance.
I don't recommend using the Configuration Manager catalogue entry unless you have a strong reason to do so. Use the Publish/Subscribe Broker entry instead (see this post about the differences between Configuration Manager and Publish/Subscribe Broker).
Taking that into account, the Orion Context Broker instance that you should use is the one at orion.lab.fi-ware.org:1026. You need an authentication token to use it; a simple way of getting that token is described in the Orion Quick Start Guide.
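Once you have the token, a query against that instance looks roughly like this. It is only a sketch using Python's requests library; the token value and the Room1/Room entity id and type are placeholders, and the NGSI10 queryContext operation is just one example of what you can call:

import requests

# Placeholder token, obtained as described in the Quick Start Guide.
token = "<your-auth-token>"

headers = {"Accept": "application/json", "X-Auth-Token": token}

# Example NGSI10 queryContext request; the entity id and type are made up.
payload = {"entities": [{"type": "Room", "isPattern": "false", "id": "Room1"}]}

response = requests.post(
    "http://orion.lab.fi-ware.org:1026/v1/queryContext",
    headers=headers,
    json=payload,
)
print(response.status_code, response.text)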

How does an out-of-process semantic logging service receive events?

The reason I'm asking is I would like to use the out-of-proc mode, but I cannot install a service on each user's workstation, only on a central server. Is the communication between event source and listener service an ETW thing, or is there some kind of RPC I could use?
Yes, the out-of-process mode works by using ETW. All ETW events are system-wide, so the service just has to listen to ETW events.
ETW only works locally and does not offer a remote solution you could use. Your options are to install a service on each workstation, listen to the ETW events, and forward them to your server with an RPC solution you build yourself; using MSMQ comes to mind. Or have your application forward the events to your server directly so you don't need the service. Either way, you will have to build it yourself.
To add to Lars' answer, you could also log to SQL. There is a SQL sink you can use, but like everything else, to get the most customized fit you would build your own (or inherit from another class to give you a good starting point). Be careful though: not all sinks are created equal, and they all have their pros and cons. For example, with the SQL and Azure sinks you have to worry about high latency, and the XML formatter doesn't write the root start and end tags, so the output is not well-formed XML; whatever reads that file would have to provide them. Good luck!

AWS deployment without using SSH

I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run. So how can I deploy my application to an EC2 instance without using SSH?
This was the article in question: http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model: for example, at boot, pull your code from a Git/Mercurial repository and then execute scripts to set up your instance. The script would also set up all the monitoring required to determine whether your server and application are up and running appropriately. You would still need an SSH client for this if you want to pull your code using SSH (although you could also do it through HTTPS).
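As a rough illustration of that pull model (the AMI id, repository URL, and paths below are placeholders, and boto3 with a user-data shell script is just one possible way to do it), the bootstrap can be handed to the instance as user data so no SSH session is ever needed:

import boto3

# Bootstrap script run by cloud-init on first boot: pull the code from a
# repository and run a setup script. URL and paths are placeholders.
user_data = """#!/bin/bash
yum install -y git
git clone https://example.com/myorg/myapp.git /opt/myapp
/opt/myapp/deploy/setup.sh
"""

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-12345678",     # placeholder AMI id
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)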
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially your node/server will pull all your application and server configuration from the Puppet master or the Chef server. The Puppet agent or Chef client would then perform all the configuration/deployment/monitoring changes for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times if there's something wrong with one of your servers and, in the event something goes wrong, discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.

OpenLdap redirect on write

I am currently trying to set up a redirect on write for an installation of OpenLDAP 2.2.
I have two instances running. One is configured to be read-only (only read access, database specified as read-only) and has redirect configured to point to the second instance. The second instance is configured to allow for the desired write permissions.
When I attempt a modify on the first instance it fails as expected but does not send back the referral. Am I missing a piece of the configuration? Am I even on the right path? Any guidance would be greatly appreciated. Thanks.
In the database section of your slapd.conf, did you add the redirection like this?
updateref "ldap://master-host:port/"
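For context, a minimal sketch of the kind of database section this line would sit in; the backend, suffix, directory, and master host/port are placeholders, not values from the question:

# slapd.conf on the read-only instance -- all values are placeholders
database        bdb
suffix          "dc=example,dc=com"
directory       /var/lib/ldap
# Referral handed back to clients that attempt an update here:
updateref       "ldap://master-host:389/"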
So, it turns out the best way to do this is to go ahead and set up replication using slurpd and point all requests at the slave instance. Unfortunately you can't set up the master and slave on the same host (for obvious reasons, but still), so I had to spin up a second VM to get this going.
Honestly, if I was not trying to replicate a redirect problem it wouldn't be worth it, but I have to duplicate a production issue.
For more information on slapd and specifically slurpd, the OpenLDAP documentation is actually crazy helpful: slurpd config for OpenLDAP 2.2