What I want to achieve:
I want a unique identifier for each Raspberry Pi that is running Node-RED and simultaneously acting as a Mosquitto client, so that each client can publish its unique identity to the broker.
My idea:
I want to use the MAC address of each Raspberry Pi as the unique identifier, but how can I get the address inside a Node-RED function node?
I got it working like this:
In the Node-RED settings.js file I added the os module to functionGlobalContext:
functionGlobalContext: {
    osModule: require('os')
}
In one of the Node-RED function nodes I added this piece of code:
const os = global.get('osModule');
msg.payload = os.networkInterfaces();
return msg;
This saves the full interface list in msg.payload, which I then publish as an initial MQTT message to the broker.
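For anyone who wants just the MAC address rather than the whole networkInterfaces() object, here is a minimal sketch of a function node that picks the first non-internal MAC and publishes it as the client identity (the payload shape is my own choice, not anything required by MQTT):

// Node-RED function node: derive a unique ID from the first non-internal MAC address.
// Assumes 'osModule' was exposed via functionGlobalContext as shown above.
const os = global.get('osModule');
const interfaces = os.networkInterfaces();
let mac = null;

// Walk all interfaces (eth0, wlan0, ...) and take the first real MAC found.
for (const name of Object.keys(interfaces)) {
    for (const iface of interfaces[name]) {
        if (!iface.internal && iface.mac && iface.mac !== '00:00:00:00:00:00') {
            mac = iface.mac;
            break;
        }
    }
    if (mac) break;
}

msg.payload = { clientId: mac };
return msg;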
I have a requirement to read OPC UA data via Apache PLC4X and push it to an Apache Kafka server. I have configured the OPC UA simulator (Prosys OPC UA Simulation Server) and set up my Kafka cluster in a virtual machine.
#The PLC4X connection string to be used. Examples for each protocol are included on the PLC4X website.
sources.machineA.connectionString=opcua:tcp://192.168.29.246:53530
#The source 'poll' method should return control to Kafka Connect every so often.
#This value controls how often it returns when no messages are received.
sources.machineA.pollReturnInterval=5000
#There is an internal buffer between the PLC4X scraper and Kafka Connect.
#This is the size of that buffer.
sources.machineA.bufferSize=1000
#A list of jobs associated with this source.
#sources.machineA.jobReferences=simulated-dashboard,simulated-heartbeat
sources.machineA.jobReferences=simulated-dashboard
#The Kafka topic to use to produce to. The default topic will be used if this isn't specified.
#sources.machineA.jobReferences.simulated-heartbeat.topic=simulated-heartbeat-topic
#A list of jobs specified in the following section.
#jobs=simulated-dashboard,simulated-heartbeat
jobs=simulated-dashboard
#The poll rate for this job. the PLC4X scraper will request data every interval (ms).
jobs.simulated-dashboard.interval=1000
#A list of fields. Each field is a map between an alias and a PLC4X address.
#The address formats for each protocol can be found on the PLC4X website.
jobs.simulated-dashboard.fields=Counter
jobs.simulated-dashboard.fields.Counter=3:1001:Integer
#jobs.simulated-dashboard.fields=running,conveyorEntry,load,unload,transferLeft,transferRight,conveyorLeft,conveyorRight,numLargeBoxes,numSmallBoxes,testString
#jobs.simulated-dashboard.fields.running=RANDOM/Running:Boolean
#jobs.simulated-dashboard.fields.conveyorEntry=RANDOM/ConveryEntry:Boolean
#jobs.simulated-dashboard.fields.load=RANDOM/Load:Boolean
#jobs.simulated-dashboard.fields.unload=RANDOM/Unload:Boolean
#jobs.simulated-dashboard.fields.transferLeft=RANDOM/TransferLeft:Boolean
#jobs.simulated-dashboard.fields.transferRight=RANDOM/TransferRight:Boolean
#jobs.simulated-dashboard.fields.conveyorLeft=RANDOM/ConveyorLeft:Boolean
#jobs.simulated-dashboard.fields.conveyorRight=RANDOM/ConveyorRight:Boolean
#jobs.simulated-dashboard.fields.numLargeBoxes=RANDOM/NumLargeBoxes:Integer
#jobs.simulated-dashboard.fields.numSmallBoxes=RANDOM/NumSmallBoxes:Integer
#jobs.simulated-dashboard.fields.testString=RANDOM/TestString:STRING
Help me solve this issue.
The error message indicates that it is trying to match the PLC4X connection string to the endpoint string returned from the Prosys Simulation Server.
The Prosys endpoint string can be found on the Status tab of the Simulation Server. Converting this to the PLC4X connection string format, the connection string should be:
opcua:tcp://192.168.29.246:53530/OPCUA/SimulationServer
Just looking at the config file, there also seems to be an issue with the address:
jobs.simulated-dashboard.fields.Counter=3:1001:Integer
This should probably be:
jobs.simulated-dashboard.fields.Counter=ns=3;i=1001
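Putting both corrections together, the two affected lines in your connector properties file would then read as follows (everything else can stay as it is):

sources.machineA.connectionString=opcua:tcp://192.168.29.246:53530/OPCUA/SimulationServer
jobs.simulated-dashboard.fields.Counter=ns=3;i=1001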
I need to read Kafka messages with .NET from an external server. As a first step, I installed Kafka on my local machine and wrote the .NET code; it worked as intended. Then I moved to the cloud, but the code did not work. Here is the setup I have.
I have a Kafka Server deployed on a Windows VM (VM1: 10.0.0.4) on Azure. It is up and running. I have created a test topic and produced some messages with cmd. To test that everything is working I have opened a consumer with cmd and received the generated messages.
Then I have deployed another Windows VM (VM2, 10.0.0.5) with Visual Studio. Both of the VMs are deployed on the same virtual network so that I do not have to worry about opening ports or any other network configuration.
Then I copied my Visual Studio project code and changed the IP address of the bootstrap server to point to the Kafka server. It did not work. I then read that I have to change Kafka's server configuration, so I opened server.properties and modified the listeners property to listeners=PLAINTEXT://10.0.0.4:9092. It still does not work.
I have searched online and tried many of the tips, but nothing works. I think I first have to provide credentials for the external server (VM1), and probably some other configuration. Unfortunately, the official Confluent documentation is very short, with very few examples, and there is no example matching my case in the official GitHub repository. I have played with the SASL properties in the ConsumerConfig class, but with no success either.
The error message is:
%3|1622220986.498|FAIL|rdkafka#consumer-1| [thrd:10.0.0.4:9092/bootstrap]: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 1/1 brokers are down
Here is my .NET Core code:
static void Main(string[] args)
{
    string topic = "AzureTopic";
    var config = new ConsumerConfig
    {
        BootstrapServers = "10.0.0.4:9092",
        GroupId = "test",
        //SecurityProtocol = SecurityProtocol.SaslPlaintext,
        //SaslMechanism = SaslMechanism.Plain,
        //SaslUsername = "[User]",
        //SaslPassword = "[Password]",
        AutoOffsetReset = AutoOffsetReset.Latest,
        //EnableAutoCommit = false
    };
    int x = 0;

    using (var consumer = new ConsumerBuilder<Ignore, string>(config)
        .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
        .Build())
    {
        consumer.Subscribe(topic);
        var cancelToken = new CancellationTokenSource();
        while (true)
        {
            // some tasks
        }
        consumer.Close();
    }
}
If you set listeners to a hard-coded IP, the server will only bind to and accept traffic on that IP.
And your listener isn't defined as SASL, so I'm not sure why you've tried using that in the client. While using credentials is strongly encouraged when sending data to cloud resources, they aren't required to fix a network connectivity problem. You definitely shouldn't send credentials over plaintext, however.
Start with these settings:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.4:9092
That alone should work within the VM shared network. You can use the console tools included with Kafka to test it.
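For example, from VM2 something along these lines (assuming the standard Kafka Windows distribution layout and the topic name used in your code) should print the test messages once the listener change is in place:

bin\windows\kafka-console-consumer.bat --bootstrap-server 10.0.0.4:9092 --topic AzureTopic --from-beginning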
And if that still doesn't work from your local client, then it's because the 10.0.0.0/8 address space is considered a private network, so you must advertise the VM's public IP and allow TCP traffic on port 9092 through the Azure firewall. It would also make sense to expose separate listeners for internal Azure network traffic and for external, forwarded traffic.
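A rough sketch of such a dual-listener setup in server.properties (the listener names, port 9093, and the public IP are placeholders, not values taken from your environment):

# One listener for traffic inside the Azure virtual network, one for external clients.
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
# Advertise the private IP internally and the VM's public IP (placeholder) externally.
advertised.listeners=INTERNAL://10.0.0.4:9092,EXTERNAL://<vm-public-ip>:9093
inter.broker.listener.name=INTERNAL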
Details here discuss AWS and Docker, but the basics still apply.
Overall, I think it would be easier to set up Azure Event Hubs with Kafka support.
These are the steps I did:
1. Created a keypair.
2. Downloaded the keypair and used PuTTYgen to generate a private key.
3. Created a new instance using the orion-psb-image-R5.4 image for a Context Broker.
4. Created a security group and added a rule that opens the SSH port.
5. Associated a floating IP with that instance.
6. Tried to access the instance from PuTTY using the floating IP and the private key generated in step 2.
PuTTY gives me this error:
Disconnected : No supported authentication methods available (server sent:publickey).
I would like to know how to solve this issue and understand the reason for it.
Update:
Screenshots:
1. Loading the downloaded keypair into PuTTYgen
2. The downloaded keypair file from FIWARE Lab (keypair.pem) and the generated private key
3. Entering the floating IP for the Context Broker instance
4. Loading the generated private key to use during connection establishment
5. The error message when I try to connect
This seems to be a problem with key generation or PuTTY configuration. Unfortunately, the question doesn't include enough detail to provide a more precise answer.
I'd suggest you edit your question to include full details of each step you have taken (even including screenshots as you go).
EDIT: use centos as the login user instead of root.
I want to include the "ibmiot" Input node into my local installation of Node-RED ?
Can I do it ? and if so - how ?
The Node-RED flow editor that shows up when I start the Bluemix IoT Starter, shows the node that exactly - fits into what I want to do.
It has the API Keys, device Type, ID and so on as seen below. I'm trying to add this to my local install of Node-RED.
Any help is appreciated.
Below is the node I want to add (or import) on my laptop, where I have Node-RED installed.
The iotapp nodes are provided by the node-red-contrib-scx-ibmiotapp module - http://flows.nodered.org/node/node-red-contrib-scx-ibmiotapp
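If it helps, a contributed node like this is normally installed from the Node-RED user directory (assuming the default ~/.node-red location) and then Node-RED is restarted so the new nodes appear in the palette:

cd ~/.node-red
npm install node-red-contrib-scx-ibmiotapp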
I'm new to the IBM Bluemix Blockchain service. I wonder if I can create multiple chaincodes; I ask because I got the following error:
! looks like an error loading the chaincode or network, app will fail
{ name: 'register() error',
code: 401,
details: { Error: 'rpc error: code = 13 desc = \'server closed the stream without sending trailers\'' } }
Here is what I did:
Created a blockchain service, named 'blockchain'.
Ran the cp-web example => Success
Ran the marbles demo using the existing blockchain service ('blockchain') => Gives me the above error
Created a new blockchain service, named 'mbblochchain'.
Re-pushed the marbles demo with the new service name => Success
So I wonder whether I can put multiple chaincodes onto a peer's network or not. It is likely that I am misunderstanding how this works or should behave.
Yes, you can deploy multiple chaincodes on the same network. The issue you are having arises because each app registers users differently.
Currently, only one username (aka enrollID) can be registered against one peer. If you try to register the same username against two peers, the second registration will fail. This is what is happening to you.
The Bluemix blockchain service is returning two type1 usernames (type1 is the type of enrollID these apps want to use).
cp-web will register the first and second enrollID against peer vp1
marbles will register the first enrollID against vp1 and the 2nd enrollID against vp2
Therefore, when you ran marbles after cp-web, it tried to register the second enrollID against vp2 when that enrollID had already been registered against vp1, which gave you the error.
In general, you can deploy multiple chaincode apps to a single instance of the Bluemix Blockchain service and, more broadly speaking, multiple chaincode apps to a single peer network.
Were you deploying the web apps directly using "cf push" and trying to bind them to an existing Blockchain service instance, or were you trying to use the "Deploy to Bluemix" functionality?