Apache PLC4X connecting with Apache Kafka, trying to connect with Prosys OPC Simulator

I have a requirement where I need to read OPC UA data via Apache PLC4X and push it to an Apache Kafka server. I have configured the OPC UA simulator (Prosys OPC Simulator) and set up my Kafka cluster in a virtual machine.
#The PLC4X connection string to be used. Examples for each protocol are included on the PLC4X website.
sources.machineA.connectionString=opcua:tcp://192.168.29.246:53530
#The source 'poll' method should return control to Kafka Connect every so often.
#This value controls how often it returns when no messages are received.
sources.machineA.pollReturnInterval=5000
#There is an internal buffer between the PLC4X scraper and Kafka Connect.
#This is the size of that buffer.
sources.machineA.bufferSize=1000
#A list of jobs associated with this source.
#sources.machineA.jobReferences=simulated-dashboard,simulated-heartbeat
sources.machineA.jobReferences=simulated-dashboard
#The Kafka topic to produce to. The default topic will be used if this isn't specified.
#sources.machineA.jobReferences.simulated-heartbeat.topic=simulated-heartbeat-topic
#A list of jobs specified in the following section.
#jobs=simulated-dashboard,simulated-heartbeat
jobs=simulated-dashboard
#The poll rate for this job. The PLC4X scraper will request data every interval (ms).
jobs.simulated-dashboard.interval=1000
#A list of fields. Each field is a map between an alias and a PLC4X address.
#The address formats for each protocol can be found on the PLC4X website.
jobs.simulated-dashboard.fields=Counter
jobs.simulated-dashboard.fields.Counter=3:1001:Integer
#jobs.simulated-dashboard.fields=running,conveyorEntry,load,unload,transferLeft,transferRight,conveyorLeft,conveyorRight,numLargeBoxes,numSmallBoxes,testString
#jobs.simulated-dashboard.fields.running=RANDOM/Running:Boolean
#jobs.simulated-dashboard.fields.conveyorEntry=RANDOM/ConveryEntry:Boolean
#jobs.simulated-dashboard.fields.load=RANDOM/Load:Boolean
#jobs.simulated-dashboard.fields.unload=RANDOM/Unload:Boolean
#jobs.simulated-dashboard.fields.transferLeft=RANDOM/TransferLeft:Boolean
#jobs.simulated-dashboard.fields.transferRight=RANDOM/TransferRight:Boolean
#jobs.simulated-dashboard.fields.conveyorLeft=RANDOM/ConveyorLeft:Boolean
#jobs.simulated-dashboard.fields.conveyorRight=RANDOM/ConveyorRight:Boolean
#jobs.simulated-dashboard.fields.numLargeBoxes=RANDOM/NumLargeBoxes:Integer
#jobs.simulated-dashboard.fields.numSmallBoxes=RANDOM/NumSmallBoxes:Integer
#jobs.simulated-dashboard.fields.testString=RANDOM/TestString:STRING
Help me solve this issue.

The error message indicates that it is trying to match the PLC4X connection string to the endpoint string returned from the Prosys Simulation Server.
The Prosys endpoint string can be found on the Status tab of the Simulation Server. Converting this to the PLC4X connection string format, the connection string should be:
opcua:tcp://192.168.29.246:53530/OPCUA/SimulationServer
Just looking at the config file, there also seems to be an issue with the address
jobs.simulated-dashboard.fields.Counter=3:1001:Integer
This should probably be
jobs.simulated-dashboard.fields.Counter=ns=3;i=1001
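Putting both fixes together, the relevant lines of the config would look like the following sketch. Note that /OPCUA/SimulationServer is the Prosys default endpoint path and ns=3;i=1001 is assumed to be the Counter node's id; verify both against your Simulation Server's Status tab and address space.
#Corrected connection string, including the endpoint path from the simulator's Status tab
sources.machineA.connectionString=opcua:tcp://192.168.29.246:53530/OPCUA/SimulationServer
#Corrected field address in OPC UA node-id format
jobs.simulated-dashboard.fields.Counter=ns=3;i=1001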

Related

Connection to external Kafka Server using confluent-kafka-dotnet fails

I need to read Kafka messages with .NET from an external server. As a first step, I installed Kafka on my local machine and wrote the .NET code, which worked as intended. Then I moved to the cloud, but the code stopped working. Here is the setup that I have.
I have a Kafka server deployed on a Windows VM (VM1: 10.0.0.4) on Azure. It is up and running. I have created a test topic and produced some messages with cmd. To verify that everything was working, I opened a consumer with cmd and received the generated messages.
Then I deployed another Windows VM (VM2: 10.0.0.5) with Visual Studio. Both VMs are on the same virtual network, so I do not have to worry about opening ports or any other network configuration.
I then copied my Visual Studio project code and changed the IP address of the bootstrap server to point to the Kafka server. It did not work. I read that I had to change the server configuration of Kafka, so I opened server.properties and modified the listeners property to listeners=PLAINTEXT://10.0.0.4:9092. It still does not work.
I have searched online and tried many of the tips, but nothing works. I think that, first of all, I need to provide credentials for the external server (VM1), and probably some other configuration. Unfortunately, the official Confluent documentation is very short, with very few examples. There is also no example for my case on the official GitHub. I have played with the "Sasl" properties in the ConsumerConfig class, but with no success.
The error message is:
%3|1622220986.498|FAIL|rdkafka#consumer-1| [thrd:10.0.0.4:9092/bootstrap]: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 1/1 brokers are down
Here is my .Net core code:
static void Main(string[] args)
{
    string topic = "AzureTopic";
    var config = new ConsumerConfig
    {
        BootstrapServers = "10.0.0.4:9092",
        GroupId = "test",
        //SecurityProtocol = SecurityProtocol.SaslPlaintext,
        //SaslMechanism = SaslMechanism.Plain,
        //SaslUsername = "[User]",
        //SaslPassword = "[Password]",
        AutoOffsetReset = AutoOffsetReset.Latest,
        //EnableAutoCommit = false
    };
    int x = 0;
    using (var consumer = new ConsumerBuilder<Ignore, string>(config)
        .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
        .Build())
    {
        consumer.Subscribe(topic);
        var cancelToken = new CancellationTokenSource();
        while (true)
        {
            // some tasks
        }
        consumer.Close();
    }
}
If you set listeners to a hard-coded IP, the broker will only bind to, and accept traffic on, that IP.
And your listener isn't defined as SASL, so I'm not sure why you've tried using that in the client. While using credentials is strongly encouraged when sending data to cloud resources, they're not required to fix a network connectivity problem. You definitely shouldn't send credentials over plaintext, however.
Start with these settings
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.4:9092
That alone should work within the VM shared network. You can use the console tools included with Kafka to test it.
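For example, from the Kafka installation directory on the broker VM (Windows binaries shown since the broker runs on a Windows VM; AzureTopic is the topic name from your code):
bin\windows\kafka-console-consumer.bat --bootstrap-server 10.0.0.4:9092 --topic AzureTopic --from-beginning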
And if that still doesn't work from your local client, it's because the 10.0.0.0/8 address space is a private network: you must advertise the VM's public IP and allow TCP traffic on port 9092 through the Azure firewall. It would also make sense to expose separate listeners for the internal Azure network and for external, forwarded network traffic.
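A dual-listener setup could look like the following sketch, where <public-ip> is a placeholder for the VM's public address:
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://10.0.0.4:9092,EXTERNAL://<public-ip>:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL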
Details here discuss AWS and Docker, but the basics still apply
Overall, I think it'd be easier to set up Azure Event Hubs with Kafka support.

Can't Create Receive Connector

So I'm trying to copy a single receive connector from one server to another. I store the receive connector in a variable like so:
$NKCRC = Get-ReceiveConnector "<Source Resource Connector>" | Select *
Then I try to create the new one with the command below:
New-ReceiveConnector -Server $targetServer -Bindings $NKCRC.Bindings -RemoteIPRanges $NKCRC.RemoteIPRanges -Name $NKCRC.Name -TransportRole FrontEndTransport
The source RC is an Exchange 2010 RC and I'm creating it on an Exchange 2016 server. When I run the command, I get the following error:
The values that you specified for the Bindings and RemoteIPRanges parameters conflict with the settings on Receive connector "<Identity of receive connector on the target server>". A Receive connector must have a unique combination of local IP address & port bindings and remote IP address ranges. Change at least one of these values.
I checked the mentioned RC and found they don't have the exact same settings for ANYTHING AT ALL, Bindings or RemoteIPRanges. I'm kinda at a loss. The Bindings and RemoteIPRanges for each RC are below:
Source RC Bindings: {0.0.0.0:25}
Count of Source RC RemoteIPRanges: 60 (Rather than post them all, lol)
RC that Exchange thinks is a duplicate:
Bindings: {0.0.0.0:25, 10.200.154.79:25}
Count of this RC's RemoteIPRanges: 50
So obviously they aren't duplicates, right? What am I missing?

How to send PHP app logs directly to ELK service?

According to the documentation there are two ways to send log information to the SwisscomDev ELK service.
Standard way via STDOUT: Every output to stdout is sent to Logstash
Directly send to Logstash
Asking about way 2: how is this achieved, and in particular, what input format is expected?
We're using Monolog in our PHP buildpack based application, and its stdout_handler works fine.
I tried the GelfHandler (connection refused) and the SyslogUdpHandler (no error, but no result), both configured to use the logstashHost and logstashPort from VCAP_SERVICES as the endpoint to send logs to.
The binding works and the env variables are set, but I have no idea how to send log information from our application in a format compatible with the SwisscomDev ELK service's Logstash endpoint.
Logstash is configured with a tcp input, which is reachable via logstashHost:logstashPort. The tcp input is configured with its default codec, which is the line codec (source code; not the plain codec as stated in the documentation).
The payload of the log event should be encoded in JSON so that the fields are automatically recognized by Elasticsearch. If this is the case, the whole log event is forwarded without further processing to Elasticsearch.
If the payload is not JSON, the whole log line will end up in the field message.
For your use case with Monolog, I suggest using the SocketHandler (pointed at logstashHost:logstashPort) in combination with the LogstashFormatter, which will take care of the JSON encoding, with the log events being line delimited.
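A minimal sketch of that combination, assuming Monolog 2.x, with logstashHost and logstashPort standing in for the values from your VCAP_SERVICES binding and 'myapp' being an arbitrary application name:
<?php
require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\SocketHandler;
use Monolog\Formatter\LogstashFormatter;

// Point the socket handler at the Logstash tcp input from the service binding
$handler = new SocketHandler('tcp://logstashHost:logstashPort');
// LogstashFormatter emits one JSON document per line, which the tcp input's line codec expects
$handler->setFormatter(new LogstashFormatter('myapp'));

$logger = new Logger('app');
$logger->pushHandler($handler);
$logger->info('Log event shipped to the ELK service');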

Is there an API for Ganglia?

Hello, I would like to ask whether there is an API that can be used to retrieve Ganglia stats for all clients from a single Ganglia server.
The Ganglia gmetad component listens on ports 8651 and 8652 by default and replies with XML metric data. The XML data type definition can be seen on GitHub here.
Gmetad needs to be configured to allow XML replies to be sent to specific hosts or all hosts. By default only localhost is allowed. This can be changed in /etc/ganglia/gmetad.conf.
Connecting to port 8651 will get you a default XML report of all metrics as a response.
Port 8652 is the interactive port which allows for customized queries. Gmetad will recognize raw text queries sent to this port, i.e. not HTTP requests.
Here are examples of some queries:
/?filter=summary (returns a summary of the whole grid, i.e. all clusters)
/clusterName (returns raw data of a cluster called "clusterName")
/clusterName/hostName (returns raw data for host "hostName" in cluster "clusterName")
/clusterName?filter=summary (returns a summary of only cluster "clusterName")
The ?filter=summary parameter changes the output to contain the sum of each metric value over all hosts. The number of hosts is also provided for each metric so that the mean value may be calculated.
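For example, assuming netcat is available and your host is allowed in gmetad.conf, a summary query for a cluster could be sent like this, with gangliaServer standing in for your gmetad host:
echo "/clusterName?filter=summary" | nc gangliaServer 8652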
Yes, there's an API for Ganglia: https://github.com/guardian/ganglia-api
You should check this presentation from 2012 Velocity Europe - it was really a great talk: http://www.guardian.co.uk/info/developer-blog/2012/oct/04/winning-the-metrics-battle
There is also an API you can install from PyPI with 'pip install gangliarest'; it sets up a configurable API backed by a Redis cache and indexer to improve performance.
https://pypi.python.org/pypi/gangliarest

Cannot read remote private queue

I'm trying to get MSMQ 5 working on my two Windows Server 2008 R2 virtual machines.
I can send to local and remote private queues, and I can read from local private queues.
I can't read from remote private queues.
I've read a number of suggestions, especially the ones summarised by John Breakwell at MSMQ Issue reading remote private queues (again).
Things I've already done:
Turned off firewalls on both machines.
Ensured that Everyone and AnonymousLogon have full control of the queues. (If I take away AnonymousLogon access, then I can't remotely send to the queue, and the message ends up with "Access is denied" on the receiving machine.)
Allowed Nonauthenticated Rpc on both machines.
Allowed NewRemoteReadServerAllowNoneSecurityClient on both machines.
The sending code fragment is:
MessageQueue queue = new MessageQueue(queueName, false, false, QueueAccessMode.Send);
Message msg = new Message("Blah");
msg.UseDeadLetterQueue = true;
msg.UseJournalQueue = true;
queue.Send(msg, MessageQueueTransactionType.Automatic);
queue.Close();
The receiving code fragment is:
queueName = String.Format("FormatName:DIRECT=OS:{0}\\private$\\{1}",host,id);
queue = new MessageQueue(queueName, QueueAccessMode.Receive);
queue.ReceiveCompleted += new ReceiveCompletedEventHandler(receive);
queue.BeginReceive();
...
public void receive(object sender, ReceiveCompletedEventArgs e)
{
    queue.EndReceive(e.AsyncResult);
    Console.WriteLine("Message received");
    queue.BeginReceive();
}
My queueName ends up as FormatName:DIRECT=OS:server2\private$\TestQueue
When I call BeginReceive() on the queue, I get
Exception: System.Messaging.MessageQueueException (0x80004005)
at System.Messaging.MessageQueue.MQCacheableInfo.get_ReadHandle()
at System.Messaging.MessageQueue.ReceiveAsync(TimeSpan timeout, CursorHandle cursorHandle, Int32 action, AsyncCallback callback, Object stateObject)
at System.Messaging.MessageQueue.BeginReceive()
I've used Wireshark on Server1 to look at the network traffic. Without posting all the detail, it seems to go through the following stages. (Server1 is trying to read from a queue on Server2.)
Server1 contacts Server2, and there is an NTLMSSP challenge/response negotiation. A couple of the responses mention "Unknown result (3), reason: Local limit exceeded".
Server1 sends Server2 an rpc__mgmt_inq_princ_name request, and Server2 replies with a corresponding response.
There are some LDAP exchanges looking up the domain, then a referral to ldap://domain/cn=msmq,CN=Server2,CN=Computers,DC=domain, which returns a "no such object" response.
Then there's some SASL GSS-API encrypted exchange with the LDAP server.
Then connections to the ldap server and Server2 are closed.
I've tried enabling Event Viewer > Applications and Services Logs > Microsoft > Windows > MSMQ > End2End. It shows messages being sent, but no indication of why trying to receive is failing.
How can I debug this further?
The problem was related to domains. Server1 and Server2 were part of a development domain. My login account was part of the corporate domain. The development domain trusts the corporate domain enough for me to log in, be a member of administrators, install features etc. But it seems to be insufficient trust to read remote queues.
I found this by looking into public queues. If I was having trouble reading remote private queues, perhaps I should get more data by trying public queues. After installing the appropriate directory integration feature, I was able to create a public queue, but not see it in the list of public queues. Trying to refresh the list of public queues gave me this error:
Not all public queues can be displayed. Only public queues cached locally can be displayed. Error: The object was not found in Active Directory.
Google pointed me to John Breakwell's answer to a similar problem here, which indicates that trust relationships don't work across messaging protocols.
Try using the standard Receive method instead and specify the transaction type; it seems BeginReceive does not support receiving from transactional queues.
Message msg = queue.Receive(MessageQueueTransactionType.Automatic);
MSMQ does not always return logical error messages...
System.Messaging.MessageQueueException (0x80004005)
at System.Messaging.MessageQueue.MQCacheableInfo.get_ReadHandle()
This error can also be caused by a BeginReceive read on a non-existent queue. Check the configuration to ensure the queue path specified exists physically and has "Everyone" full permissions.