Is it possible to have an external IoT device publish to an IBM Cloud Node-RED aedes MQTT broker?

I am new to MQTT. Is it possible to have an external IoT device publish its readings over MQTT, and use the aedes MQTT broker node plus a subscribe node in a Node-RED instance installed in the IBM Cloud environment to receive the payloads?
Currently I have a simple setup within Node-RED with the publish and subscribe nodes, using 'localhost:1883' as the server address. Do I need a server with a public IP address to make this work?
Where do I start?

Short answer: no.
You can only access Node-RED running on IBM Cloud via HTTP/HTTPS; you will not be able to connect externally to any broker you run inside the Node-RED process.
This is because the Node-RED instance is behind a reverse HTTP/HTTPS proxy.
You could deploy Node-RED in a Docker container or on a virtual machine, or just make use of an external MQTT broker. But without a much better understanding of the whole problem you are trying to solve, it is impossible to be more specific.
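If you go the external-broker route, the device side is straightforward. Here is a minimal sketch using the paho-mqtt Python client (1.x-style constructor); the broker host, topic, and reading are placeholders you would replace with your own setup:

import paho.mqtt.client as mqtt

# Placeholder broker details -- replace with a broker reachable from both
# the device and your Node-RED instance
BROKER_HOST = "broker.example.com"
BROKER_PORT = 1883

client = mqtt.Client()
client.connect(BROKER_HOST, BROKER_PORT)
# Publish one reading; Node-RED subscribes to the same topic with an mqtt-in node
client.publish("sensors/device1/temperature", "21.5")
client.disconnect()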

Related

Run Kafka Consumers on Google Cloud Run

I have a large application with lots of microservices that communicate through Kafka. Right now it's running on GKE.
We are moving Kafka to confluent.io and we were planning to move some of the microservices to Google Cloud Run (fully managed).
But it looks like Google Cloud Run (fully managed) does not support listening to Kafka events, right? Are there any plans to support it? Is there a workaround?
EDIT:
This post, shared by andres-s, shows that you can implement your own Cloud Run service and have it connected to Confluent Kafka, on Anthos.
It would be great to have this option in the fully-managed Google Cloud Run service.
But in the meantime, the question would be: is it possible to implement it in a regular GKE cluster (not Anthos)?
Google Cloud has a fully managed Kafka solution through its SaaS partner Confluent, which uses Cloud Run for Anthos (with GKE).
Google Pub/Sub is the GCP alternative to Kafka, but through Confluent you can use Kafka on GCP.
Cloud Run is just Knative Serving. It is stateless and spins up when it receives events; because of this, it can't really subscribe to a topic and pull events itself.
Knative Eventing is more stateful in nature: it handles the pulls and subsequently triggers the pods running Knative Serving. Ideally they are used together to give you the full serverless experience.
The good news is, there is a "hack": you can bridge Kafka to Pub/Sub, then push from Pub/Sub to Cloud Run. If you are adventurous and don't mind OSS software, there are a number of Knative Eventing tutorials at serverlesseventing.com.
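As a rough illustration of that bridge (a sketch, not a production design), the following consumes from Kafka with confluent-kafka and republishes each record to Pub/Sub; the bootstrap server, topic names, and project ID are all hypothetical:

from confluent_kafka import Consumer
from google.cloud import pubsub_v1

# Hypothetical connection details -- substitute your Confluent and GCP settings
consumer = Consumer({
    "bootstrap.servers": "broker.confluent.example:9092",
    "group.id": "kafka-to-pubsub-bridge",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "orders")

while True:
    msg = consumer.poll(1.0)  # wait up to 1s for a Kafka record
    if msg is None or msg.error():
        continue
    # Forward the raw bytes to Pub/Sub; a push subscription then invokes Cloud Run
    publisher.publish(topic_path, msg.value())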

How to get the availability status of middleware services running on IBM Cloud?

IBM internally monitors the services offered on its cloud, but I need a way to get the status of middleware services such as Kafka, API Connect, etc. It will help me automate things if a service is stopped or not accessible.
To monitor your provisioned instances of these services you could exercise them. For example, on API Connect create an API called /health and curl it to verify it is working. For Kafka, create a topic to check the health.
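A minimal sketch of that approach, assuming you have deployed a /health API as described above (the URL is a placeholder):

import requests

# Hypothetical endpoint exposed by the API you created on API Connect
HEALTH_URL = "https://api.example.com/health"

try:
    resp = requests.get(HEALTH_URL, timeout=5)
    healthy = resp.status_code == 200
except requests.RequestException:
    healthy = False

print("API Connect healthy:", healthy)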

How to get the public IP for a Kubernetes pod?

The question:
I have a VOIP application running in a Kubernetes pod. I need to set, in the code, the public IP of the host machine on which the pod is running. How can I get this IP via an API call or an environment variable in Kubernetes? (I'm using Google Container Engine, if that's relevant.) Thanks a lot!
Why do I need this?
The application is a SIP-based VOIP application. When a SIP request comes in and does a handshake, the application needs to send a SIP invite request back out that contains the public IP and port which the remote server must use to set up an RTP connection for the audio part of the call.
Please note:
I'm aware of Kubernetes services and the general best practice of exposing those via a load balancer. However, I would like to use hostNetwork=true and expose ports on the host for the remote application to send RTP audio packets (via UDP) directly. This Kubernetes issue (https://github.com/kubernetes/kubernetes/issues/23864) contains a discussion of various people running SIP-based VOIP applications on Kubernetes, and the general consensus is to use hostNetwork=true (primarily for performance and due to limitations of load balancing UDP, I believe).
You can query the API server for information about the node running your pod, such as its addresses. Since you are using hostNetwork=true, the $HOSTNAME environment variable already identifies the node.
There is an example below that was tested on GCE.
The code needs to be run in a pod, and you need to install the Python client dependency first (in the pod):
pip install kubernetes
There is more information available at:
https://github.com/kubernetes-incubator/client-python
import os
from kubernetes import client, config

# Load credentials from the pod's service account
config.load_incluster_config()
v1 = client.CoreV1Api()

# With hostNetwork=true, $HOSTNAME is the name of the node this pod runs on
for address in v1.read_node(os.environ['HOSTNAME']).status.addresses:
    if address.type == 'ExternalIP':
        print(address.address)

Remote NiFi input ports are not exposed

I'm learning Apache NiFi and working on a simple site-to-site data flow. On one side I have a single-node NiFi instance, and on the other side a two-node NiFi cluster. The issue I'm facing is that, on the single-node instance, when I connect a GetFile processor to a Remote Process Group (the two-node NiFi cluster), the connection dialog asks me to select the input port name of the remote cluster. However, my remote cluster's input port name is not displayed in the drop-down list.
I have given the correct URL of the remote NiFi cluster. The single-node instance is supposed to talk to the remote cluster to get the port details and port names, right? Then why is it not showing my input port?
In a secure setup, there are two policies that need to be created. One is a global policy that allows the Remote Process Group to ask the other NiFi for information about the nodes/node; this is called "retrieve site-to-site details". The other is a policy on each port that allows data to be sent to it; this is called "receive data via site-to-site".
This blog post explains how to configure secure site-to-site in more detail:
http://bryanbende.com/development/2016/08/30/apache-nifi-1.0.0-secure-site-to-site
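One way to check whether the remote cluster will reveal its ports to your identity is to query its site-to-site endpoint directly. A rough sketch, assuming a secured cluster and client certificate files whose host and paths are placeholders; if inputPorts comes back empty, the "retrieve site-to-site details" policy is the likely culprit:

import requests

# Placeholder host and certificate paths -- replace with your cluster's values
resp = requests.get(
    "https://remote-nifi-host:8443/nifi-api/site-to-site",
    cert=("client-cert.pem", "client-key.pem"),
    verify="nifi-ca.pem",
)
resp.raise_for_status()

# An empty list here usually means a missing policy rather than a missing port
for port in resp.json()["controller"].get("inputPorts", []):
    print(port["name"], port["id"])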

Is it possible to subscribe to the CloudAMQP RabbitMQ service from Android devices?

Is it possible to publish messages to the RabbitMQ service provided by CloudAMQP from other cloud solutions like AppFog, and consume them from the internet (not inside the cloud)?
Yes, you can connect from anywhere!
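For example, here is a minimal sketch with the pika Python client; the AMQP URL is a placeholder for the real one shown in your CloudAMQP console (on Android you would use an AMQP client library such as the RabbitMQ Java client instead):

import pika

# Placeholder URL -- copy the real one from the CloudAMQP console
params = pika.URLParameters("amqps://user:password@host.rmq.cloudamqp.com/vhost")
connection = pika.BlockingConnection(params)
channel = connection.channel()

channel.queue_declare(queue="readings")
channel.basic_publish(exchange="", routing_key="readings", body="hello from anywhere")
connection.close()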