I'm using ThingsBoard in the cloud and ThingsBoard Gateway on my Raspberry Pi.
How can I send a button press from the dashboard to a connected device, and read its status?
| tb | tb-gw | device |
| ---------------> ?? |
| ?? <--------------- |
Thanks
I have a Windows 10 machine, and I would like to access a database hosted on another machine outside my local network.
Is there any possibility of achieving that using PostgreSQL?
Thanks a lot, and I'd appreciate your effort to help me overcome this situation.
It is possible, provided that:
The firewall of your local network allows outgoing connections to the PostgreSQL listen port (usually 5432).
The firewall of the other network allows incoming connections to the PostgreSQL listen port (usually 5432).
The firewall of the PostgreSQL server allows connections on its listen port (usually 5432).
The PostgreSQL server is configured to accept network connections.
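For that last point, the usual places to look are postgresql.conf and pg_hba.conf. A minimal sketch, assuming the client sits on the 203.0.113.0/24 network (adjust the address and authentication method to your setup):
# postgresql.conf -- listen on all interfaces instead of only localhost
listen_addresses = '*'
# pg_hba.conf -- allow password-authenticated connections from the client network
# TYPE  DATABASE  USER  ADDRESS          METHOD
host    all       all   203.0.113.0/24   md5
Reload or restart PostgreSQL after changing these files.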
You can use a network scanner such as Nmap to test things; the thing to do is to get a laptop on the customer's network and scan from there. If you can connect to PostgreSQL from an address on the same subnet, then you know nothing else is needed on the PostgreSQL server, and your attention needs to be on the customer's firewall. This is where things can get difficult, and you'll need to work with whoever controls that firewall / router.
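For example, from the laptop on the customer's subnet (the server address below is a placeholder):
# check whether the PostgreSQL port is reachable
nmap -p 5432 192.168.1.10
# or try an actual client connection
psql -h 192.168.1.10 -p 5432 -U someuser -d somedb
If the port shows as open and psql at least reaches the authentication prompt, the server side is fine.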
Chances are that the customer's network is on an RFC 1918 subnet. If this is the case the firewall / router will need to be configured to port forward like this:
public internet
|
----public address--port nnn--
| |
| firewall |
| |
|-----rfc 1918 address--------|
|
|
|
----rfc 1918 address--port 5432--
| |
| PostgreSQL server |
| |
|--------------------------------|
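If the firewall / router happens to be a Linux box, that port forward could be expressed roughly like this with iptables. This is only a sketch: the public port (15432), the server address and the interface name are placeholders, and a consumer router will usually expose this through its web UI instead.
# forward the chosen public port to the internal PostgreSQL server
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 15432 -j DNAT --to-destination 192.168.1.10:5432
# allow the forwarded traffic through
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 5432 -j ACCEPT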
Consider the following situation:
Internet
||
||
.------''------.
| HTTPS (:443) |
'------..------'
||
.-----------------------'|
| \/
| 3rd party HAproxy service
| ||
| ||
optional .-----------''-----------.
route | PROXY Protocol (:5443) |
| '-----------..-----------'
| || ________
___________|_______________________||________________________________| SERVER |____
| | \/ |
| | local HAproxy |
| | || |
| | || |
| | .------''------. |
| | | HTTPS (:443) | |
| | '------..------' |
| | || |
| | || |
| | \/ |
| '---------------> local webserver |
|___________________________________________________________________________________|
The backend server has both HAproxy and Apache httpd running locally, on ports 5443 and 443 respectively.
My local webserver does not support the PROXY protocol. So I want HAproxy to accept the PROXY protocol from the 3rd party service and pass the data to the local webserver as either HTTPS or simply a TCP pass-through.
In the case of HTTPS I suppose it should manipulate the HTTP traffic, using the correct SSL certificate, to add the original sender IP to the X-Forwarded-For HTTP header (the IP being provided by the PROXY protocol).
However, the documentation of HAproxy is awful if you are new to HAproxy, and I could not find examples that explain how to do this. I know it has to be possible since HAproxy is listed as "Proxy-protocol ready software", but how?
Yes, you need to use the accept-proxy keyword after bind in the frontend declaration. It is also worth reading about the related send-proxy keyword, which is what the given "3rd party HAproxy service" uses (a sketch of that sending side is included at the end of this answer).
The PROXY protocol header can be stripped off and the original TCP stream passed on using the following HAproxy configuration:
frontend app-proxy
bind *:5443 accept-proxy
mode tcp
option tcplog
default_backend app-httpd
backend app-httpd
mode tcp
server app1 127.0.0.1:443 check
This will accept the PROXY protocol header on port 5443, strip it, and forward the raw TCP data to port 443.
If you would like to manipulate the HTTP packets in the SSL-encrypted TCP data, you would need to have access to the correct SSL certificates (which your webserver should have access to already). This is what you'll likely want to do.
frontend app-proxy
bind *:5443 accept-proxy ssl crt /path/to/certnkey-file.pem
mode http
option httplog
default_backend app-httpd
backend app-httpd
mode http
server app1 127.0.0.1:443 check ssl verify none
The advantage of the latter approach is that the original client data is preserved while passing through the proxies, so that you know the original IP of your visitor, which is kind of the whole idea of using the PROXY protocol in the first place! HAproxy will automatically update the X-Forwarded-For header with the correct IP address transferred via the PROXY protocol.
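For completeness, the sending side (like the 3rd party HAproxy service in the diagram) adds the PROXY protocol header with send-proxy on its server line. A minimal sketch, with a made-up backend name and address:
backend forward-to-app
mode tcp
server app-proxy 203.0.113.10:5443 send-proxy check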
A typical deployment, past and present, has looked like the following to me:
+------------------+ +---------+ tcp +-------+ tcp
| PSGI Application |----o| Starman |---->| nginx |<----(internet)
+------------------+ +---------+ +-------+
In fact I do have two fully fledged web servers in between the internet and the actual web application.
Since nginx has uWSGI directly built in and uWSGI supports the PSGI protocol, which is modelled on WSGI, I would love to use a PSGI broker (only PSGI, no HTTP) instead of a fully fledged web server (Starman).
Is there a PSGI-only broker solution available?
The PSGI 'protocol' (like WSGI) is essentially a calling convention for a subroutine. A request comes into the application as a subroutine call with a hash as an argument. The application responds through the subroutine's return value: an arrayref containing HTTP status code, HTTP headers and body. There's more to it than that, but those are the essentials.
What this means is that a process can only implement PSGI if the process contains a Perl interpreter. To achieve this, the process might be implemented in Perl or it might be implemented in a language like C that can load the libperl.so shared library. Similarly a process can only implement WSGI if it contains a Python interpreter.
Your block diagram contains three parts, but in reality the PSGI application is inside the Starman process. So there are really only two parts (although both parts are multiprocess containers).
You say that "nginx has uWSGI directly built in". This does not mean that a WSGI application runs inside the Nginx process. It means that the WSGI application runs in a separate uwsgi process and Nginx communicates with that process over a TCP socket using the uWSGI protocol. This is essentially the same model as Nginx with Starman behind it, except that the socket connection to Starman uses the HTTP protocol:
.----------------------. .-----------.
| Starman | | Nginx |
| | HTTP | | HTTP
| .------------------. |<---------| |<-------(internet)
| | PSGI Application | | | |
| '------------------' | | |
'----------------------' '-----------'
The HTTP protocol does have higher overheads than the uWSGI protocol, so you could get better performance by running an application server that speaks the uWSGI socket protocol and can load libperl.so to implement the PSGI interface. uWSGI can do that:
.----------------------. .----------.
| uWSGI | | Nginx |
| | WSGI | | HTTP
| .------------------. |<---------| |<-------(internet)
| | PSGI Application | | | |
| '------------------' | | |
'----------------------' '----------'
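To make that concrete, uWSGI ships a PSGI plugin, so a minimal setup might look roughly like this. It is only a sketch: the socket address, process count and the app.psgi path are examples.
# start uWSGI with the PSGI plugin, speaking the uwsgi protocol on a local socket
uwsgi --plugins psgi --psgi /var/www/app/app.psgi --socket 127.0.0.1:3031 --master --processes 4
# nginx side: hand requests to uWSGI over the uwsgi protocol
location / {
    include uwsgi_params;
    uwsgi_modifier1 5;          # the PSGI plugin registers itself as modifier1 5
    uwsgi_pass 127.0.0.1:3031;
}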
My requirement is to create multiple tap interfaces, each with an IP address on the same subnet.
I tried this by creating a bridge (roughly with the commands sketched after the diagram):
br0 (192.168.1.199)
___________|_____________________________________
| | | | | |
eth0 tap0 tap1 tap2 tap3 tap4
(192.168.1.150) (.151) (.152) (.153) (.154)
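The setup was created roughly like this with iproute2 (a sketch; tap1 to tap4 are created the same way with their own addresses):
# create the bridge and give it its address
ip link add name br0 type bridge
ip addr add 192.168.1.199/24 dev br0
ip link set br0 up
# create a tap device, attach it to the bridge, and give it its own address
ip tuntap add dev tap0 mode tap
ip link set tap0 master br0
ip addr add 192.168.1.150/24 dev tap0
ip link set tap0 up
# the physical interface is also enslaved to the bridge
ip link set eth0 master br0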
I need all the tap interfaces to be reachable from an external PC. When I ping from tap0 to an external computer, say 192.168.1.200:
ping -I tap0 192.168.1.200
the ping does not go through.
But when I ping from 192.168.1.200 to 192.168.1.150 (tap0), it works, but I get the MAC address of the bridge (br0).
I have two problems:
How to ping from a tap interface to an external host
How to get the MAC address of the right tap interface when pinged from outside
I have a tun/tap device which is used to read incoming packets from one interface and send them as UDP packets via another interface. I could implement this and could read ICMP packets sent to the tun/tap interface and also receive them remotely over UDP. But the issue happens when I try to change the default gateway of the input interface to the tun/tap device so that I can read all the incoming data from the tun/tap. When this is done, I can't send the UDP packets because the routing isn't correct.
I also tried the SO_BINDTODEVICE option on the socket, but it still didn't work. Please note that I haven't used the write() method on the tun/tap. I just used the read() function, collected the data and sent it via a UDP socket.
Please let me know if my approach is wrong, or about any other workaround to overcome this. Thanks.
/********More Details********/
Thanks Rob.
What I am trying to achieve is a simulation of IP header compression (ROHC) over a high-latency channel.
For this I have 4 virtual machines. VM1 is a normal desktop machine. VM2 is a gateway which takes the packets from VM1 using tun/tap and does the UDP-based communication with VM4. VM3 is the channel where parameters like latency, error rate etc. can be set. VM4 is connected to the WAN. The user on VM1 should be able to browse the WAN just like normal. Please find the diagram below.
IP Packets
|
| +------------------+ +--------------+ +----------------+
'---|eth1..... | | | | |
| | | | | | |
| tun/tap | | eth0|___|UDP Sock eth0|___
| | | | | | | | | |
| ..UDP Sock|_____|eth1 | | | | | |
| | | | | +tun/tap+ | '
+------------------+ +--------------+ +----------------+ WAN
VM2 VM3(Channel) VM4
Update:
Thanks Tommi. Your solution worked. I could get the UDP packets one way to the final NAT gateway, but I could not get the reverse path to work yet.
I tried enabling masquerading with iptables and also setting up the host route to the tun/tap at VM1, but it was not working.
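(The masquerade rule was along these lines; the outgoing interface name is a placeholder:)
# on the NAT gateway (VM4): rewrite the source address of traffic leaving towards the WAN
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE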
I have a few queries regarding this.
1) In VM4 I receive the UDP data and write it to the tun/tap. This will get routed to the WAN by the kernel. But for the incoming (return) packets, do I again need to read from the tun/tap? In that case, do I need to do the read and the write in different threads? I am asking because I need to transport the replies back as UDP data as well. Let me know if I am missing something here.
Once again thanks a lot for your help.
Your UDP packets will get routed to your tun/tap interface too (well, depending on some settings they may just get discarded). You need to add a route for the UDP peer you are sending them to: either a host route, or a more specific network route that won't interfere with your other communication.
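A minimal sketch of such a host route, assuming 198.51.100.4 is the UDP peer and eth1 is the physical link towards it (both are placeholders):
# keep traffic to the UDP peer on the physical link instead of the tun/tap default route
ip route add 198.51.100.4/32 dev eth1
# or, if the peer is behind a gateway on that link:
ip route add 198.51.100.4/32 via 198.51.100.1 dev eth1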