Is the external-ip of coturn only used for AWS? - stun

https://github.com/coturn/coturn/blob/c4477bfddd2cd51de1ad37032ca88330f3c44ed6/docker/coturn/turnserver.conf#L100
In turnserver.conf I see the phrase "For Amazon EC2 users". Is external-ip only used for AWS?
I run the STUN server in a k8s cluster and expose it to the public network with a NodePort service, but the srflx candidate returned by STUN is a gateway address, not the external-ip I set. My k8s cluster runs on Alibaba Cloud.
I hope someone can help me solve this problem, thank you!

AWS EC2 instances, for the most part, run behind a NAT. Even if you've assigned a public IP address (e.g. 1.2.3.4) via the AWS Console, the instance only knows about the private network it's on and is unaware of the public IP address assigned to it. That is, the instance thinks its IP address is 172.31.5.6 because that's what the operating system discovered at boot time. Port forwarding enables certain TCP and UDP ports to be forwarded from the public IP address to the private IP address that the EC2 instance is running on.
This typically isn't a problem for most services running on an AWS EC2 instance. But with STUN running in full "2 IP address and 2 port" mode, the server needs to advertise its alternate IP address back to the client, should the client want to conduct NAT behavior and filtering tests. It would be incorrect for the STUN server to send back 172.31.5.7 as its alternate IP - the client has no way of reaching that IP since it's private.
Similarly for TURN: when port allocations occur, the server needs to send back the public IP address of the EC2 instance to the client who allocated it. It would be bad if the client requested a TURN port to share with another peer, only for the TURN server to send back 172.31.5.6.
Hence, for a STUN or TURN server hosted behind a NAT, a set of command-line or configuration parameters is needed to tell the server what its "real" IP addresses are. The STUN/TURN software will use these IP addresses when sending responses back to clients. So external-ip is not AWS-specific: it applies to any NAT'd deployment, including a Kubernetes cluster on Alibaba Cloud.
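In coturn this mapping is the external-ip option from the turnserver.conf linked above. A minimal sketch (the addresses are placeholders; substitute your node's real public/private pair):

# turnserver.conf - coturn behind a NAT
# format: external-ip=<public address>/<private address the OS actually has>
external-ip=1.2.3.4/172.31.5.6
listening-port=3478
# the same can be given on the command line: turnserver --external-ip=1.2.3.4/172.31.5.6

Note that with a NodePort service, kube-proxy may also rewrite the client's source address before it reaches coturn, which can itself produce a wrong srflx candidate; that part is an assumption about the asker's setup rather than something external-ip controls.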

Related

How does Kubernetes make 'weird' IP addresses accessible?

How does it work actually? When I start my Kubernetes cluster I can access the address 0.0.0.0 in my browser. When I create a LoadBalancer I can access some other address in my browser e.g. 174.23.0.12. How does Kubernetes know that those addresses are not colliding with some other addresses? Is it possible to e.g. serve a react app on some IP like that?
To my understanding, 0.0.0.0 is not a normal host address but the unspecified "wildcard" address (see RFC 1122): a server that binds to 0.0.0.0 listens on all of the host's interfaces. When you point your browser at 0.0.0.0 the request is treated as local, so it never leaves your machine, which is why it works while the cluster runs locally. The LoadBalancer address (e.g. 174.23.0.12) is different: it is allocated by your cloud provider from its own pool of public addresses, which is how collisions are avoided, and yes, you can serve a React app behind such an address like any other web app.
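A quick way to see what 0.0.0.0 means in practice (assumes a Linux or macOS host with Python 3; the port is arbitrary):

# serve the current directory on every interface of this machine
python3 -m http.server 8000 --bind 0.0.0.0
# from the same machine, requests to 0.0.0.0 are treated as local
curl http://0.0.0.0:8000/
# from another machine you would use one of the host's real addresses instead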

Unable to ping or view Nginx by IP on IBM Cloud Virtual Server, but can view it locally

I have an IBM Cloud virtual server instance with a bound IP address, let's say 150.2.3.4.
I can get the page using curl 127.0.0.1:80 on the server itself.
I can ssh into the server: ssh root@150.2.3.4
I cannot ping 150.2.3.4 from a public location.
I cannot reach http://150.2.3.4:80 either.
The internal IP ranges (10.240.x.x/18 in the VPC, a subrange in the subnet, same region, etc.) all seem to be correct.
I do have the IP address bound to that server under "Floating IPs for VPC" on eth0. Does anyone know what else is necessary to make the IP address publicly available?
FWIW I get this for the firewall:
root@my-pipe-01:~# ufw app list
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH
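As an aside, ufw app list only shows which application profiles exist, not which ones are currently allowed; the commands below are standard ufw usage (not taken from the question), and on IBM Cloud VPC the security group attached to the instance must also permit inbound TCP 80:

ufw status verbose        # shows whether ufw is active and which rules are in force
ufw allow 'Nginx Full'    # opens 80/tcp and 443/tcp if they are not already allowed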

How to access REST APIs hosted locally on Alexa

I am developing a custom Alexa Skill and have a requirement where I want Alexa to access REST APIs that are hosted locally at http://localhost:8080. Any idea how to do this?
Thanks!
If you really want to do this, and I’m assuming you are hosting the skill on AWS Lambda, it would involve quite a bit of work.
Your local endpoints need to be accessible from outside of your network, which means configuring port forwarding on your router to the machine where the endpoints are hosted.
An easier way is to deploy your project containing the API to something like Heroku, which can be done easily. They give you a domain and make the endpoints accessible to Lambda. This should be possible within their free tier.
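If you go the Heroku route, the flow is roughly the following (the app name is a placeholder; this assumes the Heroku CLI is installed and the project is a git repository):

heroku create my-alexa-api    # creates the app and a domain like https://my-alexa-api.herokuapp.com
git push heroku main          # builds and deploys the project
# the Lambda-hosted skill then calls the herokuapp.com URL instead of http://localhost:8080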
Here's a link to a pretty good article about how IP addresses work.
Allowing a device sitting on your local network (e.g. a laptop or Raspberry Pi connected to your wifi) to be accessed from outside your local network (e.g. from a service running on AWS) involves mapping 2 separate IP addresses:
The IP address assigned to your router (your public IP)
The private IP addresses assigned by your router to your devices (laptop, iPhone, RPi, etc).
You have a couple options for allowing your router's IP (#1) to be accessible from outside your local network:
a. Pay your internet provider to provide you with a static IP address
b. Use a dynamic DNS service such as DuckDNS or No-IP.
Once you have a fixed public IP that can be used to access your router, you will then need to map a port on your router (#1) to the device IP on your local network (#2). This is usually referred to as "port forwarding". Most routers support configuring this. In effect, you tell your router "when you get a message to <public IP>:<port>, pass it to my laptop at <private IP>:<port>".
Your local private IP address will typically look like 192.168.0.23 (where the 23 can be anything from 1 to 254).
A public IP will start with something other than 192.168 (or the other private ranges, 10.x.x.x and 172.16.x.x through 172.31.x.x). Refer to the first link above regarding IP ranges.
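A quick way to see both addresses from the device itself (Linux commands; ifconfig.me is just one of several public "what is my IP" services):

hostname -I         # the private address(es) your router gave this machine, e.g. 192.168.0.23
curl ifconfig.me    # the public address of your router, as seen from the Internet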
You can google "port forwarding" and "public IP" for more info on how IP addresses and port forwarding work, but hopefully this will help get you started. It may seem a bit complicated at first, but if I can understand it, then anyone can :-)

Access external IP address from service

Is it possible to get the external IP address for a pod? It doesn't appear to be populated in the environment variables for a service, so I was wondering if there was another way to get that information.
Basically: I'm setting up a proftpd service, and it needs to send out its external IP as well as a port for passive communication. Right now, it's sending the local IP address, which is causing FTP clients to fail.
The Kubernetes service discovery mechanism (DNS or environment variables) doesn't populate the external IP.
One way to work around this is to reserve a static IP first, then assign it to your service.
Or you can run kubectl inside your cluster to get the external IP, but that's nasty.
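A sketch of both options (the IP, service name and port are illustrative placeholders, not taken from the question):

# reserve a static address with your cloud provider, then pin the Service to it
apiVersion: v1
kind: Service
metadata:
  name: proftpd
spec:
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4     # the pre-reserved static IP
  ports:
  - port: 21

# or look the assigned address up at runtime (the "nasty" option above)
kubectl get svc proftpd -o jsonpath='{.status.loadBalancer.ingress[0].ip}'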

IP Address of servers

So I am kind of new to networking and I'm just interested in the client/server architecture. Let's say you developed a program and the client version ran on a computer and the server version on the server (obviously). In order for the client to connect to the server, it would have to know the ip address of the server (and the port attached so it can be routed to the correct computer/program). Does that mean that the server's ip address can not change? Would you have to specifically tell your ISP to keep the ip address static? Because if both the client and server ip addresses change, then they would have no way to connect and the program wouldn't work... in other words there has to be one constant. When you sign up for a VPS do they give you a static ip address you can bind to from the client version? Thanks!
In order for the client to connect to the server, it would have to know the ip address of the server (and the port attached so it can be routed to the correct computer/program).
Correct.
Does that mean that the server's ip address can not change?
No. In fact, IPs can change at any time. Most servers that are exposed to the public Internet have a static domain name registered in the Internet's DNS system. A client asks DNS to resolve the desired domain name to its current IP address, and then the client can connect to it. But even in private LANs, most routers act as a local DNS server, allowing machines on the same network to discover each other's IP by machine name.
The OS typically handles DNS for you. A client can simply call gethostbyname() or preferably getaddrinfo(), and the OS will perform DNS queries as needed on the client's behalf and return the reported IP(s).
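For instance, on a Linux machine you can exercise the same resolution path those calls use (example.com is just a stand-in for the server's registered name):

getent ahosts example.com    # resolves through the system resolver, the same path getaddrinfo() takes
dig +short example.com       # queries DNS directly, without consulting /etc/hosts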
Would you have to specifically tell your ISP to keep the ip address static?
You can, but that usually costs extra, and it is not necessary if your server is registered in DNS. There are also free/cheap DNS services that work with servers that do not have a static IP.
Because if both the client and server ip addresses change, then they would have no way to connect and the program wouldn't work...
That is where DNS comes into play.
in other words there has to be one constant.
A registered domain name that can be resolved by DNS.
When you sign up for a VPS do they give you a static ip address you can bind to from the client version?
It depends on the VPS service, but a more likely scenario would be you are assigned a static sub-domain within the VPS service's main domain. For example, myserver.thevps.com. Or, if you buy your own domain (which can be done very cheaply from any number of providers), you can usually link it to the DNS server operated by your VPS service.