Is there any way to use the same IP for several requests with the GAE Sockets API?

I'm using a third-party service that has its own notion of a session and expects all requests in a session to come from the same IP. They claim it's a required security measure and suggest using a proxy, LOL.
Is there any way to use the same IP for several requests with the Sockets API?
The interval between requests in a session is ~10 seconds, so keeping the connection alive and reusing it should work. I've tried to set up a proxy module that runs a single instance and uses HttpClient with a connection pool. Logging shows that after the first request the connection is properly released and stored in the pool. While making the second request, I see that the pool has 1 connection, but HttpClient says there is no free connection for the route and opens a new one. Perhaps the route changes somehow?

This goes against the way App Engine is meant to work: scalability. It spawns instances closest to the consumers, and multiple instances mean different IPs. If you want a static IP you will need to switch to Compute Engine; it's a server VM that can have a static IP address. Or, if your third-party service can accept a range, you can get the ranges from the link.
App Engine does not currently provide a way to map static IP addresses to an application. In order to optimize the network path between an end user and an App Engine application, end users on different ISPs or geographic locations might use different IP addresses to access the same App Engine application. DNS might return different IP addresses to access App Engine over time or from different network locations.

Actually, I solved this issue. The solution I described in the question was missing one step: since my connection was SSL-authenticated, I had to use the same SSL context for all the requests I make.
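The fix above is specific to Java's Apache HttpClient, but the idea (one pooled, keep-alive connection whose TLS context is reused across requests) is general. As a hedged illustration in Python, with placeholder URLs and certificate files, a single requests.Session gives the same behavior:

    import requests

    # One Session keeps the TCP connection (and its TLS session) in the
    # pool, so requests made within the ~10-second interval reuse the
    # same connection and therefore the same source IP.
    session = requests.Session()
    session.cert = ("client.pem", "client.key")  # hypothetical client cert for the SSL auth

    first = session.post("https://third-party.example/session/start")  # placeholder URL
    second = session.post("https://third-party.example/session/step")  # reuses the pooled connection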

Related

Knative/Kubernetes unique IP for outbound traffic

Question:
Does Knative expose low-level network components that allow me to configure the stack in such a way, that each instance has a unique IP address available for outbound networking?
Info
I have a workload that has to run on a queue event. The incoming event starts fetching from an API. Due to rate limiting and the number of requests (around 100), the process is long-running: wait / request / wait / request / wait / ... What the code (JS) basically does is hit an API endpoint with parameters from the queue message and send the result of the 100 API requests back on another queue.
Serverless on Lambda is therefore expensive; also, on AWS multiple instances are likely to be spawned on the same VM (tested), resulting in the same IP for outbound traffic. Therefore Lambda is not an option for me.
I've read a lot about Knative lately, and I imagine the Kubernetes stack offers better configurability. I need to have concurrent instances of my service, but I need a unique outbound IP per instance.
Currently, the solution is deployed on AWS Beanstalk, where I scale out based on queue length, so 1-10 instances exist at the same time and perform the API requests. I use micro instances since CPU/../.. load is really low. There have been multiple issues with Beanstalk; that's why we'd like to move.
I do not expect a monthly cost advantage (IPs are expensive, that's ok), I am just unhappy with the deployment on Beanstalk.
IMHO, going with Knative/Kubernetes is probably not the way to proceed here. You will have to manage a ton of complexity just to get some IP addresses; Beanstalk will seem like a walk in the park.
Depending on how many IPs you need, you can just set up a few EC2 instances loaded with IP addresses. One cheap t3.small instance can host 12 IPv4 addresses (ref), and your JS code can simply send requests from each of the different IP addresses. (Depending on your JS HTTP client, there's usually a localAddress option you can set.)
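The localAddress option is Node-specific; as a language-neutral sketch of the same technique, here is how pinning the outbound source IP of a connection looks in Python (the IPs and host are placeholders for addresses actually attached to the instance):

    import socket

    # Hypothetical secondary private IPs attached to the instance's interface.
    SOURCE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

    def connect_from(source_ip: str, host: str, port: int) -> socket.socket:
        # source_address binds the local end of the TCP connection, so the
        # remote API sees a different source IP per worker.
        return socket.create_connection((host, port), source_address=(source_ip, 0))

    sock = connect_from(SOURCE_IPS[0], "api.example.com", 443)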

Simplest server-to-server authentication

I have a microservice on a new server/VPS that will only ever be called via REST by a monolith app to perform some heavy lifting and then post the operation results back to the monolith a few minutes later.
How should I protect these two endpoints? I think my main goal, for now, is just preventing someone who has found the server's address from being able to do anything.
Almost every solution I google seems like overkill/premature optimization.
Is it sufficient to generate a long random token once on each machine, then just pass it in the headers and check for its presence on the other end?
Do I even need SSL for this? As far as I understand, we need SSL encryption for clients that send sensitive data over wireless or unsafe shared networks.
What are the chances (is it even possible?) that somebody will eavesdrop on two DigitalOcean VPSes sending data via HTTP? Has it ever happened before?
Q: Is it sufficient to generate a long random token once on each machine, then just pass it in the headers and check for its presence on the other end?
A: Generally, microservices sit behind a gatekeeper/gateway (nginx, HAProxy) so that you expose only the endpoints you want. In your case I would recommend creating a private network between the two VPSes and exposing your microservice on that internal IP.
Q: Do I even need SSL for this? As far as I understand, we need SSL encryption for clients that send sensitive data over wireless or unsafe shared networks.
A: No. If you use internal networks and don't expose the service to the public, there is no need for SSL/TLS. If you were doing something at Tier 3/4, you would need encryption for cross-datacenter communication.
Q: What are the chances (is it even possible?) that somebody will eavesdrop on two DigitalOcean VPSes sending data via HTTP? Has it ever happened before?
A: There are bots that scan for open ports on servers/computers and try to penetrate them with exploits. In any case, always use a firewall like UFW/firewalld.
So let's say you have two servers with these microservices using the internal private network from your favorite provider:
VPS1 (ip = 10.0.1.50)
FooBarService:1337
BarFooService:7331
VPS2 (ip = 10.0.1.51)
AnotherMicroService:9999
Now both VPSes can access each other's services by simply calling the IP + port, as in the sketch below.
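To make that concrete, here is a minimal Python sketch combining the internal network with the shared-token idea from the question (the token, port, and endpoint are placeholders):

    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SHARED_TOKEN = "generate-a-long-random-value-once"  # placeholder

    class FooBarHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Reject callers that don't present the shared token.
            if self.headers.get("X-Auth-Token") != SHARED_TOKEN:
                self.send_error(401)
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    # On VPS1: bind to the internal IP only, so the service is never public.
    # HTTPServer(("10.0.1.50", 1337), FooBarHandler).serve_forever()

    # On VPS2: call the service over the private network.
    req = urllib.request.Request("http://10.0.1.50:1337/",
                                 headers={"X-Auth-Token": SHARED_TOKEN})
    # urllib.request.urlopen(req)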
Good luck.
There are a few simple solutions you could use to authenticate both servers back and forth. The one I would recommend, if you want to keep it simple as you say, is Basic Auth. As long as you're using it over an SSL/HTTPS connection, it suffices as a super simple way to authenticate each end.
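From the calling side, Basic Auth is a one-liner; a minimal sketch using Python's requests package (the URL and credentials are placeholders):

    import requests

    # Over HTTPS the Authorization header travels encrypted, which is
    # what makes Basic Auth acceptable for this use case.
    resp = requests.post(
        "https://microservice.example/heavy-lifting",  # placeholder URL
        auth=("monolith", "long-random-password"),     # placeholder credentials
        json={"job": 42},
    )
    resp.raise_for_status()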
You state that your main goal is to protect these endpoints, but then ask whether SSL/HTTPS is even needed. If these servers are exposed to the web in any way, then yes, your endpoints need to be protected, and if you're transmitting sensitive data, you need to send it through a secure stream.
If you believe the data you're sending is not very sensitive, and it's unlikely that anyone who finds these two endpoints would know how to manipulate your data by sending fake requests, then sure, you don't need any of this; but then you assume the risk and responsibility for if and when it is exposed. Basic Auth is super easy, and with Let's Encrypt it's incredibly easy to obtain an SSL certificate for free. It's good experience, so you may as well try it out, protect these endpoints, and make sure they're safe.

How to prevent my app from sending data through a proxy?

I am developing a chat back-end application on the AWS cloud. To make the chat back end scalable, I must ensure that whoever opens a connection is the real client.
To be more precise, the chat must of course keep a TCP connection open with the server at all times, and I have the following problems:
1 - The back end sits behind an Elastic Load Balancer (ELB).
2 - The TCP connection between the client app and the back-end server must stay open and alive, which means the app must keep the connection alive with the server, not with the ELB.
3 - The ELB must route the connection and load, via a sticky-session table, to the same server the app connected to before.
Unfortunately, the load balancer only supports layers 4 and 7, and I think I need to use layer 3.
The main problem here is that most people operate behind proxy servers, so I can't maintain a connection with them: the TCP connection will be made with the proxy, not with their app.
I don't know how to solve this, but the only solution I can see now is:
I must prevent users from operating behind any proxy server, to make sure the TCP connection is made directly with them and not with the proxy. How do I do that?
If there is a way to let them operate behind a proxy, with a solution on the back end, tell me.
I'm not sure I understand your concern. If you are using WebSockets, most proxies will allow this type of communication, though they can cause you trouble as well if they have timeouts and such.
You cannot control whether someone is behind a proxy. In many cases the proxy will be completely transparent, so you'd have no way to know it is there without inspecting all of the network hops. You may want to read up further on this; a good start is this article: https://www.infoq.com/articles/Web-Sockets-Proxy-Servers
If you are attempting to use the IP address as an authentication mechanism, I suggest using a standard authentication mechanism instead. Once authenticated, you should manage the session using session cookies, JWT, or another standard session-management solution. Note that JWT is typically stateless (it doesn't use a server-side session) but can be used to authorize a user for session-type data.
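As a hedged sketch of the JWT option in Python (using the third-party PyJWT package; the secret and claims are placeholders):

    import time
    import jwt  # PyJWT

    SECRET = "server-side-secret"  # placeholder; keep it out of source control

    # Issued once, after the client authenticates with real credentials.
    token = jwt.encode(
        {"sub": "user-123", "exp": int(time.time()) + 3600},
        SECRET,
        algorithm="HS256",
    )

    # On each subsequent request/connection, verify the token instead of
    # trusting the source IP, which proxies and NAT can change.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])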

Kamailio and a connection to the PSTN via SIP

I'm thinking about the following setup, but I do not know how to connect the main parts.
On the one side there is a Kamailio SIP server. This server provides VoIP connectivity within a certain network (a non-public intranet).
On the other side there is a SIP provider. This provider supplies a single telephone number from the PSTN. Let's say the number is 0034-443322.
Both components are working fine so far.
I want to use that number as a dial-in to my private network. A user with number 8282 in my network should be reachable via 0034-443322-8282 from the outside world. Outgoing calls aren't necessary.
How do I reach my goal? I don't know what to look for :/ Any ideas are very welcome :)
Kind regards,
K.A.
If your PSTN gateway can be reached by dialing the full number (including the extension), simply let the gateway forward every incoming call to your Kamailio instance, which will forward the call to the appropriate user. For that, you need to create your users (known as subscribers in Kamailio), and they need to register with your Kamailio instance so that they can receive incoming calls. As for mapping extensions to users, you can simply let the extension be the username, or you can add extensions as aliases of the subscribers.
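This is not actual Kamailio configuration, but as an illustration of the mapping described above (dial-in prefix stripped, remainder used as the subscriber name), a hypothetical Python sketch:

    # The provider's dial-in number, digits only (placeholder).
    DIAL_IN_PREFIX = "0034443322"

    def extension_to_user(dialed: str):
        # "0034-443322-8282" -> "8282", i.e. the local subscriber name.
        digits = dialed.replace("-", "")
        if digits.startswith(DIAL_IN_PREFIX):
            return digits[len(DIAL_IN_PREFIX):] or None
        return None

    assert extension_to_user("0034-443322-8282") == "8282"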

Multiple TCP/IP servers and sharing the same "well known port" ... somehow?

I apologize for the weird question wording... here's the design problem:
I am developing a server (on Linux using C++, FWIW) that provides a service to many instances of a client application running on consumer PCs.
I want the following:
1) All clients first identify themselves to a "gatekeeper" server application. Consider this a login procedure, with credentials like a user name and password being passed in. Call the gatekeeper program "gserver" (for "gatekeeper").
2) Once each client has been validated, it is placed into a long-term connection with one of several instances of a different server application running on the same physical server box, bound to the same server address. Call any of these instances "wserver" (for "working server").
So, what the client sees is that a "gatekeeper" application gives it passworded access to one of several "working" servers running on the same box.
Here is the real challenge: we want to exclusively use a "well-known" port number for the inbound server connections (like port 80 or 443, say), or our own "well-known" port.
We would prefer not to make the client talk to a second port on the server for the long-term connection phase with wserver(n). The problem with this, of course, is that only one server process at a time can be bound to the same port and server address.
This implies that the connection the client makes with gserver must also fill the role of the long-term connection. The only way I see to accomplish this is for gserver, after login, to act like a proxy and copy traffic between the client and the particular wserver(n) the client is logically bound to.
It would be ideal if a TCP/IP connection first made between client(n) and gserver could somehow be "transported", intact, to another application on the same server and then be sustained by one of the wserver(n) instances for the long-term connection.
I know that web servers do something like this to spread out server load: "load balancing". The main difference here is that the "balancing" is the allocation of a particular user to a particular wserver(n) instance. But I also have the impression that load balancing is a kind of proxying, which I am trying to avoid (since it complicates the architecture and adds overhead as well as a single point of failure).
This is a conceptual and design question. Don't worry about source code examples, unless they are absolutely essential to get the ideas across. If we pin down an approach, I can code it up.
Thanks!
What you are looking for is file descriptor passing; see UNP (Unix Network Programming) §15.7. One well-known heavy user of this facility is Postfix.
I developed such an application a long time ago. Multiple servers can't listen on the same port, so you need gserver listening on the well-known port. Once a connection is established, pass it to the other server via a Unix socket. Once the connection has been passed to the other server, gserver is out of the picture: it can die and the other server will still be serving the connection.
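The question targets C++, where this is done with sendmsg() and SCM_RIGHTS ancillary data (exactly what UNP 15.7 covers); as a compact illustration of the same kernel mechanism, here is a hedged sketch using the socket.send_fds/recv_fds wrappers available in Python 3.9+:

    import socket

    # gserver side: after accepting and authenticating a client, hand the
    # client's descriptor to a wserver over a pre-established Unix socket.
    def pass_connection(unix_sock: socket.socket, client: socket.socket) -> None:
        # send_fds wraps sendmsg() with SCM_RIGHTS; the single data byte
        # is just a wakeup payload for the receiver.
        socket.send_fds(unix_sock, [b"x"], [client.fileno()])
        client.close()  # close gserver's copy; wserver now owns the connection

    # wserver side: receive the duplicated descriptor and keep serving it.
    def receive_connection(unix_sock: socket.socket) -> socket.socket:
        _, fds, _, _ = socket.recv_fds(unix_sock, 1, 1)
        return socket.socket(fileno=fds[0])  # wrap the passed fd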
I don't know if this applies to your design, but the usual solution (as implemented by the xinetd daemon) is to fork() and then exec() the process. For example, xinetd may serve services like rlogin, rsh, tftp, telnet, etc., which are actually served by different programs. This will not be useful to you if your wservers are processes already running in the system.
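A hedged sketch of that inetd-style pattern: the accepted connection's descriptor becomes the child's stdin/stdout before exec(), so the served program never touches sockets itself (the binary path is a placeholder):

    import os
    import signal
    import socket

    signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # auto-reap children (Linux)
    listener = socket.create_server(("", 8080))    # stand-in for the well-known port

    while True:
        conn, _ = listener.accept()
        if os.fork() == 0:
            # Child: make the connection the service's stdin/stdout, then
            # replace this process with the actual service binary.
            os.dup2(conn.fileno(), 0)
            os.dup2(conn.fileno(), 1)
            os.execv("/usr/local/bin/wserver", ["wserver"])  # placeholder path
        conn.close()  # parent: keep listening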