Access the public address of a DC/OS service - Marathon

I deployed a service on DC/OS with the following config.
When I access this address (http://eureka.marathon.l4lb.thisdcos.directory:8761/), it says the site can't be reached, although all services are healthy on my dashboard.
How can I access the public IP of the service?
I don't know if it is related or not, but when I look into the load-balancing config of my public slaves, I see 0 of 2 instances in service.

<vip-name>.marathon.l4lb.thisdcos.directory:<vip-port> is the internal named virtual IP, configured with the VIP_0 env var in your example.
VIPs are not externally exposed. They are made possible via layer 4 name and IP mapping performed by DC/OS components on each node.
In order to expose a public address you have a few options:
Deploy your app on a public node
Deploy Marathon-LB on a public node and configure your app to be exposed via a virtual host (see the sketch after this list)
Set up your own reverse proxy on a public node
Make all your private nodes publicly accessible and then use the host agent node IP and host port
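For the Marathon-LB option, a minimal sketch of what the app definition might look like (the app ID, image, and vhost are placeholder assumptions; VIP_0 is the named-VIP port label and HAPROXY_GROUP/HAPROXY_0_VHOST are standard Marathon-LB labels):

{
  "id": "/eureka",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "your-org/eureka:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8761, "hostPort": 0, "labels": { "VIP_0": "/eureka:8761" } }
      ]
    }
  },
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "eureka.example.com"
  }
}

With a Marathon-LB instance in the "external" group running on a public node, traffic for that vhost arriving at the public node would then be routed to the app.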
If your app is a Mesos framework, it can register a webui_url for administrative access via the admin router.

In a Windows command prompt (administrator mode), run nslookup <domain-of-service>; in your case: nslookup eureka.marathon.l4lb.thisdcos.directory.
If your service is deployed properly, this will return the IP addresses of all instances.

Subnet routing to AWS VPC doesn't appear to work

I'm trying to set up a Tailscale node as a relay to my AWS VPC. I've followed the instructions here to the letter, multiple times. Unfortunately, I just cannot seem to ssh to the second (non-Tailscale) instance. My process, briefly:
Set up an AWS VPC with the VPC wizard
Create an instance tailscale-relay on the VPC, on the public subnet, with SSH enabled and my private key. Assign it a new security group called sg-tailscale-relay
SSH to tailscale-relay and install Tailscale
Enable IP forwarding (per the docs here; see the sketch after this list)
Run sudo tailscale up --advertise-routes=10.0.0.0/24, where 10.0.0.0/24 is the range specified in the private subnet (and equivalently in the public subnet; see the image at the bottom)
Disable key expiry and authorize subnet routes for this node in the Tailscale console
Close off SSH access to tailscale-relay in its security group, then verify that I can SSH to it via its Tailscale IP (annoyingly, still requiring my .pem key)
Create another instance, test-tailscale, and assign it to the same VPC but to the private subnet. Do NOT give it a public IP. Allow all inbound traffic from the sg-tailscale-relay security group, but not from anywhere else
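(For reference, the IP-forwarding step on Linux usually amounts to something like the following sketch, per Tailscale's subnet-router docs; the exact sysctl file path is an assumption:)

# enable IPv4 and IPv6 forwarding so the relay can route subnet traffic
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf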
Then, from my local machine, SSHing to the private IP of test-tailscale times out.
I can ping test-tailscale from tailscale-relay (but not tailscale ping, obviously)
What gives? I don't understand what I'm doing wrong.
Bonus: Can I ssh without the private key?
(Image: private subnet route table)
One possibility lies in the non-AWS Tailscale node you're using to send the ping, if it is a Linux system. Linux was the first client developed, and the one most often used as a subnet router itself.
All of the other clients accept subnet routes by default, but Linux by default does not and needs tailscale up --accept-routes=true to be specified.
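Concretely, on the Linux machine you are connecting from (not the relay), the fix is a sketch like:

# accept the subnet routes advertised by the relay node
sudo tailscale up --accept-routes=true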

Why can't App Engine connect to Compute Engine VM instance?

I have a VM instance (e2-micro) on GCP running Postgres. I added my own external IP address to pg_hba.conf so I can connect to the database from my local machine. Alongside that, I have a Node.js application that I want to connect to that database. Locally that works: the application can connect to the database on the VM instance. But when I deploy the app to GCP, I get a 500 Server Error when I try to visit the page in the browser.
These are the things I already did/tried:
Created a firewall rule to allow connections from my own external IP address
Created a VPC connector and added that connector to my app.yaml
Made sure everything is in the same project and region (europe-west1)
If I allow all IP addresses on my VM instance with 0.0.0.0/0, then App Engine can connect, so my guess is that I'm doing something wrong with the connector? I use 10.8.0.0/28 as the IP range while the internal IP address of the VM instance is 10.132.0.2; is that an issue? I tried an IP range in 10.0.0.0 but that also didn't work.
First, check if your app uses a /28 IP address range (see the documentation):
"When you create a connector, you also assign it an IP range. Traffic sent through the connector into your VPC network will originate from an address in this range. The IP range must be a CIDR /28 range that is not already reserved in your VPC network."
When you create a VPC connector, a proper firewall rule is also created to allow traffic:
"An implicit firewall rule with priority 1000 is created on your VPC network to allow ingress from the connector's IP range to all destinations in the network."
As you wrote yourself, when you create a rule that allows traffic from any IP, it works (your app can connect). So look for the rule that allows traffic from the IP range that your app is in; if it's not there, create it.
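A hypothetical sketch of creating such a rule with gcloud (the rule name, network, and Postgres port are assumptions; 10.8.0.0/28 is the connector range from the question):

# allow ingress from the VPC connector's /28 range to Postgres on the VM
gcloud compute firewall-rules create allow-connector-to-postgres \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --source-ranges=10.8.0.0/28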
Or you can connect your app to your DB over public IPs; in that case you also have to create a proper rule that allows the traffic from the app to the DB.
Second, check the IP of the DB that the app uses.
My guess is that you didn't change the IP of the DB (that the app uses), so it tries to connect not via the VPC connector but via the external IP, and that's why it cannot (and works only when you create a permissive firewall rule).
This answer pointed me in the right direction: https://stackoverflow.com/a/64161504/3323605.
I needed to deploy my app with
gcloud beta app deploy
since the VPC connector method was in beta at the time. Also, I had tried to connect to the external IP in my app.yaml, but that needed to be the internal IP, of course.
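For reference, a minimal app.yaml sketch of the connector wiring (the runtime, project, and connector name are placeholders):

# app.yaml
runtime: nodejs
vpc_access_connector:
  name: projects/MY_PROJECT/locations/europe-west1/connectors/MY_CONNECTOR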

Amazon ECS Fargate: static IP address

I have a question about assigning a static IP address to a service/task defined on an ECS cluster. We're aware that each task created gets a private and a public IP address, but the problem is that if you update your task (or simply shut it down and turn it back on), the IP address changes automatically, which is not good in production. So how can I set a static IP address and allocate it to each task I create? I used a Network Load Balancer and network interfaces, but it did not work (the network interface with the assigned IP address always stays in "available" status).
Thank you
You can use an Application Load Balancer (ALB) in front of your service; you'll get a stable DNS name (something like <name>-<id>.<region>.elb.amazonaws.com).
After that, you can use Route 53 to point your domain/subdomain at that DNS name.
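A hypothetical sketch of that Route 53 step with the AWS CLI (the zone ID, domain, and ALB values are placeholders; the AliasTarget HostedZoneId must be the ALB's canonical hosted zone ID for your region):

# upsert an A-record alias pointing the subdomain at the ALB
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2EXAMPLE",
            "DNSName": "my-alb-1234567890.eu-west-1.elb.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'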

How to access MongoDB on GCE from GAE

I've deployed my demo app on GAE and it works fine with mLab, but when I try to deploy MongoDB on GCE (MongoDB (Google Click to Deploy)), the deployment succeeds but I don't know how to get the URI to set in my app running on GAE.
I tried the internal and the external IP, but neither seems to work!
Thanks
GAE Standard deployments are sandboxed, therefore you cannot connect to GCE instances' internal IPs. You can imagine it as two different devices on two different private networks that are not capable of communicating with one another using their internal IPs. However, they can always communicate if one of the devices (the GCE instance in this case) has a public IP, and its private network (firewall) allows traffic through the port required by the device.
On the other hand, if the GAE deployment is in the flex environment, you should be able to connect to the db using the API through internal IPs.
I have tried and succeeded with this flex environment example for both internal and external IP addresses. Like you, I used Cloud Launcher to deploy MongoDB, which created GCE instances with public IPs and the network tags mongodb and mongodb-db. Then I created a db, a username, and a password by connecting to the primary db instance through SSH.
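That last step might look roughly like this from the primary instance's SSH session (the db name and credentials are placeholders):

mongo
> use db
> db.createUser({ user: "username", pwd: "password", roles: [ { role: "readWrite", db: "db" } ] })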
To use the internal IP, I just created/modified the keys.json file per the example, as follows:
{
  "mongoHost": "internal IP address",
  "mongoPort": "27017",
  "mongoDatabase": "db",
  "mongoUser": "username",
  "mongoPass": "password"
}
So I didn't have to worry about the URI as the code in server.js took care of it through passing this string:
mongodb://${user}:${pass}@${host}:${port}
But for your demo app, you may have to check the MongoDB official documentation for the standard connection string format URI.
As for using public IPs, I had to create a network firewall rule that allows TCP ingress on port 27017, with target tags identical to the network tags, in order to limit access through that port to the MongoDB instances only. Next, I modified the keys.json file as above, replacing the internal IP with the public one.
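That rule might be created like this (a sketch; the rule name and source range are assumptions, while the target tags come from the Cloud Launcher deployment above):

# applies only to instances tagged mongodb / mongodb-db
gcloud compute firewall-rules create allow-mongodb-27017 \
    --allow=tcp:27017 \
    --target-tags=mongodb,mongodb-db \
    --source-ranges=0.0.0.0/0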

How to find the IP of my server on Microsoft's cloud

I created a TCP/IP application and published it to Microsoft's cloud, but now I don't know how to find the IP of my server.
Or, in other words, how can I find the IP at which the implemented role was deployed?
It depends on whether you are trying to get the public or the private IP of the server.
If you want to reach this server from outside the Azure network, then you are looking for the public IP. In this case you must define an InputEndpoint for your role, and you'll be required to specify an FQDN for your app. You can find the IP address of this FQDN using the usual methods like tracert, ping, etc.
If you want to reach this server from within the Azure network (typically, some other role in your tenant communicating with this server), then you need to define an InternalEndpoint for your server. You can then use the ServiceRuntime library to discover the private endpoint of your role instance.
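A hypothetical sketch of how both endpoint types are declared in ServiceDefinition.csdef (the role name and ports are placeholders):

<WorkerRole name="MyTcpRole" vmsize="Small">
  <Endpoints>
    <!-- public: reachable from outside Azure via the role's FQDN -->
    <InputEndpoint name="TcpIn" protocol="tcp" port="10000" />
    <!-- private: discoverable by other roles via the ServiceRuntime API -->
    <InternalEndpoint name="TcpInternal" protocol="tcp" port="10001" />
  </Endpoints>
</WorkerRole>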
Enabling Communication for Role Instances in Windows Azure is an excellent resource to get a better understanding of how this works.