[Internet Resolver][1]
Hi,
How can I delete this? I have been trying, but I can't find any way to delete it. Thanks.
[1]: https://i.stack.imgur.com/5MigP.png
If you are using the Amazon-provided DNS in your VPC, the dot (.) rule is the default rule for the Internet Resolver and points to the Amazon DNS server. You cannot delete it, because it is automatically associated with every VPC in that AWS account. However, you can override that association for the VPC in question: create your own dot (.) forwarding rule that points to your own DNS server, or to another server on the internet such as 8.8.8.8, and associate that rule with all the VPCs whose DNS queries you want resolved that way.
If you do forward to a custom server, make sure you also have a system rule in place to resolve Amazon-owned domain names (amazonaws.com) privately, and associate that system rule with all of your VPCs.
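A rough AWS CLI sketch of that setup (the endpoint, rule, and VPC IDs are placeholders, and the forwarding rule requires an outbound resolver endpoint, which is assumed to already exist):

```
# List the resolver rules; the autodefined "Internet Resolver" rule for "." is the one you cannot delete
aws route53resolver list-resolver-rules

# Create a custom "." forwarding rule that targets your own DNS server (or 8.8.8.8)
aws route53resolver create-resolver-rule \
    --creator-request-id dot-rule-example-1 \
    --name forward-everything \
    --rule-type FORWARD \
    --domain-name . \
    --resolver-endpoint-id rslvr-out-EXAMPLE \
    --target-ips Ip=8.8.8.8,Port=53

# Associate the custom rule with a VPC; it then takes precedence over the autodefined rule
aws route53resolver associate-resolver-rule \
    --resolver-rule-id rslvr-rr-EXAMPLE \
    --vpc-id vpc-0123456789abcdef0
```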
I transferred my domain from Google to AWS 7 days ago and the transfer process has completed in AWS. I have created a public hosted zone in Route 53, the NS records in Route 53 match the NS records registered for my domain in AWS, and I have created CNAME records pointing to the application load balancer.
I can access my domain inside my AWS workspace, and the dig command returns results there, but it does not work on some machines outside the AWS workspace. When I use the load balancer URL directly I can access the application, so there is no issue with the security group configuration.
I also get no dig results when using this URL: https://toolbox.googleapps.com/apps/dig/
Am I missing something here? Any help is highly appreciated.
The issue was resolved after disabling DNSSEC in Route 53. I think I had not disabled DNSSEC in Google before transferring the domain to AWS, so the DNSSEC configuration and its keys were carried over to AWS along with the domain.
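For anyone hitting the same thing, a quick way to check whether leftover DNSSEC records are the cause (example.com and the zone ID are placeholders; if the DS record lives with the registrar or the registered-domain settings, it has to be removed there as well):

```
# If the parent zone still publishes a DS record but the hosted zone is no longer
# signing responses, validating resolvers will fail the lookup
dig DS example.com +short

# Check whether the zone itself is serving DNSKEY records
dig DNSKEY example.com +short

# On the Route 53 side, inspect and, if needed, turn off DNSSEC signing for the hosted zone
aws route53 get-dnssec --hosted-zone-id Z0123456789EXAMPLE
aws route53 disable-hosted-zone-dnssec --hosted-zone-id Z0123456789EXAMPLE
```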
Google has nice ways to connect to Cloud SQL from other Google services, but I cannot see how to connect from AI Platform jobs. As part of our training job we need to update our Cloud SQL database with metrics, but the only way I could get it to work is by whitelisting all IPs in Cloud SQL (don't want that!) and connecting via the public IP. I don't see an option to add the Cloud SQL Proxy to the trainer instance, and since the IP of the trainer instance is dynamic, we cannot reliably add a specific IP address to the whitelist. Are there any other ways to handle this?
It looks like AI Platform supports VPC peering, so you should be able to connect to Cloud SQL using private IP.
Since Cloud SQL also uses VPC peering, you'll likely need to do the following to get the resources to connect:
Create a VPC to share (or use the "default" VPC)
Follow the steps here to set up VPC peering for AI Platform in your VPC.
Follow the steps here to set up a private IP for your Cloud SQL instance in your VPC.
Since the resources are technically in different networks, you may need to export custom routes (Step #2) to allow the AI platform access to your Cloud SQL instance.
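For the Cloud SQL side of that, a rough gcloud sketch (the reserved range name, network, instance name, and peering name are assumptions; check what your project actually uses, and note that some of these flags have historically required the beta track):

```
# Reserve an IP range in your VPC for Google-managed services (Cloud SQL lives behind this peering)
gcloud compute addresses create google-managed-services-default \
    --global --purpose=VPC_PEERING --prefix-length=16 --network=default

# Create the private services access peering used by Cloud SQL private IP
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-default \
    --network=default

# Give the Cloud SQL instance a private IP on that network
gcloud sql instances patch MY_INSTANCE --network=default --no-assign-ip

# If AI Platform peers into the same network from another producer network,
# export custom routes on the peering so its traffic can reach Cloud SQL
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=default --export-custom-routes
```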
As an alternative to using private IP, you could keep using public IP with an IP allowlist, coupled with authorizing connections with SSL/TLS certificates. This still isn't as secure as using the proxy or private IP (users can technically still reach your instance), but they'll be unable to interact with the database engine without the correct certificates.
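A minimal gcloud sketch of that option, assuming an instance named MY_INSTANCE (names and file paths are placeholders):

```
# Require TLS for all connections to the instance
gcloud sql instances patch MY_INSTANCE --require-ssl

# Issue a client certificate; the private key is written to client-key.pem,
# and the certificate itself can be fetched with "client-certs describe"
gcloud sql ssl client-certs create trainer-cert client-key.pem --instance=MY_INSTANCE
```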
Can you publish a PubSub message from within your training job and have it trigger a cloud function that connects to the database? AI Platform training seems to have IAM restrictions that I too am curious how to control.
I can't connect from an Azure resource (an AKS node) to Azure Database for PostgreSQL using pgcli. I also tried directly from the node and got the same error message:
FATAL: Client from Azure Virtual Networks is not allowed to access the server. Please make sure your Virtual Network is correctly configured.
The firewall setting on the PostgreSQL resource is on:
Allow access to Azure services: ON
Running the same pgcli login command on my computer and on another Azure resource works fine.
Adding firewall rules for all IPs returns the same error.
Running curl from the problematic server against host:5432 returns a reply, so it's not an outbound issue.
What does the error mean?
The VM the connection originates from is deployed in a virtual network subnet where the Microsoft.Sql service endpoint is turned on. Per the documentation:
If Microsoft.Sql is enabled in a subnet, it indicates that you only want to use VNet rules to connect. Non-VNet firewall rules of resources in that subnet will not work.
For the connection to succeed, a VNet rule must be added on the PostgreSQL side. At the time the question was asked, VNet service endpoints for Azure Database for PostgreSQL had only just reached public preview, so I assume the feature might not have been available to the OP.
Solution
As of November 2020, service endpoints for PostgreSQL are GA. Instead of disabling the service endpoint, you can add the missing VNet rule to the PostgreSQL server instance and reference the service endpoint-enabled subnet. This can be done via the Portal or the Azure CLI.
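With the Azure CLI, that looks roughly like this (resource group, server, VNet, and subnet names are placeholders):

```
# Make sure the subnet actually has the Microsoft.Sql service endpoint enabled
az network vnet subnet update \
    --resource-group my-rg --vnet-name my-vnet --name aks-subnet \
    --service-endpoints Microsoft.Sql

# Add a VNet rule on the PostgreSQL server that references that subnet
az postgres server vnet-rule create \
    --resource-group my-rg --server-name my-pg-server \
    --name allow-aks-subnet \
    --vnet-name my-vnet --subnet aks-subnet
```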
Apparently, the VM is part of a VNet where the Microsoft.Sql service endpoint tag was enabled.
I found this answer. To solve the problem, I disabled the service endpoint and added the public IP in the Connection Security section.
I encountered the same problem.
All I did was switch Allow access to Azure services to ON.
I've recently set up a new Tomcat instance on Google Compute Engine, and I can access the Tomcat instance via its IP address in the browser.
I've now set up a Cloud DNS entry and had my domain registrar point my domain name to the Cloud DNS servers. However, that was 2 days ago and I still can't access my website via the domain name.
The WHOIS record shows the following name server entries:
Name Server ns-cloud-e1.googledomains.com
Name Server ns-cloud-e2.googledomains.com
Name Server ns-cloud-e3.googledomains.com
Name Server ns-cloud-e4.googledomains.com
I've also set up an A record in the Cloud DNS console based on feedback from my domain registrar. Is there anything else I need to set up in order for all this to work?
[EDIT 1] Having another look at the instructions provided by Google, it seems the name servers they want me to use have changed to:
ns-cloud-d1.googledomains.com.
ns-cloud-d2.googledomains.com.
ns-cloud-d3.googledomains.com.
ns-cloud-d4.googledomains.com.
I've asked my registrar to make the change in case this is the problem.
[EDIT 2] My registrar has updated my DNS records and they now resolve to Google's servers. However, my website still doesn't load; when I enter it into a browser I get an NXDOMAIN error, which implies my domain doesn't exist. Does anyone have a basic example of what the Cloud DNS settings should look like? Do I need to set up A records or CNAME records?
[EDIT 3] My setup is shown here (the domain name and IP addresses have been faked for the screenshot).
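For reference, this is roughly how I've been checking the delegation from the command line (using the faked domain from the screenshot):

```
# Check which name servers the domain is delegated to
dig NS andtest.com.au +short

# Query one of the Cloud DNS name servers directly; if this answers but a normal
# lookup returns NXDOMAIN, the problem is the delegation or records, not the zone itself
dig @ns-cloud-d1.googledomains.com andtest.com.au A +short

# Compare with what a public resolver sees
dig @8.8.8.8 andtest.com.au A +short
```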
Thanks in advance.
Andy.
OK, I finally worked out the problem.
In the screenshot in my question, the following changes were required:
1) Replace the A record for *.andtest.com.au with an A record for just andtest.com.au
2) Replace the www.andtest.com.au A record with a CNAME record for www.andtest.com.au which points to andtest.com.au
Now when I enter www.andtest.com.au in a browser, I see my Tomcat web page.
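For anyone wanting to do the same via the gcloud CLI, this is roughly the equivalent (the zone name my-zone and the IP address are placeholders):

```
# A record for the zone apex pointing at the Tomcat instance's external IP
gcloud dns record-sets create andtest.com.au. \
    --zone=my-zone --type=A --ttl=300 --rrdatas=203.0.113.10

# CNAME for www pointing back at the apex
gcloud dns record-sets create www.andtest.com.au. \
    --zone=my-zone --type=CNAME --ttl=300 --rrdatas=andtest.com.au.
```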
I'm new to Amazon.
My client hosts their website www.domain.com at GoDaddy, and they have created an AWS EC2 instance that runs their Django app.
Now they want to use this instance for a subdomain, say www.subdomain.domain.com.
So I created a record set in AWS Route 53 with the following procedure:
Created an A record set pointing to the Elastic IP
Created name server (NS) records
Finally, I added these records to the GoDaddy DNS configuration, but I'm still not able to access the subdomain. Can anyone help here?
Are you managing DNS through AWS?
If so:
You need to create a hosted zone for that subdomain.
Add the records or import the zone file.
Take the name server (NS) records from AWS.
Go to the domain registrar (assuming you manage that as well), in this case GoDaddy.
At GoDaddy, add NS records for the subdomain that point to the AWS name servers, so the subdomain is delegated to Route 53.
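Roughly, with the AWS CLI (the zone ID, names, and IP address are placeholders):

```
# 1. Create a hosted zone for the subdomain
aws route53 create-hosted-zone \
    --name subdomain.domain.com \
    --caller-reference "subdomain-$(date +%s)"

# 2. Note the four name servers assigned to the zone
aws route53 get-hosted-zone --id Z0123456789EXAMPLE \
    --query 'DelegationSet.NameServers'

# 3. Add an A record for www.subdomain.domain.com pointing at the Elastic IP
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "www.subdomain.domain.com",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{"Value": "203.0.113.10"}]
        }
      }]
    }'

# 4. In GoDaddy, create NS records for "subdomain" whose values are the four
#    name servers from step 2, so queries for the subdomain reach Route 53.
```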