Move website to new server but DNS resolving to old server's ip address [closed] - server

Before I begin, kindly note that I am a newbie and still learning.
About 10 hours ago, I had to move all my hosted websites to a new server (to be more specific, from one droplet to a new droplet). Since the websites were moved to a new server, their IP addresses changed too, so I updated the DNS configuration for all the websites to point to the new IP address. But I was unaware that the previous DNS configuration had set the TTL to 86400 (1 day). I learned about this concept after searching Google for why my websites would still resolve to the old server.
So that basically means the old DNS config is cached for 1 day and I have to wait that long for the domain names to resolve to the websites on the new server.
I tried the nslookup and dig commands on the domains to check the remaining TTL, and this is where I am stuck right now.
The nslookup command with the -debug parameter gave the following result.
Please note: I have replaced my website's domain name with (mywebsite.com) and my new server's IP address with (new.server.ip.address) in the actual nslookup output.
nslookup -debug mywebsite.com new.server.ip.address
------------
Got answer:
HEADER:
opcode = QUERY, id = 1, rcode = REFUSED
header flags: response, want recursion
questions = 1, answers = 0, authority records = 0, additional = 0
QUESTIONS:
address.ip.server.new.in-addr.arpa, type = PTR, class = IN
------------
Server: UnKnown
Address: new.server.ip.address
------------
Got answer:
HEADER:
opcode = QUERY, id = 2, rcode = NOERROR
header flags: response, auth. answer, want recursion
questions = 1, answers = 1, authority records = 2, additional = 2
QUESTIONS:
mywebsite.com, type = A, class = IN
ANSWERS:
-> mywebsite.com
internet address = new.server.ip.address
ttl = 14400 (4 hours)
AUTHORITY RECORDS:
-> mywebsite.com
nameserver = ns2.centos-webpanel.com
ttl = 86400 (1 day)
-> mywebsite.com
nameserver = ns1.centos-webpanel.com
ttl = 86400 (1 day)
ADDITIONAL RECORDS:
-> ns1.centos-webpanel.com
internet address = 127.0.0.1
ttl = 14400 (4 hours)
-> ns2.centos-webpanel.com
internet address = 127.0.0.1
ttl = 14400 (4 hours)
------------
------------
Got answer:
HEADER:
opcode = QUERY, id = 3, rcode = NOERROR
header flags: response, auth. answer, want recursion
questions = 1, answers = 0, authority records = 1, additional = 0
QUESTIONS:
mywebsite.com, type = AAAA, class = IN
AUTHORITY RECORDS:
-> mywebsite.com
ttl = 86400 (1 day)
primary name server = ns1.centos-webpanel.com
responsible mail addr = myemail#gmail.com
serial = 2013071601
refresh = 86400 (1 day)
retry = 7200 (2 hours)
expire = 3600000 (41 days 16 hours)
default TTL = 86400 (1 day)
------------
Name: mywebsite.com
Address: new.server.ip.address
Now, here's what worries me. As the result above shows, the TTL (even 10 hours after changing the DNS configuration) still shows 86400. I expected it to show the remaining TTL, but it stays constant at 86400. Does that mean the DNS will never update for my websites? The TTL just does not decrease.
To verify further, I tried Linux's dig command; here's the result I got.
Please note: I have replaced my website's domain name with (mywebsite.com) and my old server's IP address with (old.server.ip.address) in the actual dig output.
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.62.rc1.el6_9.5 <<>> mywebsite.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15423
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;mywebsite.com. IN A
;; ANSWER SECTION:
mywebsite.com. 83221 IN A old.server.ip.address
;; Query time: 0 msec
;; SERVER: 67.207.67.2#53(67.207.67.2)
;; WHEN: Mon Feb 5 01:55:05 2018
;; MSG SIZE rcvd: 44
Here, the dig command resolves the domain to my old server's IP address and shows the TTL as 83221. Like I said, it's been more than 10 hours since I updated the DNS configuration to point to my new server's IP address, yet the TTL still says 83221.
Running the dig command again does show a decreasing TTL here, though, unlike the nslookup command.
So, what do you think is going on here? Or am I misunderstanding something? If so, please correct me. It would really help me if someone could explain what is happening and whether something is wrong with my new server.
In case it helps: I have kept the websites' files on both the old server and the new server.
Thanks.
Edit: (Solved)
Here's what fixed all the issues I was facing. I use CentOS Web Panel on my server, which comes bundled with the FreeDNS manager. A bug in FreeDNS kept my nameservers and my domains' DNS records from updating. I switched to Cloudflare DNS and that fixed all the issues.

Your domain is not correctly configured; please use online diagnostic tools such as DNSViz. See the report: http://dnsviz.net/d/mkinfra.in/dnssec/
You are in a lame delegation situation.
If we query .IN authoritative nameservers for your domain, they reply:
mkinfra.in. 86400 IN NS ns1.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns2.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns3.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns4.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns5.centos-webpanel.com.
If we query any of these 5 nameservers for your domain, they reply:
mkinfra.in. 86400 IN NS ns1.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns2.centos-webpanel.com.
Which is not the same set of records. You will first need to resolve this discrepancy.
For your website they all reply the same:
www.mkinfra.in. 86400 IN CNAME mkinfra.in.
mkinfra.in. 86400 IN A 139.59.63.210
So they all reply with your old IP and not the new one. Your problem has nothing to do with TTLs: the authoritative nameservers for your domain are still not serving the new IP address you want, so you have to configure them properly. If you do it yourself, please remember to update the serial of the zone after any change.
The serial is in fact 2018012401, which follows the pattern YYYYMMDDXX, so we can infer that the zone was changed on January 24th but not since then (or it was changed but the serial was not updated, in which case the new content is not taken into account at all).
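As a hypothetical illustration (my own, not part of the original answer), a small helper following that YYYYMMDDXX convention would bump the revision if the serial is from today and otherwise start today's serial at revision 01:

```javascript
// Sketch of the YYYYMMDDXX zone-serial convention. "current" is the
// serial currently in the zone; returns the next serial to use.
function nextSerial(current, today = new Date()) {
  const ymd =
    today.getUTCFullYear().toString() +
    String(today.getUTCMonth() + 1).padStart(2, '0') +
    String(today.getUTCDate()).padStart(2, '0');
  const currentYmd = String(current).slice(0, 8);
  // Same day: bump the two-digit revision; new day: start at 01.
  const revision = currentYmd === ymd ? Number(String(current).slice(8)) + 1 : 1;
  return ymd + String(revision).padStart(2, '0');
}

console.log(nextSerial(2018012401, new Date(Date.UTC(2018, 0, 24)))); // "2018012402"
console.log(nextSerial(2018012401, new Date(Date.UTC(2018, 1, 5))));  // "2018020501"
```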
And to reply to your other question: if you query an authoritative nameserver you will always get the full TTL, which is by design. Only if you query a resolving, caching nameserver will you see the TTL decrease from one query to the next, because the cache is slowly forgetting the data it resolved in the past.
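To make the distinction concrete, here is a toy sketch (my own illustration, not how any real resolver is implemented) of the bookkeeping a cache does: it stores the absolute expiry time and derives the remaining TTL on each lookup, which is why repeated queries against a cache show a decreasing value, while an authoritative server just reports the configured TTL every time.

```javascript
// Toy model of a caching resolver's TTL bookkeeping.
const cache = new Map();

function cacheAnswer(name, address, ttlSeconds, nowMs = Date.now()) {
  cache.set(name, { address, expiresAtMs: nowMs + ttlSeconds * 1000 });
}

function lookup(name, nowMs = Date.now()) {
  const entry = cache.get(name);
  if (!entry || entry.expiresAtMs <= nowMs) {
    cache.delete(name); // expired: a real resolver would re-query upstream
    return null;
  }
  return {
    address: entry.address,
    remainingTtl: Math.ceil((entry.expiresAtMs - nowMs) / 1000),
  };
}

// Record cached with TTL 86400, queried again 10 hours later:
const t0 = Date.now();
cacheAnswer('mywebsite.com', 'old.server.ip.address', 86400, t0);
const later = lookup('mywebsite.com', t0 + 10 * 3600 * 1000);
console.log(later.remainingTtl); // 50400 (86400 minus 10 hours)
```

This is also why the question's dig against a recursive resolver showed 83221 and counted down on each run, while nslookup against the authoritative server showed a constant 86400.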
Never use nslookup; always use dig. And always specify the exact command you used when you ask people to check what you are doing. It is especially important to specify the nameserver you query with dig's @ parameter, since the results can differ vastly between an authoritative and a recursive nameserver.

Related

youtube.com -> error.persotld.com -> Redirect without installing mitm certs

What I currently have:
I have a custom DNS server (powered by Python Twisted), installed in my local network and used by all my local devices. That lets me customize answers at the DNS level, so I can reply with something like:
"myprivatenextcloud.mycompany.com. 1062 IN A 192.168.0.7"
for example.
All of this is to resolve some domains inside my local network, and to limit or block some domains such as ad domains and/or porn.
I can currently "block" some domains by replying NXDOMAIN to the query.
For example:
; <<>> DiG 9.10.6 <<>> @192.168.0.3 youporn.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 11662
;; flags: qr ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;youporn.com. IN A
;; Query time: 42 msec
;; SERVER: 192.168.0.3#53(192.168.0.3)
;; WHEN: Tue Oct 18 16:49:00 CEST 2022
;; MSG SIZE rcvd: 29
When the browser receives this message, it says something like "Cannot access the website".
What I want:
Now I would like to "redirect" the domain to another domain, for example youporn.com to error.mydomain.com.
How can I do that?
I have tried different things like CNAME (not working, because of root-domain and SSL issues), DNAME, and ANAME (asking for "youporn.com" replies "google.com").
But nothing is working.
The main problem is that (I think) I can't use a classic HTTP 30x redirect, because I shouldn't receive (and manage) the HTTP request.
The other issue is HTTPS: I can simulate it with a virtual host for https://google.com, but the cert will be self-signed and the browser will reject it. That can be solved with MITM, but that's not the goal of my project.
If I can "redirect" before the HTTP(S) call, I will receive a normal HTTP request on http(s)://error.mydomain.com and will be able to handle it.
I read some things about TXT records, but nothing worked...
If you have some ideas...
Thanks in advance.
PS: I know that DNS can't "redirect" a query; this is just to explain my goal.
Some links :
https://redirect.center
https://about.txtdirect.org/hosted/

https://dnsflagday.net/ report edns512tcp=timeout

I have an Ubuntu 16.04.5 server with Vesta CP.
I checked the server on https://dnsflagday.net, and I got this report:
domain.cl. @123.456.78.90 (ns1.domain.cl.): dns=ok edns=ok edns1=ok edns@512=ok ednsopt=ok edns1opt=ok do=ok ednsflags=ok docookie=ok edns512tcp=timeout optlist=ok
domain.cl. @123.456.78.90 (ns2.domain.cl.): dns=ok edns=ok edns1=ok edns@512=ok ednsopt=ok edns1opt=ok do=ok ednsflags=ok docookie=ok edns512tcp=timeout optlist=ok
I do not know what edns512tcp=timeout means and I have not had much luck finding a solution on the internet.
Can someone help me? Thanks.
For that tool, any kind of "timeout" error is a problem: it means some server did not reply, or the message (either query or reply) was eaten by some active element on the path, so it needs to be fixed.
edns512tcp is when the testing software does an EDNS query with a buffer of 512 bytes, over TCP.
If you go to https://ednscomp.isc.org/ednscomp/ for your domain you will have the full test results.
For that specific error it is:
EDNS - over TCP Response (edns@512tcp)
dig +vc +nocookie +norec +noad +edns +dnssec +bufsize=512 dnskey zone @server
expect: NOERROR
expect: OPT record with version set to 0
See RFC 5966 and RFC 6891.
So you can see exactly which DNS query was done with dig and reproduce it (+vc is an old flag name that is an alias for +tcp). The test expects a NOERROR code back and an OPT record; your servers did not reply at all, so the test failed.
Maybe they do not reply to TCP queries at all, which is even worse. In any case you will need to contact the entity responsible for maintaining those servers and point them at the test results so they can start fixing the problem.
Thanks for your help.
I read more about it and found that port 53 was being blocked by the firewall. I added a firewall rule to allow TCP connections on port 53.
Everything is fine now.

fail2ban custom filter for custom node.js application

I need some help creating a custom filter for a custom app, a WebSocket server written in node.js. As I understand from other articles, the custom node.js app needs to write a log recording any failed authentication attempts, which fail2ban will then read to block the IP in question. I need an example of the log my app should create, in a format fail2ban can read or scan, and also an example of a custom fail2ban filter that reads that log to block IPs for brute force.
It's a really old question, but I found it on Google, so I will write an answer.
The most important thing is that the line you log needs the right timestamp, because fail2ban uses it to ban and unban. If the time in the log file differs from the system time, fail2ban will not find the line, so set the right timezone and time on the host system. In the example below I used UTC time plus the timezone offset and everything is working. fail2ban recognizes different types of timestamps, but I couldn't find a description of them; the fail2ban manual does give two examples. There is also a command to check whether your line is recognized by the regular expression you wrote; I really recommend using it. I also recommend using a regular-expression tester, for example this one.
The rest of the log line is not really important. You just need to include the user's IP.
These are the most important points, but I will also write out an example. I'm just learning, so I did it for educational purposes and I'm not sure the example makes complete sense, but it works. I used nginx, fail2ban, pm2, and node.js with Express on Debian 10 to ban empty/bad POST requests based on Google reCAPTCHA. So, set the right time on your system.
On Debian 10 this worked:
timedatectl list-timezones
sudo timedatectl set-timezone your_time_zone
timedatectl   # to check the time
First of all, you need to pass the real user IP through nginx, so you need to add a line to your server block:
sudo nano /etc/nginx/sites-available/example.com
Find the location block and add this line:
location / {
...
proxy_set_header X-Forwarded-For $remote_addr;
...
}
More about reverse proxy.
Now in node.js app just add
app.set('trust proxy', true)
and you can get user ip now using:
req.ip
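If you are not using Express, the same idea can be sketched in plain JavaScript. This hypothetical helper (not from the original answer) takes the first entry of X-Forwarded-For, which nginx sets to $remote_addr in the config above, and falls back to the socket's address:

```javascript
// Hypothetical helper: prefer the proxy-supplied client IP over the
// socket's remote address (which would be the proxy itself).
function clientIp(headers, remoteAddress) {
  const fwd = headers['x-forwarded-for'];
  // X-Forwarded-For may contain a comma-separated chain of proxies;
  // the first entry is the original client.
  return fwd ? fwd.split(',')[0].trim() : remoteAddress;
}

console.log(clientIp({ 'x-forwarded-for': '203.0.113.5' }, '127.0.0.1')); // "203.0.113.5"
console.log(clientIp({}, '127.0.0.1'));                                   // "127.0.0.1"
```

Only trust this header when your app is actually behind a proxy you control; otherwise clients can spoof it (that is what Express's 'trust proxy' setting gates).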
Making it work with reCAPTCHA:
Everything about reCAPTCHA is here: Google Developers
When you get the user's response token, you need to send a POST request to Google to verify it. I did it using axios. This is how to send the POST request; secret is your secret key, response is the user's response token.
const axios = require('axios');

axios
  .post(`https://www.google.com/recaptcha/api/siteverify?secret=${secret}&response=${response}`, {}, {
    headers: {
      "Content-Type": "application/x-www-form-urlencoded; charset=utf-8"
    },
  })
  .then(async function (tokenres) {
    const {
      success,      // true or false
      challenge_ts,
      hostname
    } = tokenres.data;
    if (success) {
      // Do something
    } else {
      // For fail2ban: build a timestamp it can parse.
      // Maybe there is an easier way to get this, but at my level
      // of experience I did it like this:
      const now = new Date();
      const tZOffset = now.getTimezoneOffset() / 60;
      const month = now.toLocaleString('en-US', { month: 'short' });
      const day = now.getUTCDate();
      const hours = now.getUTCHours() - tZOffset;
      const minutes = now.getUTCMinutes();
      const seconds = now.getUTCSeconds();
      console.log(`${month} ${day} ${hours}:${minutes}:${seconds} Captcha verification failed [${req.ip}]`);
      res.send(/* something */);
    }
  });
The timezone offset is there to set the right time. pm2 saves console.log output to a log file at /home/youruserdir/.pm2/logs/yourappname-out.log
Now make an empty POST request. An example line for a bad request will look like this:
Oct 14 19:5:3 Captcha verification failed [IP ADDRESS]
Notice that the minutes and seconds have no leading 0, but fail2ban still recognizes them, so that's not a problem. BUT CHECK THAT THE DATE AND TIME AGREE WITH YOUR SYSTEM TIME.
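If you want leading zeros anyway, a possible fix (not in the original answer) is to pad the fields. This sketch formats the date entirely in UTC; adjust if your system clock, and therefore fail2ban, runs in local time:

```javascript
// Format a Date as a syslog-style timestamp, e.g. "Oct 14 19:05:03",
// with zero-padded hours, minutes, and seconds (UTC).
const MONTHS = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];

function syslogTimestamp(date = new Date()) {
  const pad = (n) => String(n).padStart(2, '0');
  return `${MONTHS[date.getUTCMonth()]} ${date.getUTCDate()} ` +
         `${pad(date.getUTCHours())}:${pad(date.getUTCMinutes())}:${pad(date.getUTCSeconds())}`;
}

console.log(syslogTimestamp(new Date(Date.UTC(2021, 9, 14, 19, 5, 3)))); // "Oct 14 19:05:03"
```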
Now make filter file for fail2ban:
sudo nano /etc/fail2ban/filter.d/your-filter.conf
paste:
[Definition]
failregex = Captcha verification failed \[<HOST>\]
ignoreregex =
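As a rough sanity check (my own approximation; fail2ban's actual <HOST> pattern is more elaborate and also matches IPv6 addresses and hostnames), the filter above behaves roughly like this regular expression:

```javascript
// Approximation of the fail2ban filter: <HOST> becomes a capturing
// group that here matches a dotted IPv4 address.
const failregex = /Captcha verification failed \[(?<host>[\d.]+)\]/;

const line = 'Oct 14 19:05:03 Captcha verification failed [203.0.113.7]';
console.log(failregex.exec(line).groups.host); // "203.0.113.7"
```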
Now ctrl+o, ctrl+x, and you can check whether fail2ban recognizes the error lines using the fail2ban-regex command:
fail2ban-regex /home/youruserdir/.pm2/logs/yourappname-out.log /etc/fail2ban/filter.d/your-filter.conf
Result will be:
Failregex: 38 total
|- #) [# of hits] regular expression
| 1) [38] Captcha verification failed \[<HOST>\]
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [38] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 42 lines, 0 ignored, 38 matched, 4 missed
[processed in 0.04 sec]
As you can see, 38 matched; you will have one. If you have no matches, check the pm2 log file. When I was testing on localhost, my app reported the IP address as ::127.0.0.1, which may be IPv6-related and could cause a problem.
Next:
sudo nano /etc/fail2ban/jail.local
Add following block:
[Your-Jail-Name]
enabled = true
filter = your-filter
logpath = /home/YOURUSERDIR/.pm2/logs/YOUR-APP-NAME-out.log
maxretry = 5
findtime = 10m
bantime = 10m
So now: be sure that you wrote the filter name without the .conf extension.
In logpath, be sure to use the right user directory and log name. If there are 5 (maxretry) bad POST requests within 10 minutes (findtime), the user will be banned for 10 minutes. You can change these values.
Now just restart nginx and fail2ban:
sudo systemctl restart nginx
sudo systemctl restart fail2ban
Afterwards you can check whether your jail is working with:
sudo fail2ban-client status YOUR-JAIL-NAME
It shows how many matches were found and how many IPs are banned. You can find more information in the fail2ban log file:
cat /var/log/fail2ban.log
Found IPADDR - 2021-10-13 13:12:57
NOTICE [YOUR-JAIL-NAME] Ban IPADDRESS
I wrote this out step by step because probably only people with little experience will be looking for this. If you see mistakes or have suggestions, just comment.

Google Cloud DNS New zone

I created a new zone in Google Cloud DNS and changed the domain registrar's NS records to point to ns-cloud-b1.googledomains.com. dig shows correct information from the authoritative NS, but the records don't appear on public DNS. Is other configuration needed, or must I just wait?
dig mydomain.com @ns-cloud-b4.googledomains.com
; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> mydomain.com @ns-cloud-b4.googledomains.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12022
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
mydomain.com. IN A
;; ANSWER SECTION:
mydomain.com 300 IN A x.x.x.x
;; Query time: 176 msec
;; SERVER: 216.239.38.107#53(216.239.38.107)
;; WHEN: Thu Apr 23 21:49:38 2015
;; MSG SIZE rcvd: 40
You need to wait for the information to propagate throughout the Domain Name System. This depends on the time-to-live setting of each RRset. If you just created the zone and switched the nameservers at the registrar, you need to wait for the TTL of the nameserver records to expire; they are just records in the parent zone.
dig +trace mydomain.com should help you verify that everything is set up properly.

Custom Domain Github Pages

So I've followed the directions for setting up a custom domain with GitHub Pages. As per their recommendation, I'm attempting to set this up using a custom subdomain.
I purchased my domain through GoDaddy, and using their DNS Manager tool I added myappname.github.io under Host (CNAME).
I didn't change anything else, such as the IP address under A (Host).
Lastly, on my GitHub page, under Settings it correctly says "Your site is published at www.myappname.com".
Yet when I go to www.myappname.com, I see the following:
What did I do wrong?
Edit:
Output from dig:
dig www.myappname.co
; <<>> DiG 9.8.3-P1 <<>> www.myappname.co
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19874
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;www.myappname.co. IN A
;; ANSWER SECTION:
www.myappname.co. 3600 IN CNAME myappname.github.io.
myappname.github.io. 3600 IN CNAME github.map.fastly.net.
github.map.fastly.net. 18 IN A 199.27.76.133
;; Query time: 198 msec
;; SERVER: 10.2.0.4#53(10.2.0.4)
;; WHEN: Tue Mar 17 11:08:00 2015
;; MSG SIZE rcvd: 120
Your DNS is configured to redirect the www subdomain to your GitHub Pages site, but your GitHub Pages CNAME file specifies that your application should run on the apex domain, myappname.com. This causes another redirection to the apex domain, which as you point out in your question has its own A record pointing to a non-GitHub IP address.
As we discussed, one possible solution is to update the CNAME file in your repository to use www.myappname.com instead of myappname.com and then set up a redirect from the apex domain to the www subdomain.
This will cause requests to myappname.github.io and myappname.com to redirect to www.myappname.com, where your site lives.