Where are MX records set - in the domain or the hosting?

I have a domain name at web.com, hosting with another site, and want to set up email through G Suite. My memory from previous experience tells me that I should be able to set MX records directly on the domain, but web.com is telling me:
Your domain name is not pointing towards Web.com's name servers. This means if you want to make any changes, you will need to access the zone file at the place where you are pointing them, as control is there.
The hosting is taking place on another site. So I read the above and think this means I need to contact the guy who handles my hosting to add the records. So I do, and I get this response:
You have to do it through web.com to change DNS and add new records. It is done through the domain, not the hosting.
Is someone wrong? Or am I misunderstanding how this process works?
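For reference, this is roughly what the relevant records look like once added wherever the authoritative name servers actually live. This is a sketch assuming a BIND-style zone file; the host names below are the classic ones from Google's setup documentation (Google's current guidance may differ), and the TTL is arbitrary:

```
; MX records for Google email, added in the zone file at
; whichever provider runs the authoritative name servers
@   3600  IN  MX  1   ASPMX.L.GOOGLE.COM.
@   3600  IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
@   3600  IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.
@   3600  IN  MX  10  ALT3.ASPMX.L.GOOGLE.COM.
@   3600  IN  MX  10  ALT4.ASPMX.L.GOOGLE.COM.
```

The key point is that "the domain" and "the zone file" can live at different providers: whoever the domain's NS records point to is the one whose zone file must contain these entries.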


Individual pages via GitHub pages custom domain or FTP

I manage a website which is built from a GitHub repository via an action which commits a live version to a certain branch; the webserver routinely checks if there are any updates on this branch and, if so, pulls them down to its public_html directory. This then serves the website on some domain, say example.com.
For various (practically immutable) reasons, there are individual webpages that are "built" from other individual repositories — here I say "built" because these repositories are almost always just some .html files and such, with little post-processing, and could thus be served directly via GitHub pages. I want these to be served at example.com/individual-page. To achieve this, I currently have a GitHub action which transfers the files via FTP to a directory on the webserver which is symlinked inside public_html, thus making all the files accessible.
However, it now occurs to me that I could "simply" (assuming this is even possible — I imagine it would need some DNS tweaking) activate GitHub pages on these individual repositories, set with the custom domain example.com, and avoid having to pass via FTP. On one hand, it seems maybe conceptually simpler to have public_html on the webserver only contain the files coming from the main website build, and it would be simpler to make new standalone pages from GitHub repositories; on the other hand, it seems like maybe "everything that appears on example.com should be found in the same directory" is a good idea.
What (if any) is the recommended best practice here: to have these individual pages managed by GitHub pages with custom domains (since they are basically just web previews of the contents of the repositories), or to continue to transfer everything over to the webserver in one directory?
In other words: is it a "good idea" to partially host your website with GitHub pages? And is this even possible with the right DNS settings?
(I must admit, I don't really understand what exactly my browser does when I navigate to example.com/individual-page, and what would happen if such a directory existed in my webserver and also GitHub pages was trying to serve up a webpage at this same address, so I guess bonus points if you want to explain the basics!)
The DNS option you describe doesn't work for pages.
While you can use a CNAME record to point your domain to another domain or an A record to point your domain to an IP address, DNS doesn't handle individual pages (as in example.com/a). It would work if each page was, for instance, a.example.com, but that's not a page, it's a whole different website.
What you can do, however, is include those other repositories as submodules of your repository, and then everything works without any DNS magic (except the simple CNAME record, which isn't magic).
It would be a good idea to implement this described solution, as it's the simplest. In any case, as long as your current solution works automatically without issues and the hosting cost isn't an issue, I don't see any need to take the time to implement a new solution.
If you want to only serve some files or pages from the submodules, you can have build actions and serve a specific directory.
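As a sketch of the submodule approach (the repository name and URL here are hypothetical), adding an individual-page repository inside the main site's working copy records a mapping in .gitmodules:

```
# Run inside the main site's repository:
#   git submodule add https://github.com/you/individual-page.git individual-page
# which produces an entry like this in .gitmodules:

[submodule "individual-page"]
	path = individual-page
	url = https://github.com/you/individual-page.git
```

The main build then sees the page's files at individual-page/ like any other directory, and everything is still served from one place with a single CNAME record.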
I must admit, I don't really understand what exactly my browser does when I navigate to example.com/individual-page
Your browser requests the DNS records of your domain (example.com), in this case the A record (since this is the root domain). This A record gives an IP address, which your browser uses to request the page. As you can see, pages aren't handled by DNS at all.
This means only one machine handles all requests for a domain (it can still delegate some pages to another machine, but that's more complex). As a result, "directory conflicts" are impossible: either your server handles the request or GitHub does. Your browser doesn't check whether the page exists at server A and, if not, fall back to server B.
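The split between what DNS sees and what the webserver sees can be shown with a few lines of Python; the URL is hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical URL for illustration.
url = "https://example.com/individual-page"
parts = urlparse(url)

# Only the hostname is ever involved in DNS resolution...
print(parts.hostname)  # -> example.com
# ...while the path is sent later, inside the HTTP request
# to whichever IP address the hostname resolved to.
print(parts.path)      # -> /individual-page
```

This is why GitHub Pages with a custom domain claims the whole host, not a single path under it.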

How do I avoid downtime when "upgrading" an Azure static web app to use FrontDoor?

I have a static web app to which I have mapped the domains [domain].se and www.[domain].se. The domain is managed in Azure.
The problem I'm facing is redirecting all requests for [domain].se to www.[domain].se.
Since I couldn't come up with any solution to redirecting http traffic from [domain].se to www.[domain].se using a static web app (other than setting up an additional standard web app on [domain].se that manages redirects), I enabled the "Enterprise-grade edge" feature (which by the way is a very silly name) to be able to use FrontDoor.
When attempting to add the domain to the frontdoor resource, there is an error message telling me that it's already mapped to something (which is correct - the site that I want frontdoor to manage).
When trying to remap [domain].se (and www.[domain].se) to the front door endpoint (select Azure resource in the DNS zone manager), the frontdoor resource is not available.
The issue could probably be resolved by simply removing the current records from the name server and then adding a CNAME record pointing to the frontdoor endpoint.
Two reasons not to do that:
1: It would cause downtime.
2: It feels like a hack. Things generally work better when they are used the way they were intended to be used. I want the platform to recognize which things are connected, in order to avoid future issues.

Migrating a domain I bought from dreamhost to Amazon

My situation: I had nothing on this domain and nothing was started on either side; I just bought the domain on the wrong service.
I imagine it's possible to transfer ownership to AWS, so that I may start managing the DNS from there rather than from dreamhost.
I probably could have purchased the domain from route 53 in the first place, but what's done is done; I don't want to wait for the year under dreamhost to run out before I start using it, nor do I want to use dreamhost to manage this domain, since dreamhost charges quite a lot more.
I've found the Amazon guide for my exact situation, but as usual with these guides, it's afraid of providing a concrete example and gets lost in abstractions, reusing the same terminology for different meanings, resulting in an unusable jumble of uncertainties: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-inactive.html
So I've gotten to :
Step 3: Create records (inactive domains)
I've just manually edited the values that route 53 created by default when I created the hosted zone, replacing them with the ones I found in the dreamhost DNS configuration. But I doubt that's what I have to do to transfer the domain, especially since the step after that basically says to change them back to what they were.
So what is it exactly I'm supposed to do in order to transfer the domain to amazon (route 53)?
Domain registration and DNS resolution are related, but separate entities. It seems like you decided you want route53 to serve your DNS entries. Given that, you have two choices.
Choice 1: Keep domain registered with dreamhost
If you do this, you need to instruct dreamhost to look up DNS entries for your domain at route53. This can be accomplished by setting the NS servers on dreamhost to point to route53. There are detailed instructions for this at AWS here. What you have in your step 3 is backwards. Step 3 is just saying that if you want HOST.yourdomain.com, you add an entry 'HOST' into the hosted zone. You should not change the NS or SOA entries on the route53 hosted zone away from their original settings. You can simply delete the zone and start over again.
Background: Dreamhost will populate the NS entries by default, and those are the servers that will be queried to resolve HOST.yourdomain.com. However, if you don't tell dreamhost that it should refer requests to route53, it has no way of knowing that. You need to point the NS (name server) entries at dreamhost to route53's servers. That way, a user trying to resolve HOST.yourdomain.com will be directed to route53, and when route53 is asked for the IP, all will be well if you have set up your hosted zone to resolve that entry. This is what you are going to do in step 4 of the AWS documentation.
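Once the delegation is in place, the NS records set at dreamhost would look roughly like this. The server names below are placeholders only; every route53 hosted zone is assigned its own set of four, which you copy from the zone's NS record set:

```
; NS records entered at dreamhost (the registrar), delegating
; resolution to the Route 53 hosted zone -- example names only
yourdomain.com.  172800  IN  NS  ns-123.awsdns-10.com.
yourdomain.com.  172800  IN  NS  ns-456.awsdns-20.net.
yourdomain.com.  172800  IN  NS  ns-789.awsdns-30.org.
yourdomain.com.  172800  IN  NS  ns-1011.awsdns-40.co.uk.
```

Registration stays with dreamhost; only the question "who answers DNS queries for this domain?" changes.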
Choice 2: Transfer your domain registration to route53
This is a little more up front work, but may be easier in the long run. You are permitted to transfer the domain to another domain registrar. You'll have to follow instructions at both the giving side (dreamhost) and the gaining side (route53).
NOTE: ICANN does enforce a 60 day lock on moves. If you just registered your domain, you will need to wait 60 days before the transfer process can begin. Also, do not worry about 'double paying' for the year. You are required to purchase at least one more year of domain registration, but it will be appended to the end date of your expiration (it won't start it over). Once you move to route53, especially if you already are using route53 for the hosted zone, you will have one less place to pay and administer.
Additional NOTE: Because of the 60 day lock, if it has been less than 60 days since you created the domain, choice #1 is the only choice during that period if you want to serve DNS records from route53.

Subdomain shows just a blank page

I have a dedicated server running CentOS 6.4 with Plesk 12. For my existing mydomain.com I created a subdomain media.mydomain.com to store the images there.
I also created the subdomain on the provider's side (3 days ago), and created an A record on Cloudflare pointing to my server's static IP (2 days ago). Yet when I enter media.mydomain.com in the browser, I get just a blank page, nothing more. When I check the DNS for my subdomain, I get the following message:
Delegation not found at parent.
No delegation could be found at the parent, making your zone unreachable from the Internet.
Not enough name server information was found to test the zone media.mydomain.com, but an IP address lookup succeeded in spite of that.
I don't know how to get my subdomain working. Can someone give me a tip on how to accomplish that?
A blank page in your browser actually sounds like a server issue (the server is reachable but not delivering content). It's difficult to investigate without knowing the actual subdomain in question.

postfix: programmatically adding a user

I asked this question two months ago and got nary an answer. In fact I earned the tumbleweed badge for asking a question that garnered so little interest.
However, this seems like a straightforward question with a definitive answer and I really need to be able to do this.
If there are still no answers, I'd sure appreciate it if anyone has ideas about other forums that might help me out. I tried asking godaddy, but I guess I don't spend enough money with them for this level of support.
Thanks and here's the question:
I'm using a godaddy virtual dedicated server, and the default email server that comes bundled with it is postfix. There is even a way to add domains and user accounts through the godaddy control panel.
I am trying to figure out (1) exactly what it is they are doing to create new accounts via the control panel, and then (2) how to do that via a Linux shell script.
I have never used postfix and have been trying to wade through the man pages and other documentation. It appears that when the user accounts are associated with a domain, then the user accounts are "virtual". So far I've discovered that when I use the godaddy control panel to add a new email account, it adds an entry into /etc/postfix/turbopanel/virtual_alias. Then, that entry also seems to get committed to the binary virtual_alias.db in the same directory.
I have manually replicated the process of adding a new email address to the virtual_alias file and then running postmap /etc/postfix/turbopanel/virtual_alias to get the entry into the virtual_alias.db file. This works, but some steps are missing: I am not able to send email to the added user, and the user doesn't show up in the godaddy control panel.
I don't think a new Linux account needs to be created for the virtual alias. The accounts created via the control panel DO NOT have an associated entry in /etc/passwd.
Any help is much appreciated.
Jeremy
Did you want to create virtual mailboxes, or forwarders for these virtual users?
See the Postfix documentation on:
virtual_mailbox_maps = hash:/etc/postfix/vmailbox
/etc/postfix/vmailbox
Tim
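As a minimal sketch of the virtual-mailbox variant mentioned above, following Postfix's VIRTUAL_README: the domain and mailbox are hypothetical, and on a godaddy-managed box the panel's own maps (such as the turbopanel files in the question) may already occupy some of these parameters:

```
# /etc/postfix/main.cf (excerpt)
virtual_mailbox_domains = example.com
virtual_mailbox_base = /var/mail/vhosts
virtual_mailbox_maps = hash:/etc/postfix/vmailbox
# uid/gid that owns the mailbox files
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000

# /etc/postfix/vmailbox -- one line per mailbox,
# mailbox path is relative to virtual_mailbox_base
jeremy@example.com    example.com/jeremy/
```

After editing, run `postmap /etc/postfix/vmailbox` and `postfix reload`, just as the question does for the alias map. An alias-only entry (virtual_alias_maps) merely rewrites addresses; a mailbox entry like this is what actually gives the user somewhere to receive mail.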