How to find domains registered at a certain time? - whois

Is there a place where you can look up the history of domain registrations?
To be more specific: how can I find out which other domains were registered within a certain period of time?
I know there are paid services that let you find bulk domain registrations. I wonder how they get that data.
Thanks in advance!

Typically they have an agreement with the registrars/registries to obtain a dump of the Whois database every night, and/or they run something like passive DNS to discover domains to add to their database.
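On a small scale you can approximate this yourself: given a list of candidate domains (from a passive DNS feed, a zone file, or wherever), filter them by the creation date reported by whois. A rough sketch in Python that shells out to the system whois command; the field name and date format vary by registry, so the parsing is best-effort and the candidate list is a placeholder:

```
import subprocess
from datetime import datetime, timezone

def creation_date(domain):
    """Best-effort parse of the 'Creation Date:' line from `whois` output."""
    out = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.strip().lower().startswith("creation date:"):
            value = line.split(":", 1)[1].strip()
            try:
                return datetime.fromisoformat(value.replace("Z", "+00:00"))
            except ValueError:
                return None
    return None

# Candidate domains would come from a passive DNS feed or a zone file dump.
candidates = ["example.com", "example.org"]
start = datetime(2023, 1, 1, tzinfo=timezone.utc)
end = datetime(2024, 1, 1, tzinfo=timezone.utc)

for domain in candidates:
    created = creation_date(domain)
    if created and created.tzinfo and start <= created < end:
        print(domain, created)
```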

Migrating a domain I bought from Dreamhost to Amazon

I'm in the use case where I had nothing on this domain and nothing was started on either side; I just bought the domain on the wrong service.
I imagine it's possible to transfer ownership to AWS so that I can manage the DNS from there rather than from Dreamhost.
I probably could have purchased the domain from Route 53 in the first place, but that's done now, and I don't want to wait for the year under Dreamhost to run out before I start using it. Nor do I want to use Dreamhost to manage this domain, since Dreamhost charges quite a lot more.
I've found the Amazon guide that covers my exact situation, but as usual with these guides it shies away from concrete examples and stays so abstract, reusing the same terminology for different things, that it ends up an unusable jumble of uncertainties: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-inactive.html
So I've gotten to:
Step 3: Create records (inactive domains)
I've just manually changed the values that Route 53 created by default when I created the hosted zone to the ones I found in the Dreamhost DNS configuration, but I doubt that's what I'm supposed to do to transfer the domain, especially since the step after that basically says to change them back to what they were.
So what exactly am I supposed to do in order to transfer the domain to Amazon (Route 53)?
Domain registration and DNS resolution are related, but they are separate things. It sounds like you've decided you want Route 53 to serve your DNS entries. Given that, you have two choices.
Choice 1: Keep the domain registered with Dreamhost
If you do this, you need to instruct Dreamhost to look up DNS entries for your domain at Route 53. You do that by setting the NS servers at Dreamhost to point to Route 53; there are detailed instructions for this at AWS. What you have in your Step 3 is backwards: Step 3 is just saying that if you want HOST.yourdomain.com, you add an entry 'HOST' to the hosted zone. You should not change the NS or SOA entries of the Route 53 hosted zone away from their original settings; you can simply delete the zone and start over.
Background: Dreamhost populates the NS entries by default, and those are the servers that will be asked how to resolve HOST.yourdomain.com. If you don't give Dreamhost any information that it should refer those requests to Route 53, it has no way of knowing that. You need to tell Dreamhost that the NS (name server) entries should point to Route 53's servers. That way, a user trying to resolve HOST.yourdomain.com will be pointed to Route 53, and when they ask Route 53 for the IP, all will be well as long as you set up your hosted zone to resolve that entry. This is what you will do in Step 4 of the AWS documentation.
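For Choice 1, the values you give Dreamhost are the name servers that Route 53 assigned to your hosted zone. You can copy them from the zone's NS record in the console, or read them back with a quick boto3 sketch (the hosted zone ID below is a placeholder):

```
import boto3

route53 = boto3.client("route53")

# Placeholder ID; yours is shown next to the hosted zone in the Route 53 console.
zone = route53.get_hosted_zone(Id="Z0123456789EXAMPLE")

# These name servers are what you enter as the NS servers at Dreamhost.
for ns in zone["DelegationSet"]["NameServers"]:
    print(ns)
```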
Choice 2: Transfer your domain registration to Route 53
This is a little more up-front work, but it may be easier in the long run. You are allowed to transfer the domain to another registrar; you'll have to follow instructions on both the losing side (Dreamhost) and the gaining side (Route 53).
NOTE: ICANN does enforce a 60-day lock on transfers. If you just registered your domain, you will need to wait 60 days before the transfer process can begin. Also, don't worry about 'double paying' for the year: you are required to purchase at least one more year of registration, but it is appended to your current expiration date (it won't start over). Once you move to Route 53, especially if you are already using Route 53 for the hosted zone, you will have one less place to pay and administer.
Additional NOTE: Because of the 60-day lock, if it has been less than 60 days since you registered the domain, Choice 1 is your only option during that period if you want to serve DNS records from Route 53.
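If you go with Choice 2 once the 60 days are up, you can check the domain's eligibility before starting the transfer. A sketch with boto3; the Route 53 Domains API is served from us-east-1, and the domain name and auth code below are placeholders:

```
import boto3

# Route 53 Domains is only available in the us-east-1 region.
domains = boto3.client("route53domains", region_name="us-east-1")

# The auth (EPP) code is the transfer code you request from Dreamhost.
resp = domains.check_domain_transferability(
    DomainName="example.com",
    AuthCode="EPP-CODE-FROM-DREAMHOST",
)
print(resp["Transferability"]["Transferable"])  # TRANSFERABLE / UNTRANSFERABLE / DONT_KNOW
```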

Lightweight Active Directory Monitoring/Auditing users, groups and group policy

My team attempted to use a third-party Active Directory object auditing tool, which ran some automated scripts and turned on Active Directory auditing on our domain controllers. Our domain controllers run Windows Server 2016.
As a result, our DCs got bogged down and we subsequently turned the auditing off. My boss doesn't want to risk this happening again, so I'm trying to find a less invasive way to monitor changes to groups, user accounts, and group policy. For security reasons, we want to be able to answer the question: who changed what, and when?
My options as I see them are basically a custom .NET library or solution, accessing LDAP via PHP, or perhaps a polling solution using PowerShell that dumps data to a secondary file, API, or service.
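For reference, the kind of polling I have in mind would look roughly like this (a rough sketch using Python's ldap3 module rather than PowerShell; the server, credentials, and base DN are placeholders, and it only shows what changed and when, not who changed it):

```
from datetime import datetime, timezone
from ldap3 import Server, Connection, NTLM, SUBTREE

# Placeholders: point these at one DC and the domain's base DN.
server = Server("dc01.example.local")
conn = Connection(server, user="EXAMPLE\\svc-audit", password="***",
                  authentication=NTLM, auto_bind=True)

# Ask for users/groups whose whenChanged is newer than the last poll,
# using AD's generalized time format (YYYYMMDDHHMMSS.0Z).
last_poll = datetime(2024, 1, 1, tzinfo=timezone.utc).strftime("%Y%m%d%H%M%S.0Z")
ldap_filter = ("(&(|(objectClass=user)(objectClass=group))"
               f"(whenChanged>={last_poll}))")

conn.search("DC=example,DC=local", ldap_filter, SUBTREE,
            attributes=["distinguishedName", "whenChanged", "objectClass"])

for entry in conn.entries:
    print(entry.distinguishedName, entry.whenChanged)
```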
I've scoured the internet for a solution that might work for us and spent several days experimenting and building prototypes, to no avail. It seems the expectation for every possible solution is to turn on the auditing features and simply hope that your DCs don't immediately max out on resources.
If we deployed a test DC and turned on auditing for evaluation purposes, I could potentially come up with a solution to track changes over time, but we wouldn't be able to assess the real-world impact of turning certain auditing features on, because the test DC wouldn't see the same traffic as our production domain controllers.
The solution I'm looking for should have a low impact on the performance of our domain controllers and provide a way to store data about Active Directory object changes so that it can later be displayed in one or more reports.

Sync two offline masters when network available

I have a use case where I need to set up two physical stations at a venue. Each station will be running a couple of app servers and a MongoDB server.
I can't rely on the venue's internet access, so I need my app to be able to work offline and "sync" the DBs every once in a while.
I initially thought about having two masters that would somehow sync with a remote one, but I've since learned that master-master replication is not possible with MongoDB.
I've read about the active-active approach; however, that won't let me write to a different shard while offline.
I'm running out of ideas; any recommendation would be greatly appreciated.
------ Update on what I'm trying to achieve:
I'm working with a venue that has two entrances. The idea is to capture some information from people attending the events (name, email, etc.). After they register, we will print a name tag with some of that info.
Everything sounds pretty easy; however, if possible, I would like not to rely on the venue's network (internet). That's where I started struggling to figure out the best approach. I guess what I want is to have a remote Mongo instance, but if the network goes down, to somehow keep saving records locally and send them to the remote Mongo instance when the network is available again.
Extra considerations:
- Events last a couple of days, and some people lose their name tag overnight; they should be able to go to either entrance and get it reprinted. So we need to be able to find someone's info even if they registered at entrance A but are asking for a reprint at entrance B.
More questions:
- Am I overthinking it? Maybe the venue's network plus a 4G/LTE modem as a backup would be enough? I would prefer not to rely on it, though.
I believe you're overthinking things. Here's what I would do if faced with a similar situation:
From the description, it doesn't sound like the two sites need to be connected in real time at all. I would create a server at entrance A, another at entrance B, and consolidate their data at the end of each day if required. This is because:
It's unlikely that one person will register at both sites within a single day. If they lose their tag that same day, just tell them to go back to where they registered and get it reprinted there. Worst case, you'll create a duplicate entry (it should be obvious which one is the duplicate, since nobody loses their tag within seconds), but I would not anticipate hundreds of people losing their tags in a day.
If an attendee loses their tag overnight, both servers will have been synced by then and either entrance should be able to reprint it.
If you're concerned about the venue's Wi-Fi, just run cables from the server to the printing stations.
Personally, I would argue that the overnight sync is not really needed at all (see the point above about how unlikely it is that someone registers twice). I would just collect the data from both servers after the event ends, unless you have a specific need for the combined data from both entrances during the second day.
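If you do consolidate, the merge itself can be tiny. For example, a sketch with pymongo that copies entrance B's registrations into entrance A, keyed on email; the hostnames, database/collection names, and the choice of email as the key are assumptions:

```
from pymongo import MongoClient

# Hostnames are placeholders for the two entrance servers.
client_a = MongoClient("mongodb://entrance-a.local:27017")
client_b = MongoClient("mongodb://entrance-b.local:27017")

attendees_a = client_a["registration"]["attendees"]
attendees_b = client_b["registration"]["attendees"]

# Copy every record from entrance B into entrance A, keyed on email,
# so a reprint request at either entrance finds the attendee.
for doc in attendees_b.find():
    doc.pop("_id", None)  # let entrance A keep or assign its own _id
    attendees_a.replace_one({"email": doc["email"]}, doc, upsert=True)
```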
Note: please make sure you're running at least a 3-node replica set. Running a standalone instance in a production environment is not recommended; hardware/disk corruption is a common event.

Routing using OSRM for multiple profiles - does profile in the URL actually do anything?

With OSRM there are three profiles for different modes of transport: cycle, foot, and car. These come with OSRM.
According to the following post, made a year ago, OSRM does not support multiple profiles:
OSM routing (OSRM): do I need to duplicate all data for different profiles?
Yet the official documentation lists a profile argument as part of the URL used to retrieve a route from a running OSRM instance:
http://project-osrm.org/docs/v5.6.4/api/#general-options
The path would look something like this:
http://router.project-osrm.org/route/v1/driving/
Without driving, foot, or cycle in the URL a route won't be returned, so one of them is required by the API. Yet if I prepare the data for car on the server but then use /foot/ in the URL to retrieve a route, it still returns a car-based route, completely ignoring 'foot'.
Could anybody from OSRM explain why something as useful as multiple-profile support has been withdrawn, and what the point of 'driving' is in the above URL, seeing as it is ignored anyway and the API just appears to use the profile attached to the running OSRM instance?
The solution to the problem of multiple profiles appears to be to host parallel copies of the routing machine, one per profile, addressed at different IPs, so again, what is the point of 'profile' in the URL?
Could anybody from OSRM explain why something as useful as multiple profile support has been withdrawn
The support has never been there; you will need to run a separate OSRM instance for each profile.
The URL option is merely there to make it easier to put an nginx in front of your OSRM instances and dispatch to the correct instance based on the profile string.
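To make the dispatch concrete: whatever sits in front (nginx, or even the client itself) just maps the profile segment to the instance that was prepared with that profile. A client-side sketch in Python; the ports, profile names, and the assumption that each instance was built with the matching Lua profile are placeholders:

```
import requests

# One OSRM instance per profile, each prepared with its own profile.
OSRM_BACKENDS = {
    "driving": "http://localhost:5000",
    "cycling": "http://localhost:5001",
    "foot": "http://localhost:5002",
}

def route(profile, coords):
    """Query the OSRM instance that was actually built with `profile`."""
    base = OSRM_BACKENDS[profile]
    coord_str = ";".join(f"{lon},{lat}" for lon, lat in coords)
    url = f"{base}/route/v1/{profile}/{coord_str}"
    return requests.get(url, params={"overview": "false"}).json()

print(route("driving", [(13.388860, 52.517037), (13.397634, 52.529407)]))
```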
We might implement multiple profiles in the same OSRM instance in the future, but this is still far out.

PowerShell - Add mailbox account to Outlook

I have googled tons for this but with no success; maybe I just have the wrong approach?
Problem:
We work with migrating organizations from on-premises Exchange to Office 365 and vice versa. As a service, we also log in to all user computers and do the initial "add existing mailbox" steps. Since we rely on Autodiscover, this takes a lot of time; with bad bandwidth it can take up to 15 minutes per user.
Our goal:
Create a PowerShell script into which we can put the settings that are normally fetched by Autodiscover and quickly add a new, existing mailbox to the computer, so that the next time users log in they can just start Outlook and they are signed in.
I hope I made myself understandable, please ask if anything is unclear.
Thanks in advance!
Edit:
Maybe there is a way to go through the initial Outlook setup via a PSSession? Then the time Autodiscover takes wouldn't matter, since we could do all the setup remotely and unattended.