In Bluemix's VM Service, I've been creating instances using floating IPs from the Public-Network. Most of the time when I delete an instance, the IP is released. However, after a few weeks of testing, I see that 3 of my 11 floating IPs are still allocated. Is there a way to deallocate them? To reiterate: I do not currently have any instances.
This looks like the old bug here: https://bugs.launchpad.net/nova/+bug/997763, which was supposedly fixed in Folsom.
To fix your issue, go to the Access & Security tab --> Floating IPs in the Horizon dashboard, select all of the floating IPs, and release them.
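If you prefer to do this outside the dashboard, here is a minimal sketch using the OpenStack SDK for Python. It assumes your credentials are available through the usual OS_* environment variables (or a clouds.yaml entry) for the Bluemix/OpenStack endpoint, and that floating IPs are managed by Neutron; if your deployment still uses nova-network, the equivalent nova CLI commands apply instead.

```python
# Minimal sketch: release every floating IP that is no longer attached to an
# instance. Assumes openstacksdk is installed and credentials come from the
# environment (OS_AUTH_URL, OS_USERNAME, ...) or clouds.yaml.
import openstack

conn = openstack.connect()

for fip in conn.network.ips():            # list all floating IPs in the project
    if fip.port_id is None:               # not associated with any instance
        print(f"Releasing {fip.floating_ip_address}")
        conn.network.delete_ip(fip)
```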
I'm in the situation where I had nothing on this domain and nothing was started on either side; I just bought the domain from the wrong service.
I imagine it's possible to transfer ownership to AWS, so that I may start managing the DNS from there rather than from dreamhost.
I probably could have purchased the domain from route 53 in the first place, but that's done now, and I don't want to wait for the year under dreamhost to run out before I start using it. Nor do I want to use dreamhost to manage this URL, since dreamhost charges quite a lot more.
I've found the Amazon guide that covers my exact situation, but as usual with these guides, it avoids concrete examples and gets lost in abstractions, reusing the same terminology for different meanings, resulting in an unusable jumble of uncertainties: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-inactive.html
So I've gotten to:
Step 3: Create records (inactive domains)
I've just manually edited the values that route 53 created by default when I set up the hosted zone, changing them to the ones I found in the dreamhost DNS configuration, but I doubt that's what I have to do to transfer the domain, especially since the step after that basically says to change them back to what they were.
So what exactly am I supposed to do in order to transfer the domain to Amazon (route 53)?
Domain registration and DNS resolution are related, but separate entities. It seems like you decided you want route53 to serve your DNS entries. Given that, you have two choices.
Choice 1: Keep domain registered with dreamhost
If you do this, you need to instruct dreamhost to look up DNS entries for your domain at route53. This can be accomplished by setting the NS servers on dreamhost to point to route53. There are detailed instructions for this at AWS here. What you have in your step 3 is backwards. Step 3 is just saying that if you want HOST.yourdomain.com, you add an entry 'HOST' into the hosted zone. You should not change the NS or SOA entries on the route53 hosted zone from their original settings. You can simply delete the zone and start over again.
Background: Dreamhost will populate the NS entries by default, and those are the servers that will be queried to resolve HOST.yourdomain.com. However, if you don't give dreamhost any information that it should refer requests to route53, it has no way of knowing that. You need to tell dreamhost that the NS (nameserver) entries should point to route53's servers. That way, a user trying to resolve HOST.yourdomain.com will be pointed to route53, and when route53 is asked for the IP, all will be well as long as you set up your hosted zone to resolve that entry. This is what you are going to do in step 4 of the AWS documentation.
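As an illustration of what goes into the hosted zone, here is a minimal sketch using boto3 (the AWS SDK for Python). The hosted zone ID, record name, and IP address are placeholders, not values from the question.

```python
# Minimal sketch: add (or update) an A record for HOST.yourdomain.com in an
# existing Route 53 hosted zone. All identifiers and addresses below are
# placeholders -- substitute your own from the Route 53 console.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                      # hypothetical hosted zone ID
    ChangeBatch={
        "Comment": "Point HOST.yourdomain.com at a web server",
        "Changes": [
            {
                "Action": "UPSERT",                # create or replace the record
                "ResourceRecordSet": {
                    "Name": "HOST.yourdomain.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ],
    },
)
```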
Choice 2: Transfer your domain registration to route53
This is a little more up front work, but may be easier in the long run. You are permitted to transfer the domain to another domain registrar. You'll have to follow instructions at both the giving side (dreamhost) and the gaining side (route53).
NOTE: ICANN does enforce a 60 day lock on moves. If you just registered your domain, you will need to wait 60 days before the transfer process can begin. Also, do not worry about 'double paying' for the year. You are required to purchase at least one more year of domain registration, but it will be appended to the end date of your expiration (it won't start it over). Once you move to route53, especially if you already are using route53 for the hosted zone, you will have one less place to pay and administer.
Additional NOTE: Because of the 60-day lock, if it has been less than 60 days since you registered the domain, choice #1 is the only choice during that period if you want to serve DNS records from route53.
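If you go the transfer route and want to check eligibility programmatically rather than through the console, here is a minimal sketch using boto3's Route 53 Domains API. The domain name and the EPP/auth code (which you would obtain from dreamhost) are placeholders; the transfer itself is usually easiest to initiate from the Route 53 console.

```python
# Minimal sketch: check whether a domain can currently be transferred to
# Route 53 (e.g. that it is not still inside the 60-day lock). The domain name
# and auth code are placeholders. The Route 53 Domains API lives in us-east-1.
import boto3

domains = boto3.client("route53domains", region_name="us-east-1")

resp = domains.check_domain_transferability(
    DomainName="yourdomain.com",
    AuthCode="EPP-code-from-dreamhost",    # the transfer/EPP code from the current registrar
)
print(resp["Transferability"]["Transferable"])   # e.g. TRANSFERABLE or UNTRANSFERABLE
```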
I am unable to access Google Cloud DNS page.
All it shows is:
"DNS API is being enabled. This may take a minute or more."
Then it reloads and repeats showing the same message.
The API is already enabled, and the records I created work. There is no problem with the DNS itself.
I need to modify records, but I can't because of this problem.
I tried opening the page in different computers and different browsers without addons, same result.
If there is a better place to ask, please do tell.
Thank you.
You should be able to access the page regardless of what computer / browser you're using.
If you cannot, it's either a temporary outage (which you can check here) or a bug.
The only thing to do here is to contact paid support for more immediate help, or, if you can afford the time, report this on Google's IssueTracker and get help for free - however, it may take a few days. It is possible that only you are affected. Please describe the issue in as much detail as possible - this will expedite the process.
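In the meantime, since the API itself is enabled and working, you can manage the records directly through the Cloud DNS API instead of the console. Here is a minimal sketch using the google-cloud-dns Python client; the project ID, zone name, and record values are placeholders.

```python
# Minimal sketch: modify records through the Cloud DNS API while the console
# page is misbehaving. Assumes the google-cloud-dns client library and
# application-default credentials; project, zone, and record values are placeholders.
from google.cloud import dns

client = dns.Client(project="my-project-id")
zone = client.zone("my-zone-name")        # the managed zone's name, not the domain

# Submit one change set that adds an A record. To replace an existing record,
# delete the old record set and add the new one inside the same change.
change = zone.changes()
change.add_record_set(
    zone.resource_record_set("www.example.com.", "A", 300, ["203.0.113.10"])
)
change.create()                           # poll change.status if you want to wait for it
```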
I have MongoDB running on three EC2 instances and have created a replica set. Last time I had a problem, a space constraint stopped my mongod process and thereby halted the application. Then, a couple of days back, some of my tables were gone from the database, so I set up logging on my database just to catch it if anything like that happened again. In a fresh incident this morning, I was unable to log in to my system, and that's when I found out that the whole database was empty. I checked other SO questions like this one, which suggest setting up a TTL, which I haven't done at all.
Now how do I debug this situation and do a proper root cause analysis? I can't find anything in my debug logs either. The tables just vanished. How do I set up a proper logging mechanism, and how do I ensure that my tables are never deleted again?
Today I got an email from Amazon saying that I was probably running an unsecured version of MongoDB and that this may have caused the issue. So, whoever is facing this issue, please go through the Security Checklist provided by MongoDB. Some of the points in there are absolutely necessary:
1. Enable Access Control and Enforce Authentication
2. Encrypt Communication
3. Limit Network Exposure
These three are the core; depending on how many people access your database, you can also configure Role-Based Access Control.
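For the first point, here is a minimal sketch with pymongo of creating an administrative user so that authentication can then be enforced. The host, user name, and password are placeholders; after creating the user, mongod has to be restarted with authorization enabled (--auth or security.authorization: enabled), and network exposure should be limited with bind_ip and security groups.

```python
# Minimal sketch: create an admin user so access control can be enforced.
# Host, user name, and password are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # connect before auth is enforced

client.admin.command(
    "createUser",
    "siteAdmin",
    pwd="choose-a-strong-password",
    roles=[{"role": "userAdminAnyDatabase", "db": "admin"}],
)

# After restarting mongod with authorization enabled, connect with credentials:
# MongoClient("mongodb://siteAdmin:choose-a-strong-password@localhost:27017/admin")
```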
These are all things I have done. Before this incident I had not taken security that seriously, but after being hit by it, I made sure I had all the necessary precautions in place.
Hope this helps someone.
We keep static files (images, javascript, and css) for our websites stored in a Google Storage bucket with different folders for different types of resources. Each file is accessed via its name coupled with a custom subdomain mapped via a CNAME record to the appropriate Google Storage bucket.
This approach has worked fine. Today, however, when attempting to access our main website in Chrome's incognito (private browsing) mode, none of the pages on the site would load. After some detective work, we determined that the problem is with the files stored at Google Storage, which are not loading.
Unfortunately, this doesn't seem to be a problem specific to Google Chrome. It occurs in the private browsing modes in Firefox and Internet Explorer as well (at least on the Windows 8.1 Professional platform we're using for testing).
The problem appears to occur only if we use the CNAME-based approach for accessing a file. For example, if this method is used in a private browser window to access one of our image files on Google Storage,
Image of a crowd on Google Storage - direct access to Google Storage
the file can be viewed without a problem. If, on the other hand, the file is viewed in a private browsing window using the CNAME approach, like this
Image of a crowd on Google Storage - access via CNAME
the image will not load.
What's worse, for reasons we don't completely understand, once this problem occurs in a private browsing window, it continues to interfere with proper viewing of the website in regular (non-private) browser windows in some browsers.
Has anyone encountered this problem and, if so, found a solution for it?
Thanks in advance for any tips or suggestions.
UPDATE (2015-05-26)
This problem is still under investigation. It may be ISP-specific, although our ISP (Verizon) believes it is a problem on Google's end. An attempt to resolve the problem yesterday by tweaking some DNS settings seemed to solve the problem, but that was only temporary. We began to experience the problem again today. I will update this posting further as more information becomes available.
ADDITIONAL UPDATE (2016-08-25)
(Note: I originally wrote this update on 2015-05-26, but failed to post it, and discovered it today. I'm adding it to complete the description of the issue.)
This issue has been resolved. I cannot say for certain what the source of the problem was, but I can give further information on what exactly the nature of the problem was and what may have solved it.
As I mentioned in the comments below, this appears to have been an issue that was relatively isolated. Further investigation revealed that the problem was occurring only with access to the particular subdomain through Verizon Internet service (land-based or mobile) in the U.S. I do not know if the problem was a regional problem within the Verizon system, or throughout the entire Verizon system. But I do know it affected both landline and mobile access using Verizon.
The problem also evolved. What started as a problem accessing files at the subdomain in a browser's incognito mode became a problem regardless of which browsing mode was used. That said, it was only a problem when loading files from the subdomain in a browser; the files could be retrieved without a problem using, for example, wget. Pinging the subdomain also worked fine over the Verizon network.
As the problem became more acute, I decided to do a thorough check of the DNS settings related to the subdomain. Here is where I discovered what may have been causing the problem. There was a slight discrepancy between the DNS settings at the domain registrar and the (separate) DNS service that we use.
The discrepancy didn't lead to conflicting reports as to how the subdomain should be resolved (which is probably why this problem hadn't occurred in the past). But, if I recall correctly, it led to the DNS service providing the CNAME record for the subdomain, without the registrar's DNS information fully confirming that the DNS service had the right to provide that information.
This discrepancy was corrected. Within an hour or two, the problem resolved itself -- anyone viewing the file using the two links above should be successful with both links.
I cannot say for certain, however, whether the change to the DNS settings we made to resolve the discrepancy, or some updating at Verizon, was responsible for the problem being resolved. I will say, however, that I never reported the issue to Verizon. (I didn't get that far.)
Although the DNS discrepancy had existed for more than a year or two, and had not created any problems that we were aware of, I personally think it is what caused the problem.
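For anyone troubleshooting a similar CNAME issue, here is a minimal diagnostic sketch, assuming the dnspython library, that checks whether the subdomain's CNAME resolves consistently from more than one public resolver. The subdomain shown is a placeholder; for a Google Storage bucket the target is normally c.storage.googleapis.com.

```python
# Minimal sketch: compare how different public resolvers answer the CNAME query
# for the static-content subdomain. The subdomain is a placeholder.
import dns.resolver

for nameserver in ["8.8.8.8", "1.1.1.1"]:            # Google and Cloudflare public resolvers
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [nameserver]
    answer = resolver.resolve("static.example.com", "CNAME")   # use .query() on dnspython < 2.0
    for record in answer:
        print(nameserver, "->", record.target)
```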
I am hosting a service on Windows Azure and using an external DNS (Gandi) to manage zone files. What I would like is to redirect all the incoming Azure traffic to another web page, while I am updating the service (like we'll be back soon).
I know that I can do this by updating the zone file, but it takes time to propagate, and then more time to switch back to normal, so a 1-hour update ends up taking 6-8 hours.
This is not good. Is there any other way to redirect traffic with IMMEDIATE effect, without waiting for DNS to propagate? Adding the redirection inside the code is not really an option, because Azure service packages take an eternity to upload.
Thanks
Not sure what your full requirement here is, but could VIP swap do the trick for you?
You would deploy the alternate 'we'll be back' site to the staging slot, and when you wish to update the service you'd use VIP swap to make that the production deployment while you update your service, which is now on staging with the latest build.
When that's ready, you can VIP swap back and the new site is in production.
Edited to add:
I take your point regarding wanting to leave the staging slot for rollbacks, makes perfect sense.
Another option could be to use Traffic Manager:
Have your main application in one cloud service and your temporary landing page in another.
Configure a Traffic Manager failover policy with both services (main one first, alternative second).
When you want to go into 'maintenance mode', disable traffic to the main service and all traffic will get routed to the 'maintenance mode' one. There is some lag in propagating the change, but it is measured in minutes rather than hours (when I played with it a little just now); there's a DNS time-to-live setting available to you, which defaults to 5 minutes.
When you're ready to come back online, re-enable the main site (and you can choose to remove the 'maintenance mode' deployment when all is working).
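This answer predates the current Azure SDKs, but as a rough modern equivalent, here is a heavily hedged sketch of toggling a Traffic Manager endpoint with the azure-mgmt-trafficmanager Python package. The resource group, profile, and endpoint names are placeholders, and the exact client and method usage is an assumption based on the ARM SDK rather than anything from the original answer.

```python
# Rough sketch (assumption: azure-mgmt-trafficmanager and azure-identity are
# installed and the ARM endpoint update behaves as described). Disables the
# main endpoint so the failover 'maintenance mode' endpoint receives traffic.
# Resource group, profile, and endpoint names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Endpoint

client = TrafficManagerManagementClient(DefaultAzureCredential(), "my-subscription-id")

client.endpoints.update(
    "my-resource-group",
    "my-traffic-manager-profile",
    "AzureEndpoints",                       # endpoint type
    "main-site-endpoint",
    Endpoint(endpoint_status="Disabled"),   # set back to "Enabled" to come online again
)
```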
There's a feature in ASP.NET to quickly stop serving content. If you put a file called app_offline.htm in the root of the site, it will serve that instead of regular content. There are details in this blog post.
Using it in Azure may be difficult. I would suggest using VIP Swap, but I see you don't want to do that. You could remote into each VM and manually add the file, though that could be painful. It may be possible to script it, but I don't know an easy way to do so.