I recently signed up for Bluemix, and now I see two console URLs when I try to log in, which is confusing me.
Can someone explain the difference between these?
https://console.eu-gb.bluemix.net/
and
https://console.ng.bluemix.net/
I can see that all the test applications I created are part of "https://console.ng.bluemix.net/",
though Bluemix also lets me create applications in the "eu-gb" domain.
Bluemix has different hosting regions and you can create apps and services in them. Right now there are two hosting regions (data centers) for the public Bluemix. Hence the two consoles you are seeing.
Try switching to the "eu-gb" region, create a service there, and notice that the second console shows a service, too. :)
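Each region is essentially its own Cloud Foundry deployment with its own API endpoint, so an app only shows up in the console of the region it was pushed to. As a rough illustration (the api.ng.bluemix.net and api.eu-gb.bluemix.net hosts are assumed here; verify yours with `cf api`), this sketch queries the standard, unauthenticated /v2/info endpoint of each region:

```python
import requests

# Assumed Cloud Foundry API endpoints behind the two public Bluemix consoles.
REGIONS = {
    "US South (ng)": "https://api.ng.bluemix.net",
    "United Kingdom (eu-gb)": "https://api.eu-gb.bluemix.net",
}

for name, api in REGIONS.items():
    # /v2/info is the standard unauthenticated Cloud Foundry info endpoint;
    # each region answers with its own endpoints, showing it is a separate installation.
    info = requests.get(api + "/v2/info", timeout=10).json()
    print(name, "->", info.get("authorization_endpoint"))
```

Whichever region's API you are targeting when you push is the region (and console) the app will live in.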
I am using an EC2 instance as a backend database server that receives open listings for an Airbnb-type site. I've checked on my own browsers and phones, and had others in other regions check on theirs as well, and the listings load perfectly fine for us. There is one person in another region, however, who is not seeing any listings at all and instead gets a Failed to load resource: net::ERR_CONNECTION_RESET error. I even had them clear their cache in Chrome, but that did not help. Below are photos depicting the situation:
(Screenshots attached to the original post: what I see and what should show up, the blank listings the affected user gets, the errors they receive, and the inbound settings for my security group.)
I'm thinking it may be a firewall issue, but I'm just not sure. Any help would be greatly appreciated, thank you!
I would suggest you check the security group your machine is in, since it's possible that you're not allowing traffic to reach it.
It's possible that your own IP is allowed through to the API but other IPs are not.
I've tried to reach your page from my place and it times out as well, which points to the security group.
If possible, share a screenshot of the security group set up on your machine; that will help diagnose further.
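If the group does turn out to be the culprit, the fix is an inbound rule that allows the listing traffic from any address rather than only from specific IPs. A minimal sketch with boto3 (the region, group ID, and port are placeholders; use whatever port your backend actually listens on):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

GROUP_ID = "sg-0123456789abcdef0"  # placeholder: your security group ID

# Inspect the current inbound rules.
group = ec2.describe_security_groups(GroupIds=[GROUP_ID])["SecurityGroups"][0]
for perm in group["IpPermissions"]:
    print(perm.get("IpProtocol"), perm.get("FromPort"), perm.get("ToPort"),
          [r["CidrIp"] for r in perm.get("IpRanges", [])])

# Open the listing API's port to all IPv4 addresses (443 assumed here).
ec2.authorize_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```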
I am not quite sure how to name this question, but I will list the expectations below to explain it.
Having an application running on Bluemix.
Having code on local.
Push through git to Bluemix.
Restart the application for new code to take effect.
So the question is:
In the above situation, if I want to avoid downtime while the server is restarting (which could be long if it hits unexpected issues), how can the website keep serving requests through the Bluemix server? Should I have a shadow server? How do I manage them so they know when to switch (automatically or manually) so that visitors won't notice the downtime? Many thanks.
You need to do a blue-green deployment. Here is an example article: http://garage.mybluemix.net/posts/blue-green-deployment/
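The idea is to push the new code as a second, parallel app and move the public route over only once the new app is up, so there is never a window with nothing serving traffic. A rough sketch driving the cf CLI from a script (app names, hostname, and domain are placeholders):

```python
import subprocess

DOMAIN = "mybluemix.net"  # placeholder domain
HOST = "myapp"            # placeholder: the public hostname users hit

def cf(*args):
    subprocess.run(["cf", *args], check=True)

# 1. Push the new code as a separate "green" app on a temporary route.
cf("push", "myapp-green", "-n", "myapp-green")

# 2. Once verified, map the real route onto green as well;
#    both old and new apps now receive traffic, so there is no gap.
cf("map-route", "myapp-green", DOMAIN, "-n", HOST)

# 3. Remove the route from the old "blue" app and retire it.
cf("unmap-route", "myapp-blue", DOMAIN, "-n", HOST)
cf("stop", "myapp-blue")
```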
I'm creating an iOS app that requires the user to log in at startup and then uses those credentials to query 4-5 different services on a server over the course of the session.
The server (xyz) itself doesn't accept the credentials, but the services it provides do. For example, https://xyz/service1 works, https://xyz doesn't.
Now what I'm wondering is whether anything stands in the way of creating 4-5 NSURLProtectionSpaces at login, one for each service on the server, and then using the corresponding protection space when calling each service?
Or is there a better way of implementing something that could work in this situation?
All help would be appreciated.
Turns out there is nothing that stands in the way of creating multiple NSURLProtectionSpaces, since each is created for a separate URL.
We have a few servers that have different roles. For instance, we have production servers and testing/staging servers. We have a few end users who forget to switch paths to production once things are tested and approved; they use the new paths for a bit, then at some point revert to using testing/staging for reasons we can't fathom other than stupidity. We still want to be able to get a glimpse into our staging environment after pushing a build into production, but we want to stop them from being able to hit those servers/services.
We are now pondering some solutions to this problem. One is to never give them the direct staging URL. An idea would be to create a virtual directory, or a set of domain aliases that we could give them and then shut down while still allowing ourselves access to those endpoints. We could restrict our main staging domain to the office IP range so they never have direct access, and call it good.
Does this sound like a good solution? Is our process wrong, are there better routes?
I am interested in solutions for websites as well as web services where visuals can't be used effectively.
We've run into this at my work as well, quite recently in fact. One thing I thought about, other than the virtual directory, was setting up specific ports for them to test on, then either taking the ports down or changing them for our internal use only.
Well, without details on how your application is deployed it's hard to give concrete examples. One wonderful solution is to get better users :P A more practical solution, however, is to let your production boxes move a certain set of users (as decided in your code) to your test/staging systems. I.e. the user always connects to production, but at connect/auth time the production machines may decide these people are too cool for production and let them run the test/staging code instead.
It's not a foolproof method of course, but letting a certain set of users into different parts of the codebase works for many, many websites.
I don't know how feasible this would be for you, but it's a possibility perhaps.
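A rough sketch of that connect/auth-time decision (Flask is used purely for illustration; the user list and staging URL are placeholders, not anything from the original setup):

```python
from flask import Flask, redirect, session

app = Flask(__name__)
app.secret_key = "replace-me"  # placeholder

# Placeholder: the set of users your code decides should see test/staging.
STAGING_USERS = {"alice", "bob"}
STAGING_URL = "https://staging.example.com"  # placeholder

@app.before_request
def route_test_users():
    # Everyone connects to production; at request time production decides
    # whether to hand this user over to the test/staging system instead.
    user = session.get("username")
    if user in STAGING_USERS:
        return redirect(STAGING_URL, code=302)
```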
I find that users sometimes have difficulties with URLs and don't like subtle changes like a port number in the address.
The best approach I've found is to have the application tell the user what environment they are in.
For example, my teams have used absolutely positioned headers or footers, color-coded for the Dev/Staging environments, that show the application version number with an alpha/beta tag, along with a message that says "Work done on this site will be lost; use Production (link) to keep your work." Typically we make the Dev area red and the Staging area yellow. We also like to put a link to the bug-tracking system right in this area.
On production there is not usually a region like this. However, we do sometimes provide positive reinforcement by placing a green region with the app version and a Production tag in it, and then fading the green region away after a few seconds. This helps keep the app front and center, but lets the user know they are in the right place.
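One way to wire this up is to drive the banner from a single environment variable, so the same codebase labels itself correctly wherever it happens to be deployed. A minimal sketch (Flask again purely for illustration; the APP_ENV variable name and the wording are assumptions):

```python
import os
from flask import Flask

app = Flask(__name__)

# Assumed convention: APP_ENV is set per deployment (dev / staging / production).
APP_ENV = os.environ.get("APP_ENV", "dev")

BANNERS = {
    "dev":     ("red",    "DEV - work done here will be lost; use Production to keep your work"),
    "staging": ("yellow", "STAGING - work done here will be lost; use Production to keep your work"),
}

@app.context_processor
def inject_env_banner():
    # Templates can render env_banner as an absolutely positioned header/footer.
    color, text = BANNERS.get(APP_ENV, (None, None))
    return {"env_banner": {"color": color, "text": text} if color else None}
```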
We have an internal web system that handles the majority of our company's business. Hundreds of users use it throughout the day; it's very high priority and must always be running. We're looking at moving to ASP.NET MVC 2; at the moment we use Web Forms. The beauty of Web Forms is that we can instantly release a single web page, as opposed to deploying the entire application.
I'm interested to know how others are deploying their applications whilst still making them accessible to the user. Using the deployment tool in Visual Studio would supposedly cause a halt. I'm looking for a method that's super quick.
If you had high-priority bug fixes, for example, would it be wise to mix Web Forms with MVC and replace the view with a code-behind web form until you make the next proper release, which isn't a web form?
I've also seen other solutions on the same server: having the same web application run side by side and either changing the root directory in IIS or changing the web.config to point to a different folder. The problem with this is that you have to do an entire build and deploy even for a simple bug fix.
EDIT: To elaborate, how do you deploy the application without causing any disruption to users?
How is everyone else doing it?
I guess you could also run the MVC application uncompiled and just replace .cs files/views and such on the fly?
A Web Setup uninstall/install is very quick, but it kills the application pool, which might cause problems depending on how your site is built.
The smoothest way is to run it on two servers and store the sessions in SQL Server or shared state. Then you can bring S1 down and patch it, bring S1 back up again, then bring S2 down, patch it, and bring it up again. Although this might not work if you make any major changes to the session-related parts of the code.
Have multiple instances of your website running on multiple servers. The best way to do it is to have a production environment, a test environment, and a development environment. You can create test cases and run the load every time you have a new build; if it gets through all the tests, move that version into production ;)
You could have two physical servers, each running IIS and hosting a copy of the site, OR you could run two copies of the site under different IIS endpoints on the SAME server.
Either way you cut it you are going to need at least two copies of the site in production.
I call this an A<->B switch method.
Firstly, have each production site on a different IP address. In your company's DNS, add an entry pointing to one of the IPs and give it a really short TTL. Then you can update site B and also pre-test/warm up the site by hitting its IP address directly. When it's ready to go, switch the DNS entry to site B. Once the TTL has expired you can take down site A and update it.
Using shared session state will help minimise the disruption as users transition between sites.
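If the DNS zone lives somewhere with an API, the A<->B flip can be scripted so the cut-over is a single low-TTL record change. A sketch using Route 53 purely as an example (the zone ID, record name, and IP are placeholders; any DNS provider with an API works the same way):

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000"  # placeholder: your hosted zone
RECORD_NAME = "app.example.com."   # placeholder: the name users hit
SITE_B_IP = "203.0.113.20"         # placeholder: IP of the freshly updated site B

# Flip the record from site A to site B; the short TTL means clients
# pick up the change quickly once site B has been warmed up and verified.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "A<->B switch to site B",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": SITE_B_IP}],
            },
        }],
    },
)
```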