I am noticing latency in REST data the first time I visit a web site that is served via Azure Mobile Services. Is there a cache, or does the connection time out after a set amount of time? I am worried about the user experience while waiting 7-8 seconds for the data to load (and there is not a lot of data; I am testing with 10 records returned). Once the first connection is made, subsequent visits appear to load quickly... but if I don't visit the site for a while, we are back to 7-8 seconds on first load.
Reason: The cause of this latency is the "shared" mode. When the first call to the service is made, it performs a "cold start" (initializing and starting the virtual server, etc.).
As you described in your question, after the service has not been used for a while, it is put into "sleep mode" again.
Solution: If you do not want this waiting time, you can set your service to "reserved" mode, which forces the service to stay active at all times, even when you do not access it for a while. But be aware that this requires you to pay extra fees.
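If the extra cost of reserved mode is a problem, a common workaround is to keep the instance warm with a periodic ping from another machine, so it never sits idle long enough to be put to sleep. A minimal sketch (the URL and the 5-minute interval are placeholders, not values from the question):

' Keep-alive sketch: periodically hit any lightweight endpoint of the service
' so a shared-mode instance is never idle long enough to go to sleep.
' The URL and interval are placeholders.
Imports System
Imports System.Net.Http
Imports System.Threading

Module KeepAlive
    Sub Main()
        Dim client As New HttpClient()
        While True
            Try
                Dim response = client.GetAsync("https://yourservice.azure-mobile.net/").Result
                Console.WriteLine("Ping: " & response.StatusCode.ToString())
            Catch ex As Exception
                Console.WriteLine("Ping failed: " & ex.Message)
            End Try
            Thread.Sleep(TimeSpan.FromMinutes(5))
        End While
    End Sub
End Module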
I've used Fabric for quite a few applications; however, I was curious about the performance when a single application submits potentially hundreds of events per minute.
For this example I'm going to use a pedometer application, in which I would want to keep track of the number of steps users take in my application. Considering the average user walks 100 steps per minute, I wouldn't want the application to send several dozen updates to the server.
How would Fabric handle this? Would it just tell the server "Hey, there were 273 step events in the last 5 minutes with this metadata", or would it send 273 separate step events?
Pedometer applications typically run in the background, so how would we get data to Fabric without the user opening the application?
Great question! Todd from Fabric here. These get batched and sent at time intervals, and certain events (like installs) also trigger an upload of the queued event data. You can watch our traffic in the Xcode debugger if you are curious about the specifics for your app.
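If you also want to cut down the number of events handed to the analytics SDK in the first place, you can aggregate on the client and log one summary event per interval, much like the "273 step events in the last 5 minutes" idea in the question. A rough sketch of that pattern (this is not the Fabric API; StepAggregator and LogCustomEvent are made-up names):

' Client-side aggregation sketch (hypothetical; not the Fabric SDK): count
' steps locally and flush one summary event per interval instead of logging
' every individual step.
Imports System
Imports System.Threading

Public Class StepAggregator
    Private _steps As Integer = 0
    Private ReadOnly _lock As New Object()
    Private ReadOnly _timer As Timer

    Public Sub New(flushInterval As TimeSpan)
        _timer = New Timer(AddressOf Flush, Nothing, flushInterval, flushInterval)
    End Sub

    ' Called once per detected step; cheap, no network traffic.
    Public Sub RecordStep()
        SyncLock _lock
            _steps += 1
        End SyncLock
    End Sub

    ' Fired by the timer: one analytics call per interval, e.g.
    ' "273 steps in the last 5 minutes".
    Private Sub Flush(state As Object)
        Dim count As Integer
        SyncLock _lock
            count = _steps
            _steps = 0
        End SyncLock
        If count > 0 Then
            LogCustomEvent("steps", count) ' stand-in for your SDK's custom-event call
        End If
    End Sub

    Private Sub LogCustomEvent(name As String, value As Integer)
        Console.WriteLine("event=" & name & " count=" & value)
    End Sub
End Class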
We have a mail sender application that receives a bunch of mails in one blob and then puts all those mails into the database. This can take up to ten minutes. During this process the state of the mailing is BUILDING.
When it is finished, the state gets changed to READY.
When the server crashes (which shouldn't happen, of course) and restarts, it looks for all mailings with status BUILDING and marks them as ERROR. This happens because we never want to send incomplete mailings.
Now we'd like to scale up using a second server. The recovery strategy above doesn't work here.
E.g., server 1 is BUILDING a mailing, and server 2 crashes and restarts. Now server 2 will see the BUILDING mailing and doesn't know whether it has been aborted or is still running on another server.
So what's the best recovery strategy for distributed services?
(We thought about a timeout mechanism, where the BUILDING server updates a timestamp every few seconds, and when a server reboots it checks whether there is a BUILDING mailing that hasn't been updated for x minutes. If so, it is very likely that this mailing has been aborted.)
EDIT:
What I'd like to achieve: if a server restarts (after a crash, or just because we added a new mailing server to the cluster), it should not mark a mailing as ERROR if that mailing is actually still being built (by another server).
Nice to have: it would be ideal if this worked without having to store server IDs, because then servers could easily be added and/or removed. Otherwise it would not be possible to completely remove a server: there might still be a BUILDING mailing carrying that server's ID, but since the server has been removed and will never be started again, the only server that could set that mailing to ERROR is gone.
Add two things to your state tracking: a timestamp and the ID of the server working on the mailing.
If a server starts up and sees anything in a BUILDING state under its own ID, it knows that work failed. Conversely, if it starts up and sees something in a BUILDING state under another server's ID, it now has information that it will need to look at later to see if there is a problem that needs to be addressed. You also need to worry about multiple servers restarting at the same time, so you can't just have one server grab all old mailings for all servers at startup.
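A minimal sketch combining the timestamp idea from the question with the server ID from this answer (the mailing table and its status, server_id and heartbeat_at columns are made up for illustration; real code should use bind parameters rather than string concatenation):

' Heartbeat-based recovery sketch. Assumes _conn is an open database
' connection; table and column names are hypothetical.
Imports System.Data.Common

Public Class MailingRecovery
    Private ReadOnly _conn As DbConnection
    Private ReadOnly _serverId As String

    Public Sub New(conn As DbConnection, serverId As String)
        _conn = conn
        _serverId = serverId
    End Sub

    ' Called every few seconds by the server that is building a mailing.
    Public Sub Heartbeat(mailingId As Integer)
        Exec("UPDATE mailing SET heartbeat_at = CURRENT_TIMESTAMP " &
             "WHERE id = " & mailingId & " AND status = 'BUILDING'")
    End Sub

    ' Called once on startup, before accepting new work.
    Public Sub RecoverOnStartup()
        ' Anything this server was building when it died is definitely dead.
        Exec("UPDATE mailing SET status = 'ERROR' " &
             "WHERE status = 'BUILDING' AND server_id = '" & _serverId & "'")
        ' A BUILDING row with a stale heartbeat was very likely aborted, no
        ' matter which server owned it. (Date arithmetic syntax varies by DB.)
        Exec("UPDATE mailing SET status = 'ERROR' " &
             "WHERE status = 'BUILDING' " &
             "AND heartbeat_at < CURRENT_TIMESTAMP - INTERVAL '5' MINUTE")
    End Sub

    Private Sub Exec(sql As String)
        Using cmd = _conn.CreateCommand()
            cmd.CommandText = sql
            cmd.ExecuteNonQuery()
        End Using
    End Sub
End Class

Note that the stale-heartbeat rule alone also covers the "nice to have": if every restarting server only reaps BUILDING rows whose heartbeat has gone stale, server IDs are not strictly required and servers can be added or removed freely.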
Or you can just use a clustering service for your OS.
I'm working on a tracking site that tracks a player's levels for a game from day to day.
It's going to have to process around 25,000 API calls once a day. I'd like to be able to get this done in 1 hour, but I would be okay with processing them all in 2 hours for now.
This is the API I would need to call for each player in my database to get their information: http://hiscore.runescape.com/index_lite.ws?player=Zezima
My site and database are hosted on a VPS.
My thought on how to achieve this is to spin up a handful of DigitalOcean VPS instances when it's time to make the API calls, and have my main VPS distribute the API calls across the DO instances, which will make the calls and insert the results back into my database.
Parallelization is your friend here. Pool your queue listeners and have them run on a machine with adequate CPU and memory.
How fast is your process? Completing 25,000 transactions in one hour means roughly 7 per second. Do you have timing data to help guide the number of instances you'll need?
I'm assuming that your database will allow simultaneous INSERTs. You don't want those getting in each other's way.
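As a rough sketch of that kind of pooling (the concurrency level, player list, and SaveResult are placeholders; tune the numbers against real timing data), a single machine running a throttled set of concurrent requests can comfortably exceed 7 calls per second, provided the hiscore API and your database keep up:

' Throttled parallel fetch sketch. MaxConcurrency, the player list and
' SaveResult are placeholders, not measured recommendations.
Imports System
Imports System.Linq
Imports System.Net.Http
Imports System.Threading
Imports System.Threading.Tasks

Module HiscoreFetcher
    Private ReadOnly Client As New HttpClient()
    Private Const MaxConcurrency As Integer = 20

    Sub Main()
        Dim players As String() = {"Zezima", "PlayerTwo", "PlayerThree"} ' ~25,000 names in reality
        Dim gate As New SemaphoreSlim(MaxConcurrency)

        Dim work = players.Select(Async Function(name)
                                      Await gate.WaitAsync()
                                      Try
                                          Dim url = "http://hiscore.runescape.com/index_lite.ws?player=" & Uri.EscapeDataString(name)
                                          Dim body = Await Client.GetStringAsync(url)
                                          SaveResult(name, body)
                                      Catch ex As Exception
                                          Console.WriteLine("Failed for " & name & ": " & ex.Message)
                                      Finally
                                          gate.Release()
                                      End Try
                                  End Function)
        Task.WhenAll(work).Wait()
    End Sub

    Private Sub SaveResult(name As String, csv As String)
        ' Placeholder: INSERT the hiscore CSV for this player into your database.
        Console.WriteLine(name & ": " & csv.Substring(0, Math.Min(40, csv.Length)))
    End Sub
End Module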
I am hosting a service on Windows Azure and using an external DNS provider (Gandi) to manage the zone files. What I would like is to redirect all incoming Azure traffic to another web page (like "we'll be back soon") while I am updating the service.
I know that I can do this by updating the zone file, but it takes time to propagate, then more time to switch back to normal. So a 1-hour update ends up taking 6-8 hours.
This is not good. Is there any other way to redirect traffic with IMMEDIATE effect, without waiting for DNS to propagate? Adding the redirection inside the code is not really an option, because Azure service packages take an eternity to upload.
Thanks
Not sure what your full requirement here is, but could a VIP swap do the trick for you?
You would deploy the alternate 'we'll be back' site to the staging slot, and when you wish to update the service you'd use a VIP swap to make it the production deployment while you update your actual service, which is now in staging, with the latest build.
When that's ready, you can VIP swap back and the new site is in production.
Edited to add:
I take your point regarding wanting to leave the staging slot for rollbacks, makes perfect sense.
Another option could be to use the Traffic Manager:
- Have your main application in one cloud service and your temporary landing page in another.
- Configure a Traffic Manager failover policy with both services (the main one first, the alternative second).
- When you want to go into 'maintenance mode', disable traffic to the main service and all traffic will get routed to the 'maintenance mode' one. There is some lag in propagating the change, but it is measured in minutes rather than hours (from playing with it a little just now); there's a DNS time-to-live setting available to you, which defaults to 5 minutes.
- When you're ready to come back online, re-enable the main site (and you can choose to remove the 'maintenance mode' deployment when all is working).
There's a feature in ASP.NET to quickly stop serving content. If you put a file called app_offline.htm in the root of the site, it will serve that instead of regular content. There are details in this blog post.
Using it in Azure may be difficult. I would suggest using VIP Swap, but I see you don't want to do that. You could remote into each VM and manually add the file, though that could be painful. It may be possible to script it, but I don't know an easy way to do so.
Our .NET web app uses ODP.NET for its connections, and the Oracle user it uses to connect to the database is "webuserOracle". The web app always closes and disposes its connections.
However, on our Oracle 10g database, we see that the session and process counts for the Oracle user "webuserOracle" are always high, as if the sessions never close or die.
We have decided to set up an Oracle profile for "webuserOracle" in order to limit the connect time to 5 minutes:
CREATE PROFILE profile_webuserOracle LIMIT CONNECT_TIME 5; -- CONNECT_TIME is specified in minutes
ALTER USER webuserOracle PROFILE profile_webuserOracle;    -- profile resource limits are only enforced when RESOURCE_LIMIT = TRUE
Question:
For a web app, limiting the connection to 5 minutes means that the user could still interact with the web app for, say, 2 hours. The 5-minute limit applies only to events that are triggered (like clicking a button) to connect to the database, i.e. to everything that happens between con.Open and con.Dispose:
Dim con As OracleConnection = oraConexion()
con.Open()
''' There'll be a limit of 5 minutes to run the code here
con.Close()
con.Dispose()
Setting a CONNECT_TIME in a profile for a web application is likely to be a very bad idea.
First off, a three-tier application is generally going to make use of connection pools in the middle tier. That means that the middle tier servers open a pool of connections to the database that remain open for a long period of time and get handed out to web sessions as needed. A single web user who is clicking around the site is therefore likely to get a different database session with each click, and a single database session will be used by a very large number of web users.
If you set a CONNECT_TIME for your connection pool connections:
- The middle tier is likely to constantly get errors that the particular connection it obtained from the connection pool has exceeded its allowed connection time. You can mitigate some of that by having the middle tier execute a dummy query (i.e. select 1 from dual) on every connection that it gets from the pool to verify that the 5 minutes haven't elapsed before the interaction starts (a sketch of this check follows this list), but there is no guarantee that the timeout won't be reached when you run the first query on the page.
- The middle tier will constantly be opening new physical connections to the database (a rather expensive process) to replace the connections that have been closed because they've been open for 5 minutes. Those connection storms are likely to put a substantial load on the database. They will also create performance problems for the application, as users are constantly waiting for new physical connections to be opened rather than being able to reuse connections from the pool.
- The number of sessions and processes is likely to be much higher if you make this change. The middle tier is going to maintain however many real physical connections it needs to service the users, plus a number of expired connections that have to remain open simply to inform the next caller that they are expired.
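For reference, here is roughly what that dummy-query check would look like with ODP.NET (a sketch only; oraConexion() is the helper from the question, and the three-attempt retry is an arbitrary choice):

' Sketch: validate a pooled connection before use. If CONNECT_TIME has already
' expired for the pooled session, the dummy query throws, so we discard that
' connection and ask the pool for another one.
' Assumes Imports Oracle.DataAccess.Client (or Oracle.ManagedDataAccess.Client).
Function GetValidConnection() As OracleConnection
    For attempt As Integer = 1 To 3
        Dim con As OracleConnection = oraConexion()
        con.Open()
        Try
            Using cmd = con.CreateCommand()
                cmd.CommandText = "select 1 from dual"
                cmd.ExecuteScalar()
            End Using
            Return con ' survived the check; hand it to the caller
        Catch ex As OracleException
            con.Dispose() ' expired pooled connection; try a fresh one
        End Try
    Next
    Throw New InvalidOperationException("Could not obtain a valid connection")
End Function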
What is the real problem you are trying to solve? It is perfectly normal and healthy for the middle tier to maintain a pool of database connections that do not close. If you want to reduce the number of connections that are open at any one time, adjust the connection pool settings on your middle tier servers instead.
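For example, with ODP.NET the pool is controlled from the connection string rather than from a database profile. A hedged illustration (the numbers are made up, not recommendations):

' Illustrative ODP.NET connection string. Connection Lifetime (in seconds)
' retires a connection when it is returned to the pool after exceeding its
' lifetime, which shrinks the session count without killing in-use connections.
Dim connStr As String =
    "User Id=webuserOracle;Password=...;Data Source=mydb;" &
    "Pooling=true;Min Pool Size=5;Max Pool Size=40;" &
    "Connection Lifetime=120;Decr Pool Size=2"
Dim con As New OracleConnection(connStr)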