I've recently discovered the Lift framework and have read that it's stateful.
So if I had a high-traffic site running on Lift, say one running a chat application that required users to be logged in, and I wanted to upgrade my app, would doing so kick everyone out of chat and make them log in again?
None of the previous answers are correct. Many of the artefacts held within the LiftSession are non-serializable, so they can't be stuffed into a database. You have two options for doing rolling upgrades of stateful applications:
1) Session bleeding. Basically you wean sessions away from one of the deployments until its sessions have ended or X duration passes, then you remove that instance from production while automatically rerouting traffic to another instance of Lift. Google around for rolling upgrades using HAProxy, as this should help you from the cluster perspective; a sketch follows below.
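For illustration, a minimal HAProxy sketch of that setup (server names and addresses hypothetical): cookie-based stickiness pins each user to one Lift instance, and putting a server into drain state bleeds its sessions off before you upgrade it.

```
# haproxy.cfg (hypothetical): sticky sessions pin each user to one Lift instance
backend lift_app
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server lift1 10.0.0.1:8080 cookie s1 check
    server lift2 10.0.0.2:8080 cookie s2 check

# From a shell, stop sending new sessions to lift1 while existing sticky
# sessions keep working (drain state, HAProxy 1.6+):
#   echo "set server lift_app/lift1 state drain" | socat stdio /var/run/haproxy.sock
```

Once lift1's sessions have ended (or your X duration passes), take it down, upgrade, and re-enable it.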
2) If your state is fairly trivial (mostly primitive-style types: ints, strings, etc.) then you could think about using ContainerVar/MigratableSession and clustering the state using Terracotta or similar. This comes with a range of limitations, though, because it then uses the HTTPSession rather than the LiftSession.
You might want to check out chapter 15 of Lift in Action, which covers the latter solution in a fair amount of depth.
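A hedged sketch of what that second option looks like in code, assuming Lift's ContainerVar (the exact serializer implicits vary by Lift version):

```scala
import net.liftweb.http.ContainerVar

// Keep only simple, serializable values here so the servlet container's
// HTTPSession (clustered via Terracotta or similar) can replicate them
// across instances; complex Lift artefacts won't survive the trip.
object currentUserId extends ContainerVar[String]("")
object loginCount    extends ContainerVar[Int](0)
```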
If you keep your state in memory and redeploy the web application, that state will be lost. You could, however, save it to a database or a file before redeploying and read it back from there.
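For the simple case where the state really is serializable, a minimal sketch of that save-and-restore idea (names hypothetical):

```scala
import java.io._

// Dump serializable session state to a file before redeploying and read it
// back on startup. This only works when everything in the map really is
// java.io.Serializable, which the answer above notes is not true for much
// of what a LiftSession holds.
object StateStore {
  def save(state: Map[String, String], path: String): Unit = {
    val out = new ObjectOutputStream(new FileOutputStream(path))
    try out.writeObject(state) finally out.close()
  }

  def load(path: String): Map[String, String] = {
    val in = new ObjectInputStream(new FileInputStream(path))
    try in.readObject().asInstanceOf[Map[String, String]] finally in.close()
  }
}
```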
The server holds the logic; the iOS/Android app holds the UI. A common case.
How am I supposed to deploy new features in this case with a continuous deployment methodology?
I assume that a server-side deploy looks like this:
I trigger the new feature deployment, and the load balancer starts redirecting 1% of all users to the server instance with the new feature. If everything goes smoothly, the load balancer moves on to 10%, 30%, and so on up to 100%.
The same can be done for client apps using, say, CodePush.
So if I deploy the server without the app, the new feature simply won't be used, and the deployment itself certainly causes no problems.
So I probably have to deploy the app first and add some kind of server version check: if the server has the API for the new feature, the UI for that feature is shown, and if the app is connected to an older server, the new UI stays hidden.
That seems primitive. I'd need to pin the socket connection to the same server to avoid hitting the wrong one, right? And what if an instance/zone/region goes down and the user is suddenly redirected to another zone/region whose server doesn't have the new feature's API? My assumption is probably wrong.
So, how am I supposed to deploy new features in this case with a continuous deployment methodology?
I would say that your question is more about version compatibility of the server/client API than about CD. We have a similar requirement where a server and its clients communicate and both are constantly enhanced with features. I don't know your production software architecture, which might change the answer accordingly, but I'll try to come up with some ideas.
I'm going to describe two cases which might apply to you.
First case:
Things are easier when you do not face the situation that new client versions need to communicate with old server versions. The new server version is deployed first, and old clients simply do not use the new feature, as you've already pointed out. In this situation my recommendation is to deploy the server app first and then start to roll out the new client apps. If that's possible, I would do that. It applies only when the new feature doesn't force you to break the API.
Second case:
In the case that new client app versions need to talk to an old server app, which I would try to avoid at all costs, the new client needs a switch inside to deactivate a feature (e.g. feature B) when it's talking to an old server that doesn't support it. An API version counter could be the solution, but it requires the client to be able to distinguish between server versions. In REST you often see .../v1/... inside the URL, but it could be solved differently as well. Hopefully the API provides some mechanism to get the version the server speaks; a sketch of such a switch follows below.
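A minimal sketch of that client-side switch (all names, endpoints, and version numbers hypothetical): the client asks the server which API version it speaks and gates the new UI on the answer.

```scala
// Hypothetical sketch of client-side feature gating by server API version.
case class ServerInfo(apiVersion: Int)

object FeatureGate {
  // Assumed: feature B first appears in API version 3.
  val FeatureBMinVersion = 3

  // Stubbed out; a real client would call something like GET /api/version.
  def fetchServerInfo(): ServerInfo = ServerInfo(apiVersion = 2)

  // Show the new UI only when the server we're talking to supports it.
  def featureBEnabled: Boolean = fetchServerInfo().apiVersion >= FeatureBMinVersion
}
```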
We faced both cases at the same time; the protocol changed over time, including breaking changes, so we needed to implement an API version negotiation mechanism.
I have a fairly small Service Fabric application that I'm building, and since I converted to Service Fabric I've been annoyed by the slow startup time, not only after a release but also after 10-15 minutes of inactivity.
I have added a project whose sole purpose is to go to each service and make a small DB request every 10 s, thinking that will keep the application and EF running (roughly the idea sketched below). This stopped the timeouts, and now the first requests are in the 5-15 s range. After some warming up the requests are usually in the 300 ms range, so they are quite simple requests, and there isn't much communication between the services (4 services in total).
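For reference, the keep-alive idea in code; the thread's stack is .NET, but the idea is language-agnostic, so it's shown here in Scala with hypothetical endpoints.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.util.concurrent.{Executors, TimeUnit}

// Ping each service every 10 s so connection pools, JIT-compiled code paths
// and ORM model caches stay warm between real requests.
object WarmUpPinger {
  private val client = HttpClient.newHttpClient()
  private val serviceUrls = Seq(
    "http://localhost:8081/health", // hypothetical endpoints
    "http://localhost:8082/health"
  )

  def start(): Unit = {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    scheduler.scheduleAtFixedRate(() => ping(), 0, 10, TimeUnit.SECONDS)
  }

  private val NoBody = HttpResponse.BodyHandlers.discarding()

  private def ping(): Unit =
    serviceUrls.foreach { url =>
      val req = HttpRequest.newBuilder(URI.create(url)).GET().build()
      try client.send(req, NoBody)
      catch { case _: Exception => () } // a failed ping is fine; the next tick retries
    }
}
```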
After a lot of searching I found a profiler that seems to work, as most don't (like the one in Visual Studio). Unfortunately it didn't really say much, except that it waits on threads a lot and that the waiting doesn't seem to be in my code. All my external requests use async/await. Also, when following a request, it seemed like information was missing...
At first I thought the slowness might come from EF generating the search query, so I migrated that part to use Dapper instead (the full request still uses some EF), but that didn't really change anything.
The application has all the latest Service Fabric, .NET Core, EF Core and Application Insights packages. All services except the one validating tokens are stateless. And of course it's built in release mode.
At this point I'm rather lost, as I cannot find the reason it's so slow. In the old days this was usually because IIS shut down or recycled the application, but now that IIS isn't in the picture, what can it be?
A similar issue happened to us; however, we use a DI container, and until the first call to our service no dependencies are resolved, and it takes time to create those instances, for example class singletons. Another one was the EF DB context. To overcome that, we have a process to "warm" the services first.
Hope that helps
This might be a shot in the dark: are your services communicating using the Service Fabric remoting options or using HTTP? In the case of HTTP, might the hibernation and warm-up time be caused by HTTP.sys/Kestrel?
Regarding your slow responses: 300 ms does seem a bit odd. We have multiple stateless services (using HTTP and Kestrel) with EF behind them, and we see sub-50 ms response times.
I have a webapp using the Scala-based Scalatra web framework. The problem is, any time the application is redeployed, or any time the app server is rebooted, all session data is lost. This means (to name one downside) users must log in again every time we make an update to the site.
Some research reveals there are, apparently, "container-specific" ways to make sessions persist across app and server reboots (e.g., in the case of Tomcat), but this has two shortcomings:
If the app is not always deployed in the same container (and in the case of Scalatra, an embedded Jetty is used for dev purposes) then I'll need separate configuration for each container.
Using a server-local configuration file is much more fickle -- it's likely to get lost in server migrations, and it won't be automatically available to each instance (e.g., to each developer) of the app, whereas something stored with the core application code is much easier to test, retain, and generally keep track of.
So, to sum up...
Is there a generic, container-neutral way to make sessions persistent? Even if only by overriding appropriate methods in the Java/Servlet stack and storing the session data manually, something like the sketch after these questions?
Barring that, is there a way to store relevant configuration for multiple containers (e.g., for both Jetty and Tomcat) in my application code (web.xml or similar)?
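To make question 1 concrete, here is a hypothetical sketch of the "store it manually" idea: a servlet filter that rebuilds the session from a token cookie backed by your own database, so nothing depends on the container keeping sessions alive (all names are made up).

```scala
import javax.servlet.{Filter, FilterChain, FilterConfig, ServletRequest, ServletResponse}
import javax.servlet.http.HttpServletRequest

// Hypothetical stand-in for a DAO that maps persistent login tokens to users.
object TokenStore {
  def lookup(token: String): Option[String] = None // a real version queries your DB
}

// On each request, if the container session is empty (fresh deploy, reboot),
// re-establish it from the "auth_token" cookie instead of forcing a re-login.
class PersistentSessionFilter extends Filter {
  override def init(config: FilterConfig): Unit = ()
  override def destroy(): Unit = ()

  override def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit = {
    val httpReq = req.asInstanceOf[HttpServletRequest]
    val session = httpReq.getSession(true)
    if (session.getAttribute("userId") == null) {
      for {
        cookies <- Option(httpReq.getCookies)
        cookie  <- cookies.find(_.getName == "auth_token")
        userId  <- TokenStore.lookup(cookie.getValue)
      } session.setAttribute("userId", userId)
    }
    chain.doFilter(req, res)
  }
}
```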
Thanks -- any insights appreciated!
I'm working on an iPhone application that should work in offline and online modes.
In its online mode it's supposed to feed all the information the user enters to a web service backed by GWT/GAE.
In its offline mode it's supposed to store the information locally and, when a connection is available, sync it up to the web service.
Currently my plan is as follows:
Provide a connection between the app and the web service, using Protocol Buffers for efficient over-the-wire communication
Work with the local DB using Core Data
Poll the network status and, when it's available, sync the database, keeping some sort of local-DB-to-remote-DB key synchronization (roughly as in the sketch below)
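To illustrate that last point, a hypothetical key-mapping sketch (shown in Scala for brevity; the real client would be Objective-C/Core Data):

```scala
import scala.collection.mutable

// Records created offline get temporary local IDs; after a sync, the server's
// assigned IDs are recorded so later requests can be rewritten to use them.
class KeyMapper {
  private val localToRemote = mutable.Map.empty[String, Long]

  def onSynced(localId: String, remoteId: Long): Unit =
    localToRemote(localId) = remoteId

  // Right(remoteId) when the server already knows the record,
  // Left(localId) when it still has to be created server-side.
  def resolve(localId: String): Either[String, Long] =
    localToRemote.get(localId).toRight(localId)
}
```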
The question is: am I heading in the right direction? Are there standard patterns for implementing this? Maybe someone can point me to an open-source application that works in a similar fashion?
I am really new to iPhone coding, and would be very glad to hear any suggestions.
Thanks
I think you're blurring the questions together.
If you've got a question about making a GWT web interface, that's one question.
Questions about how to sync an iPhone to a web service are a different question. For that, you don't want to use GWT's RPCs for syncing, as you'd have to fake out the 'browser-side' of the serialization system in your iPhone code, which GWT normally provides for you.
About the system design direction:
First, if there is no REAL need, do not create two different apps, one GWT and the other iPhone.
Create one well-written GWT app. It will work offline no problem and can manage your data using an HTML feature -- the offline application cache.
If it is a must to create two separate apps,
then at least save yourself the effort and do not write the server twice. If you go with the standard GWT approach, you will almost certainly fail to talk to the server from a standalone app (it is zipped JSON over HTTP with some tricky headers...) or will write things twice. So look into the Restlet library; it is well supported by GAE.
About the way to keep things in sync with offline/online switching:
There are several approaches to consider, and none of them are perfect. So when you consider yours, think of what the user expects... Do not be Microsoft Word; do not try to outsmart the user.
If there is at least one scenario in the use cases that demands user intervention to merge changes (and there will be -- take it to the bank), then you will have to implement a UI for it, and then there is a good reason to use it often: the user will get used to it. That is better than the user first seeing it a long while after starting to use the app, because the need for it is rare thanks to some super-duper merging logic that asks the user only in very special cases... Don't do that.
Balance the effort, because the mess that a bug in such code introduces for the user is much more painful than all the benefits together.
So, the HOW:
One way is the do/undo way.
While offline, keep a log of the actions the user performed on the data, in the timed order the user performed them.
As soon as you are connected, send them to the server and execute them. Same from server to client.
This will work fine in most cases, as long as you are not writing a Photoshop kind of software with huge amounts of data per operation. It is also referred to as the Command pattern by the Gang of Four.
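A minimal sketch of that action log (all types hypothetical): record each mutation with a timestamp while offline, then replay the log in order once reconnected.

```scala
// Each offline mutation becomes a timestamped entry in the log.
sealed trait Action { def at: Long }
case class AddNote(id: String, text: String, at: Long) extends Action
case class DeleteNote(id: String, at: Long) extends Action

class OfflineLog {
  private var log = Vector.empty[Action]

  def record(a: Action): Unit = log :+= a

  // Once connectivity returns, replay actions oldest-first and clear the log.
  // sendToServer is an assumed transport function.
  def replay(sendToServer: Action => Unit): Unit = {
    log.sortBy(_.at).foreach(sendToServer)
    log = Vector.empty
  }
}
```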
Another way is the source control way: versions and maybe even locks. Very application dependent; DBMSs sometimes use it internally for transaction implementations.
And there is always the option to be read-only when offline :-)
I wonder if you have considered using a sync framework to manage the synchronization. If that interests you, you can take a look at the open source project OpenMobster's Sync service. You can do the following sync operations:
two-way
one-way client
one-way device
bootup
Besides that, all modifications are automatically tracked and synced with the cloud. You can have your app offline when the network connection is down; it will track any changes and automatically synchronize them with the cloud in the background when the connection returns. It also provides iCloud-like synchronization across multiple devices.
Also, modifications in the cloud are synced using push notifications, so the data is always current even though it is stored locally.
Here is a link to the open source project: http://openmobster.googlecode.com
Here is a link to iPhone App Sync: http://code.google.com/p/openmobster/wiki/iPhoneSyncApp
We have an internal web system that handles the majority of our company's business. Hundreds of users use it throughout the day; it's very high priority and must always be running. We're looking at moving to ASP.NET MVC 2; at the moment we use Web Forms. The beauty of Web Forms is that we can instantaneously release a single web page, as opposed to deploying the entire application.
I'm interested to know how others deploy their applications while still keeping them accessible to users. Using the deployment tool in Visual Studio would supposedly cause a halt. I'm looking for a method that's super quick.
If you had high-priority bug fixes, for example, would it be wise to mix Web Forms with MVC and temporarily replace the view with a code-behind Web Form until you make the next proper release, which isn't a Web Form?
I've also seen other solutions on the same server: having the same web application run side by side and either changing the root directory in IIS or changing the web.config to point to a different folder. The problem with this is that you have to do an entire build and deploy even for a simple bug fix.
EDIT: To elaborate, how do you deploy the application without causing any disruption to users?
How is everyone else doing it?
I guess you could also run the MVC application uncompiled and just replace .cs files/views and such on the fly.
A Web Setup uninstall/install is very quick, but it kills the application pool, which might cause problems, depending on how your site is built.
The smoothest way is to run it on two servers and store the sessions in SQL Server or shared state. Then you can just bring S1 down and patch it, bring S1 back up again, bring S2 down, patch S2, and then bring it up again. Although this might not work if you make any major changes to the session-related parts of the code.
Have multiple instances of your website running on multiple servers. The best way to do it is to have a production environment, a test environment, and a development environment. You can create test cases and run the load every time you have a new build; if it gets through all the tests, move the version into production ;).
You could have two physical servers, each running IIS and hosting a copy of the site. Or you could run two copies of the site under different IIS endpoints on the SAME server.
Either way you cut it, you are going to need at least two copies of the site in production.
I call this an A<->B switch method.
Firstly, give each production site a different IP address. In your company's DNS, add an entry pointing to one of the IPs and give it a really short TTL. Then you can update site B and also pre-test/warm up the site by hitting its IP address directly. When it's ready to go, switch your DNS entry to site B. Once the TTL has expired you can take down site A and update it.
Using shared session state will help to smooth the transition of users between sites.