ASP.NET Storing global variables - accessible from every page - asp.net-3.5

I am building a large application and I usually use a simple session to store private global information. However, as the application could be rather large, I believe the amount of memory the sessions would consume could become a problem.
Is there a better way to store such variables?
For example, when the user logs in I want to store data about that user and display it where needed without having to query the database each time.

Sessions are the way to go here; they are intended to persist information about the current session across requests. There is no other object in the ASP.NET framework with that intention.
You could use the Cache, or store in the Application collection, but then the responsibility of uniquely identifying the individual session data is up to you.
What's also up to you is handling when the session terminates, and freeing up the instances that are stored in those collections (Cache or Application).
It's really a bad idea to start to ask these questions based on what you might "think" will happen. This is a form of premature optimization, and you should avoid it. Rather, use Sessions, as they were intended for this purpose, then measure where your bottlenecks are and address them, should performance be an issue when testing.
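As a minimal sketch of that pattern (the UserProfile class and the lblWelcome label are illustrative names, not from the question):
// A simple, serializable DTO holding the per-user data loaded once at login.
// (Serializable so it keeps working if you later switch to StateServer/SQLServer session modes.)
[Serializable]
public class UserProfile
{
    public int UserId { get; set; }
    public string DisplayName { get; set; }
}

// In the login page's code-behind, after the credentials have been validated:
Session["UserProfile"] = new UserProfile { UserId = 42, DisplayName = "Jane" };

// On any other page, read it back without another database query:
var profile = Session["UserProfile"] as UserProfile;
if (profile != null)
{
    lblWelcome.Text = "Welcome, " + profile.DisplayName;
}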

Use cookies; they work irrespective of your load-balanced environment.
Other options include:
1) Writing your session values to a SQL database. You can configure your ASP.NET app to use SQL Server for session state (a configuration sketch follows this list), but this has its own problems, as sessions never time out (so you need to handle that explicitly in code).
2) If you are not using SQL Server, you will basically face a problem once you have many users and implement load balancing on your web servers: a user can be routed to a different web server within the same session, and in-process session state will not work.
There is a workaround for this too, called sticky sessions, where the load balancer guarantees that a user always hits the same web server for the duration of the session.
3) With the .NET 2.0 provider model, you can even write your own session storage provider, so you could, for example, create your own XML files on the web server / a shared server and read/write session data there.
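As a rough sketch of option 1), the switch to SQL Server session state is made in web.config; the connection string below is a placeholder, and the session-state database is normally created beforehand with the aspnet_regsql.exe tool:
<system.web>
  <!-- Keep session state in SQL Server instead of in-process memory. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=MySessionDbServer;Integrated Security=SSPI;"
                cookieless="false"
                timeout="20" />
</system.web>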
So there are many ways to solve this. However, the simplest and most cost-effective solution is to use cookies.

You might use the Cache. It has a built-in mechanism to free up entries when memory is running out...

Definitely use cookies for this. The best approach is to write yourself a cookie wrapper class that does all the heavy lifting for you: checking whether the cookie is null, accessing the HttpContext, and so on. There is no need to clutter your code with all of that; just abstract it all out into Cookies.cs or .vb.
SetCookieValue(someValue, cookieName); //there will be some expiration concerns here as well
myValue = GetCookieValue(cookieName);
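A minimal sketch of such a wrapper, with method names simply mirroring the calls above (expiration, encoding and encryption are up to you):
using System;
using System.Web;

// Cookies.cs - small wrapper so pages never deal with HttpContext or null checks directly.
public static class Cookies
{
    public static void SetCookieValue(string value, string cookieName)
    {
        var cookie = new HttpCookie(cookieName, value)
        {
            Expires = DateTime.Now.AddDays(1), // expiration concerns live here, in one place
            HttpOnly = true                    // not readable from client-side script
        };
        HttpContext.Current.Response.Cookies.Set(cookie);
    }

    public static string GetCookieValue(string cookieName)
    {
        var cookie = HttpContext.Current.Request.Cookies[cookieName];
        return cookie == null ? null : cookie.Value;
    }
}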
Christian Weiss has a good strategy.

If you think your data is too large for the Session, I would consider a database of some sort, combined with caching, so that you don't make unnecessary calls.

If it is per-user-session data you're storing, using the ASP.NET Session is definitely your best bet. If you're mostly worried about memory usage, you can use the SQL Server session-state mode. The data has to live somewhere, and the choice of session mode depends on your environment and the usage patterns of your users.

Don't assume there will be a problem with the size of session state until you see such a problem and have tried to solve it. For instance, it's possible that, although the application as a whole may use a large amount of session state, any given user may not use that much in the course of a session.
It's also possible that changing from the default session state provider to the SQL provider or the state server provider would ease a memory issue.
You can use Cache, but Cache is application-wide. You would need to qualify Cache entries with the user id or session id: Cache[userID + ".MyCacheEntry"].
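A small sketch of that per-user Cache approach (the PerUserCache helper and the 20-minute sliding expiration are illustrative choices, not from the answer):
using System;
using System.Web;
using System.Web.Caching;

public static class PerUserCache
{
    // Store a value under a key qualified with the user id, with a sliding expiration
    // so idle entries age out (the Cache may still evict earlier under memory pressure).
    public static void Set(string userId, string entryName, object value)
    {
        HttpContext.Current.Cache.Insert(
            userId + "." + entryName,
            value,
            null,                        // no cache dependency
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(20));
    }

    // Returns null if the entry was never stored or has been evicted.
    public static object Get(string userId, string entryName)
    {
        return HttpContext.Current.Cache[userId + "." + entryName];
    }
}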
Do not, under any circumstances, use static variables to store this data. As suggested by your subject line, they are application-wide, not per-user.

Related

EF Core and caching of results

I'm working on a WebSocket application. When the client connects to the server, the WebSocket session gets one DbContext from dependency injection:
Services.AddDbContext<Db>
This DbContext stays the same for the whole WebSocket session. The problem is that the DbContext caches results. So if the WebSocket session is open for, say, two hours and it reads the same data twice while the data has been changed outside that DbContext, the DbContext will return invalid data (the cached result from the last query). There are several examples of how to avoid this, but it has to be done on every query. That is not really practical; somewhere in the code it might be forgotten and you have a chance of getting invalid data.
Is there someway to permanently disable caching?
I think you are trying to use Entity Framework in the wrong way. A DbContext is not supposed to be long-lived, and it is not a cache per se, although it does keep some data in memory for you.
In your case I would suggest you either:
Query the database every time, as you suggested (see the sketch at the end of this answer).
Or, even better:
Take advantage of a proper caching mechanism.
The decision of whether to use SQL Server or a caching mechanism depends on how long you want to keep the data and how often you query it. If the data is permanent and not queried very often, use SQL Server. If it lives for a couple of hours and is queried very often, caching is better.
As a caching mechanism you can use:
The default MemoryCache, but it has quite limited functionality and is restricted to the application level, so if you run multiple instances of your application this solution will not work out.
A distributed cache solution, like Redis, which supports a lot more functionality and to which you can connect many instances of your application.
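As a sketch of the first option (querying the database every time): one way to avoid the long-lived, change-tracking context in the WebSocket handler is to register a context factory and create a short-lived context per query. Db is the context type from the question; the TimesheetReader class, the Project entity/Projects DbSet, and the SQL Server provider are assumptions for illustration.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Registration, wherever AddDbContext<Db> is called today (assumes the SQL Server provider):
//   services.AddDbContextFactory<Db>(options => options.UseSqlServer(connectionString));

// In the WebSocket handler, create a fresh context per query so nothing is served
// from the change tracker of a context that has been open for hours.
public class TimesheetReader
{
    private readonly IDbContextFactory<Db> _factory;

    public TimesheetReader(IDbContextFactory<Db> factory)
    {
        _factory = factory;
    }

    public async Task<List<Project>> GetProjectsAsync()
    {
        using var db = _factory.CreateDbContext();
        return await db.Projects
                       .AsNoTracking()   // read-only query, no tracking needed
                       .ToListAsync();
    }
}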

Is there any value in using core data for iPhone apps?

Can people give me examples of why they would use Core Data in an application?
I ask this because most apps are just clients to a central server where an API of some sort gives you the information you need.
In my case I'm writing a timesheet application for a web app which has an API, and I'm debating whether there is any value in replicating my server's data structure in Core Data (SQLite), e.g.:
Project has many timesheets
employee has many timesheets
It seems to me that I can just connect to the API on every call for lists of projects or existing timesheets for example.
I realize that for some kind of offline mode you could store data locally in Core Data, but this creates far more problems, because you now have a big problem syncing that data back to the web server when you get a connection again (e.g. the project selected for a timesheet no longer exists).
Can any experienced developers shed some light on their experiences of when Core Data is the best-practice approach?
EDIT
I realise, of course, that there is value in local persistence, but the key-value storage of user defaults seems to cover most applications I can think of.
You shouldn't think of Core Data simply as an SQLite database. It's not JUST an SQLite database. Sure, SQLite is an option, but there are other options as well, such as in-memory and, as of iOS 5, a whole slew of custom data stores. The biggest benefit of Core Data is persistence, obviously. But even if you are using an in-memory data store, you get the benefits of a very well structured object graph, and all of the heavy lifting with regard to pulling information out of or putting information into the data store is handled by Core Data for you, without you necessarily needing to concern yourself with what is backing that data store.
Sure, today you don't care too much about persistence, so you could use an in-memory data store. What happens if tomorrow, or in a month, or a year, you decide to add a feature that would really benefit from persistence? With Core Data, you simply change or add a persistent data store, and all of your methods to get information out or in remain unchanged. The overhead of that sort of addition is minimal compared to accessing SQLite or some other data store directly. IMHO, that's the biggest benefit: abstraction. And, in essence, abstraction is one of the most powerful things behind OOP.
Granted, building the data model just for in-memory storage could be overkill for your app, depending on how involved the app is. But, just as a side note, you may want to consider which is faster: requesting information from your web service every time you want to perform some action, or requesting the information once, storing it in memory, and acting on that stored value for the remainder of the session. An in-memory data store wouldn't persist beyond that particular session.
Additionally, with CoreData you get a lot of other great features like saving, fetching, and undo-redo.
There are basically two kinds of apps. Those that provide you with local functionality (games, professional applications, navigation systems...) and those that grant access to a remote service.
Your app seems to be in the second category. If you access remote services, your users will want new or real-time data (you don't want to read two-week-old Facebook posts), but in some cases local caching makes sense (e.g. reading your mail when you're on the train with an unstable network).
I assume that the value of accessing cached entries when not connected to a network is pretty low for your customers (internal or external) compared to the importance of accessing real-time-data. So local storage might be not necessary at all.
If you don't have hundreds of entries in your timetable, "normal" serialization (NSCoding-protocol) might be enough. If you only access some "dashboard-data", you will be able to get along with simple request/response-caching (NSURLCache can do a lot of things...).
Core Data does make more sense if you have complex data structures which should be synchronized with a server. This adds a lot of synchronization logic to your project as well as complexity from Core Data integration (concurrency, thread-safety, in-app-conflicts...).
If you want to create a "client"-app with a server driven user experience, local storage is not necessary at all so my suggestion is: Keep it as simple as possible unless there is a real need for offline storage.
It's ideal if you want to store data locally on the phone.
Seriously though, if you can't see a need for it for your timesheet app, then don't worry about it and don't use it.
Solving the sync problems you would have with an "offline" mode comes down to the design of your app. For example: don't allow projects to be deleted. Why would you? Wouldn't you want to go back in time and look at previous data for particular projects? Instead, just put a marker on the project to show it as inactive, along with the date/time it was made inactive. If the data being synced from the device is for that project and predates the time it was marked inactive, then it's fine to sync. Otherwise display a message and the user will have to sort it out.
Whether you need to store some data locally or not depends purely on your application's design: whether it is a real program or a thin GUI client around your web service. Apart from an "offline" mode, the other reason to cache server data on the client side might be to take traffic load off your server. Just think about what it means for your server to send the whole timesheet data set to the client every time, versus only the changes. Yes, it means more implementation on both sides, but in some cases it has serious advantages.
EDIT: example added
Say you have 1000 records per user in your timesheet application and one record is circa 1 KB. In this case, every time a user starts your application, it has to fetch ~1 MB of data from your server. If you cache the data locally, the server can tell you that, say, two records were updated since your last sync, so you only have to download 2 KB. Now scale this up to several tens of thousands of users and you will immediately notice the difference in server bandwidth and CPU usage.

Best way to store dynamic data on iOS App from Web Service

I want to know the best way to store data on the iPhone from a web service.
I want the information to be stored on the device so the person doesn't need to access the web service every time he/she needs it. Currently the information isn't much: fewer than 150 records. The records might be updated from time to time and a few new ones will be added. What is the best way to go about storing the data?
Thanks
If you use ASIHTTPRequest for your network stuff (and if you don't already, I can't sing its praises highly enough), you will find it has a cache layer built in which is perfect for situations like this.
You can activate it with a simple one-liner:
[ASIHTTPRequest setDefaultCache:[ASIDownloadCache sharedCache]];
And you have full control over the cache policy etc - just read the documentation.
The other simple approach of course is - on the assumption that your web service is returning JSON or XML - simply to store the response in a local file against a hash of the request parameters, then when you request the data again, you can first look to see if the file exists and if it does, return that data rather than going back to the website. You can roll your own cache policies etc too.
Since I discovered ASIHTTPRequest had a cache though, I've not needed to roll my own again.
I find that using Core Data or SQLite3 is just overkill for 99% of my requirements, and a simple cache works very well.
If the data is relational, a Sqlite3 database would be the best storage option you have.
Also, this helps by allowing you to retrieve from the server and to update only the records that have changed, thus saving time and bandwidth.
This is also the best option from a scalability point of view, as you stated that the "current information isn't much", giving the impression that this is only the current situation and may well change, probably towards more records being added over time.
SQLite3 also gives you more control and better performance than using, for instance, Core Data. Here's an article explaining some of the details. Moreover, if you work through an Objective-C wrapper, such as FMDB, you get all the advantages without managing the complexity yourself.

Core Data with Web Services recommended pattern?

I am writing an app for iOS that uses data provided by a web service. I am using core data for local storage and persistence of the data, so that some core set of the data is available to the user if the web is not reachable.
In building this app, I've been reading lots of posts about core data. While there seems to be lots out there on the mechanics of doing this, I've seen less on the general principles/patterns for this.
I am wondering if there are some good references out there for a recommended interaction model.
For example, the user will be able to create new objects in the app. Let's say the user creates a new employee object: the user will typically create it, update it, and then save it. I've seen recommendations that push each of these steps to the server (when the user creates it, and when the user changes its fields), with a delete sent to the server if the user cancels at the end. Another recommendation for the same operation is to keep everything local, and only send the complete update to the server when the user saves.
This example aside, I am curious whether there are general recommendations/patterns on how to handle CRUD operations and ensure they are synced between the web server and Core Data.
Thanks much.
I think the best approach in the case you mention is to store data only locally until the point the user commits the adding of the new record. Sending every field edit to the server is somewhat excessive.
A general idiom of iPhone apps is that there isn't such a thing as "Save". The user generally will expect things to be committed at some sensible point, but it isn't presented to the user as saving per se.
So, for example, imagine you have a UI that lets the user edit some sort of record that will be saved to local Core Data and also sent to the server. At the point the user exits the UI for creating a new record, they will perhaps hit a button called "Done" (N.B. not usually called "Save"). At the point they hit "Done", you'll want to kick off a Core Data write and also start a push to the remote server. The server push won't necessarily hog the UI or make them wait until it completes (it's nicer to allow them to continue using the app), but it is happening. If the update push to the server fails, you might want to signal it to the user or do something appropriate.
A good question to ask yourself when planning the granularity of writes to core data and/or a remote server is: what would happen if the app crashed out, or the phone ran out of power, at any particular spots in the app? How much loss of data could possibly occur? Good apps lower the risk of data loss and can re-launch in a very similar state to what they were previously in after being exited for whatever reason.
Be prepared to tear your hair out quite a bit. I've been working on this, and the problem is that the Core Data samples are quite simple. The minute you move to a complex model and you try to use the NSFetchedResultsController and its delegate, you bump into all sorts of problems with using multiple contexts.
I use one context to populate data from the web service in a background "block", and a second for the table view to use; you'll most likely end up using a table view for a master list and a detail view.
Brush up on using blocks in Cocoa if you want to keep your app responsive whilst receiving or sending data to/from a server.
You might want to read about 'transactions' - which is basically the grouping of multiple actions/changes as a single atomic action/change. This helps avoid partial saves that might result in inconsistent data on server.
Ultimately, this is a very big topic - especially if server data is shared across multiple clients. At the simplest, you would want to decide on basic policies. Does last save win? Is there some notion of remotely held locks on objects in server data store? How is conflict resolved, when two clients are, say, editing the same property of the same object?
With respect to how things are done on the iPhone, I would agree with occulus that "Done" provides a natural point for persisting changes to server (in a separate thread).

C# ASMX web service semi-permanent storage requirement

I'm writing a mock of a third-party web service to allow us to develop and test our application.
I have a requirement to emulate functionality that allows the user to submit data, and then at some point in the future retrieve the results of processing on the service. What I need to do is persist the submitted data somewhere, and retrieve it later (not in the same session). What I'd like to do is persist the data to a database (simplest solution), but the environment that will host the mock service doesn't allow for that.
I tried using IsolatedStorage (application-scoped), but this doesn't seem to work in my case. I'm using the following to get the store:
IsolatedStorageFile.GetStore(IsolatedStorageScope.Application |
IsolatedStorageScope.Assembly, null, null);
I guess my question is (bearing in mind the fact that I understand the limitations of IsolatedStorage) how would I go about getting this to work? If there is no consistent way to do it, I guess I'll have to fall back to persisting to a specific file location on the filesystem, and all the pain of permission setting that entails in our environment.
Self-answer.
For the purposes of dev and test, I realised it would be easiest to limit the lifetime of the persisted objects, and use
HttpRuntime.Cache
to store the objects. This has just enough flexibility to cope with my situation.
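A minimal sketch of what that looks like in the mock service; the service class, method names and the two-hour expiration are illustrative, not part of the original answer:
using System;
using System.Web;
using System.Web.Caching;
using System.Web.Services;

public class MockProcessingService : WebService
{
    // Accept the submitted data and remember it for a couple of hours so a later
    // request (from a different session) can retrieve the "processing result".
    [WebMethod]
    public string Submit(string payload)
    {
        string ticket = Guid.NewGuid().ToString();
        HttpRuntime.Cache.Insert(
            ticket,
            payload,
            null,                         // no cache dependency
            DateTime.Now.AddHours(2),     // absolute expiration: good enough for dev/test
            Cache.NoSlidingExpiration);
        return ticket;
    }

    [WebMethod]
    public string GetResult(string ticket)
    {
        // Returns null if the entry has expired or was evicted under memory pressure.
        return HttpRuntime.Cache[ticket] as string;
    }
}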