Out of memory Entity Framework 4

I am hoping someone can help as I'm at a loss.
On my machine I can run a process that retrieves quite a lot of records using Entity Framework 4, and it uses at most about 190,000 KB (~190 MB). On a client machine the same process uses about 800,000 KB (~800 MB).
The client machine is Windows 7.
Does anyone have any ideas about what I can look for to determine why it's using more memory on the client machine?

Related

SQL Server Windows NT - 64 Bit consuming memory

Yesterday I was going over some scripts and notes from a class and noticed that queries were a lot slower. Queries that previously took 0 ms of CPU time and 10 ms of elapsed time were taking over 10 seconds. Task Manager showed that SQL Server Windows NT - 64 Bit was running and using 5 GB out of my 8 GB of memory. I stopped running queries, but it stayed that way for hours until it dropped down to 2 GB, and it remained there until I shut it down around 10 pm. This morning SQL Server Windows NT - 64 Bit was using a few hundred MB of memory until I ran a stored procedure. As you can see in the picture, it shot up to over 4 GB of memory and stayed there. I know for sure that this happens a lot, but I never thought it might be caused by something I ran. If I end the task, I lose the connection. If I restart my computer, it restarts as well; I tested that a couple of times. What is really happening and what should I do? [Screenshot of Task Manager]

Cassandra + Redis Project Server setup

I currently have a dedicated VPS server with 4 GB of RAM and a 50 GB hard disk, and I have a SaaS solution running on it with more than 1,500 customers. Now I'm going to upgrade the project's business plan: there will be 25,000 customers, with about 500-1,000 customers using the project in real time. For now it takes 5 seconds to fetch Cassandra database records from the server to the application. Then I came across Redis, and it's said that keeping a copy of the data in Redis will make fetches much faster and lower the server overhead.
Am I right about this?
If I need to improve the overall performance, can anybody tell me what I need to upgrade?
Can a server with the configuration above handle Cassandra and Redis together?
Thanks in advance.
A machine with 4 GB of RAM will probably only be single-core, so it's too small for any production workload and only suitable for dev usage where you run 1 or 2 transactions per second, mostly for functional testing.
We generally recommend deploying Cassandra on machines with at least 2 cores and 8 GB allocated to the heap (so at least 16 GB of RAM) for low production loads. For moderate loads, 4 cores and 32 GB of RAM are ideal so you can allocate 16 GB to the heap.
If you're just at the proof-of-concept stage, there's a tier on DataStax Astra that's free forever and doesn't require a credit card to create an account. I recommend it to most people because you can launch a cluster in a few clicks and you can quickly focus on developing your app. Cheers!
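As for the Redis idea in the question, the usual approach is a cache-aside lookup: read from Redis first, fall back to Cassandra on a miss, then write the result back to Redis with a TTL. Below is a minimal sketch using the hiredis C client; fetch_from_cassandra() is a hypothetical placeholder for the application's existing query path, and the 60-second TTL is only an example, not a recommendation:

/* Cache-aside sketch: try Redis first, fall back to Cassandra on a
 * miss, then populate Redis with a short TTL. Purely illustrative. */
#include <string.h>
#include <hiredis/hiredis.h>

extern char *fetch_from_cassandra(const char *key); /* hypothetical placeholder */

char *get_record(redisContext *redis, const char *key)
{
    /* 1) look in the cache */
    redisReply *r = redisCommand(redis, "GET %s", key);
    if (r && r->type == REDIS_REPLY_STRING)
    {
        char *hit = strdup(r->str);   /* served from Redis */
        freeReplyObject(r);
        return hit;
    }
    if (r)
        freeReplyObject(r);

    /* 2) cache miss: query Cassandra, then warm the cache */
    char *value = fetch_from_cassandra(key);
    if (value)
    {
        redisReply *w = redisCommand(redis, "SETEX %s %d %s",
                                     key, 60, value); /* 60 s TTL (example) */
        if (w)
            freeReplyObject(w);
    }
    return value;
}

Whether this actually helps depends on the read pattern: if the same records are requested repeatedly, the cache absorbs most reads; if every request touches different rows, Redis adds little and the 5-second Cassandra queries themselves (data model, queries, hardware) are what need attention.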

Why is Matlab allocating so much memory?

I used to run 5-6 Matlab instances on a server (Win Server 2008) and they allocated about 50-60 MB each. Now, as I plan to run up to 10 instances simultaneously, I upgraded the server RAM from 1 to 2 GB (running on a VPS) on a Win Server 2012 platform. After migrating to the new platform, I see that each Matlab instance allocates more than 100 MB of RAM, some up to 150 MB, which makes the whole migration feel useless. Why is this the case? Does Matlab simply allocate whatever is available, or is something else going on?
Thanks in advance.

HTTP streaming large files with G-WAN

FINAL
Further testing revealed that in a newer version of G-WAN everything works as expected.
ORIGINAL
I'm working with large files and G-WAN seems perfect for my use case, but I can't seem to wrap my head around streaming content to the client.
I would like to avoid buffered responses as memory will be consumed very fast.
The source code is published now.
Thanks. The value you got is obviously wrong, and this likely comes from a mismatch in the gwan.h file where the CLIENT_SOCKET enum is defined. Wait for the next release for a header file in sync with the executable.
Note that, as explained below, you won't have to deal with CLIENT_SOCKET for streaming files, either local or remote: local files are streamed by G-WAN automatically, and remote files are better served using G-WAN's reverse proxy.
Copying to disk and then serving from G-WAN is inefficient, and buffering the file in memory is also inefficient.
G-WAN, like Nginx and many others, already uses sendfile(), so you don't have anything to do in order to "stream large files to the client".
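For readers who haven't used it, this is roughly what sendfile()-based serving looks like at the syscall level - a minimal sketch assuming Linux, with a hypothetical already-accepted socket client_fd; it is not G-WAN's actual code:

/* Minimal sendfile() sketch (Linux): the kernel copies file pages
 * straight to the socket, so no user-space buffer grows with the
 * file size. 'client_fd' is an already-connected TCP socket. */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

static int stream_file(int client_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return -1; }

    off_t offset = 0;
    while (offset < st.st_size)
    {
        ssize_t sent = sendfile(client_fd, fd, &offset,
                                st.st_size - offset);
        if (sent <= 0)
            break; /* error or client gone; a real server retries on EINTR/EAGAIN */
    }
    close(fd);
    return (offset == st.st_size) ? 0 : -1;
}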
I've looked at sendfile(), but I couldn't find where G-WAN stores the client socket. I've tried to use CLIENT_SOCKET but it didn't work.
The only way for CLIENT_SOCKET to fail to return the client socket is to use a gwan.h header that does not match the version of your gwan executable.
By using a G-WAN connection handler, you can bypass G-WAN's default behavior (I assume that's what you tried)... but again, that's unnecessary as G-WAN already does what you are trying to achieve (as explained above).
With this in mind, here are a few points regarding G-WAN and sendfile():
An old release of G-WAN accidentally disabled sendfile(); don't use it, and make sure you are running a more recent release.
The April public release was too careful about closing connections (slowing down non-keep-alive connections) and used sendfile() only for files larger than a certain size.
More recent development releases use sendfile() for all static files (caching is now disabled by default, as it confused too many users; it can be explicitly restored globally, per connection, or for a specific resource).
As a result, for large-file test loads, G-WAN is now faster than all the other servers we have tested.
We have also heavily reworked memory consumption to reach unparalleled levels (a small fraction of Nginx's memory consumption), even with large files served with sendfile().
G-WAN at startup on a 6-Core Xeon takes 2.2 MB of RAM (without compiled and loaded scripts like servlets and handlers):
> Server 'gwan' process topology:
---------------------------------------------
6] pid:4843 Thread
5] pid:4842 Thread
4] pid:4841 Thread
3] pid:4840 Thread
2] pid:4839 Thread
1] pid:4838 Thread
0] pid:4714 Process RAM: 2.19 MB
---------------------------------------------
Total 'gwan' server footprint: 2.19 MB
In contrast, Nginx with worker_connections 4096; eats 15.39 MB at startup:
> Server 'nginx' process topology:
---------------------------------------------
6] pid:4703 Process RAM: 2.44 MB
5] pid:4702 Process RAM: 2.44 MB
4] pid:4701 Process RAM: 2.44 MB
3] pid:4700 Process RAM: 2.44 MB
2] pid:4699 Process RAM: 2.44 MB
1] pid:4698 Process RAM: 2.44 MB
0] pid:4697 Process RAM: 0.77 MB
---------------------------------------------
Total 'nginx' server footprint: 15.39 MB
And, unlike Nginx, G-WAN can handle more than 1 million concurrent connections without reserving the memory upfront (and without any configuration, by the way).
If you configure Nginx with worker_connections 1000000; then you have:
> Server 'nginx' process topology:
---------------------------------------------
6] pid:4568 Process RAM: 374.71 MB
5] pid:4567 Process RAM: 374.71 MB
4] pid:4566 Process RAM: 374.71 MB
3] pid:4565 Process RAM: 374.71 MB
2] pid:4564 Process RAM: 374.71 MB
1] pid:4563 Process RAM: 374.71 MB
0] pid:4562 Process RAM: 0.77 MB
---------------------------------------------
Total 'nginx' server footprint: 2249.05 MB
Nginx is eating 2.2 GB of RAM even before receiving any connection!
Under the same scenario, G-WAN needs only 2.2 MB of RAM (1024x less).
And G-WAN is now faster than Nginx for large files.
I want to stream large files from a remote source
sendfile() might not be what you are looking for as you state: "I want to stream large files from a remote source".
Here, if I correctly understand your question, you would like to RELAY large files from a remote repository, using G-WAN as a reverse-proxy, which is a totally different game (as opposed to serving local files).
The latest G-WAN development release has a generic TCP reverse-proxy feature which can be customized with a G-WAN connection handler.
But in your case, you would just need a blind relay (without traffic rewriting) to go as fast as possible, rather than a handler that filters and alters the backend server's replies.
The splice() syscall mentioned by Griffin is the (zero-copy) way to go - and G-WAN's (efficient, event-based and multi-threaded) architecture will do marvels, especially with its low RAM usage.
G-WAN can do this in a future release (this is simpler than altering the traffic), but that's a pretty vertical application as opposed to G-WAN's primary target which is to let Web/Cloud developers write applications.
Anyway, if you need this level of efficiency, G-WAN can help to reach new levels of performance. Contact us at G-WAN's website.
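To illustrate what such a splice()-based blind relay looks like at the syscall level, here is a minimal sketch assuming Linux; backend_fd and client_fd are hypothetical, already-connected sockets, and this is not G-WAN code:

/* Zero-copy relay with splice() (Linux only): data flows
 * backend -> kernel pipe -> client, never touching user-space
 * buffers. Purely illustrative; no timeout or error reporting. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static int relay(int backend_fd, int client_fd)
{
    int pipefd[2];
    if (pipe(pipefd) < 0)
        return -1;

    for (;;)
    {
        /* backend socket -> pipe (stays in the kernel) */
        ssize_t n = splice(backend_fd, NULL, pipefd[1], NULL,
                           64 * 1024, SPLICE_F_MOVE | SPLICE_F_MORE);
        if (n <= 0)
            break; /* EOF or error */

        /* pipe -> client socket, also in the kernel */
        while (n > 0)
        {
            ssize_t m = splice(pipefd[0], NULL, client_fd, NULL,
                               (size_t)n, SPLICE_F_MOVE | SPLICE_F_MORE);
            if (m <= 0)
                goto done;
            n -= m;
        }
    }
done:
    close(pipefd[0]);
    close(pipefd[1]);
    return 0;
}

A production relay would also forward traffic in the other direction (client to backend) and multiplex both sockets with an event loop, which is exactly what an event-based server like G-WAN provides.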
There is a nice example of the required functionality, also included with the G-WAN application.
http://gwan.com/source/comet.c
Hope this helps.
I think you probably mean HTTP streaming, not Comet - in that case, there is an flv.c connection handler example provided with G-WAN. Also, you can use the C sendfile() call for zero-copy transfer of files, or the splice() syscall, depending on what you need.

MongoDB in the cloud hosting, benefits

I'm still fighting with MongoDB and I think this war will not end soon.
My database has a size of 15.95 GB;
Objects - 9,963,099;
Data Size - 4.65 GB;
Storage Size - 7.21 GB;
Extents - 269;
Indexes - 19;
Index Size - 1.68 GB;
Powered by:
Quad Xeon E3-1220 4 × 3.10 GHz / 8 GB
Paying for a dedicated server is too expensive for me.
On a VPS with 6 GB of memory, the database cannot even be imported.
Should I migrate to a cloud service?
https://www.dotcloud.com/pricing.html
I tried to pick a plan, but the maximum there is 4 GB of memory for MongoDB (USD 552.96/month o_0), and I can't even import my database - there is not enough memory.
Or is there something I don't know about cloud services (I have no experience with them)?
Are cloud services just not suitable for a large MongoDB database?
2 x Xeon 3.60 GHz, 2M Cache, 800 MHz FSB / 12 GB
http://support.dell.com/support/edocs/systems/pe1850/en/UG/p1295aa.htm
Will my database work on that server?
Of course this is all fun and good development experience, but it is already beginning to pall... =]
You shouldn't have an issue with a database of this size. We were running a MongoDB instance on dotCloud with hundreds of GB of data. It may just be because dotCloud only allows 10 GB of HDD space per service by default.
We were able to back up and restore that instance with 4 GB of RAM, although it took several hours.
I would suggest you email them directly at support@dotcloud.com to get help increasing the HDD allocation of your instance.
You can also consider ObjectRocket, which is MongoDB as a service. For a 20 GB database the price is $149 per month - http://www.objectrocket.com/pricing