I have a project with about 85 microservice functions. The problem I am facing is that, with 85 individual services, the total deployment size goes up to around 7 GB, since each service has its own WAR of about 100 MB.
Is there a way to reduce the size of each WAR file?
Thanks in advance.
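Before deciding on an approach (provided-scope dependencies, a shared server library directory, skinny WARs), it is worth checking how much of each 100 MB is the same third-party jars repeated in every WAR. A rough Python sketch for that, assuming all the WARs are copied into a local wars/ folder (the folder name is just a placeholder):

    # war_overlap.py - list the jars bundled under WEB-INF/lib in each WAR and
    # report the ones that appear in many services; those are sharing candidates.
    import os
    import zipfile
    from collections import defaultdict

    WAR_DIR = "wars"           # assumption: all 85 WARs sit in this folder
    jar_usage = defaultdict(set)
    jar_size = {}

    for war_name in os.listdir(WAR_DIR):
        if not war_name.endswith(".war"):
            continue
        with zipfile.ZipFile(os.path.join(WAR_DIR, war_name)) as war:
            for entry in war.infolist():
                if entry.filename.startswith("WEB-INF/lib/") and entry.filename.endswith(".jar"):
                    jar = os.path.basename(entry.filename)
                    jar_usage[jar].add(war_name)
                    jar_size[jar] = entry.file_size

    # jars used by many WARs, largest first
    shared = sorted(jar_usage, key=lambda j: (len(jar_usage[j]), jar_size[j]), reverse=True)
    for jar in shared[:20]:
        print(f"{jar}: {jar_size[jar] // 1024} KB, used by {len(jar_usage[jar])} WARs")

Anything that shows up in most of the 85 WARs is a candidate for moving into the application server's shared library directory or for being marked as provided in the build, which is usually where most of the size reduction comes from.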
I am aware of
spark.sparkContext.statusTracker
but it only gives me access to the number of executors and the number of active tasks.
I am wondering if it is possible to get information about the physical machines, such as the number of CPUs, memory, etc. I am hosting a cluster on EC2 and I would like to record the cluster configuration in my main log, something like:
6 x r3.2xlarge, 16 CPU, 12 GB RAM
12 x r3.xlarge, 8 CPU, 12 GB RAM
^ The numbers are made up here, but they give an idea of the type of information I'd like to print out.
Thank you.
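For what it's worth, one way this kind of per-executor information can be pulled is from Spark's monitoring REST API rather than the status tracker. A minimal Python sketch, assuming the driver UI is reachable on its default port 4040 (the host below is a placeholder):

    # Query Spark's monitoring REST API for per-executor host, core and memory info.
    import json
    from urllib.request import urlopen

    DRIVER = "http://localhost:4040"   # placeholder: your driver host

    apps = json.load(urlopen(f"{DRIVER}/api/v1/applications"))
    app_id = apps[0]["id"]

    executors = json.load(urlopen(f"{DRIVER}/api/v1/applications/{app_id}/executors"))
    for e in executors:
        # maxMemory is the storage memory available to the executor, in bytes
        print(f"{e['hostPort']}: {e['totalCores']} cores, "
              f"{e['maxMemory'] / 1024**3:.1f} GB storage memory")

The EC2 instance type itself is not reported by Spark as far as I know, so that part would have to come from the EC2 instance metadata or the EC2 API and be joined with the host names above.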
I am hoping someone can help, as I'm at a loss.
On my machine I can run a process that retrieves quite a lot of records using Entity Framework 4, and it uses at most about 190,000 KB. On a client machine the same process uses about 800,000 KB.
The client machine is running Windows 7.
Does anyone have any ideas about what I can look for to explain why it uses so much more memory on the client machine?
I'm still fighting with MongoDB, and I don't think this war will end any time soon.
My database is 15.95 GB in size:
Objects: 9,963,099
Data Size: 4.65 GB
Storage Size: 7.21 GB
Extents: 269
Indexes: 19
Index Size: 1.68 GB
Powered by:
Quad-core Xeon E3-1220, 4 × 3.10 GHz, 8 GB RAM
A dedicated server is too expensive for me.
On a VPS with 6 GB of memory, the database cannot even be imported.
Should I migrate to a cloud service?
https://www.dotcloud.com/pricing.html
I tried to pick a plan, but their maximum for MongoDB is 4 GB of memory (USD 552.96/month o_0), so I can't even import my database; there isn't enough memory.
Or is there something I don't know about cloud services (I have no experience with them)?
Are cloud services simply not suitable for a large MongoDB database?
What about 2 × Xeon 3.60 GHz, 2 MB cache, 800 MHz FSB, 12 GB RAM?
http://support.dell.com/support/edocs/systems/pe1850/en/UG/p1295aa.htm
Will my database work on that server?
Of course this is all fun and good development experience, but it's already beginning to pall... =]
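A quick way to sanity-check a candidate box is to pull the same statistics programmatically and compare the index size, which is the part you most want resident in RAM, against the machine's memory. A minimal Python sketch using pymongo, with placeholder connection details:

    # Pull db.stats() via pymongo and print the sizes that matter for RAM sizing.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # placeholder host
    stats = client["mydb"].command("dbStats")           # placeholder db name

    gb = 1024 ** 3
    print(f"objects:      {stats['objects']}")
    print(f"data size:    {stats['dataSize'] / gb:.2f} GB")
    print(f"storage size: {stats['storageSize'] / gb:.2f} GB")
    print(f"index size:   {stats['indexSize'] / gb:.2f} GB")
    # Rough rule of thumb: the indexes (1.68 GB here) plus the working set of
    # documents should fit in RAM, so 8-12 GB leaves headroom for this database.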
You shouldn't have an issue with a DB of this size. We were running a MongoDB instance on dotCloud with hundreds of GB of data. The problem may just be that dotCloud only allows 10 GB of disk space per service by default.
We were able to back up and restore that instance with 4 GB of RAM, although it took several hours.
I would suggest you email them directly at support@dotcloud.com to get help increasing the disk allocation of your instance.
You can also consider ObjectRocket, which is MongoDB as a service. For a 20 GB database the price is $149 per month: http://www.objectrocket.com/pricing
We have a Windows Azure Web Role on two extra-small instances that has been running for weeks without problems. This morning, we unintentionally passed some spending limit, which resulted in Windows Azure shutting down our complete service, without any prior warning!
We removed the spending cap and began to re-deploy the Web Role with the same codebase that had been running for weeks. To our astonishment, we got the deployment error
Validation Errors: Total requested resources are too large for the specified VM size.
We upgraded the deployment to two small instances instead of the extra-small instances, whereupon deployment worked again. Now the Web Role is back on the web.
However, we still don't understand why our deployment was suddenly too big for an extra-small instance. We didn't change one bit since the last successful deployment to extra-small instances. We then tried to reduce the deployment size by moving some files to Azure Storage, but even after shrinking the package file by more than 1 MB, deployment still failed.
The .cspkg file, i.e. the deployment package, is currently 9,359 KB. Unzipped, the complete sitesroot folder is 14 MB, which is way below the 19,480 KB limit for the extra-small instance.
Before we lose more time with trial and error, here's my question: how exactly are those 19,480 KB calculated? Is it just the sitesroot folder, or the zipped package, or the sitesroot and approot folders together, or the whole unzipped package?
Thank you!
EDIT: Could you verify whether your local resources exceed 20 GB?
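If the suspicion about local resources is right, one quick check is to add up the LocalStorage declarations in the service definition. A small Python sketch, assuming the csdef file sits next to the script (the file name is a placeholder):

    # Sum the sizeInMB of every LocalStorage declaration in the service definition,
    # to compare against the 20 GB mentioned above for an extra-small instance.
    import xml.etree.ElementTree as ET

    tree = ET.parse("ServiceDefinition.csdef")   # placeholder file name
    total_mb = 0
    for elem in tree.iter():
        if elem.tag.endswith("LocalStorage"):    # ignore the XML namespace prefix
            total_mb += int(elem.get("sizeInMB", "0"))

    print(f"declared local resources: {total_mb} MB")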
We started using memcached on the test server for our social media project and are having some problems with RAM usage.
We have created a cluster with one server node running just one cache bucket sized 128 MB, but when we check memcached.exe's RAM usage in Task Manager, it rises continuously at about 1 MB per second.
Is there any workaround for this?
Thanks!
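One thing that might help narrow it down: compare what memcached itself reports holding against what Task Manager shows for the process. If the server's own bytes stat stays flat while the process keeps growing, it looks like a leak rather than the cache simply filling up. A minimal Python sketch using the python-memcached client, with a placeholder host and port:

    # Poll memcached's own stats and print cached bytes vs. the configured limit.
    import time
    import memcache

    mc = memcache.Client(["127.0.0.1:11211"])   # placeholder server address

    for _ in range(10):
        for server, stats in mc.get_stats():
            used = int(stats["bytes"]) / 1024 / 1024
            limit = int(stats["limit_maxbytes"]) / 1024 / 1024
            print(f"{server}: {used:.1f} MB cached of {limit:.0f} MB limit, "
                  f"{stats['curr_items']} items")
        time.sleep(5)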
If you're using our 1.0.3 product (the current version of our Memcached server), there is a known issue where deleting the default bucket causes a memory leak. Can you let me know whether you deleted the default bucket?
Also, we just released beta 4 of our 1.6.0 product, which supports both Membase buckets and Memcached buckets. I would certainly appreciate you taking a look and trying it out. I know it has fixed the memory leak issue.
Thanks so much.
Perry