Is Memcached on my server efficient enough? - memcached

I have installed Memcached along with Zend OPcache, and I wonder why it is using so little memory. Are these stats good?
I have configured about 20 sites (WordPress, Magento, Joomla) to use it.
My server specs are:
Intel Xeon X3440 Quad-Core
16 GB memory
2 x 1500 GB SATA II HDD, 7,200 rpm
CentOS 5 - Plesk 10 - RAID 1
This is the output from the Memcached admin window:
This is my configuration in memcache.ini:
extension=memcache.so
session.save_handler=memcache
session.save_path="tcp://localhost:11211?persistent=1&weight=1&timeout=1&retry_interval=15"
This is my memcached configuration in /etc/sysconfig/memcached:
PORT="11211"
USER="memcached"
MAXCONN="512"
CACHESIZE="32"
OPTIONS="-f 1.5 -I 2800 -l 127.0.0.1"

It is using a very low amount of memory because the CACHESIZE variable in /etc/sysconfig/memcached is 32, so it will never use more than 32 MB. Try increasing it to 64 and your server might perform slightly faster (depending on your coding implementation, of course).
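For example, /etc/sysconfig/memcached would then look something like this (64 is just the suggestion above; size the cache to what your sites actually need):
PORT="11211"
USER="memcached"
MAXCONN="512"
CACHESIZE="64"
OPTIONS="-f 1.5 -I 2800 -l 127.0.0.1"
Then restart memcached so the new size takes effect (e.g. service memcached restart on CentOS 5).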

Related

Jenkins and PostgreSQL are consuming a lot of memory

We have a data warehouse server running on Debian Linux. We are using PostgreSQL, Jenkins and Python.
For the past few days a lot of memory has been consumed by Jenkins and Postgres. I tried all the approaches I could find on Google, but the issue is still there.
Can anyone give me a lead on how to reduce this memory consumption? It would be very helpful.
Below is the output from free -m:
              total        used        free      shared  buff/cache   available
Mem:          63805        9152         429       16780       54223       37166
Swap:             0           0           0
Below is the postgresql.conf file:
Below are the system configurations:
Results from htop:
Please don't post text as images. It is hard to read and process.
I don't see your problem.
Your machine has 64 GB of RAM: 16 GB are used for PostgreSQL shared memory as you configured, 9 GB are private memory used by processes, and 37 GB are free (the available entry).
Linux uses available memory for the file system cache, which boosts PostgreSQL performance. The low value for free just means that the cache is in use.
For Jenkins, run it with these Java options:
JAVA_OPTS=-Xms200m -Xmx300m -XX:PermSize=68m -XX:MaxPermSize=100m
For Postgres, start it with the option:
-c shared_buffers=256MB
These values are the ones I use on a small homelab with 8 GB of memory; you may want to increase them to match your hardware.
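One way to apply these with Debian's packaged Jenkins and PostgreSQL (a sketch; the exact file locations and variable names depend on how they were installed):
# /etc/default/jenkins (Debian's Jenkins package reads its JVM flags from JAVA_ARGS here)
JAVA_ARGS="-Xms200m -Xmx300m -XX:PermSize=68m -XX:MaxPermSize=100m"
# postgresql.conf (equivalent to starting Postgres with -c shared_buffers=256MB)
shared_buffers = 256MB
Restart both services after changing these so the new limits apply.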

CPU high load and free RAM

I have a cloud server on Cloudways. The CPU load is very high even after I upgraded my server two levels, but the strange thing is that the RAM is almost free (the server has 16 GB RAM and 6 cores). Is there anything we can do to take advantage of that free RAM to reduce the CPU load?
Regards
No, CPU and RAM are different things.
Check the reason why your CPU is under such high load.
Maybe the host your VM runs on is overloaded. Did you try to contact your cloud provider?

Allocating more than 4 gigs of RAM to a VM causes the VM to not start

I'm currently using a combination of qemu-kvm, libvirtd, and virt-manager to host some virtual RHEL 6 machines. When I attempt to bump the RAM allocation above 4 GB, the machines fail to start. What could be the cause of this? Any information is helpful.
I'm running a 10-core, 3 GHz Xeon processor with 64 GB of RAM.

HTTP streaming of large files with G-WAN

FINAL
Further testing revealed that in a newer version of G-WAN everything works as expected.
ORIGINAL
I'm working with large files and G-WAN seems perfect for my use case, but I can't seem to wrap my head around streaming content to the client.
I would like to avoid buffered responses as memory will be consumed very fast.
source code is published now
Thanks. The value you got is obviously wrong, and this likely comes from a mismatch in the gwan.h file where the CLIENT_SOCKET enum is defined. Wait for the next release for a file in sync with the executable.
Note that, as explained below, you won't have to deal with CLIENT_SOCKET for streaming files - either local or remote files - as local files are served streamed by G-WAN and remote files will be better served using G-WAN's reverse proxy.
Copying to disk and serving from G-WAN is inefficient, and buffering the file in memory is also inefficient.
G-WAN, like Nginx and many others, is already using sendfile() so you don't have anything to do in order to "stream large files to the client".
I've looked at sendfile(), but I couldn't find where G-WAN stores the client socket. I've tried to use CLIENT_SOCKET, but it didn't work.
The only way for CLIENT_SOCKET to fail to return the client socket is to use a gwan.h header that does not match the version of your gwan executable.
By using a G-WAN connection handler, you can bypass G-WAN's default behavior (I assume that's what you tried)... but again, that's unnecessary as G-WAN already does what you are trying to achieve (as explained above).
With this in mind, here are a few points regarding G-WAN and sendfile():
an old release of G-WAN accidentally disabled sendfile() - don't use it; make sure you are using a more recent release.
the April public release was too careful about closing connections (slowing down non-keep-alive connections) and used sendfile() only for files greater than a certain size.
more recent development releases use sendfile() for all static files (by default, as it confused too many users, we have disabled caching, which can be explicitly restored either globally, per-connection, or for a specific resource).
As a result, for large-file test loads, G-WAN is now faster than all the other servers that we have tested.
We have also heavily reworked memory consumption to reach unparalleled levels (a small fraction of Nginx's memory consumption) - even with large files served with sendfile().
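For reference, outside of G-WAN itself, the sendfile()-based streaming discussed above boils down to a loop like the following (a minimal sketch; client_fd is assumed to be an already-connected socket, and error handling is trimmed):
/* Stream a local file to a connected client socket with sendfile(). */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

ssize_t stream_file(int client_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return -1; }

    off_t offset = 0;
    ssize_t sent = 0;
    while (offset < st.st_size) {
        /* the kernel moves data from the page cache to the socket,
           so the file is never buffered in user space */
        ssize_t n = sendfile(client_fd, fd, &offset, st.st_size - offset);
        if (n <= 0) break;
        sent += n;
    }
    close(fd);
    return sent;
}
This is what "serving streamed" means here: the server never holds the whole file in memory.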
G-WAN at startup on a 6-Core Xeon takes 2.2 MB of RAM (without compiled and loaded scripts like servlets and handlers):
> Server 'gwan' process topology:
---------------------------------------------
6] pid:4843 Thread
5] pid:4842 Thread
4] pid:4841 Thread
3] pid:4840 Thread
2] pid:4839 Thread
1] pid:4838 Thread
0] pid:4714 Process RAM: 2.19 MB
---------------------------------------------
Total 'gwan' server footprint: 2.19 MB
In contrast, Nginx with worker_connections 4096; eats 15.39 MB at startup:
> Server 'nginx' process topology:
---------------------------------------------
6] pid:4703 Process RAM: 2.44 MB
5] pid:4702 Process RAM: 2.44 MB
4] pid:4701 Process RAM: 2.44 MB
3] pid:4700 Process RAM: 2.44 MB
2] pid:4699 Process RAM: 2.44 MB
1] pid:4698 Process RAM: 2.44 MB
0] pid:4697 Process RAM: 0.77 MB
---------------------------------------------
Total 'nginx' server footprint: 15.39 MB
And, unlike Nginx, G-WAN can handle more than 1 million concurrent connections without reserving the memory upfront (and without any configuration, by the way).
If you configure Nginx with worker_connections 1000000; then you have:
> Server 'nginx' process topology:
---------------------------------------------
6] pid:4568 Process RAM: 374.71 MB
5] pid:4567 Process RAM: 374.71 MB
4] pid:4566 Process RAM: 374.71 MB
3] pid:4565 Process RAM: 374.71 MB
2] pid:4564 Process RAM: 374.71 MB
1] pid:4563 Process RAM: 374.71 MB
0] pid:4562 Process RAM: 0.77 MB
---------------------------------------------
Total 'nginx' server footprint: 2249.05 MB
Nginx is eating 2.2 GB of RAM even before receiving any connection!
Under the same scenario, G-WAN needs only 2.2 MB of RAM (1024x less).
And G-WAN is now faster than Nginx for large files.
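For reference, the Nginx setting used in the comparison above lives in the events block of nginx.conf; the one-million-connection case corresponds to something like the following (worker_processes 6 matches the six workers shown above):
worker_processes 6;
events {
    worker_connections 1000000;
}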
I want to stream large files from a remote source
sendfile() might not be what you are looking for as you state: "I want to stream large files from a remote source".
Here, if I correctly understand your question, you would like to RELAY large files from a remote repository, using G-WAN as a reverse-proxy, which is a totally different game (as opposed to serving local files).
The latest G-WAN development release has a generic TCP reverse-proxy feature which can be personalized with a G-WAN connection handler.
But in your case, you would just need a blind relay (without traffic rewriting) that goes as fast as possible, rather than one that lets you filter and alter the backend server replies.
The splice() syscall mentioned by Griffin is the (zero-copy) way to go - and G-WAN's (efficient, event-based and multi-threaded) architecture will do marvels - especially with its low RAM usage.
G-WAN can do this in a future release (this is simpler than altering the traffic), but that's a pretty vertical application as opposed to G-WAN's primary target which is to let Web/Cloud developers write applications.
Anyway, if you need this level of efficiency, G-WAN can help to reach new levels of performance. Contact us at G-WAN's website.
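For reference, the splice()-based relay alluded to above boils down to a loop like this (a minimal sketch outside of G-WAN; src_fd and dst_fd are hypothetical, already-connected sockets, and error handling is trimmed):
/* Zero-copy relay: forward bytes between two sockets via a pipe. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int relay(int src_fd, int dst_fd)
{
    int p[2];
    if (pipe(p) < 0) return -1;

    for (;;) {
        /* move data from the source socket into the pipe without
           copying it through user space */
        ssize_t n = splice(src_fd, NULL, p[1], NULL, 65536, SPLICE_F_MOVE);
        if (n <= 0) break;
        /* then from the pipe to the destination socket */
        while (n > 0) {
            ssize_t w = splice(p[0], NULL, dst_fd, NULL, (size_t)n, SPLICE_F_MOVE);
            if (w <= 0) { n = -1; break; }
            n -= w;
        }
        if (n < 0) break;
    }
    close(p[0]);
    close(p[1]);
    return 0;
}
A real reverse proxy would of course run this inside an event loop rather than blocking, which is exactly what the architecture described above provides.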
There is a nice example of the required functionality, also included with the G-WAN application:
http://gwan.com/source/comet.c
Hope this helps.
I think you probably mean HTTP streaming, not Comet - in that case, there is an flv.c connection handler example provided with G-WAN. You can also use sendfile() for zero-copy transfer of files, or the splice() syscall, depending on what you need.

MongoDB in cloud hosting, benefits

I'm still fighting with MongoDB, and I think this war will not end soon.
My database has a size of 15.95 GB;
Objects - 9963099;
Data Size - 4.65g;
Storage Size - 7.21g;
Extents - 269;
Indexes - 19;
Index Size - 1.68g;
Powered by:
Quad Xeon E3-1220, 4 × 3.10 GHz / 8 GB
A dedicated server is too expensive for me.
On a VPS with 6 GB of memory, the database cannot be imported.
Should I migrate to a cloud service?
https://www.dotcloud.com/pricing.html
I tried to pick a plan, but the maximum there is 4 GB of memory for MongoDB (USD 552.96/month o_0), and I can't even import my database - not enough memory.
Or is there something I don't know about cloud services (I have no experience with them)?
Are cloud services not suitable for a large MongoDB database?
2 x Xeon 3.60 GHz, 2M Cache, 800 MHz FSB / 12 GB
http://support.dell.com/support/edocs/systems/pe1850/en/UG/p1295aa.htm
Will my database work on that server?
This is all fun, of course, and good development experience, but it is already beginning to pall... =]
You shouldn't have an issue with a DB of this size. We were running a MongoDB instance on dotCloud with hundreds of GB of data. It may just be because dotCloud only allows 10 GB of HDD space by default per service.
We were able to back up and restore that instance on 4 GB of RAM, albeit that it took several hours.
I would suggest you email them directly at support@dotcloud.com to get help increasing the HDD allocation of your instance.
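For reference, the backup and restore mentioned above is done with the standard mongodump/mongorestore tools; a minimal sketch (host names and the dump path are placeholders):
# dump from the old server
mongodump --host old-host:27017 --out /backup/dump
# restore onto the new one
mongorestore --host new-host:27017 /backup/dump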
You can also consider using ObjectRocket, which is MongoDB as a service. For a 20 GB database the price is $149 per month - http://www.objectrocket.com/pricing