Can someone help me with sharing memory between three or more machines, where each machine has its own copy of the memory to speed up read operations?
For example, I first create a socket to communicate between these processes, but how can I make memory visible between the machines? I know how to make it visible on one machine.
EDIT: Maybe we should use a server machine to manage shared memory read and write operations?
You cannot share memory across machine boundaries. You have to serialize the data being shared, such as with an IPC mechanism like a named pipe or a socket. Transmit the shared data to each machine, where it is then copied into that machine's own local memory. Any changes to the local memory have to be transmitted to the other machines so they have an updated local copy.
If you are having problems implementing that, then you need to show what you have actually attempted.
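The serialize-and-copy approach can be sketched as follows. This is a minimal illustration, not a full implementation: JSON is an assumed serialization format, and a `socketpair` stands in for a real network connection between machines.

```python
import json
import socket

def send_update(sock, data):
    """Serialize the shared state and send it with a 4-byte length prefix."""
    payload = json.dumps(data).encode("utf-8")
    sock.sendall(len(payload).to_bytes(4, "big") + payload)

def _recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_update(sock):
    """Receive one length-prefixed update and copy it into local memory."""
    size = int.from_bytes(_recv_exact(sock, 4), "big")
    return json.loads(_recv_exact(sock, size))

# For illustration, a socketpair stands in for a real network connection.
a, b = socket.socketpair()
shared = {"counter": 1, "name": "example"}  # the sender's copy
send_update(a, shared)
local_copy = recv_update(b)                 # the receiver's own, independent copy
a.close(); b.close()
```

Note that `local_copy` is equal to, but distinct from, the sender's object: each side works on its own memory, which is exactly why updates must be re-transmitted.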
Related
I am creating a database in FileMaker; the database is about 1 GB and includes around 500 photos.
FileMaker Server is having performance issues: it crashes and is slow when searching through the database. My IT department recommended raising the cache memory.
I raised the cache to 252 MB, but the server is still struggling to give consistent performance, and it now shows CPU peaks.
What can cause this problem?
Verify at FileMaker.com that your server meets the minimum requirements for your version.
For starters:
Increase the cache to 50% of the total memory available to FileMaker server.
Verify that the hard disk is unfragmented and has plenty of free space.
FM Server should be extremely stable.
FMS only does two things:
reads data from the disk and sends it to the network
takes data from the network and writes it to the disk
Performance bottlenecks are always disk and network. FMS is relatively easy on CPU and RAM unless Web Direct is being used.
Things to check:
Are users connecting through ethernet or wifi? (Wifi is slow and unreliable.)
Is FMS running in a virtual machine?
Is the machine running a supported operating system?
Is the database using Web Direct? (Use a 2-machine deployment for Web Direct.)
Is there anything else running on the machine? (Disable antivirus and indexing.)
Make sure users are accessing the live databases through FMP client and not through file sharing.
How are the databases being backed up? NEVER let anything other than FMS see the live files. Only let OS-level backup processes see backup copies, never the live files.
Make sure all the energy saving options on the server are DISABLED. You do NOT want the CPU or disks sleeping or powering down.
Put the server onto an uninterruptible power supply (UPS). Bad power could be causing problems.
For example, I heard in class that global variables are just put in a specific location in memory. What is to prevent two programs from accidentally using the same memory location for different variables?
Also, do both programs use the same stack for their arguments and local variables? If so, what's to prevent the variables from interleaving with each other and messing up the indexing?
Just curious.
Most modern processors have a memory management unit (MMU) that provides the OS with the ability to create protected, separate memory sections for each process, including a separate stack for each process. With the help of the MMU, the processor can restrict each process to modifying/accessing only memory that has been allocated to it. This prevents one process from writing into another process's memory space.
Most modern operating systems will use the features of the MMU to provide protection for each process.
Here are some useful links:
Memory Management Unit
Virtual Memory
This is something that modern operating systems do by loading each process in a separate virtual address space. Multiple processes may reference the same virtual address, but the operating system, helped by modern hardware, will map each one to a separate physical address, and make sure that one process cannot access physical memory allocated to another process.¹
¹ Debuggers are a notable exception: operating systems often provide special mechanisms for debuggers to attach to other processes and examine their memory space.
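This separation can be demonstrated with a small sketch, assuming a POSIX system (`os.fork`) and CPython (where `id()` returns the object's virtual address): the child inherits the parent's address space, so a variable sits at the same virtual address in both processes, yet a write in the child never appears in the parent.

```python
import os

value = [42]  # both processes will see this object at the same virtual address

read_fd, write_fd = os.pipe()
pid = os.fork()

if pid == 0:
    # Child: same virtual address as the parent's `value`, but a private copy.
    os.close(read_fd)
    value[0] = 99                                  # modifies the child's copy only
    os.write(write_fd, str(id(value)).encode())    # report the child's address
    os._exit(0)
else:
    # Parent: read the child's virtual address and compare with our own.
    os.close(write_fd)
    child_addr = int(os.read(read_fd, 64).decode())
    os.waitpid(pid, 0)
    same_virtual_address = (child_addr == id(value))  # True: same virtual address
    parent_value = value[0]                           # still 42: child's write invisible
```

Both processes report the same virtual address, but the kernel (via the MMU) maps it to different physical pages once the child writes to it, so the parent's copy is untouched.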
The short answer to your question is that the operating system deals with these issues. They are very serious issues, and a significant part of an operating system's job is keeping everything in its own separate space. The operating system tracks all running programs and makes sure each one stays within its own space. This keeps the stacks separate too: each program runs on its own stack assigned by the OS. How the OS does this assignment is actually a complex task.
According to this thread (not very reliable, I know) memcached does not use the disk, not even virtual memory.
My questions are:
Is this true?
If so, how does memcached ensure that the memory it is assigned never overflows to disk?
memcached avoids going to swap through two mechanisms:
Informing the system administrators that the machines should never go to swap. This allows the admins to either not configure swap space for the machine (which seems like a bad idea to me) or to configure the memory limits of the running applications to ensure that nothing ever goes into swap. (Not just memcached, but all applications.)
The mlockall(2) system call can be used (-k) to ensure that all the process's memory is always locked in memory. This is mediated via the setrlimit(2) RLIMIT_MEMLOCK control, so admins would need to modify e.g. /etc/security/limits.conf to allow the memcached user account to lock a lot more memory than is normal. (Locked memory is mediated to prevent untrusted user accounts from starving the rest of the system of free memory.)
Both these steps are fair assuming the point of the machine is to run memcached and perhaps very little else. This is often a fair assumption, as larger deployments will dedicate several (or many) machines to memcached.
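The locked-memory limit mentioned above can be inspected programmatically. A sketch in Python, assuming a Linux system; the `mlockall(2)` call is made through `ctypes` and will typically fail with `EPERM` or `ENOMEM` when `RLIMIT_MEMLOCK` has not been raised in `/etc/security/limits.conf`:

```python
import ctypes
import resource

# Inspect the current RLIMIT_MEMLOCK values (soft, hard), in bytes.
soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)

# Attempt to lock all current and future pages, as memcached -k does.
MCL_CURRENT, MCL_FUTURE = 1, 2          # constants from <sys/mman.h> on Linux
libc = ctypes.CDLL("libc.so.6", use_errno=True)
rc = libc.mlockall(MCL_CURRENT | MCL_FUTURE)
if rc != 0:
    # Typically EPERM or ENOMEM when RLIMIT_MEMLOCK is too low.
    err = ctypes.get_errno()
```

On an unprivileged account with the default limit, `rc` will be -1; on a properly configured memcached host (or as root) the lock succeeds and none of the process's pages can be paged out to swap.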
You configure memcached to use a fixed amount of memory. When that memory is full memcached just deletes old data to stay under the limit. It is that simple.
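The eviction behavior can be sketched with a tiny fixed-capacity LRU cache. This is an illustration of the idea only; memcached's real allocator (slab-based, with per-slab LRU lists) is more involved.

```python
from collections import OrderedDict

class FixedCache:
    """A fixed-capacity cache that evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def set(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)       # refresh recency on overwrite
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)    # drop the least recently used entry

    def get(self, key):
        if key not in self.items:
            return None                        # cache miss
        self.items.move_to_end(key)            # refresh recency on read
        return self.items[key]

cache = FixedCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.set("c", 3)   # capacity exceeded: "a" is evicted
```

Because the cache never grows past its configured capacity, its memory footprint stays bounded and there is nothing for the OS to push out to swap (limits and locking aside).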
Let's assume we have a process that allocates a socket listening on a specific port, does something with it, and then terminates abnormally. Now a second process starts and wants to allocate a socket listening on the same port that was previously held by the crashed process. Is this socket available for re-allocation?
How does the Operating System recover resources that weren't released properly? Does the OS track the process id along with each allocated resource?
Is this cleanup something I can expect every POSIX compliant system to do?
This is up to the operating system but generally an OS maintains a process control structure to, among other things, manage its resources. When a process allocates a resource from the system (such as opening a file or allocating memory), details of the allocation are placed in that structure. When the process terminates, anything left in it gets cleaned up - but it's best to explicitly clean up as you go.
Specific details will depend upon the operating system, but generally speaking user-code is run in a virtual address space/sandbox where it does not have any direct access to hardware resources. Anything that the user process wants to access/allocate must be provided by calling the OS and asking it for the desired resource.
Thus the OS has a simple way of knowing who has been allocated which resources, and as long as it keeps track of this information, cleaning up resources in the event of a crashed process is as simple as fetching the list of resources allocated to that process, and marking them all as available again.
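For the specific case of a listening port, the kernel releases the socket when the owning process dies, though the address can linger in the TIME_WAIT state for a while if connections were in flight. Setting `SO_REUSEADDR` lets a new process rebind immediately. A minimal sketch, with one process playing both roles and port 0 asking the OS for any free port:

```python
import socket

# First "process": bind a listening socket, then terminate without cleanup.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
first.listen()
port = first.getsockname()[1]
first.close()                          # stands in for the process crashing

# Second "process": rebind to the same port with SO_REUSEADDR.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
second.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
second.bind(("127.0.0.1", port))       # succeeds even if the port is in TIME_WAIT
second.listen()
rebound_port = second.getsockname()[1]
second.close()
```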
We need to write software that would continuously (i.e. new data is sent as it becomes available) send very large files (several TB) to several destinations simultaneously. Some destinations have a dedicated fiber connection to the source, while others do not.
Several questions arise:
We plan to use TCP sockets for this task. What failover procedure would you recommend in order to handle network outages and dropped connections?
What should happen upon upload completion: should the server close the socket? If so, then is it a good design decision to have another daemon provide file checksums on another port?
Could you recommend a method to handle corrupted files, aside from downloading them again? Perhaps I could break them into 10 MB chunks and calculate checksums for each chunk separately?
Thanks.
Since no answers have been given, I'm sharing our own decisions here:
There is a separate daemon for providing checksums for chunks and whole files.
We have decided to abandon the idea of using multicast over VPN for now; we use a multi-process server to distribute the files. The socket is closed and the worker process exits as soon as the file download is complete; any corrupted chunks need to be downloaded separately.
We use a filesystem monitor to capture new data as soon as it arrives at the tier 1 distribution server.
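The per-chunk checksum idea mentioned above can be sketched as follows. SHA-256 and the 10 MB chunk size are assumptions; the receiver compares its digests against the sender's and re-downloads only the chunks that mismatch.

```python
import hashlib
import os
import tempfile

CHUNK_SIZE = 10 * 1024 * 1024  # 10 MB, as suggested in the question

def chunk_checksums(path, chunk_size=CHUNK_SIZE):
    """Return a SHA-256 digest for each fixed-size chunk of the file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests

def corrupted_chunks(local_digests, remote_digests):
    """Indices of chunks whose digests differ and must be re-downloaded."""
    return [i for i, (a, b) in enumerate(zip(local_digests, remote_digests))
            if a != b]

# Tiny demonstration on a temporary file, using a 4-byte chunk size.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello world")          # 11 bytes -> 3 chunks of size 4
    path = tmp.name
sums = chunk_checksums(path, chunk_size=4)
os.unlink(path)
```

A separate checksum daemon (as described above) would serve these digests so that clients can verify each chunk independently of the transfer itself.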