How to create servers in Matlab (one computer)

I need to make a simple simulation in Matlab. It consists of me (the client) sending a binary vector b to a server. This server already contains a vector x. I want it to calculate the inner product of vectors b and x and send the result back to me.
Is it possible to create different independent servers in Matlab (on one computer) that can exchange information with each other? Is using TCP/IP server sockets a good idea?
Please help me

Using TCP or UDP would make the simulation generalizable to servers on other computers. Interfaces for these protocols can be found in the Instrument Control Toolbox, but if you don't have that then several toolboxes on the File Exchange also provide interfaces. This one seems the most up to date.
An alternative is to communicate via a memory-mapped file, which MATLAB supports natively, but this would not work if the server were running on a different computer.
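As a concrete sketch of the round trip the question describes (client sends a binary vector b, the server replies with the inner product of b and its own x), here is a minimal TCP version. Python is used for brevity, and the vector x, the port number, and the one-byte-per-element wire format are all illustrative choices; a MATLAB tcpip/tcpclient pair or a Java socket would follow the same accept/read/compute/reply pattern.

```python
import socket
import struct
import threading

X = [1, 0, 1, 1, 0, 1, 0, 1]  # the server's fixed vector x (hypothetical values)

def serve_once(port):
    """Accept one client, read a binary vector b, reply with <b, x>."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    b = conn.recv(len(X))                   # one byte (0 or 1) per element;
    inner = sum(bi * xi for bi, xi in zip(b, X))
    conn.sendall(struct.pack("!i", inner))  # 4-byte big-endian int result
    conn.close()
    srv.close()

def client(port, b):
    """Send vector b, return the inner product computed by the server."""
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(bytes(b))
    # For these tiny messages a single recv suffices; a real client
    # should loop until all expected bytes have arrived.
    result = struct.unpack("!i", s.recv(4))[0]
    s.close()
    return result

t = threading.Thread(target=serve_once, args=(50007,))
t.start()
print(client(50007, [1, 1, 0, 0, 1, 0, 1, 1]))  # prints 2 for these example vectors
t.join()
```

Running several `serve_once` threads on different ports gives the "different independent servers on one computer" setup the question asks about.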

If you don't want to use any other Matlab toolboxes, you could write a simple Java TCP server and client and instantiate them from Matlab.
This previous question is on calling Java from Matlab.
There are many tutorials for setting up Java TCP servers and clients.

Related

Sending data from MATLAB to Processing in real time?

Is there any way to send strings from one program to another while they're both running on the same machine? I'm trying to collect information using MATLAB and send a string whenever an event triggers. On Processing, I'm waiting for the string to be received before updating a GUI. I've been able to get both programs to work separately, but I'm having trouble figuring out how to actually send the information. Is it more viable to rebuild the GUI in Matlab?
Depending on the speed requirements of the real time communication, a low tech way of doing this is to use a common file where Matlab writes time-stamped data and Processing periodically checks the file for new data.
This is one way of doing interprocess communication between two independently running processes. Another, more reliable way, is to use some kind of socket communication (tcp or udp sockets, for example) between the two processes. But programming this might be fairly complicated if you are not fluent with both Matlab and Java.
A third way is that Matlab is actually capable of running Java code directly. So if you can call the Processing code from Matlab, then you might be able to pass the strings directly to your Processing code using Java method arguments, etc.
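The low-tech common-file scheme from the first suggestion might look like the following sketch, with Python standing in for both sides; the log path and JSON-lines format are arbitrary choices, and in practice MATLAB's fprintf would play the writer while Processing's file-reading APIs would do the polling.

```python
import json
import os
import tempfile
import time

LOG = os.path.join(tempfile.gettempdir(), "matlab_to_processing.log")  # hypothetical path

def write_event(label):
    """Producer side (MATLAB's role): append one time-stamped line per event."""
    with open(LOG, "a") as f:
        f.write(json.dumps({"t": time.time(), "event": label}) + "\n")

def poll_new_events(offset):
    """Consumer side (Processing's role): return events appended since byte `offset`."""
    with open(LOG, "rb") as f:
        f.seek(offset)
        chunk = f.read()
    complete, _, partial = chunk.rpartition(b"\n")   # ignore a half-written last line
    events = [json.loads(line) for line in complete.splitlines()]
    return events, offset + len(chunk) - len(partial)

open(LOG, "w").close()            # start with an empty log
write_event("trigger_A")
write_event("trigger_B")
events, pos = write_events = poll_new_events(0)
print([e["event"] for e in events])   # ['trigger_A', 'trigger_B']
```

Tracking a byte offset (rather than re-reading the whole file) keeps the periodic poll cheap, and skipping a trailing partial line avoids reading an event the writer has not finished flushing.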

How can I save data from an Arduino MKR Zero to Matlab? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I am using an Arduino MKR Zero for recording audio with a MEMS microphone. Now I want to get the data from the Arduino into Matlab for further evaluation. The Arduino support package doesn't support the MKR Zero, only the MKR1000. Is there an easy way to get the data saved directly into Matlab, or at least to save it to a .txt file and read it afterwards in Matlab?
So there are a couple of options for doing this, the choice of which probably just depends on preference / experience.
Send the data directly to Matlab
Matlab already has a built-in serial object for reading and writing to the serial ports (in newer releases this interface is superseded by serialport, but the pattern is the same). Below is an example of using serial in Matlab to open, read, write, and close the serial port (with "COM1" being a Windows-style device name; see the docs for more info).
s = serial('COM1','BaudRate',1200);
fopen(s)
fprintf(s,'*IDN?')
idn = fscanf(s);
fclose(s)
So you can write the data directly from the Arduino's Serial.write(), and if Matlab is listening at the correct baud rate then job's a good'n. I would recommend as high a baud rate as it can handle if you want anything in real time, but it doesn't seem like you need that, so you can always send the data a bit delayed and buffer it (maybe to a text file or .mat file if you need it).
Send the data to another program
Since Matlab (generally speaking, excluding toolboxes) appears single-threaded to the user, it may make more sense to write another program specifically for receiving this serial data. Another language may also run a lot faster than Matlab, and you could learn new skills in the process.
You will find many examples of reading serial data in many languages, including a few I have attached for reference; the idea being that you write the data to a text file and read it from Matlab when Matlab is ready: C#, Java, Rust, Python, etc.
If you wanted to get fancy you could do all of the serial reading in another language and send it to Matlab via local network sockets. Or even use Java's native interface with Matlab to handle multiple Arduinos sending data at the same time (probably unnecessary).
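As a sketch of that write-to-a-text-file handoff, assuming the Arduino prints one ASCII sample per line with Serial.println(): the reader below is Python, with an in-memory stream standing in for the real port so the example runs without hardware; with pyserial you would pass serial.Serial("COM3", 115200) instead (port name and file path are illustrative).

```python
import io
import os
import tempfile

# A BytesIO stands in for the serial port so this runs without hardware;
# with pyserial you'd use serial.Serial("COM3", 115200) here instead.
fake_port = io.BytesIO(b"512\n498\n-17\n503\n")  # ASCII samples, one per line

def dump_samples(port, path, n):
    """Read n newline-terminated samples from `port` and append them to `path`."""
    with open(path, "a") as out:
        for _ in range(n):
            line = port.readline().decode("ascii").strip()
            if line:                      # skip blank or partial lines
                out.write(line + "\n")

path = os.path.join(tempfile.mkdtemp(), "mic_samples.txt")
dump_samples(fake_port, path, 4)
print(open(path).read().split())          # ['512', '498', '-17', '503']
```

Matlab can then pick up the finished file with readmatrix (or dlmread in older releases) whenever it is ready.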
Summary
I would probably go with the first option if you want something simple to set up that gets the job done quickly, but I would maybe look at an alternative for a more permanent solution.
Extra hardcore method (You've been warned)
I'm assuming you're using I2S for the audio recording? In that case the SPI would be free to transmit all of the data to the PC. You could use a breakout module to convert your SPI messages to I2C, and the one I linked has virtual COM ports, so it could again act as a serial device. Or you could build a custom driver to read the messages coming in over I2C. Maybe you could push the speed higher than the current serial-over-USB port? It would be cool to compare them.

Multiple systems sharing resources on multiple SoCs

I have some Raspberry Pi's from previous projects/learning and I would like to pool their resources to make a differential drive robot.
Two Pi's would have one camera each for a vision system, one would be connected to an Arduino to read analog sensors, one would drive the motors, and the last Pi is the "control", hosting a user interface (web app). Nothing really special here! But I would like to be able to share the resources of all the Pi's for improved performance...
My thoughts on sharing resources is one of two approaches:
1) Use distributed memcached as a RAM cluster and run each sub system on one CPU only to avoid data races.
or
2) Use a messaging layer to distribute processing across all CPUs.
To avoid a lot of headache, I thought I could use MPI, since it does a lot of the heavy lifting when it comes to messaging. However, I can't seem to find any examples of robotics projects using MPI.
It looks like MPI is simplest to design for when the workload is supervised learning or genomics (same code, large data sets).
In my case, each subsystem runs very different code from the others. But, for example, the vision system runs the same code on a stream of hundreds or thousands of images. So why not use MPI for the vision, and let the "control" schedule when it starts/stops?
Then use its output as input for the next system, which also runs the same code, so it can be parallelized too.
So my question is:
Is there a reason why MPI is not a common approach for things like this in robotics? If so, why, and what is a good alternative?
There's CUDA-aware MPI for GPUs, so maybe this approach is not too far-fetched?

Simulink: Possible to connect blocks without creating port to port connection line?

I'm starting out in Simulink and just wondering if it is possible to connect two blocks without actually running an explicit connection line from port to port? As the system gets larger, these connections create too much jumble and mess.
For example, is it possible to give a name to some signal, use that name to label the output of one block, and use the same name at the input of the destination block to indicate an implied connection? Just as it's done in schematic capture in, for example, SPICE tools. Or is there some other mechanism to reduce the complexity of connections in the model?
Thanks a lot for your time.

Best way to generate million tcp connection

I need to find the best way to generate a million TCP connections (more is good, less is bad), as quickly as the machine allows. :D
Why do I need this? I am testing a NAT, and I want to load it with as many entries as possible.
My current method is to generate a subnet on a dummy eth and serially connect from that dummy to the actual eth, through the LAN and the NAT, to the host.
subnetnicfake----routeToRealEth----RealEth---cable---lan----nat---host.
|<-------------on my machine-------------------->|
One million simultaneous TCP sessions might be difficult: if you rely on the standard connect(2) sockets API to create the connections, you're going to use a lot of physical memory: each session will require a struct inet_sock, which includes a struct sock, which includes a struct sock_common.
I quickly guessed at sizes: struct sock_common requires roughly 58 bytes, struct sock roughly 278 bytes, and struct inet_sock roughly 70 bytes.
That's roughly 406 bytes per connection, or about 387 megabytes of data for a million sessions, before you have receive and send buffers. (See tcp_mem, tcp_rmem, tcp_wmem in tcp(7) for some information.)
If you choose to go this route, I'd suggest setting the per-socket memory controls as low as they go. I wouldn't be surprised if 4096 is the lowest you can set it. (SK_MEM_QUANTUM is PAGE_SIZE, stored into sysctl_tcp_rmem[0] and sysctl_tcp_wmem[0].)
That's another eight gigabytes of memory -- four for receive buffers, four for send buffers.
And that's in addition to what the system requires for your programs to open one million file descriptors. (See /proc/sys/fs/file-max in proc(5).)
All of this memory is not swappable -- the kernel pins its memory -- so you're really only approaching this problem on a 64-bit machine with at least eight gigabytes of memory. Probably 10-12 would do better.
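To illustrate the per-socket buffer controls mentioned above: from user space you can at least request small buffers with SO_RCVBUF/SO_SNDBUF. This is only a sketch (in Python for brevity); the kernel doubles the requested value for its own bookkeeping and clamps it to its minimum, and the tcp_rmem/tcp_wmem sysctls remain the real knobs.

```python
import socket

# Request the smallest buffers the kernel will grant on one TCP socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
s.close()
print(rcv, snd)   # kernel-dependent; Linux reports roughly double the request

# Back-of-envelope from the struct sizes above, for one million sockets:
per_sock_structs = 58 + 278 + 70              # sock_common + sock + inet_sock
print(per_sock_structs * 1_000_000 / 2**20)   # ~387 MiB of socket structs alone
```

Whatever value is read back, multiplied by two buffers and a million sockets, is the gigabytes-scale figure the answer arrives at.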
One approach taken by the Paketto Keiretsu tools is to open a raw connection, perform all the TCP three-way handshakes using a single raw socket, and try to compute whatever is needed, rather than store it, to handle much larger amounts of data than usual. Try to store as little as possible for each connection, and don't use naive lists or trees of structures.
The Paketto Keiretsu tools were last updated around 2003, so they still might not scale into the million range well, but they would definitely be my starting point if this were my problem to solve.
After searching for many days, I found the answer. Apparently this problem is well studied, and it should be, since it's so fundamental. The problem was that I didn't know what it is called. Among those in the know, it is apparently called the c10k problem. What I wanted is the c1m problem. However, there seems to have been some effort to reach c500k, i.e. 500k concurrent connections.
http://www.kegel.com/c10k.html AND
http://urbanairship.com/blog/2010/09/29/linux-kernel-tuning-for-c500k/
@deadalnix: read the links above and enlighten yourself.
Have you tried using tcpreplay? You could prepare - or capture - one or more PCAP network capture files with the traffic that you need, and have one or more instances of tcpreplay replay them to stress-test your firewall/NAT.
As long as you have only 65536 TCP ports available, this is impossible to achieve unless you have an army of servers to connect to.
So, then, what is the best way? Just open as many connections as you can on the servers and see what happens.
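In the spirit of that last suggestion, the "open as many as you can" experiment is easy to script; here is a loopback sketch in Python (the port number is arbitrary) that stops counting at the first failure. Scaling n up is where ulimit -n, the ephemeral port range, and the memory limits discussed above start to bite.

```python
import socket

def count_connections(n, port=50100):
    """Open up to n simultaneous loopback TCP connections; report how many succeeded."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(n)
    clients, accepted = [], []
    for _ in range(n):
        try:
            c = socket.create_connection(("127.0.0.1", port))
        except OSError:          # out of ports, fds, or memory: stop counting
            break
        clients.append(c)
        accepted.append(srv.accept()[0])
    opened = len(clients)
    for s in clients + accepted + [srv]:
        s.close()
    return opened

print(count_connections(100))    # 100 on a typical machine; the ceiling is what matters
```

Against a real NAT you would replace the loopback address with hosts on the far side, which is exactly the "army of servers" constraint the previous answer points out.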