Sending an image as a byte array from MATLAB to Java through sockets

I am trying to send images from a MATLAB server to a Java client through sockets. First I converted the image to a byte array, and on the client side I used the readFully method, which requires a length to be specified.
The problem is that the size of the byte array changes from one image to the next, and when I send the size through the write() method it is not read correctly on the client side.
Here is a snippet of my code.
MATLAB server:
%% convert image
javaImage = im2java2d(image);
arrayStream = java.io.ByteArrayOutputStream;
javax.imageio.ImageIO.write(javaImage, 'jpg', arrayStream);
arrayStream.flush();
byteArray = arrayStream.toByteArray;
%% send size through output stream
outStream.write(size(byteArray, 1));
%% send array
outStream.write(byteArray);
On the client side I have tried many methods such as read, readInt, and readDouble, but I think the trouble is with the write() method. I tried to use writeInt(), but MATLAB did not recognize it even though I included the libraries needed:
java.io
java.net
java.io.DataOutputStream
java.io.DataInputStream
I should also mention that the size of the array is usually over 10000.
I would appreciate any help.
Thank you in advance.

I have finally found the answer, which is so simple that it was the last thing I checked.
When I created the socket input and output streams, I did the following:
inStream = socket.getInputStream;
outStream = socket.getOutputStream;
This gives you the methods of the InputStream and OutputStream classes, such as read() and write(), which operate only on bytes. To read any other type of data you must use the DataInputStream and DataOutputStream classes, filter streams that read and write primitive types such as int and double using methods such as readInt() and writeInt().
To do this, change the above as follows:
inStream = java.io.DataInputStream(socket.getInputStream);
outStream = java.io.DataOutputStream(socket.getOutputStream);
And it's that simple.

Related

In the Web Audio API, how do I obtain an array (e.g. a Float32 array) from a stream (e.g. a microphone stream) for several seconds?

I would like to fill an array from a stream for around ten seconds (I wish to do some processing on the data). So far I can:
(a) obtain the microphone stream using mediaRecorder
(b) use an analyser and analyser.getFloatTimeDomainData(dataArray) to obtain an array, but it is limited in size to only a little over half a second of data. I can also successfully output the data after processing back onto a stream and to outDestination.
(c) I have also experimented with obtaining a 'chunks' array from mediaRecorder directly, but the problem then is that I can't find any MIME type that would give me a simple array of values, i.e. an uncompressed, sample-by-sample, single-channel set of values: a longer version of 'dataArray' in (b).
I am wondering if I am missing a simple way round this problem?
Solutions I have seen tend to use step (b), poll regularly, and then reassemble a longer array; however, the timing seems a bit tricky.
I've also seen suggestions to use audio worklets; I might have to do this, but I would prefer a simpler solution!
Or again, if someone knows how to drive mediaRecorder to output the chunks array in a simple single-channel Float32 array format, that would do the trick.
Or maybe I'm missing something simpler?
I have code showing those steps that have been successful and will upload if anyone requests.

Will Boost.Asio's async_read_some read the whole message?

I am trying to receive a message via HTTP in my C++ program. I am using Boost.Asio for that purpose.
In my handleAccept method I have this code to read the message:
// m_buffer is declared as: std::array<char, 8192> m_buffer;
auto buf = boost::asio::buffer(m_buffer);
newConnection->getSocket().async_read_some(
    buf,
    boost::bind(&TcpServer::handleRead, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
Is this enough to get any package? How do I make sure that it will get packages that are bigger than 8192 bytes?
async_read_some reads a received message up to the size of the buffer that you give it. The number of bytes in a received message is returned in the bytes_transferred placeholder of the callback function; see basic_stream_socket::async_read_some.
If you know the maximum size of the messages that you expect to receive, you can simply use a buffer that is large enough for that message. Otherwise, you'll have to copy the contents of the async_read_some buffer elsewhere and build up the whole received message there.

Basic UVM sequence simulation query

I have a couple of issues with a basic UVM TB I'm trying out to understand sequences and how they work.
(a) bvalid is always picked as 0 in the driver when being updated in the response item.
(b) A couple of error messages for the last 2 transactions (# UVM_ERROR # 18: uvm_test_top.axi_agent1.axi_base_seqr1##axi_base_seq1 [uvm_test_top.axi_agent1.axi_base_seqr1.axi_base_seq1] Response queue overflow, response was dropped)
Here is the link to the compiling code on EDA Playground
http://www.edaplayground.com/x/3x9
Any suggestions on what I'm missing?
Thanks
venkstart
Having a look at the specification for $urandom_range it shows the signature as: function int unsigned $urandom_range( int unsigned maxval, int unsigned minval = 0 ). Change your call to $urandom_range(1, 0) and it should work.
The second error comes from the fact that you are sending responses from the driver and not picking them up in your sequence. This is the line that does it: seq_item_port.item_done(axi_item_driv_src);. Either just do seq_item_port.item_done(); (don't send responses) or put a call to get_response() inside your sequence after finish_item(). What I usually do is update the fields of the original request and just call item_done(). For example, if I start a read transaction, in my driver I would drive the control signals and wait for the DUT to respond, update the data field of the request with the data I got from the DUT and call item_done() in my driver to mark the request as done. This way if I need this data in my sequence (to constrain some future item, for example) I have it.

Reading a varint using cocoaasyncsocket framework

Has anyone had experience of using cocoaasyncsocket together with Google protobuf? I want to separate frames using a varint, which is simple enough using a client/server combo based on Netty, but I don't see a simple way of decoding the initial varint when reading using cocoaasync.
On the C++ side of things, you'll have to use a combination of ReadVarint32() and VarintSize32(). Do something like this:
char buf[4];
some_api_call(buf, sizeof(buf)); /* copy/peek up to 4 bytes off the wire */
CodedInputStream cis(reinterpret_cast<const uint8_t*>(buf), sizeof(buf));
uint32_t frame_sz;
if (!cis.ReadVarint32(&frame_sz)) {
    // Unable to read the frame size
    return false;
}
frame_sz should now hold a valid value that tells you how many bytes you need to read.
uint32_t consumed_bytes = CodedOutputStream::VarintSize32(frame_sz);
Now you know how many bytes were consumed to generate frame_sz.
I'd wrap these calls in a library that I'd extern "C" and link in from your application.

Create a certain size file and filled with no data on iOS

I'm developing an iPhone app. I need to create a file of a certain size on the filesystem, filled with no data at first, then seek to an offset and write data when I get data from somewhere else.
How can I do it?
The lseek BSD function is explicitly capable of that.
man lseek:
The lseek() function allows the file offset to be set beyond the end of the
existing end-of-file of the file. If data is later written at this point,
subsequent reads of the data in the gap return bytes of zeros (until data is
actually written into the gap).
NSMutableData or fseek is probably what you want