How to determine compressed size from zlib for gzipped data? - sockets

I'm using zlib to perform gzip compression. zlib writes the data directly to an open TCP socket after compressing it.
/* socket_fd is a file descriptor for an open TCP socket */
gzFile gzf = gzdopen(socket_fd, "wb");
int uncompressed_bytes_consumed = gzwrite(gzf, buffer, 1024);
(of course all error handling is removed)
The question is: how do you determine how many bytes were written to the socket? All the gz* functions in zlib deal with byte counts/offsets in the uncompressed domain, and tell (seek) doesn't work for sockets.
The zlib.h header says "This library can optionally read and write gzip streams in memory as well." Writing to a buffer would work (then I can write the buffer to the socket subsequently), but I can't see how to do that with the interface.

You'll be able to do this with the deflate* series of calls. I'm not going to show you everything, but this example program (which I had named "test.c" in my directory) should help you get started:
#include <zlib.h>
#include <stdlib.h>
#include <stdio.h>
char InputBufferA[4096];
char OutputBufferA[4096];
int main(int argc, char *argv[])
{
    z_stream Stream;
    int InputSize;
    FILE *FileP;

    Stream.zalloc = Z_NULL;   /* use zlib's default allocator */
    Stream.zfree = Z_NULL;
    Stream.opaque = Z_NULL;

    /* initialize compression */
    deflateInit(&Stream, 3);

    FileP = fopen("test.c", "rb");
    InputSize = fread((void *) InputBufferA, 1, sizeof(InputBufferA), FileP);
    fclose(FileP);

    Stream.next_in = (Bytef *) InputBufferA;
    Stream.avail_in = InputSize;
    Stream.next_out = (Bytef *) OutputBufferA;
    Stream.avail_out = sizeof(OutputBufferA);

    deflate(&Stream, Z_SYNC_FLUSH);

    /* OutputBufferA is now filled in with the compressed data. */
    printf("%lu bytes input compressed to %lu bytes\n",
           Stream.total_in, Stream.total_out);

    deflateEnd(&Stream);
    exit(0);
}
Consult the deflate documentation from zlib.h.
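To connect this back to the original question: after the deflate() call, the number of compressed bytes produced is what you would write to the socket yourself, either Stream.total_out (cumulative) or sizeof(OutputBufferA) - Stream.avail_out for this call alone. A minimal sketch, assuming socket_fd is the open TCP socket from the question (and <unistd.h> for write()):

/* Send exactly the compressed bytes produced by the deflate() call above.
   socket_fd is assumed to be the connected TCP socket from the question. */
size_t compressed_len = sizeof(OutputBufferA) - Stream.avail_out;
ssize_t sent = write(socket_fd, OutputBufferA, compressed_len);
if (sent < 0)
    perror("write");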

zlib can, in fact, write gzip formatted data to a buffer in memory.
This zlib faq entry defers to comments in zlib.h. In the header file, the comment for deflateInit2() mentions that you should (arbitrarily?) add 16 to the 4th parameter (windowBits) in order to cause the library to format the deflate stream with the gzip format (instead of the default "zlib" format).
This code gets the zlib state set up properly to encode gzip to a buffer:
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

z_stream stream;
stream.zalloc = Z_NULL;
stream.zfree = Z_NULL;
stream.opaque = Z_NULL;

int level = Z_DEFAULT_COMPRESSION;
int method = Z_DEFLATED;          /* mandatory */
int windowBits = 15 + 16;         /* 15 is the default, as if deflateInit() were used;
                                     add 16 to enable the gzip format */
int memLevel = 8;                 /* default */
int strategy = Z_DEFAULT_STRATEGY;

if (deflateInit2(&stream, level, method, windowBits, memLevel, strategy) != Z_OK)
{
    fprintf(stderr, "deflateInit failed\n");
    exit(EXIT_FAILURE);
}

/* now use the deflate function as usual to gzip compress */
/* from one buffer to another. */
I confirmed that this procedure yields the exact same binary output as the gzopen/gzwrite/gzclose interface.
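For completeness, a minimal sketch of the compression step that follows the setup above; the payload and buffer names are mine, not from the original answer, and <string.h> is assumed for strlen():

/* Sketch (my own naming): gzip-compress a small payload from one buffer to
   another in a single call, then note how many compressed bytes were produced. */
unsigned char out_buf[8192];
const char *msg = "hello, gzip";

stream.next_in = (Bytef *) msg;
stream.avail_in = (uInt) strlen(msg);
stream.next_out = out_buf;
stream.avail_out = sizeof(out_buf);

if (deflate(&stream, Z_FINISH) != Z_STREAM_END)
{
    fprintf(stderr, "deflate failed (output buffer too small?)\n");
    exit(EXIT_FAILURE);
}

size_t gzip_len = sizeof(out_buf) - stream.avail_out; /* bytes you can now send on the socket */
deflateEnd(&stream);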

Related

How can we determine whether a socket is ready to read/write?

How can we determine whether a socket is ready to read or write in socket programming?
On Linux, use select() or poll().
On Windows, you can use WSAPoll() or select(), both from winsock2.
Mac OS X also has select() and poll().
#include <sys/select.h>
int select(int nfds, fd_set *readfds, fd_set *writefds,
           fd_set *exceptfds, struct timeval *timeout);
select() and pselect() allow a program to monitor multiple file descriptors, waiting until one or more of the file descriptors become "ready" for some class of I/O operation (e.g., input possible). A file descriptor is considered ready if it is possible to perform the corresponding I/O operation (e.g., read(2)) without blocking. – https://linux.die.net/man/3/fd_set
#include <poll.h>
int poll(struct pollfd *fds, nfds_t nfds, int timeout);
poll() performs a similar task to select(2): it waits for one of a set of file descriptors to become ready to perform I/O.
– https://linux.die.net/man/2/poll
Example of select usage:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>
int
main(void)
{
    fd_set rfds;
    struct timeval tv;
    int retval;

    /* Watch stdin (fd 0) to see when it has input. */
    FD_ZERO(&rfds);
    FD_SET(0, &rfds);

    /* Wait up to five seconds. */
    tv.tv_sec = 5;
    tv.tv_usec = 0;

    retval = select(1, &rfds, NULL, NULL, &tv);
    /* Don't rely on the value of tv now! */

    if (retval == -1)
        perror("select()");
    else if (retval)
        printf("Data is available now.\n");
        /* FD_ISSET(0, &rfds) will be true. */
    else
        printf("No data within five seconds.\n");

    exit(EXIT_SUCCESS);
}
Explanation of the above code:
FD_ZERO initializes the rfds set. FD_SET(0, &rfds) adds fd 0 (stdin) to the set. FD_ISSET can be used to check whether a specific file descriptor is ready after select returns.
The select call in this example waits until rfds has input or until 5 seconds passes. The two NULLs in the select call are where file descriptor sets (fd_sets) to be checked for ready to write status and exceptions, respectively, would be passed. The tv argument is the number of seconds and microseconds to wait. The first argument to select, nfds, is the highest numbered file descriptor in any of the three sets (read, write, exceptions sets) plus one.
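The question asks specifically about write readiness on a socket, which works the same way with the second (writefds) set. A minimal sketch, assuming sock_fd is an already-connected socket descriptor (the name is my own):

/* Wait up to 5 seconds for sock_fd to become writable.
   sock_fd is assumed to be a connected socket descriptor. */
fd_set wfds;
struct timeval tv;

FD_ZERO(&wfds);
FD_SET(sock_fd, &wfds);
tv.tv_sec = 5;
tv.tv_usec = 0;

int r = select(sock_fd + 1, NULL, &wfds, NULL, &tv);
if (r == -1)
    perror("select()");
else if (r > 0 && FD_ISSET(sock_fd, &wfds))
    printf("Socket is ready for writing.\n");
else
    printf("Socket not writable within five seconds.\n");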
Example of poll usage (from man7.org):
/* poll_input.c
Licensed under GNU General Public License v2 or later.
*/
#include <poll.h>
#include <fcntl.h>
#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); \
} while (0)
int
main(int argc, char *argv[])
{
    int nfds, num_open_fds;
    struct pollfd *pfds;

    if (argc < 2) {
        fprintf(stderr, "Usage: %s file...\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    num_open_fds = nfds = argc - 1;
    pfds = calloc(nfds, sizeof(struct pollfd));
    if (pfds == NULL)
        errExit("malloc");

    /* Open each file on command line, and add it to 'pfds' array. */

    for (int j = 0; j < nfds; j++) {
        pfds[j].fd = open(argv[j + 1], O_RDONLY);
        if (pfds[j].fd == -1)
            errExit("open");

        printf("Opened \"%s\" on fd %d\n", argv[j + 1], pfds[j].fd);

        pfds[j].events = POLLIN;
    }

    /* Keep calling poll() as long as at least one file descriptor is
       open. */

    while (num_open_fds > 0) {
        int ready;

        printf("About to poll()\n");
        ready = poll(pfds, nfds, -1);
        if (ready == -1)
            errExit("poll");

        printf("Ready: %d\n", ready);

        /* Deal with array returned by poll(). */

        for (int j = 0; j < nfds; j++) {
            char buf[10];

            if (pfds[j].revents != 0) {
                printf("  fd=%d; events: %s%s%s\n", pfds[j].fd,
                       (pfds[j].revents & POLLIN)  ? "POLLIN "  : "",
                       (pfds[j].revents & POLLHUP) ? "POLLHUP " : "",
                       (pfds[j].revents & POLLERR) ? "POLLERR " : "");

                if (pfds[j].revents & POLLIN) {
                    ssize_t s = read(pfds[j].fd, buf, sizeof(buf));
                    if (s == -1)
                        errExit("read");
                    printf("    read %zd bytes: %.*s\n",
                           s, (int) s, buf);
                } else {                /* POLLERR | POLLHUP */
                    printf("    closing fd %d\n", pfds[j].fd);
                    if (close(pfds[j].fd) == -1)
                        errExit("close");
                    num_open_fds--;
                }
            }
        }
    }

    printf("All file descriptors closed; bye\n");
    exit(EXIT_SUCCESS);
}
Explanation of above code:
This code is a bit more complex than the previous example.
argc is the number of arguments and argv is the array of arguments given to the program; argv[0] is usually the name of the program. If argc is less than 2 (meaning no filenames were given), the program outputs a usage message and exits with a failure code.
pfds = calloc(nfds, sizeof(struct pollfd)); allocates memory for an array of struct pollfd which is nfds elements long and zeroes the memory. Then there is a NULL check; if pfds is NULL, that means calloc failed (usually because the program ran out of memory), so the program prints the error with perror and exits.
The for loop opens each filename specified in argv and assigns the resulting descriptor to the corresponding element of the pfds array. It then sets .events on each element to POLLIN to tell poll() to check each file descriptor for readability.
The while loop is where the actual call to poll() happens. The array of struct pollfd, pfds, the number of fds, nfds, and a timeout of -1 are passed to poll(). The return value is then checked for an error (-1 is what poll() returns when there is an error); if there is an error, the program prints an error message and exits. Otherwise the number of ready file descriptors is printed.
In the second for loop inside the while loop, the program iterates over the array of pollfds and checks the .revents field of each structure. If that field is nonzero, an event occurred on the corresponding file descriptor. The program prints the file descriptor, and the event, which can be POLLIN (ready for input), POLLHUP (hang up), or POLLERR (error condition). If the event was POLLIN, the file is ready to be read.
The program then reads up to 10 bytes into buf. If an error happens while reading, the program prints an error and exits. Otherwise, it prints the number of bytes read and the contents of buf.
In case of error or hang up (POLLERR, POLLHUP) the program closes the file descriptor and decrements num_open_fds.
Finally the program says that all file descriptors are closed and exits with EXIT_SUCCESS.

TMS320F2812 FatFs f_write returns FR_DISK_ERR

I have a problem with an SD card. I'm using the FatFs library version R0.10b to access the SD card.
My code:
// .... //
FATFS fatfs;
FIL plik;
FRESULT fresult,res1,res2,res3,res4,res5;
UINT zapisanych_bajtow = 0 , br;
UINT zapianie_bajtow = 0;
char * buffor = "123456789abcdef\r\n";
unsigned short int i;
void main(void) {
    // ... //
    res1 = f_mount(0, &fatfs); // returns FR_OK
    res2 = f_open(&plik, "f721.txt", FA_OPEN_ALWAYS | FA_WRITE); // returns FR_OK
    if (res2 == FR_OK)
    {
        res3 = f_write(&plik, (const void *) buffor, 17, &zapisanych_bajtow); // returns FR_DISK_ERR
    }
    res4 = f_close(&plik); // returns FR_DISK_ERR
    for (;;)
    {
    }
}
Any idea what might be wrong?
I had a similar error, with just one difference: I tried to write 4096 bytes at once with f_write, and it always returned FR_DISK_ERR.
This was caused by trying to write more than the size of the IO buffer in FatFs's FIL structure (defined in ff.h).
typedef struct {
    FATFS*  fs;           /* Pointer to the related file system object (**do not change order**) */
    WORD    id;           /* Owner file system mount ID (**do not change order**) */
    BYTE    flag;         /* Status flags */
    BYTE    err;          /* Abort flag (error code) */
    DWORD   fptr;         /* File read/write pointer (Zeroed on file open) */
    DWORD   fsize;        /* File size */
    DWORD   sclust;       /* File start cluster (0:no cluster chain, always 0 when fsize is 0) */
    DWORD   clust;        /* Current cluster of fpter (not valid when fprt is 0) */
    DWORD   dsect;        /* Sector number appearing in buf[] (0:invalid) */
    DWORD   dir_sect;     /* Sector number containing the directory entry */
    BYTE*   dir_ptr;      /* Pointer to the directory entry in the win[] */
    DWORD*  cltbl;        /* Pointer to the cluster link map table (Nulled on file open) */
    UINT    lockid;       /* File lock ID origin from 1 (index of file semaphore table Files[]) */
    BYTE    buf[_MAX_SS]; /* File private data read/write window */
} FIL;
The last array, buf[_MAX_SS], is the file IO buffer. _MAX_SS is a user-defined parameter (defined in ff.h), so you can either decrease the number of bytes written at once or, if need be, change the _MAX_SS value.
I know this is not your case because you only write 17 bytes at once, but this can be helpful for others.
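A minimal sketch of the chunked-write workaround described above; the helper name, chunk size and buffer handling are my own, not from FatFs:

/* Sketch: write a large buffer in chunks no bigger than CHUNK bytes.
   'fil' is an open FIL; 'data'/'total' hold the payload to be written. */
#define CHUNK 512   /* keep each f_write below the FIL IO buffer size */

FRESULT write_in_chunks(FIL *fil, const BYTE *data, UINT total)
{
    UINT written;
    while (total > 0) {
        UINT n = (total > CHUNK) ? CHUNK : total;
        FRESULT rc = f_write(fil, data, n, &written);
        if (rc != FR_OK || written != n)
            return (rc != FR_OK) ? rc : FR_DISK_ERR;
        data  += written;
        total -= written;
    }
    return FR_OK;
}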
It has been a few years since I last worked with the TMS, but maybe this will help:
FA_OPEN_ALWAYS opens the file if it exists; if not, a new file is created.
To append data to the file, use the f_lseek() function after opening the file in this mode (see the sketch after these flag descriptions).
If the file must not already exist, use:
FA_CREATE_NEW creates a new file. The function fails
with FR_EXIST if the file already exists.
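A minimal sketch of that append pattern: open with FA_OPEN_ALWAYS, then f_lseek() to the end of the file before writing (error handling shortened; the file name is just an example):

/* Open (or create) the file and move the write pointer to its end,
   so subsequent f_write() calls append instead of overwriting. */
FIL fil;
FRESULT rc = f_open(&fil, "f721.txt", FA_OPEN_ALWAYS | FA_WRITE);
if (rc == FR_OK) {
    rc = f_lseek(&fil, f_size(&fil));   /* f_size() is the FatFs macro for the current file size */
    if (rc == FR_OK) {
        UINT bw;
        rc = f_write(&fil, "appended line\r\n", 15, &bw);
    }
    f_close(&fil);
}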
I had the same issue with implementation of Chan FatFs on MSP430- always received FR_DISK_ERR result on calling disk_write().
The cause of my issue was the following:
the operation failed on the xmit_datablock() call, which returned 0.
xmit_datablock() failed because xmit_spi_multi() failed.
xmit_spi_multi() failed because it was not enough to just transmit the bytes from the buffer;
it was necessary to read from RXBUF after every write.
Here is how it looks after the issue was fixed:
/* Block SPI transfers */
static void xmit_spi_multi (
    const BYTE* buff,   /* Data to be sent */
    UINT cnt            /* Number of bytes to send */
)
{
    do {
        volatile char x;
        UCA1TXBUF = *buff++;  while (!(UCA1IFG & UCRXIFG)) ;  x = UCA1RXBUF;
        UCA1TXBUF = *buff++;  while (!(UCA1IFG & UCRXIFG)) ;  x = UCA1RXBUF;
    } while (cnt -= 2);
}
Before fixing the issue there was no read from UCA1RXBUF following every write to UCA1TXBUF.
After fixing xmit_spi_multi() my issue with FR_DISK_ERR in disk_write() was solved.

Chunked Encoding using Flac on iOS

I found a library that helps convert WAV files to FLAC:
https://github.com/jhurt/wav_to_flac
I also succeeded in compiling FLAC for the platform, and it works fine.
I've been using this library to convert the audio to FLAC after capturing it in WAV format, and then sending the result to my server.
The problem is that the audio file can be long, so precious time is wasted.
What I want is to encode the audio as FLAC and send it to the server while capturing, not after capturing stops. So I need some help on how to do that (encoding FLAC directly from the audio so I can send it to my server)...
In my library called libsprec, you can see an example of both recording a WAV file (here) and converting it to FLAC (here). (Credits: the audio recording part heavily relies on Erica Sadun's work, for the record.)
Now if you want to do this in one step, you can do that as well. The trick is that you have to do the initialization of both the Audio Queues and the FLAC library first, then "interleave" the calls to them, i.e. when you get some audio data in the callback function of the Audio Queue, you immediately FLAC-encode it.
I don't think, however, that this would be much faster than recording and encoding in two separate steps. The heavy part of the processing is the recording and the maths in the encoding itself, so re-reading the same buffer (or, I dare say, even a file!) won't add much to the processing time.
That said, you may want to do something like this:
// First, we initialize the Audio Queue
AudioStreamBasicDescription desc;
desc.mFormatID = kAudioFormatLinearPCM;
desc.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
desc.mReserved = 0;
desc.mSampleRate = SAMPLE_RATE;
desc.mChannelsPerFrame = 2; // stereo (?)
desc.mBitsPerChannel = BITS_PER_SAMPLE;
desc.mBytesPerFrame = BYTES_PER_FRAME;
desc.mFramesPerPacket = 1;
desc.mBytesPerPacket = desc.mFramesPerPacket * desc.mBytesPerFrame;
AudioQueueRef queue;
OSStatus status;

status = AudioQueueNewInput(
    &desc,
    audio_queue_callback,   // our custom callback function
    NULL,
    NULL,
    NULL,
    0,
    &queue
);
if (status)
return status;
AudioQueueBufferRef buffers[NUM_BUFFERS];
for (int i = 0; i < NUM_BUFFERS; i++) {
    status = AudioQueueAllocateBuffer(
        queue,
        0x5000,        // max buffer size
        &buffers[i]
    );
    if (status)
        return status;

    status = AudioQueueEnqueueBuffer(
        queue,
        buffers[i],
        0,
        NULL
    );
    if (status)
        return status;
}
// Then, we initialize the FLAC encoder:
FLAC__StreamEncoder *encoder;
FLAC__StreamEncoderInitStatus init_status;
FILE *infile;
const char *dataloc;
uint32_t rate; /* sample rate */
uint32_t total; /* number of samples in file */
uint32_t channels; /* number of channels */
uint32_t bps; /* bits per sample */
uint32_t dataoff; /* offset of PCM data within the file */
int err;
/*
* BUFFSIZE samples * 2 bytes per sample * 2 channels
*/
FLAC__byte buffer[BUFSIZE * 2 * 2];
/*
* BUFFSIZE samples * 2 channels
*/
FLAC__int32 pcm[BUFSIZE * 2];
/*
* Create and initialize the FLAC encoder
*/
encoder = FLAC__stream_encoder_new();
if (!encoder)
return -1;
FLAC__stream_encoder_set_verify(encoder, true);
FLAC__stream_encoder_set_compression_level(encoder, 5);
FLAC__stream_encoder_set_channels(encoder, NUM_CHANNELS); // 2 for stereo
FLAC__stream_encoder_set_bits_per_sample(encoder, BITS_PER_SAMPLE); // 16 for 16-bit samples (per channel)
FLAC__stream_encoder_set_sample_rate(encoder, SAMPLE_RATE);
init_status = FLAC__stream_encoder_init_stream(encoder, flac_callback, NULL, NULL, NULL, NULL);
if (init_status != FLAC__STREAM_ENCODER_INIT_STATUS_OK)
    return -1;
// We now start the Audio Queue...
status = AudioQueueStart(queue, NULL);
// And when it's finished, we clean up the FLAC encoder...
FLAC__stream_encoder_finish(encoder);
FLAC__stream_encoder_delete(encoder);
// and the audio queue and its belongings too
AudioQueueFlush(queue);
AudioQueueStop(queue, false);
for (int i = 0; i < NUM_BUFFERS; i++)
    AudioQueueFreeBuffer(queue, buffers[i]);
AudioQueueDispose(queue, true);
// In the audio queue callback function, we do the encoding:
void audio_queue_callback(
    void *data,
    AudioQueueRef inAQ,
    AudioQueueBufferRef buffer,
    const AudioTimeStamp *start_time,
    UInt32 num_packets,
    const AudioStreamPacketDescription *desc
)
{
    unsigned char *buf = buffer->mAudioData;

    for (size_t i = 0; i < num_packets * channels; i++) {
        uint16_t lsb = *(uint8_t *)(buf + i * 2);      /* little-endian 16-bit samples */
        uint16_t msb = *(uint8_t *)(buf + i * 2 + 1);
        uint16_t usample = (msb << 8) | lsb;

        union {
            uint16_t usample;
            int16_t ssample;
        } u;

        u.usample = usample;
        pcm[i] = u.ssample;
    }

    FLAC__bool succ = FLAC__stream_encoder_process_interleaved(encoder, pcm, num_packets);
    if (!succ) {
        // handle_error();
    }
}
// Finally, in the FLAC stream encoder callback:
FLAC__StreamEncoderWriteStatus flac_callback(
    const FLAC__StreamEncoder *encoder,
    const FLAC__byte buffer[],
    size_t bytes,
    unsigned samples,
    unsigned current_frame,
    void *client_data
)
{
    // Here process `buffer' and stuff (e.g. send the compressed bytes to your server),
    // then:
    return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
}
You are welcome.
Your question is not very specific, but you need to use Audio Recording Services, which will let you get access to the audio data in chunks, and then move the data you get from there into the streaming interface of the FLAC encoder. You cannot use the WAV-to-FLAC program you linked to; you have to tap into the FLAC library yourself. API docs here.
Example on how to use a callback here.
Can't you record your audio in WAV using Audio Queue Services and process the output packets with your library?
Edit, from the Apple dev docs:
"Applications writing AIFF and WAV files must either update the data header’s size field at the end of recording—which can result in an unusable file if recording is interrupted before the header is finalized—or they must update the size field after recording each packet of data, which is inefficient."
Apparently it is quite hard to encode a WAV file on the fly.

a program that allocates huge chunks of memory using mmap(say 1GB) [duplicate]

I am writing a program that allocates huge chunks of memory using mmap and then accesses random memory locations to read and write into it.
I just tried out the following code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
int main() {
    int fd, len = 1024*1024;
    fd = open("hello", O_READ);
    char *addr = mmap(0, len, PROT_READ+PROT_WRITE, MAP_SHARED, fd, 0);
    for (fd = 0; fd < len; fd++)
        putchar(addr[fd]);
    if (addr == MAP_FAILED) { perror("mmap"); exit(1); }
    printf("mmap returned %p, which seems readable and writable\n", addr);
    munmap(addr, len);
    return 0;
}
But I cannot execute this program, is there anything wrong with my code?
First of all, the code won't even compile on my debian box. O_READ isn't a correct flag for open() as far as I know.
Then, you first use fd as a file descriptor and then you use it as a counter in your for loop.
I don't understand what you're trying to do, but I think you misunderstood something about mmap.
mmap is used to map a file into memory; this way you can read/write the created memory mapping instead of using functions to access the file.
Here's a short program that opens a file, maps it into memory and prints the returned pointer:
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd;
    int result;
    int len = 1024 * 1024;

    fd = open("hello", O_RDWR | O_CREAT | O_TRUNC, (mode_t) 0600);

    // stretch the file to the wanted length; writing something at the end is mandatory
    result = lseek(fd, len - 1, SEEK_SET);
    if (result == -1) { perror("lseek"); exit(1); }

    result = write(fd, "", 1);
    if (result == -1) { perror("write"); exit(1); }

    char *addr = mmap(0, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); exit(1); }

    printf("mmap returned %p, which seems readable and writable\n", addr);

    result = munmap(addr, len);
    if (result == -1) { perror("munmap"); exit(1); }

    close(fd);
    return 0;
}
I left out the for loop, since I didn't understand its purpose. Since you create a new file and want to map it with a given length, we have to "stretch" the file to that length too.
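If the goal is really just to allocate a large chunk of memory (as the title suggests) rather than to map a file, an anonymous mapping needs no file descriptor at all. This is my own sketch, not part of the answer above:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void) {
    size_t len = (size_t)1024 * 1024 * 1024;   /* 1 GB */

    /* Anonymous mapping: backed by zero-filled pages, no file involved. */
    char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (addr == MAP_FAILED) { perror("mmap"); exit(1); }

    addr[0] = 'x';           /* touch a couple of locations */
    addr[len / 2] = 'y';
    printf("mmap returned %p, which seems readable and writable\n", addr);

    munmap(addr, len);
    return 0;
}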
Hope this helps.

collect packet length in pcap file

Hi guys, how can I collect the packet length for each packet in a pcap file? Thanks a lot.
I suggest a high-tech method, which very few people know: reading the documentation.
man pcap tells us there are actually two different lengths available:
caplen   a bpf_u_int32 giving the number of bytes of the packet that are
         available from the capture

len      a bpf_u_int32 giving the length of the packet, in bytes (which
         might be more than the number of bytes available from the
         capture, if the length of the packet is larger than the maximum
         number of bytes to capture)
An example in C:
/* Grab a packet */
packet = pcap_next(handle, &header);
if (packet == NULL) {   /* End of file */
    break;
}
printf("Got a packet with length of [%d]\n", header.len);
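To get the length of every packet in the file, a minimal self-contained sketch (the file name and error handling are my own additions) could look like this:

#include <stdio.h>
#include <pcap.h>

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    struct pcap_pkthdr header;
    const u_char *packet;

    pcap_t *handle = pcap_open_offline("packets.pcap", errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_offline failed: %s\n", errbuf);
        return 1;
    }

    /* pcap_next() returns NULL at end of file (or on error). */
    while ((packet = pcap_next(handle, &header)) != NULL) {
        printf("captured %u bytes, original length %u bytes\n",
               header.caplen, header.len);
    }

    pcap_close(handle);
    return 0;
}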
Another one in Python with the pcapy library:
import pcapy
reader = pcapy.open_offline("packets.pcap")
while True:
    try:
        (header, payload) = reader.next()
        print "Got a packet of length %d" % header.getlen()
    except pcapy.PcapError:
        break
The two examples below work fine:
using C, WinPcap
using Python, Scapy
(WinPcap, compiled with Microsoft's CL compiler / Visual C)
I have written this function (in C) to get the packet size, and it works fine.
Don't forget to include pcap.h and to define HAVE_REMOTE in the compiler's preprocessor definitions.
u_int getpkt_size(char *pcapfile) {

    pcap_t *indesc;
    char errbuf[PCAP_ERRBUF_SIZE];
    char source[PCAP_BUF_SIZE];
    int res;
    struct pcap_pkthdr *pktheader;
    const u_char *pktdata;
    u_int pktsize = 0;

    /* Create the source string according to the new WinPcap syntax */
    if (pcap_createsrcstr(source,         // variable that will keep the source string
                          PCAP_SRC_FILE,  // we want to open a file
                          NULL,           // remote host
                          NULL,           // port on the remote host
                          pcapfile,       // name of the file we want to open
                          errbuf          // error buffer
                          ) != 0)
    {
        fprintf(stderr, "\nError creating a source string\n");
        return 0;
    }

    /* Open the capture file */
    if ((indesc = pcap_open(source, 65536, PCAP_OPENFLAG_PROMISCUOUS, 1000, NULL, errbuf)) == NULL)
    {
        fprintf(stderr, "\nUnable to open the file %s.\n", source);
        return 0;
    }

    /* Get the first packet */
    res = pcap_next_ex(indesc, &pktheader, &pktdata);
    if (res != 1) {
        printf("\nError reading the pcap file");
        return 0;
    }

    /* Get the packet size */
    pktsize = pktheader->len;

    /* Close the input file */
    pcap_close(indesc);

    return pktsize;
}
Another working example in Python, using the wonderful Scapy:
from scapy.all import *
pkts=rdpcap("data.pcap",1) # reading only 1 packet from the file
OnePkt=pkts[0]
print len(OnePkt) # prints the length of the packet