Print boost::multiprecision::cpp_dec_float_50 without trailing zeros - boost-multiprecision

I was able to use boost::multiprecision and print a sample number:
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iomanip>
#include <iostream>
#include <string>

using namespace std;
using boost::multiprecision::cpp_dec_float_50;

int main() {
    std::string number = "92233720368.54775807";
    cpp_dec_float_50 decimal(number);
    cout << fixed << setprecision(50) << "boost: " << decimal << endl;
}
the output is:
boost: 92233720368.54775807000000000000000000000000000000000000000000
How can I print it without the trailing zeros, exactly as 92233720368.54775807?
And is it possible to print it to std::wcout?
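A minimal sketch of one way to do this (not from the original thread; the helper name trimmed() is mine): format the value at full precision into a string, then strip the trailing zeros. A const char* can also be streamed to std::wcout, which widens the characters, so the same string works for wide output.

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

using boost::multiprecision::cpp_dec_float_50;

// Hypothetical helper: print at full precision, then trim trailing zeros
// (and a dangling decimal point) from the resulting string.
std::string trimmed(const cpp_dec_float_50& value) {
    std::ostringstream oss;
    oss << std::fixed << std::setprecision(50) << value;
    std::string s = oss.str();
    if (s.find('.') != std::string::npos) {
        s.erase(s.find_last_not_of('0') + 1);
        if (!s.empty() && s.back() == '.')
            s.pop_back();
    }
    return s;
}

int main() {
    cpp_dec_float_50 decimal("92233720368.54775807");
    std::cout << trimmed(decimal) << std::endl;           // 92233720368.54775807
    std::wcout << trimmed(decimal).c_str() << std::endl;  // a const char* is widened by wcout
}

(In real code it is best not to mix narrow and wide output on the same underlying stream; pick one orientation for stdout.)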

Related

Segmentation fault disappears when running command from Perl

I have a simple C++ program that raises SIGSEGV:
#include <iostream>
#include <signal.h>

int main() {
    std::cout << "Raising seg fault..." << std::endl;
    raise(SIGSEGV);
    return 0;
}
When running this program I get the following output:
Raising seg fault...
Segmentation fault
But when I run my program inside a Perl script using a pipe, the segmentation fault disappears.
Here is my Perl script:
use strict;
use warnings;

my $cmd = "./test";
open(OUTPUT, "$cmd 2>&1 |") || die();
while (<OUTPUT>) {
    print;
}
close(OUTPUT);
my $exit_status = $? >> 8;
print "exit status: $exit_status\n";
I get the following output when running the script:
Raising seg fault...
exit status: 0
How could this be possible? Where is the segmentation fault and why is the exit status 0?
You are specifically ignoring the parts of $? that indicate whether the process was killed by a signal. (The "Segmentation fault" line you normally see is printed by your interactive shell when it notices the child died from SIGSEGV; it is not output of the program itself, which is why it does not show up in the piped output.)
Replace
my $exit_status = $?>>8;
print "exit status: $exit_status\n";
with
die("Killed by signal ".( $? & 0x7F )."\n") if $? & 0x7F;
die("Exited with error ".( $? >> 8 )."\n") if $? >> 8;
print("Completed successfully\n");

Wrong data is written to binary file [duplicate]

I'm using a large array of floats. After a lot of fiddling I've managed to write it to a binary file. When opening that file later, the read only gets a handful of floats (according to the return value of fread()), and they are all 0.0f. The read is supposed to put the floats back into the (original) array, but afterwards it does not contain the original values.
I'm using Code::Blocks and MinGW, building a 32-bit program on a 64-bit PC, and I'm not very proficient with C/C++ and pointers.
#include <...>

const int mSIZE = 6000;
static float data[mSIZE*mSIZE];

void myUseFunc() {
    const char *chN = "J:/path/flt_632_55.bin";
    FILE *outFile = NULL;
    // .. successfully filling and using data[]
    ..
    size_t st = mSIZE*mSIZE;
    outFile = fopen( chN , "w" );
    if (!outFile) { printf("error opening file %s \n", chN); exit(0); }
    else {
        size_t indt;
        indt = fwrite( data, sizeof(float), st, outFile );
        std::cout << "floats written to file: " << indt << std::endl;
        // .. the value shows that all values are written
        // and a properly sized file has appeared at the proper place
    }
    fclose( outFile );
}

void myLoadFunc( const char *fileName ) {
    FILE *inFile = NULL;
    inFile = fopen( fileName, "r" );
    if (!inFile) { printf("error opening file %s \n", fileName); exit(0); }
    size_t blok = mSIZE*mSIZE;
    size_t out;
    out = fread( dataOne, sizeof(GLfloat), blok, inFile );
    fclose(inFile);
    if (out != blok) {
        std::cout << out << std::endl;
        fputs("Reading error", stderr);
        // no stderr presented at the console ..
        printf("some error\n");
        exit(0);
        // .. program exits at out=14
    }
    ...
}

int main() {
    ...
    const char *FileName = "J:/path/flt_632_55.bin";
    myLoadFunc( FileName );
    ...
}
You are not writing to or reading from a binary file; you open the files in text mode.
You need to add the "b" to the open mode in both functions, like
outFile = fopen( chN , "wb" ) ;
inFile = fopen( fileName, "rb" ) ;
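The reason the mode matters: on Windows (where MinGW/Code::Blocks runs), a stream opened in text mode translates line endings and treats byte 0x1A as end-of-file on input, so binary float data gets corrupted and fread() returns after only a handful of items. A minimal round-trip sketch with both opens in binary mode (hypothetical file name, not the asker's code):

#include <cstddef>
#include <cstdio>
#include <iostream>
#include <vector>

int main() {
    const char *path = "floats.bin";            // hypothetical test file
    std::vector<float> out(1000);
    for (std::size_t i = 0; i < out.size(); ++i) out[i] = i * 0.5f;

    FILE *f = std::fopen(path, "wb");           // "b": no text translation
    if (!f) { std::perror("fopen"); return 1; }
    std::fwrite(out.data(), sizeof(float), out.size(), f);
    std::fclose(f);

    std::vector<float> in(out.size());
    f = std::fopen(path, "rb");                 // the read side needs "b" too
    if (!f) { std::perror("fopen"); return 1; }
    std::size_t got = std::fread(in.data(), sizeof(float), in.size(), f);
    std::fclose(f);

    std::cout << "read " << got << " floats, last = " << in.back() << std::endl;
    return 0;
}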

MongoDB verbose logging. What does every v add to output?

What does each 'v' (from one to five) add to the log output?
Sure, I can experiment, but can anybody provide a concrete answer?
There isn't a prescriptive list of what log lines each level of verbosity adds. Most of the extra details are really only meaningful for the MongoDB developers (particularly as the log levels increase).
You can grep for the log calls in the source code if you're curious.
For example, to see what's logged at level 1 (from the root of the MongoDB source tree):
$ grep -r "LOG(1)" * | wc -l
185
$ grep -r "LOG(1)" * | head
client/connpool.cpp: LOG(1) << "Exception thrown when checking pooled connection to " <<
client/dbclient.cpp: LOG(1) << "creating new connection to:" << _servers[0] << endl;
client/dbclient.cpp: LOG(1) << "connected connection!" << endl;
client/dbclient_rs.cpp: LOG(1) << "checking replica set: " << name << endl;
client/dbclient_rs.cpp: if( wasFound ){ LOG(1) << "slave '" << prev << ( wasMaster ? "' is master node, trying to find another node" :
client/dbclient_rs.cpp: else{ LOG(1) << "slave '" << prev << "' was not found in the replica set" << endl; }
client/dbclient_rs.cpp: else LOG(1) << "slave '" << prev << "' is not initialized or invalid" << endl;
client/dbclient_rs.cpp: LOG(1) << "dbclient_rs getSlave falling back to a non-local secondary node" << endl;
client/dbclient_rs.cpp: LOG(1) << "dbclient_rs getSlave no member in secondary state found, "
client/dbclient_rs.cpp: LOG(1) << "_check : " << getServerAddress() << endl;

md5 "%02x" fprintf

I have to calculate the MD5 hash of a file. I successfully found libraries to do it, and they print the hash on screen.
I have to print the hash to a txt file, but I have some problems: it only prints 00 instead of the full 32-character hash. This is the print function. I only added the lines that open the file and print to it; the rest of the function is from the library and works fine, because the hash is printed correctly on screen.
It seems to be some kind of problem with fprintf and "%02x". Thanks.
static void MDPrint (mdContext)
MD5_CTX *mdContext;
{
    int i;
    FILE *fp;
    if ((fp = fopen("userDatabase.txt", "ab")) == NULL)
        printf("Error while opening the file..\n");
    else {
        for (i = 0; i < 16; i++)
            printf("%02x", mdContext->digest[i]);
        fprintf(fp, "%02x", mdContext->digest[i]);
    }
    fclose(fp);
}
Your problem is here:
for (i = 0; i < 16; i++)
    printf("%02x", mdContext->digest[i]);
fprintf(fp, "%02x", mdContext->digest[i]);
Since there are no curly braces, only the printf line is inside the loop. You need to add braces so that both lines are inside the loop:
for (i = 0; i < 16; i++)
{
    printf("%02x", mdContext->digest[i]);
    fprintf(fp, "%02x", mdContext->digest[i]);
}

Returning C++ pointers to Perl

I have a function in C++ such as:
void* getField(interface* p)
{
    int* temp = new int(p->intValue);
    cout << "pointer value in c++ " << temp << " value of temp = " << *temp << endl;
    return temp;
}
I am using SWIG to generate wrappers for the above class. Now I am trying to get the returned pointer value in Perl. How do I do it?
I have written the following perl script to call the function:
use module;
$a = module::create_new_interface();
$b = module::getField($a);
print $b, "\n";
print $$b, "\n";
I have correctly defined the create_new_interface function, since on calling getField() the correct intValue of the interface gets printed.
The output of the program is as follows:
_p_void=SCALAR(0x5271b0)
5970832
pointer value in c++ 0x5b1b90 value of temp 22
Why are the two values, the pointer in C++ and the reference in Perl, different? How do I get the intValue from the reference (i.e. the value 22)?
Because you printed one in decimal and one in hexadecimal:
printf "pointer is 0x%x\n", $$b; # prints "pointer is 0x5b1b90"
Perl doesn't normally use pointers. The pack and unpack functions can deal with pointers (see Packing and Unpacking C Structures), but the normal Perl idiom would be to have the function return an integer instead of a pointer to an integer.
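Following that suggestion, a minimal sketch of what the C++ side could look like instead (the function name getFieldValue is hypothetical; interface is the type from the question):

// Return the value itself rather than an owning int*; SWIG then exposes it
// to Perl as an ordinary scalar, and nothing is leaked or dereferenced by hand.
int getFieldValue(interface* p)
{
    return p->intValue;
}

On the Perl side that would then simply be my $value = module::getFieldValue($a);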