Mbedtls entropy generation runs forever - stm32f7

I'm trying to write a test function for mbedtls which randomly generates a key for AES encryption.
I use the original tutorial code from mbedtls.
My program always stops when executing mbedtls_ctr_drbg_seed().
About my environment: basic source files generated by STM32CubeMX, board: STM32F767 Nucleo, compiling based on the Makefile from STM32Cube.
mbedtls_ctr_drbg_context ctr_drbg;
mbedtls_entropy_context entropy;
char *pers = "anything";
int ret;

// Start
mbedtls_entropy_init(&entropy);
debugPrintln("Init entropy done");

mbedtls_ctr_drbg_init(&ctr_drbg);
debugPrintln("Init ctr_drbg done");

if((ret = mbedtls_ctr_drbg_seed(&ctr_drbg, mbedtls_entropy_func, &entropy,
                                (unsigned char *) pers, strlen(pers))) != 0){
    // Error info
    debugPrintln("ERROR ctr_drbg_seed ");
    return -1;
}
debugPrintln("Init ctr_drbg_seed done");

if((ret = mbedtls_ctr_drbg_random(&ctr_drbg, key, 32)) != 0){
    return -1;
}
Thank you in advance

From your description,
I am assuming that your application is stuck in the call to mbedtls_ctr_drbg_seed().
The most probable reason, IMHO, is this loop in the function mbedtls_entropy_func():
do
{
    if( count++ > ENTROPY_MAX_LOOP )
    {
        ret = MBEDTLS_ERR_ENTROPY_SOURCE_FAILED;
        goto exit;
    }

    if( ( ret = entropy_gather_internal( ctx ) ) != 0 )
        goto exit;

    done = 1;
    for( i = 0; i < ctx->source_count; i++ )
        if( ctx->source[i].size < ctx->source[i].threshold )
            done = 0;
}
while( ! done );
You should check that your entropy collection increases the collected size, that the threshold is not MAX_INT or something of the sort, and that your hardware entropy collector actually returns entropy data.
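For reference, one way to give mbedtls a working hardware source is to register the STM32 RNG explicitly. Below is a minimal sketch, assuming the CubeMX-generated hrng handle (projects that instead define MBEDTLS_ENTROPY_HARDWARE_ALT would put the same logic into mbedtls_hardware_poll()). The important detail is that the callback reports how many bytes it produced through *olen; otherwise ctx->source[i].size never reaches the threshold and the loop above spins forever:

#include <string.h>
#include "stm32f7xx_hal.h"
#include "mbedtls/entropy.h"

extern RNG_HandleTypeDef hrng;   /* assumed: created by CubeMX in rng.c */

/* Entropy source callback: fill 'output' with bytes from the hardware RNG */
static int stm32_rng_source(void *data, unsigned char *output,
                            size_t len, size_t *olen)
{
    size_t filled = 0;
    uint32_t word;
    (void)data;

    while (filled + sizeof(word) <= len) {
        if (HAL_RNG_GenerateRandomNumber(&hrng, &word) != HAL_OK)
            return MBEDTLS_ERR_ENTROPY_SOURCE_FAILED;
        memcpy(output + filled, &word, sizeof(word));
        filled += sizeof(word);
    }
    if (filled < len) {          /* handle a trailing partial word */
        if (HAL_RNG_GenerateRandomNumber(&hrng, &word) != HAL_OK)
            return MBEDTLS_ERR_ENTROPY_SOURCE_FAILED;
        memcpy(output + filled, &word, len - filled);
        filled = len;
    }

    *olen = filled;              /* tells mbedtls how much entropy was gathered */
    return 0;
}

/* Call this after mbedtls_entropy_init() and after the RNG HAL init */
int register_hw_entropy(mbedtls_entropy_context *entropy)
{
    return mbedtls_entropy_add_source(entropy, stm32_rng_source, NULL,
                                      32, MBEDTLS_ENTROPY_SOURCE_STRONG);
}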

I have found the reason.
STM32CubeMX places the HAL init function for the RNG after the mbedtls init.
So when I call mbedtls_ctr_drbg_seed() inside mbedtls_init(), the RNG has not been initialized yet and it iterates forever inside:
do
{
    if( count++ > ENTROPY_MAX_LOOP )
    {
        ret = MBEDTLS_ERR_ENTROPY_SOURCE_FAILED;
        goto exit;
    }

    if( ( ret = entropy_gather_internal( ctx ) ) != 0 )
        goto exit;

    done = 1;
    for( i = 0; i < ctx->source_count; i++ )
        if( ctx->source[i].size < ctx->source[i].threshold )
            done = 0;
}
while( ! done );
Solution
Swap the two init calls so that the RNG is initialized before mbedtls; a sketch of the corrected ordering follows.
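A minimal sketch of the generated main(), assuming the usual MX_* function names produced by CubeMX (your project may use slightly different names):

int main(void)
{
    HAL_Init();
    SystemClock_Config();

    MX_GPIO_Init();
    MX_RNG_Init();      /* must run first: the entropy source needs the RNG */
    MX_MBEDTLS_Init();  /* now mbedtls_ctr_drbg_seed() can gather entropy */

    while (1) {
        /* application code */
    }
}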

Related

Is there any way to take control over Fluentbit open descriptors?

I am trying to fix the following issue:
A lot of file descriptors opened by fluent-bit remain open forever, even after calling flb_destroy().
Scenario:
I am developing a C++ application that wraps fluent-bit as a library, stopping and starting the fluent-bit engine in response to received events.
Problem:
Fluent-bit runs out of available file descriptors some time later, after restarting the engine several times without stopping the C++ app, ending with the error trace "lib backend failed".
Details:
I extracted a couple of methods from the fluent-bit.c source file of the project as helpers for loading the configuration at runtime, by calling:
flb_service_conf("path/to/fluent-bit.conf")
Every time an event is received, flb_stop() and flb_destroy() are called first, followed immediately by flb_service_conf() and flb_start().
I am stuck at this point and am still researching how I could fix the issue.
Any clue, idea or suggestion would be very much appreciated.
Here is a bit of code, hoping it helps to explain:
bool FluentBit::start( ) {
    if ( isStopped() ) {
        LOG_DEBUG( ) << "Starting fluent-bit";
        m_ctx = ::flb_create( );
        if( m_ctx == nullptr ) {
            throw std::bad_alloc( );
        }
        running = configureEngine() && startEngine();
    }
    return running;
}

void FluentBit::stop( ) {
    if ( isRunning() ) {
        LOG_DEBUG( ) << "Stopping fluent-bit";
        ::flb_stop( m_ctx );
        while( m_ctx->status == FLB_LIB_OK ) {
            std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
        }
        ::flb_destroy( m_ctx );
        running = false;
    }
}

bool FluentBit::isRunning( ) {
    return running;
}

bool FluentBit::isStopped( ) {
    return ( !running );
}

bool FluentBit::configureEngine( ) {
    auto returnConf = ::flb_service_conf( m_ctx->config, settings.filePath() );
    if ( returnConf < 0 ) {
        LOG_ERROR() << "Failed loading fluent-bit configuration file: " << settings.filePath();
        ::flb_destroy( m_ctx );
    }
    return ( returnConf < 0 ) ? false : true;
}

bool FluentBit::startEngine( ) {
    LOG_DEBUG( ) << "Starting fluent-bit engine";
    auto retStart = ::flb_start( m_ctx );
    if( retStart < 0 ) {
        LOG_ERROR( ) << "Could not start fluent-bit engine";
        ::flb_destroy( m_ctx );
    }
    running = ( retStart < 0 ) ? false : true;
    return running;
}
Thanks in advance for your (possible) replies :)
Kind Regards,
Pablo.

Using GetPrivateProfileStringW with UTF-8 encoded files?

Up until now my INI text files have been Unicode encoded and I have been doing things like this to read them:
::GetPrivateProfileStringW(
    strUpdateSection,
    strTalkKey,
    _T(""),
    strTalk.GetBuffer( _MAX_PATH ),
    _MAX_PATH,
    m_strPathINI );
strTalk.ReleaseBuffer();
The file gets downloaded from the internet first and then accessed. But I was finding that for Arabic the text file was getting corrupted unless I changed the encoding to UTF-8. When I do that, it downloads correctly, but then the data is not read correctly from the INI.
So does an INI file have to be Unicode (UTF-16) encoded, or should it also be OK to use UTF-8 with these function calls?
I think it is about time I convert this part of my program to UTF-8 encoded XML files instead! Way to go.
But I wanted to ask the question first.
I should clarify that the text file was initially saved as UTF-8 using Notepad.
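For what it's worth, the GetPrivateProfile* functions only treat a file as Unicode when it begins with a UTF-16LE byte-order mark; a UTF-8 file is read with the ANSI code page, which would explain the garbled Arabic. One possible workaround (a minimal Win32 C sketch, not taken from the program above; ConvertIniUtf8ToUtf16 is a hypothetical helper) is to convert the downloaded file to UTF-16LE before handing it to GetPrivateProfileStringW:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: rewrite a UTF-8 INI as UTF-16LE (with BOM) so that
   GetPrivateProfileStringW can read it. Error handling kept minimal. */
BOOL ConvertIniUtf8ToUtf16(const wchar_t *utf8Path, const wchar_t *utf16Path)
{
    FILE *in = NULL, *out = NULL;
    char *buf = NULL;
    wchar_t *wide = NULL;
    BOOL ok = FALSE;

    if (_wfopen_s(&in, utf8Path, L"rb") != 0 || in == NULL)
        return FALSE;
    fseek(in, 0, SEEK_END);
    long size = ftell(in);
    fseek(in, 0, SEEK_SET);

    buf = (char *)malloc((size_t)size);
    if (buf != NULL && fread(buf, 1, (size_t)size, in) == (size_t)size)
    {
        const char *p = buf;
        long n = size;
        /* Skip a UTF-8 BOM (EF BB BF) if Notepad added one */
        if (n >= 3 && (unsigned char)p[0] == 0xEF &&
                      (unsigned char)p[1] == 0xBB &&
                      (unsigned char)p[2] == 0xBF)
        {
            p += 3;
            n -= 3;
        }

        int wlen = MultiByteToWideChar(CP_UTF8, 0, p, (int)n, NULL, 0);
        wide = (wchar_t *)malloc((size_t)wlen * sizeof(wchar_t));
        if (wide != NULL &&
            MultiByteToWideChar(CP_UTF8, 0, p, (int)n, wide, wlen) == wlen &&
            _wfopen_s(&out, utf16Path, L"wb") == 0 && out != NULL)
        {
            fputc(0xFF, out);                       /* UTF-16LE BOM */
            fputc(0xFE, out);
            fwrite(wide, sizeof(wchar_t), (size_t)wlen, out);
            ok = TRUE;
        }
    }

    if (out) fclose(out);
    if (in) fclose(in);
    free(wide);
    free(buf);
    return ok;
}

After the conversion, the existing GetPrivateProfileStringW calls can simply be pointed at the converted copy.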
Update
This is how I download the file:
BOOL CUpdatePublicTalksDlg::DownloadINI()
{
    CInternetSession iSession;
    CHttpFile *pWebFile = NULL;
    CWaitCursor wait;
    CFile fileLocal;
    TCHAR szError[_MAX_PATH];
    int iRead = 0, iBytesRead = 0;
    char szBuffer[4096];
    BOOL bOK = FALSE;
    DWORD dwStatusCode;
    CString strError;

    // ask user to go online
    if( InternetGoOnline( (LPTSTR)(LPCTSTR)m_strURLPathINI, GetSafeHwnd(), 0 ) )
    {
        TRY
        {
            // our session should already be open
            // try to open up internet session to my URL
            // AJT V10.4.0 Use flag INTERNET_FLAG_RELOAD
            pWebFile = (CHttpFile*)iSession.OpenURL( m_strURLPathINI, 1,
                INTERNET_FLAG_TRANSFER_BINARY |
                INTERNET_FLAG_DONT_CACHE | INTERNET_FLAG_RELOAD);
            if(pWebFile != NULL)
            {
                if( pWebFile->QueryInfoStatusCode( dwStatusCode ) )
                {
                    // 20x codes mean success
                    if( (dwStatusCode / 100) == 2 )
                    {
                        // Our downloaded file is only temporary
                        m_strPathINI = theApp.GetTemporaryFilename();
                        if( fileLocal.Open( m_strPathINI,
                            CFile::modeCreate|CFile::modeWrite|CFile::typeBinary ) )
                        {
                            iRead = pWebFile->Read( szBuffer, 4096 );
                            while( iRead > 0 )
                            {
                                iBytesRead += iRead;
                                fileLocal.Write( szBuffer, iRead );
                                iRead = pWebFile->Read( szBuffer, 4096 );
                            }
                            fileLocal.Close();
                            bOK = TRUE;
                        }
                    }
                    else
                    {
                        // There was a problem!
                        strError.Format( IDS_TPL_INVALID_URL, dwStatusCode );
                        AfxMessageBox( strError, MB_OK|MB_ICONERROR );
                    }
                }
            }
            else
            {
                AfxMessageBox( ID_STR_UPDATE_CHECK_ERR, MB_OK|MB_ICONERROR );
            }
        }
        CATCH( CException, e )
        {
            e->GetErrorMessage( szError, _MAX_PATH );
            AfxMessageBox( szError, MB_OK|MB_ICONERROR );
        }
        END_CATCH

        // Tidy up
        if( pWebFile != NULL )
        {
            pWebFile->Close();
            delete pWebFile;
        }
        iSession.Close();
    }
    return bOK;
}
This is how I read the file:
int CUpdatePublicTalksDlg::ReadTalkUpdateINI()
{
    int iLastUpdate = 0, iUpdate;
    int iNumTalks, iTalk;
    NEW_TALK_S sTalk;
    CString strUpdateSection, strTalkKey, strTalk;

    // How many possible updates are there?
    m_iNumUpdates = ::GetPrivateProfileIntW(
        _T("TalkUpdates"),
        _T("NumberUpdates"),
        0,
        m_strPathINI );

    // What was the last talk update count?
    iLastUpdate = theApp.GetLastTalkUpdateCount();

    // Loop available updates
    for( iUpdate = iLastUpdate + 1; iUpdate <= m_iNumUpdates; iUpdate++ )
    {
        // Build section key
        strUpdateSection.Format( _T("Update%d"), iUpdate );

        // How many talks are there?
        iNumTalks = ::GetPrivateProfileIntW(
            strUpdateSection,
            _T("NumberTalks"),
            0,
            m_strPathINI );

        // Loop talks
        for( iTalk = 1; iTalk <= iNumTalks; iTalk++ )
        {
            // Build key
            strTalkKey.Format( _T("Talk%d"), iTalk );

            // Get talk information
            ::GetPrivateProfileStringW(
                strUpdateSection,
                strTalkKey,
                _T(""),
                strTalk.GetBuffer( _MAX_PATH ),
                _MAX_PATH,
                m_strPathINI );
            strTalk.ReleaseBuffer();

            // Decode talk information
            DecodeNewTalk( strTalk, sTalk );

            // Does it already exist in the database?
            // AJT v11.2.0 Bug fix - we want *all* new talks to show
            //if( !IsExistingTalk( sTalk.iTalkNumber ) )
            //{
            //if(!LocateExistingTheme(sTalk, false))
            AddNewTalkToListBox( sTalk );
            //}
        }
    }

    // Return the actual amount of updates possible
    return m_iNumUpdates - iLastUpdate;
}
This is the file being downloaded:
http://publictalksoftware.co.uk/TalkUpdate_Arabic2.ini
Update
It seems that the file is corrupt at the point of being downloaded.
Making progress: I have now confirmed that the data is OK at this point.

Command Line Program - Sending 'Enter' commands

I have a command line program that has a menu with multiple choices. I can pass input via program.exe < input.txt, but I am having problems sending the 'Enter' command (the program crashes if I attempt to send anything else, such as a ' ' character).
I enter 'All' as the proc name. It then prints 'enter proc name' again and I have to hit Enter for the code to continue. What I cannot work out is how to send this Enter to the program. I have tried a blank line in input.txt but that did not work.
while( !end )
{
    for( legal=0; !legal; )
    {
        printf("\tEnter Proc Name: ");
        if( fgets( str, 70, stdin ) == NULL )
        {
            printf("Bye.\n");
            exit(0);
        }
        ret = sscanf(str, "%s", p.name );
        if( ret > 0 || str[0] == '\n' ) legal = 1;
        else printf("Please enter a legal proc name, none, or all\n");
    }
    if( str[0] == '\n' ) {
        end = 1;
    } else if( !strcmp( p.name, "all" )) {
        for( i=0; i < Conf_num_procs( &Cn ); i++ )
            Status_vector[i] = 1;
    } else if( !strcmp( p.name, "none" )) {
        for( i=0; i < Conf_num_procs( &Cn ); i++ )
            Status_vector[i] = 0;
    } else {
        proc_index = Conf_proc_by_name( p.name, &p );
        if( proc_index != -1 ) {
            Status_vector[proc_index] = 1;
        } else printf("Please! enter a legal proc name, none, or all\n");
    }
}
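One thing worth ruling out is the line endings in input.txt: if the "blank" line actually arrives as "\r\n" and the C runtime does not translate it, str[0] will be '\r' rather than '\n', so the blank-line test never fires. A small standalone sketch (not part of the program above) that shows exactly what each fgets() call receives from the redirected file:

#include <stdio.h>

/* Run as:  dumplines.exe < input.txt
 * Prints a hex dump of every line fgets() reads, which makes stray
 * '\r' characters or missing newlines visible. */
int main(void)
{
    char str[70];
    while (fgets(str, sizeof str, stdin) != NULL) {
        printf("line:");
        for (const char *p = str; *p != '\0'; p++)
            printf(" %02X", (unsigned char)*p);
        printf("\n");
    }
    return 0;
}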

How to cluster based on ssdeep?

Hi, I am trying to group files based on ssdeep.
I have generated the ssdeep hash of each file and kept it in a CSV file.
I am parsing the file in a Perl script as follows:
foreach( @all_lines )
{
    chomp;
    my $line = $_;
    my @split_array = split(/,/, $line);
    my $md5 = $split_array[1];
    my $ssdeep = $split_array[4];
    my $blk_size = (split(/:/, $ssdeep))[0];
    if( $blk_size ne "" )
    {
        my $cluster_id = check_In_Cluster($ssdeep);
        print WFp "$cluster_id,$md5,$ssdeep\n";
    }
}
This also checks whether the ssdeep hash belongs to a previously clustered group and, if not, creates a new group.
Code for check_In_Cluster:
my $ssdeep = shift;
my $cmp_result;

if( $cluster_cnt > 0 ) {
    $cmp_result = ssdeep_compare( $MRU_ssdeep, $ssdeep );
    if( $cmp_result > 85 ) {
        return $MRU_cnt;
    }
}

my $d = int($cluster_cnt/4);
my $thr1 = threads->create(\&check, 0,    $d,           $ssdeep);
my $thr2 = threads->create(\&check, $d,   2*$d,         $ssdeep);
my $thr3 = threads->create(\&check, 2*$d, 3*$d,         $ssdeep);
my $thr4 = threads->create(\&check, 3*$d, $cluster_cnt, $ssdeep);

my ($ret1, $ret2, $ret3, $ret4);
$ret1 = $thr1->join();
$ret2 = $thr2->join();
$ret3 = $thr3->join();
$ret4 = $thr4->join();

if($ret1 != -1) {
    $MRU_ssdeep = $ssdeep;
    $MRU_cnt = $ret1;
    return $MRU_cnt;
} elsif($ret2 != -1) {
    $MRU_ssdeep = $ssdeep;
    $MRU_cnt = $ret2;
    return $MRU_cnt;
} elsif($ret3 != -1) {
    $MRU_ssdeep = $ssdeep;
    $MRU_cnt = $ret3;
    return $MRU_cnt;
} elsif($ret4 != -1) {
    $MRU_ssdeep = $ssdeep;
    $MRU_cnt = $ret4;
    return $MRU_cnt;
} else {
    $cluster_base[$cluster_cnt] = $ssdeep;
    $MRU_ssdeep = $ssdeep;
    $MRU_cnt = $cluster_cnt;
    $cluster_cnt++;
    return $MRU_cnt;
}
and the code for check:
sub check($$$) {
    my $from = shift;
    my $to = shift;
    my $ssdeep = shift;

    for( my $icnt = $from; $icnt < $to; $icnt++ ) {
        my $cmp_result = ssdeep_compare( $cluster_base[$icnt], $ssdeep );
        if( $cmp_result > 85 ) {
            return $icnt;
        }
    }
    return -1;
}
But this process takes a very long time (for a 20-30 MB CSV file it takes 8-9 hours).
I have also tried using multithreading while checking against the clusters, but it did not help much.
Since there is not much CSV handling needed, I did not use a CSV parser like Text::CSV.
Can anybody please help me solve this issue? Is it possible to use Hadoop or some other framework for grouping based on ssdeep?
There is a hint in Optimizing ssDeep for use at scale (2015-11-27).
Depending on your purpose, looping over and matching SSDEEP hashes across different chunk sizes will create N x (N-1) hash comparisons. Unless you need to find partial content matches, avoid it.
It is possible to break down the hash index as suggested in step 1 of the article; this is a better way to match partial contents across different chunk sizes.
It is also possible to reduce the number of SSDEEP comparisons by grouping similar hashes, generating a "distance cousin" hash.
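To illustrate the grouping idea, here is a rough C sketch (using libfuzzy's fuzzy_compare; the array sizes and data structures are arbitrary for the example) that buckets hashes by their block-size prefix and only compares a new hash against compatible buckets, since ssdeep can only meaningfully compare hashes whose block sizes are equal or one doubling apart:

#include <fuzzy.h>      /* libfuzzy: fuzzy_compare() */
#include <stdlib.h>
#include <string.h>

#define MAX_BUCKETS    32       /* ssdeep block sizes are 3 * 2^i           */
#define MAX_PER_BUCKET 4096     /* arbitrary limit for this sketch          */

static char  *bucket[MAX_BUCKETS][MAX_PER_BUCKET];
static size_t bucket_len[MAX_BUCKETS];

/* Map a hash's leading "blocksize:" prefix to a bucket index (log2 scale) */
static int bucket_index(const char *hash)
{
    unsigned long bs = strtoul(hash, NULL, 10);
    int i = 0;
    while (bs > 3 && i < MAX_BUCKETS - 1) { bs >>= 1; i++; }
    return i;
}

/* Return the bucket holding a match (>85), or -1 if the hash starts a new cluster */
static int find_cluster(const char *hash)
{
    int b = bucket_index(hash);
    for (int d = -1; d <= 1; d++) {         /* half, same and double block size */
        int idx = b + d;
        if (idx < 0 || idx >= MAX_BUCKETS) continue;
        for (size_t i = 0; i < bucket_len[idx]; i++)
            if (fuzzy_compare(bucket[idx][i], hash) > 85)
                return idx;
    }
    return -1;
}

/* Remember a hash that started a new cluster */
static void add_to_bucket(const char *hash)
{
    int b = bucket_index(hash);
    if (bucket_len[b] < MAX_PER_BUCKET)
        bucket[b][bucket_len[b]++] = strdup(hash);
}

The same bucketing can of course be done directly in the Perl script by keying the list of cluster bases on the block size, so each new hash is only compared against a small fraction of the existing clusters.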

socks 5 proxy on Perl

I am new to Perl and trying to understand the code at http://codepaste.ru/1374/ but I have some problems understanding this part of the code:
while($client || $target) {
    my $rin = "";
    vec($rin, fileno($client), 1) = 1 if $client;
    vec($rin, fileno($target), 1) = 1 if $target;

    my($rout, $eout);
    select($rout = $rin, undef, $eout = $rin, 120);
    if (!$rout && !$eout) { return; }

    my $cbuffer = "";
    my $tbuffer = "";

    if ($client && (vec($eout, fileno($client), 1) || vec($rout, fileno($client), 1))) {
        my $result = sysread($client, $tbuffer, 1024);
        if (!defined($result) || !$result) { return; }
    }

    if ($target && (vec($eout, fileno($target), 1) || vec($rout, fileno($target), 1))) {
        my $result = sysread($target, $cbuffer, 1024);
        if (!defined($result) || !$result) { return; }
    }

    if ($fh && $tbuffer) { print $fh $tbuffer; }

    while (my $len = length($tbuffer)) {
        my $res = syswrite($target, $tbuffer, $len);
        if ($res > 0) { $tbuffer = substr($tbuffer, $res); } else { return; }
    }

    while (my $len = length($cbuffer)) {
        my $res = syswrite($client, $cbuffer, $len);
        if ($res > 0) { $cbuffer = substr($cbuffer, $res); } else { return; }
    }
}
Can anybody explain to me exactly what happens in these lines:
vec($rin, fileno($client), 1) = 1 if $client;
vec($rin, fileno($target), 1) = 1 if $target;
and
select($rout = $rin, undef, $eout = $rin, 120);
Basically, the select operator is used to find out which of your file descriptors are ready (readable, writable, or having an error condition). It will wait until one of the file descriptors is ready, or until the timeout expires.
select RBITS, WBITS, EBITS, TIMEOUT
RBITS is a bit mask, usually stored as a string, representing the set of file descriptors that select will watch for readability. Each bit of RBITS represents a file descriptor, and the bit's offset in the mask is that file descriptor's number in the system. Thus, you can use vec to generate this bit mask.
vec EXPR, OFFSET, BITS
The vec function provides compact storage of lists of unsigned integers. EXPR is the bit string, OFFSET is the offset of the element in EXPR, and BITS specifies the width of each element you are reading from or writing to EXPR.
So these 2 lines:
vec($rin, fileno($client), 1) = 1;
vec($rin, fileno($target), 1) = 1;
They build up a bit mask string $rin by setting the bit whose offset equals the file descriptor number of $client, and likewise for $target.
Putting it into the select operator:
select($rout = $rin, undef, $eout = $rin, 120);
Then select will monitor the readability of the two file handles ($client and $target); if one of them becomes ready, select returns. Otherwise it returns after 120 seconds if neither is ready.
WBITS and EBITS work the same way, so you can infer that the above select line will also return when either file handle has an exceptional condition.
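Perl's four-argument select is essentially a direct interface to the select(2) system call, so the same wait can be expressed in C with the fd_set macros. A minimal sketch (client_fd and target_fd are assumed to be already-connected socket descriptors):

#include <sys/select.h>
#include <stdio.h>

/* Equivalent wait in C: watch two descriptors for readability or error,
 * with a 120-second timeout, just like the Perl select() call above. */
int wait_for_data(int client_fd, int target_fd)
{
    fd_set rset, eset;
    struct timeval tv = { .tv_sec = 120, .tv_usec = 0 };
    int maxfd = (client_fd > target_fd ? client_fd : target_fd) + 1;

    FD_ZERO(&rset);
    FD_SET(client_fd, &rset);   /* same as vec($rin, fileno($client), 1) = 1 */
    FD_SET(target_fd, &rset);
    eset = rset;                /* $eout = $rin: also watch for exceptions */

    int n = select(maxfd, &rset, NULL, &eset, &tv);
    if (n == 0)
        printf("timeout, nothing ready\n");
    return n;   /* number of ready descriptors, 0 on timeout, -1 on error */
}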