Example of creating a split transaction with Perl's Finance::QIF

I need to import my bank-exported transactions (CSV) into GnuCash.
I am almost finished with the Perl script using Finance::QIF.
I parse the CSV and write it out like this:
my $record = {
    header      => "Type:Bank",
    date        => $outdatum,
    memo        => $outtext,
    transaction => $outbetrag,
};
$out->header( $record->{header} );
$out->write($record);
...
But my problem is creating a split.
http://finance-qif.sourceforge.net/ says "If the transaction contains splits this will be defined and consist of an array of hash references. With each split potentially having the following values.", so I tried this:
my $record = {
    header      => "Type:Bank",
    date        => $outdatum,
    memo        => $outtext,
    transaction => $outbetrag,
    @splits = (
        {
            category => "Gesundheit:Arzt:Kind1",
            memo     => "L",
            amount   => "-161,66"
        },
        {
            category => "Gesundheit:Arzt:Kind2",
            memo     => "F",
            amount   => "-162,66"
        }
    )
};
This leads to the error:
Unsupported field 'HASH(0x221c9e8)' found in record ignored in file '>_TESTqif.qif' line 22 at convert_bank_CSV.pl line 195.
Unfortunately, I could not find an example anywhere of creating a split, only of a normal transaction.
Can someone please show how Finance::QIF can be used to create split transactions?

I know nothing about Finance::QIF but your @splits code makes no sense.
Try this instead:
my $record = {
    header      => "Type:Bank",
    date        => $outdatum,
    memo        => $outtext,
    transaction => $outbetrag,
    splits      => [
        {
            category => "Gesundheit:Arzt:Kind1",
            memo     => "L",
            amount   => "-161,66",
        },
        {
            category => "Gesundheit:Arzt:Kind2",
            memo     => "F",
            amount   => "-162,66",
        }
    ],
};
See perldoc perlreftut for more information about references and data structures in Perl.
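For reference, here is a minimal end-to-end sketch built from the snippets above. The output filename and the literal date/memo/amount values are placeholders standing in for the CSV fields:

use strict;
use warnings;
use Finance::QIF;

# ">" opens the QIF file for writing, as in the question's error message
my $out = Finance::QIF->new( file => ">test.qif" );

my $record = {
    header      => "Type:Bank",
    date        => "28.11.2013",     # placeholder for $outdatum
    memo        => "Arztrechnung",   # placeholder for $outtext
    transaction => "-324,32",        # placeholder for $outbetrag
    splits      => [
        { category => "Gesundheit:Arzt:Kind1", memo => "L", amount => "-161,66" },
        { category => "Gesundheit:Arzt:Kind2", memo => "F", amount => "-162,66" },
    ],
};

$out->header( $record->{header} );   # write the !Type:Bank header
$out->write($record);                # write the record including its splits
$out->close;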

Related

How to use the aggregations framework in the new Elasticsearch using Perl

I am working with Elasticsearch and I have to do aggregations (i.e. summarize our data). I have shared my code below.
CODE:
my $portal_es = Search::Elasticsearch->new(
    nodes           => [ $es_ip . ':' . $es_port ],
    request_timeout => 180
);
my $result = $portal_es->scroll_helper(
    index  => 10002500,
    size   => "10000",
    params => { rest_total_hits_as_int => true },
    {
        # "size": 0,
        aggs => {
            "my-agg-name" => {
                terms => {
                    field => "tcode"
                }
            }
        }
    }
);
I am getting the following error:
[Param] ** Expecting a HASH ref or a list of key-value pairs, called from sub Search::Elasticsearch::Role::Client::Direct::Main::scroll_helper at /home/prity/Desktop/BL_script/search_index_processor/aggregation.pl line 15. With vars: {'params' => ['index','10002500','size','10000','params',{'rest_total_hits_as_int' => 'true'},{'aggs' => {'my-agg-name' => {'terms' => {'field' => 'tcode'}}}}]}
The data structure in your argument list is probably broken.
my $result = $portal_es->scroll_helper(
    index  => 10002500,
    size   => "10000",
    params => {
        rest_total_hits_as_int => 'true'
    },
    {    # <---- here
        aggs => {
            # ...
You are passing a list of key/value pairs to scroll_helper, but there is one extra trailing argument after params: a hash reference containing aggs, with no key in front of it.
Turn on use warnings and you'll get a warning that your data structure is missing a value (because that hashref will stringify into a key).
You probably shouldn't have closed the hashref for params and opened a new one. But that's just a guess, I don't know what this method expects.
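A guess at the intended call, with the stray hashref folded into a body argument (how aggregations interact with scroll_helper depends on your Search::Elasticsearch version, so treat this as a sketch):

my $result = $portal_es->scroll_helper(
    index  => 10002500,
    size   => "10000",
    params => { rest_total_hits_as_int => 'true' },
    body   => {
        aggs => {
            'my-agg-name' => {
                terms => { field => 'tcode' },
            },
        },
    },
);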

Inserting one hash into another using Perl

I've tried many different versions of using push and splice, but can't seem to combine two hashes as needed. Trying to insert the second hash into the first inside the 'Item' array:
(
    ItemData => { Item => { ItemNum => 2, PriceList => "25.00", UOM => " " } },
)
(
    Alternate => {
        Description  => "OIL FILTER",
        InFile       => "Y",
        MfgCode      => "FRA",
        QtyAvailable => 29,
        Stocked      => "Y",
    },
)
And I need to insert the second 'Alternate' hash into the 'Item' array of the first hash for this result:
(
    ItemData => {
        Item => {
            Alternate => {
                Description  => "OIL FILTER",
                InFile       => "Y",
                MfgCode      => "FRA",
                QtyAvailable => 29,
                Stocked      => "Y",
            },
            ItemNum   => 2,
            PriceList => "25.00",
            UOM       => " ",
        },
    },
)
Can someone suggest how I can accomplish this?
Assuming you have two hash references, this is straight-forward.
my $item = {
    'ItemData' => {
        'Item' => {
            'PriceList' => '25.00',
            'UOM'       => ' ',
            'ItemNum'   => '2'
        }
    }
};

my $alt = {
    'Alternate' => {
        'MfgCode'      => 'FRA',
        'Description'  => 'OIL FILTER',
        'Stocked'      => 'Y',
        'InFile'       => 'Y',
        'QtyAvailable' => '29'
    }
};
$item->{ItemData}->{Item}->{Alternate} = $alt->{Alternate};
The trick here is not to actually merge $alt into some part of $item, but to take only the specific part you want and put it where you want it. We take the Alternate key from $alt and put its content into a new Alternate key inside the guts of $item.
Adam Millerchip pointed out in a since-deleted comment that this is not a copy. If you alter any of the keys inside $alt->{Alternate} after sticking it into $item, the data will change inside $item as well, because we are dealing with references.
$item->{ItemData}->{Item}->{Alternate} = $alt->{Alternate};
$alt->{Alternate}->{InFile} = 'foobar';
This will actually also change the value of $item->{ItemData}->{Item}->{Alternate}->{InFile} to foobar as seen below.
$VAR1 = {
    'ItemData' => {
        'Item' => {
            'ItemNum'   => '2',
            'Alternate' => {
                'Stocked'      => 'Y',
                'MfgCode'      => 'FRA',
                'InFile'       => 'foobar',
                'Description'  => 'OIL FILTER',
                'QtyAvailable' => '29'
            },
            'UOM'       => ' ',
            'PriceList' => '25.00'
        }
    }
};
References are supposed to do that: they only reference the data rather than copying it. That's what's good about them.
To make a real copy, you need to dereference and create a new anonymous hash reference.
# dereference the hash, then wrap it in a new anonymous hash reference
$item->{ItemData}->{Item}->{Alternate} = { %{ $alt->{Alternate} } };
This will create a shallow copy. The values directly inside of the Alternate key will be copies, but if they contain references, those will not be copied, but referenced.
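If you need a deep copy, where nested references are cloned as well, one option is dclone from the core module Storable:

use Storable qw(dclone);

# recursively clones the whole structure, not just the top level
$item->{ItemData}->{Item}->{Alternate} = dclone( $alt->{Alternate} );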
If you do want to merge larger data structures where more than the content of one key needs to be merged, take a look at Hash::Merge instead.
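For illustration, a minimal Hash::Merge sketch (using its default merge behavior; check the documentation for the precedence options):

use Hash::Merge qw(merge);

# returns a new hashref combining both structures recursively
my $merged = merge( $item->{ItemData}{Item}, $alt );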

How can I join a nested Perl hash?

I have a Perl hash, where I store information about LUNs. It has the following structure:
my %luns = (
    360000 => {
        Devices => [
            {
                Major_Minor  => "8:144",
                SCSI_Address => "1:0:0:8",
                SCSI_Device  => "sdj",
                SCSI_Host    => "host1",
            },
            {
                Major_Minor  => "129:48",
                SCSI_Address => "3:0:0:8",
                SCSI_Device  => "sder",
                SCSI_Host    => "host3",
            },
        ],
        DM_Device => "dm-13",
        Size      => "45G",
        WWID      => 360000,
    },
    360001 => {
        Devices => [
            {
                Major_Minor  => "70:144",
                SCSI_Address => "1:0:1:39",
                SCSI_Device  => "sddb",
                SCSI_Host    => "host1",
            },
            {
                Major_Minor  => "135:48",
                SCSI_Address => "3:0:1:39",
                SCSI_Device  => "sdij",
                SCSI_Host    => "host3",
            },
        ],
        DM_Device => "dm-53",
        Size      => "200G",
        WWID      => 360000,
    },
);
How can I use join to get a comma-separated list of all SCSI_Devices, for example, of 360000?
You're working with a Hash of Hash of Array of Hash. To learn how to work with such structures, I recommend reading perldsc - Perl Data Structures Cookbook.
In this instance, the following loop will print out each of your device lists:
for my $id ( sort { $a <=> $b } keys %luns ) {
    my @devices = map { $_->{SCSI_Device} } @{ $luns{$id}{Devices} };
    print "$id - @devices\n";
}
Outputs:
360000 - sdj sder
360001 - sddb sdij
You say you want a list of values for LUN 360000, so for a start you need
$luns{360000}
which is another hash with a Devices element, which has an array reference as a value, and DM_Device, Size, and WWID elements, whose values are simple scalars.
So presumably you want the list that is
$luns{360000}{Devices}
which is an array of references to hashes, each of which has Major_Minor, SCSI_Address, SCSI_Device, and SCSI_Host elements.
It sounds like you want the SCSI_Device element, and map is the ideal tool to help you with this
my @scsi_devices = map { $_->{SCSI_Device} } @{ $luns{360000}{Devices} };
That last step is a big leap, and it may help to separate it in your code. For instance, you can copy the reference to the list of devices for 360000, like this
my $devices = $luns{360000}{Devices};
and extract the SCSI_Device from each of the hashes in that array with
my @scsi_devices = map { $_->{SCSI_Device} } @$devices;
Either way, the array reference must be dereferenced and the required element from each hash in that array must be extracted.
To get a CSV record, unless the data may contain commas or double-quotes, you simply need to join the result of that map
print join(',', @scsi_devices), "\n";
output
sdj,sder
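If the device names could ever contain commas or double-quotes, Text::CSV (a CPAN module) takes care of the quoting; a brief sketch:

use Text::CSV;

my $csv = Text::CSV->new( { binary => 1 } );
$csv->combine(@scsi_devices);    # builds one properly quoted CSV record
print $csv->string, "\n";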
Although I think this falls short of what you actually need. If this isn't clear then please ask.

How can I look and search for a key inside a heavily nested hash?

I am trying to check whether a big hash contains any keys from a small hash, and if they exist, update the big hash with the values from the small hash.
So the lookup hash would look like this:
my %configure = (
    CommonParameter => {
        'SibSendOverride'        => 'true',
        'SibOverrideEnabledFlag' => 'true',
        'SiPosition'             => '8',
        'Period'                 => '11'
    }
);
But the BigHash is very deeply nested. The key CommonParameter from the small hash %configure does exist in the BigHash.
Can somebody help or suggest some ideas, please?
Here is an example BigHash :
%BigHash = (
    'SibConfig' => {
        'CELL' => {
            'Sib9' => {
                'HnbName'         => 'HnbName',
                'CommonParameter' => {
                    'SibSendOverride'            => 'false',
                    'SibMaskOverrideEnabledFlag' => 'false',
                    'SiPosition'                 => '0',
                    'Period'                     => '8'
                }
            }
        }
    },
)
I hope I was clear in my question: I am trying to modify values of the heavily nested BigHash based on the lookup hash, if those keys exist.
Can somebody help me? I am not approaching this in the right way. Is there a neat little key-lookup function or something available, perhaps?
Give Data::Search a try.
use Data::Search;

my @results = Data::Search::datasearch(
    data   => \%BigHash,
    search => 'keys',
    find   => 'CommonParameter',
    return => 'hashcontainer'
);

foreach my $result (@results) {
    # $result is a hashref that has 'CommonParameter' as a key
    if ( $result->{CommonParameter}{AnotherKey} ne $AnotherValue ) {
        print STDERR "AnotherKey was ", $result->{CommonParameter}{AnotherKey},
            " ... fixing\n";
        $result->{CommonParameter}{AnotherKey} = $AnotherValue;
    }
}
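To tie this back to the question, a sketch that copies every override from the small %configure lookup hash into each container that was found, assuming the structures shown above:

foreach my $result (@results) {
    # overwrite each nested parameter with the value from the lookup hash
    while ( my ($key, $value) = each %{ $configure{CommonParameter} } ) {
        $result->{CommonParameter}{$key} = $value;
    }
}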

How can I do a scrolled search on MetaCPAN?

I'm trying to convert this script to use the new Elasticsearch official client instead of the older (now deprecated) ElasticSearch.pm, but I can't get the scrolled search to work. Here's what I've got:
#! /usr/bin/perl
use strict;
use warnings;
use 5.010;

use Elasticsearch ();
use Elasticsearch::Scroll ();

my $es = Elasticsearch->new(
    nodes    => 'http://api.metacpan.org:80',
    cxn      => 'NetCurl',
    cxn_pool => 'Static::NoPing',
    #log_to   => 'Stderr',
    #trace_to => 'Stderr',
);

say 'Getting all results at once works:';

my $results = $es->search(
    index => 'v0',
    type  => 'release',
    body  => {
        filter => { range => { date => { gte => '2013-11-28T00:00:00.000Z' } } },
        fields => [qw(author archive date)],
    },
);

foreach my $hit (@{ $results->{hits}{hits} }) {
    my $field = $hit->{fields};
    say "@$field{qw(date author archive)}";
}

say "\nUsing a scrolled search does not work:";

my $scroller = Elasticsearch::Scroll->new(
    es          => $es,
    index       => 'v0',
    search_type => 'scan',
    size        => 100,
    type        => 'release',
    body        => {
        filter => { range => { date => { gte => '2013-11-28T00:00:00.000Z' } } },
        fields => [qw(author archive date)],
    },
);

while (my $hit = $scroller->next) {
    my $field = $hit->{fields};
    say "@$field{qw(date author archive)}";
} # end while $hit
The first search, where I'm just getting all the results in 1 chunk, works fine. But the second search, where I'm trying to scroll through the results, produces:
Using a scrolled search does not work:
[Request] ** [http://api.metacpan.org:80]-[500]
ActionRequestValidationException[Validation Failed: 1: scrollId is missing;],
called from sub Elasticsearch::Transport::try {...}
at .../Try/Tiny.pm line 83. With vars: {'body' =>
'ActionRequestValidationException[Validation Failed: 1: scrollId is missing;]',
'request' => {'path' => '/_search/scroll','serialize' => 'std',
'body' => 'c2Nhbjs1OzE3MjU0NjM2MjowakFELUU3VFFibTJIZW1ibUo0SUdROzE3MjU0NjM2NDowakFELUU3VFFibTJIZW1ibUo0SUdROzE3MjU0NjM2MTowakFELUU3VFFibTJIZW1ibUo0SUdROzE3MjU0NjM2MDowakFELUU3VFFibTJIZW1ibUo0SUdROzE3MjU0NjM2MzowakFELUU3VFFibTJIZW1ibUo0SUdROzE7dG90YWxfaGl0czoxNDQ7',
'method' => 'GET','qs' => {'scroll' => '1m'},'ignore' => [],
'mime_type' => 'application/json'},'status_code' => 500}
What am I doing wrong? I'm using Elasticsearch 0.75 and Elasticsearch-Cxn-NetCurl 0.02, and Perl 5.18.1.
I finally got it working with the newer Search::Elasticsearch official client. Here's the short version:
#! /usr/bin/perl
use strict;
use warnings;
use 5.010;

use Search::Elasticsearch ();

my $es = Search::Elasticsearch->new(
    cxn_pool => 'Static::NoPing',
    nodes    => 'api.metacpan.org:80',
);

my $scroller = $es->scroll_helper(
    index       => 'v0',
    type        => 'release',
    search_type => 'scan',
    scroll      => '2m',
    size        => 100,
    body        => {
        fields => [qw(author archive date)],
        query  => { range => { date => { gte => '2015-02-01T00:00:00.000Z' } } },
    },
);

while (my $hit = $scroller->next) {
    my $field = $hit->{fields};
    say "@$field{qw(date author archive)}";
} # end while $hit
Note that the records are not sorted when you do a scrolled search. I wound up dumping the records into a temporary database and sorting them locally. The updated script is on GitHub.
I don't have a direct answer, but I might have an approach to trouble shooting:
I followed your link to the Elasticsearch::Client and found a scroll() method:
https://metacpan.org/pod/Elasticsearch::Client::Direct#scroll
This method takes scroll and scroll_id as parameters. scroll is the number of minutes that you can keep calling the scroll method before the search expires. scroll_id is a marker to the place where the last call to scroll() ended.
$results = $e->scroll(
    scroll    => '1m',
    scroll_id => $id
);
Elasticsearch::Scroll is an object oriented wrapper around scroll() which hides scroll and scroll_id.
I would run perl -d on your script, and step in to $scroller->next and follow that as far down the rabbit hole as you can. Something in there is trying a search which should be populating scroll_id or scrollId and is failing.
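For example (the fully qualified sub name is a guess at where the relevant code lives):

perl -d your_script.pl
# at the DB<1> prompt:
#   b Elasticsearch::Scroll::next   # break when the scroller fetches results
#   c                               # continue until the breakpoint is hit
#   s                               # then single-step into the internals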
My description here is admittedly pretty rough... I ran across an accurate description of what the scroll id is and does during my googling, but I can't seem to find it again.