Merge part in merge sort

We have merge sort for two arrays or linked lists. How can I write the merge part for more than two linked lists?

Either merge two lists at a time and merge the result with the third, or alter the merging logic to take the minimum element from all three lists.
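
For the second approach, here is a minimal sketch in Scala (mergeK is an illustrative name; the inputs are assumed to be already-sorted lists, and it generalizes from three lists to any number):

// Sketch of a k-way merge: repeatedly take the smallest head element
// across all input lists (assumed sorted).
def mergeK(lists: List[List[Int]]): List[Int] = {
  val nonEmpty = lists.filter(_.nonEmpty)
  if (nonEmpty.isEmpty) Nil
  else {
    // Index of the list whose head is the minimum of all heads.
    val i = nonEmpty.indexOf(nonEmpty.minBy(_.head))
    nonEmpty(i).head :: mergeK(nonEmpty.updated(i, nonEmpty(i).tail))
  }
}

// mergeK(List(List(1, 4, 7), List(2, 5), List(3, 6, 8)))
// => List(1, 2, 3, 4, 5, 6, 7, 8)

For many lists, replacing the linear scan over the heads with a priority queue keyed on each head scales better.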

Recursively split the set of arrays into two sets of arrays that need to be merged. When the set contains only one array, return it. Then merge the resulting array from each recursive call using your standard two-way merge.
array merge( list_of_arrays )
{
    if (sizeof(list_of_arrays) == 1)
        return first( list_of_arrays )
    else
        // merge_two is the ordinary two-way merge from merge sort
        return merge_two( merge( first_half( list_of_arrays ) ),
                          merge( second_half( list_of_arrays ) ) )
}
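
The same divide-and-conquer idea as a runnable Scala sketch (merge2 and mergeAll are illustrative names; merge2 is the ordinary two-way merge from merge sort):

// Two-way merge of two sorted lists, as in ordinary merge sort.
def merge2(a: List[Int], b: List[Int]): List[Int] = (a, b) match {
  case (Nil, ys) => ys
  case (xs, Nil) => xs
  case (x :: xs, y :: ys) =>
    if (x <= y) x :: merge2(xs, b) else y :: merge2(a, ys)
}

// Split the set of lists in half, merge each half recursively,
// then two-way merge the two results.
def mergeAll(lists: List[List[Int]]): List[Int] = lists match {
  case Nil           => Nil
  case single :: Nil => single
  case _ =>
    val (left, right) = lists.splitAt(lists.length / 2)
    merge2(mergeAll(left), mergeAll(right))
}

Splitting in half like this makes O(log k) merging passes over the data for k lists, rather than the O(k) passes you get by folding the lists in one at a time.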

Related

PostgreSQL: json object where keys are unique array elements and values are the count of times they appear in the array

I have an array of strings, some of which may be repeated. I am trying to build a query which returns a single json object where the keys are the distinct values in the array, and the values are the count of times each value appears in the array.
I have built the following query:
WITH items (item) AS (SELECT UNNEST(ARRAY['a','b','c','a','a','a','c']))
SELECT json_object_agg(distinct_values, counts) item_counts
FROM (
    SELECT
        sub2.distinct_values,
        count(items.item) counts
    FROM (
        SELECT DISTINCT items.item AS distinct_values
        FROM items
    ) sub2
    JOIN items ON items.item = sub2.distinct_values
    GROUP BY sub2.distinct_values, items.item
) sub1
This provides the result I'm looking for: { "a" : 4, "b" : 1, "c" : 2 }
However, it feels like there's probably a better / more elegant / less verbose way of achieving the same thing, so I wondered if any one could point me in the right direction.
For context, I would like to use this as part of a bigger more complex query, but I didn't want to complicate the question with irrelevant details. The array of strings is what one column of the query currently returns, and I would like to convert it into this JSON blob. If it's easier and quicker to do it in code then I can, but I wanted to see if there was an easy way to do it in postgres first.
I think a CTE and json_object_agg() offer a bit of a shortcut to get you there:
WITH counter AS (
    SELECT item, COUNT(*) AS item_count
    FROM UNNEST(ARRAY['a','b','c','a','a','a','c']) AS item
    GROUP BY 1
    ORDER BY 1
)
SELECT json_object_agg(item, item_count) FROM counter
Output:
{"a":4,"b":1,"c":2}

Is there a Scala collection that maintains the order of insert?

I have a List hdtList which contains the columns of a Hive table:
forecast_id bigint,period_year bigint,period_num bigint,period_name string,drm_org string,ledger_id bigint,currency_code string,source_system_name string,source_record_type string,gl_source_name string,gl_source_system_name string,year string
I have a List partition_columns which contains two elements: source_system_name and period_year.
Using partition_columns, I am trying to match the corresponding columns in hdtList and move them to the end of it, as below:
val (pc, notPc) = hdtList.partition(c => partition_columns.contains(c.takeWhile(x => x != ' ')))
But when I print them as: println(notPc.mkString(",") + "," + pc.mkString(","))
I see the output unordered as below:
forecast_id bigint,period_num bigint,period_name string,drm_org string,ledger_id bigint,currency_code string,source_record_type string,gl_source_name string,gl_source_system_name string,year string,period string,period_year bigint,source_system_name string
The column period_year comes first and source_system_name last. Is there any way I can arrange the data as below, so that the order of the columns in partition_columns is maintained?
forecast_id bigint,period_num bigint,period_name string,drm_org string,ledger_id bigint,currency_code string,source_record_type string,gl_source_name string,gl_source_system_name string,year string,period string,source_system_name string,period_year bigint
I know there is an option to reverse a List, but I'd like to learn whether I can use a collection that maintains the order of insert.
It doesn't matter which collection you use; you only use partition_columns to call contains, which doesn't depend on its order, so how could it be maintained?
But your code does maintain order: it's just hdtList's order.
Something like
// get is ugly, but safe here
val pc1 = partition_columns.map(x => pc.find(y => y.startsWith(x)).get)
after your code will give you the desired order, though there's probably a more efficient way to do it (see the sketch below).
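
One way to make it more efficient, sketched under the assumption that every name in partition_columns matches exactly one column in pc (byName is an illustrative name):

// Index the partitioned columns by their name (the token before the
// first space) once, instead of doing a linear find per partition column.
val byName: Map[String, String] =
  pc.map(c => c.takeWhile(_ != ' ') -> c).toMap

// Look each partition column up in order; flatMap drops any name
// with no match instead of throwing like .get would.
val pc1 = partition_columns.flatMap(byName.get)

This still keeps the order of partition_columns, since map and flatMap over a List preserve element order.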

How to merge collections in an EF query

An order has many order-details.
I want to query all the order-details of some orders and combine them into a single IEnumerable.
How can I do that? The code below returns IEnumerable<IEnumerable<OrderDetail>>:
db.Order.Where(o => o.OrderDate > date1).Select(o => o.OrderDetail)
Use SelectMany()
Projects each element of a sequence to an IEnumerable<T> and flattens the resulting sequences into one sequence.
db.Order.Where(o => o.OrderDate > date1).SelectMany(o => o.OrderDetail);

Access nested hash in Perl HoH without using keys()?

Consider the following HoH:
$h = {
    a => {
        1 => 'x'
    },
    b => {
        2 => 'y'
    },
    ...
};
Is there a way to check whether a hash key exists on the second nested level without calling keys(%$h)? For example, I want to say something like:
if ( exists($h->{*}->{1}) ) { ...
(I realize you can't use * as a hash key wildcard, but you get the idea...)
I'm trying to avoid using keys() because it will reset the hash iterator and I am iterating over $h in a loop using:
while ( (my ($key, $value) = each %$h) ) {
...
}
The closest language construct I could find is the smart match operator (~~) mentioned here (with no mention in the perlref perldoc), but even if ~~ were available in the version of Perl I'm constrained to using (5.8.4), from what I can tell it wouldn't work in this case.
If it can't be done I suppose I'll copy the keys into an array or hash before entering my while loop (which is how I started), but I was hoping to avoid the overhead.
Not really. If you need to do that, I think I'd create a merged hash listing all the second-level keys (before starting your main loop):
my $h = {
    a => {
        1 => 'x'
    },
    b => {
        2 => 'y'
    },
};
my %all = map { %$_ } values %$h;
Then your exists($h->{*}->{1}) becomes exists($all{1}). Of course, this won't work if you're modifying the second-level hashes inside the loop (unless you update %all appropriately). The code also assumes that all values in $h are hashrefs, but that would be easy to fix if necessary.
No. each uses the hash's iterator, and you cannot iterate over a hash without using its iterator, not even in the C API. (That means smart match wouldn't help anyway.)
Since each hash has its own iterator, you must be calling keys on the same hash that you are already iterating over using each to run into this problem. Since you have no problem calling keys on that hash, could you just simply use keys instead of each? Or maybe call keys once, store the result, then iterate over the stored keys?
You will almost certainly find that the 'overhead' of aggregating the second-level hashes is less than that of any other solution. A simple hash lookup is far faster than iterating over the entire data structure every time you want to make the check.
Are you trying to do this without calling keys() at all? You can test for the nested key directly on each value inside your existing loop, without generating an error:
while ( my ($key, $value) = each %{$h} ) {
    if ( exists $value->{1} ) { ... }
}
Why not do this in Sybase itself instead of Perl?
You are trying to do a set operation, which is exactly what Sybase is built for.
Assuming you retrieved the data from a table with columns "key1", "key2", "value" via "select *", simply do:
-- Make sure mytable has an index on key1
SELECT key1
FROM mytable t1
WHERE NOT EXISTS (
    SELECT 1 FROM mytable t2
    WHERE t1.key1 = t2.key1
      AND t2.key2 = 1
)
-----------
-- OR
-----------
SELECT DISTINCT key1
INTO #t
FROM mytable

CREATE INDEX idx1_t ON #t (key1)

DELETE #t
FROM mytable
WHERE #t.key1 = mytable.key1
  AND mytable.key2 = 1

SELECT key1 FROM #t
Either query returns the list of first-level keys that don't have a key2 of 1.

Zend_Db_Adapter_Mysqli::fetchAssoc() I don't want primary keys as array indexes!

According to the ZF documentation, when using fetchAssoc() the first column in the result set must contain unique values, or else rows with duplicate values in the first column will overwrite previous data.
I don't want this; I want my array to be indexed 0, 1, 2, 3... I don't need the rows to be unique because I won't modify them and won't save them back to the DB.
According to the ZF documentation, fetchAll() (when using the default fetch mode, which is in fact FETCH_ASSOC) is equivalent to fetchAssoc(). BUT IT'S NOT.
I've used the print_r() function to reveal the truth.
print_r($db->fetchAll('select col1, col2 from table'));
prints
Array
(
[0] => Array
(
[col1] => 1
[col2] => 2
)
)
So:
fetchAll() is what I wanted.
There's a bug in the ZF documentation.
From http://framework.zend.com/manual/1.11/en/zend.db.adapter.html
The fetchAssoc() method returns data in an array of associative arrays, regardless of what value you have set for the fetch mode, **using the first column as the array index**.
So if you put
$result = $db->fetchAssoc(
'SELECT some_column, other_column FROM table'
);
you'll have as a result an array indexed by the values of some_column, where each entry is the full row:
$result[<value of some_column>]['other_column']