PostgreSQL: get the distinct values with the max of another column

I have a table with these values:
id - idProduction - historical - idVehicle - km
1 - 200 - 1 - 258 - 100
2 - 200 - 2 - 258 - 100
3 - 200 - 2 - 259 - 150
4 - 200 - 3 - 258 - 120
5 - 200 - 3 - 259 - 170
6 - 300 - 1 - 100 - 80
7 - 100 - 1 - 258 - 140
8 - 300 - 2 - 325 - 50
For each distinct idProduction, I need the rows with the max historical. In this case:
4 - 200 - 3 - 258 - 120
5 - 200 - 3 - 259 - 170
7 - 100 - 1 - 258 - 140
8 - 300 - 2 - 325 - 50
This is my first time working with PostgreSQL, so I don't really know how to do it. Can anyone help me?
Thank you!
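(For anyone who wants to reproduce this, the sample rows can be loaded with a script like the one below. The table name sample_rows and the integer column types are illustrative stand-ins only; the real table in the query further down is called productions_vehicles.)
-- Hypothetical setup mirroring the sample rows shown above
CREATE TABLE sample_rows (
    id            integer PRIMARY KEY,
    id_production integer,   -- idProduction
    historical    integer,
    id_vehicle    integer,   -- idVehicle
    km            integer
);
INSERT INTO sample_rows (id, id_production, historical, id_vehicle, km) VALUES
    (1, 200, 1, 258, 100),
    (2, 200, 2, 258, 100),
    (3, 200, 2, 259, 150),
    (4, 200, 3, 258, 120),
    (5, 200, 3, 259, 170),
    (6, 300, 1, 100,  80),
    (7, 100, 1, 258, 140),
    (8, 300, 2, 325,  50);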

I think I can solve my problem with this, but I'm not sure:
SELECT id, productions_vehicles.id_production, nu_task_number, id_historical, id_vehicle
FROM productions_vehicles
INNER JOIN
(SELECT id_production, MAX(id_historical) AS idHistorico
FROM productions_vehicles
GROUP BY id_production) topHistorico
ON productions_vehicles.id_production = topHistorico.id_production
AND productions_vehicles.id_historical = topHistorico.idHistorico;

You effectively need two queries. Your solution looks good; you can also use a WITH clause (a common table expression) for the first query:
WITH topHistorico AS (
SELECT id_production, MAX(id_historical) AS idHistorico
FROM productions_vehicles
GROUP BY id_production)
SELECT id, pv.id_production, nu_task_number, id_historical, id_vehicle
FROM productions_vehicles pv
INNER JOIN topHistorico th ON pv.id_production = th.id_production AND pv.id_historical = th.idHistorico;
See the PostgreSQL documentation (9.1): WITH Queries (Common Table Expressions).
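As an aside, a window function can express the same thing in a single pass over the table. This is only a sketch, assuming the column names used above; rank() keeps every row that ties for the highest id_historical within an id_production (so rows 4 and 5 both survive for production 200):
SELECT id, id_production, nu_task_number, id_historical, id_vehicle
FROM (
    SELECT pv.*,
           rank() OVER (PARTITION BY id_production
                        ORDER BY id_historical DESC) AS rnk
    FROM productions_vehicles pv
) ranked
WHERE rnk = 1;
Compared with the join approach, this scans the table only once, and ties at the max historical are preserved because rank() assigns 1 to all of them.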


Kafka Streams: application stuck rebalancing

After all Kafka brokers were restarted to update the offset.retention.minutes setting (increasing it to 60 days), the Kafka Streams application consuming from them got stuck, and the consumer group shows as rebalancing:
bin/kafka-consumer-groups.sh --bootstrap-server ${BOOTSTRAP_SERVERS} --describe --group stream-processor | sort
Warning: Consumer group 'stream-processor' is rebalancing.
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
customers 0 84263 84288 25 - - -
customers 1 71731 85068 13337 - - -
customers 10 71841 84801 12960 - - -
customers 11 84273 84336 63 - - -
customers 12 84189 84297 108 - - -
customers 13 83969 84051 82 - - -
customers 14 84693 84767 74 - - -
customers 15 84472 84556 84 - - -
customers 2 84175 84239 64 - - -
customers 3 71719 71719 0 - - -
customers 4 71446 84499 13053 - - -
customers 5 84291 84361 70 - - -
customers 6 71700 71700 0 - - -
customers 7 72003 85235 13232 - - -
customers 8 84521 84587 66 - - -
customers 9 71513 71513 0 - - -
customers-intermediate 0 102774 102792 18 - - -
customers-intermediate 1 102931 103028 97 - - -
customers-intermediate 10 102883 102965 82 - - -
customers-intermediate 11 102723 102861 138 - - -
customers-intermediate 12 102797 102931 134 - - -
customers-intermediate 13 102339 102460 121 - - -
customers-intermediate 14 103227 103321 94 - - -
customers-intermediate 15 103231 103366 135 - - -
customers-intermediate 2 102637 102723 86 - - -
customers-intermediate 3 84620 103297 18677 - - -
customers-intermediate 4 102596 102687 91 - - -
customers-intermediate 5 102980 103071 91 - - -
customers-intermediate 6 84385 103058 18673 - - -
customers-intermediate 7 103559 103652 93 - - -
customers-intermediate 8 103243 103312 69 - - -
customers-intermediate 9 84211 102772 18561 - - -
events 15 11958555 15231834 3273279 - - -
events 3 1393386 16534651 15141265 - - -
events 4 1149540 15390069 14240529 - - -
visitors 15 2774874 2778873 3999 - - -
visitors 3 603242 603324 82 - - -
visitors 4 565266 565834 568
The streams application was restarted too; afterwards I could see processing logs for about 20 hours, and then it stopped processing.
It has been like this for two days. It is also worth mentioning that all topics above have 16 partitions, but some of them (visitors, events) show only three partitions here. However, when I describe the topics, their partitions are well distributed as usual and I can find nothing strange there.
What could have happened?
After an application restart, I can see all partitions again, with the application consuming from them. However, many (most) partitions had lost their offsets. Since I changed the offset.retention.minutes setting, this should not have happened.
events 0 - 14538261 - stream-processor-01f7ecea-4e50-4505-a8e7-8c536058b7bc-StreamThread-1-consumer-a8a6e989-d6c1-472f-aec5-3ae637d87b9e /ip2 stream-processor-01f7ecea-4e50-4505-a8e7-8c536058b7bc-StreamThread-1-consumer
events 1 49070 13276094 13227024 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 10 - 15593746 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 11 - 15525487 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 12 - 21863908 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 13 - 15810925 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 14 - 13509742 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 15 11958555 15231834 3273279 stream-processor-01f7ecea-4e50-4505-a8e7-8c536058b7bc-StreamThread-1-consumer-a8a6e989-d6c1-472f-aec5-3ae637d87b9e /ip2 stream-processor-01f7ecea-4e50-4505-a8e7-8c536058b7bc-StreamThread-1-consumer
...
Kafka 1.1.0, Kafka Streams 1.0
UPDATE: Still happening on 2.1.1.

kdb: how to pass a column name into a function

As a simplifying example, I have
tbl:flip `sym`v1`v2!(`a`b`c`d; 50 280 1200 1800; 40 190 1300 1900)
and I'd like to pass a column name into a function like
f:{[t;c];:update v3:2 * c from t;}
In this form it doesn't work. Any suggestions on how I can make this happen?
Thanks
Another option is to use the functional form of the update statement.
https://code.kx.com/q/ref/funsql/#functional-sql
q)tbl:flip `sym`v1`v2!(`a`b`c`d; 50 280 1200 1800; 40 190 1300 1900)
q)parse"update v3:2*x from t"
!
`t
()
0b
(,`v3)!,(*;2;`x)
q){![x;();0b;enlist[`v3]!enlist(*;2;y)]} [tbl;`v2]
sym v1 v2 v3
------------------
a 50 40 80
b 280 190 380
c 1200 1300 2600
d 1800 1900 3800
One option to achieve this is using # amend:
q){[t;c;n] #[t;n;:;2*t c]}[tbl;`v1;`v3]
sym v1 v2 v3
------------------
a 50 40 100
b 280 190 560
c 1200 1300 2400
d 1800 1900 3600
This reads column c from table t and saves the doubled values as a new column n. You could also alter this to allow passing in custom functions too:
{[t;c;n;f] #[t;n;:;f t c]}[tbl;`v1;`v3;{2*x}]

JSX/Photoshop: layer opacity issue

Photoshop CS6/JSX. I'm changing the opacity of the selected layer by increasing or decreasing it by 10. The problem I'm getting:
When reducing the value by 10 I get this sequence of reductions:
100 - 90 - 80 - 71 - 61 - 51 - 41 - 31 - 22 - 12 - 2
When increasing, the results are:
0 - 10 - 20 - 31 - 41 - 51 - 61 - 71 - 82 - 92
The code is something like this:
var opc = app.activeDocument.activeLayer.opacity;
desc2.putUnitDouble(cTID('Opct'), cTID('#Prc'), opc - 10.0);
/* or
desc2.putUnitDouble(cTID('Opct'), cTID('#Prc'), opc + 10.0); */
Any idea on how to fix it in order to get only multiples of 10?
Thanks in advance
Math.round() does the trick. First, force the opacity of the layer to a rounded value:
var opc = Math.round(app.activeDocument.activeLayer.opacity)
Now you can change the opacity by adding or subtracting the desired value:
app.activeDocument.activeLayer.opacity = opc - 10; // or + 10
Thanks to Anna Forrest for the help.

Missing data (JasperReports Server reports)

OK, so when I run my report in iReport I only get one row of output:
100 100 - BA - 7294 - 1 - 3
But when I copy the query generated by the report out of the server logs and run it directly, I get 80 rows of output:
100 100 - BA - 7294 - 1 - 3
100 101 - BA - 7294 - 1 - 3
100 102 - BA - 7294 - 1 - 3
100 103 - BA - 7294 - 1 - 3
100 104 - BA - 7294 - 1 - 3
100 106 - BA - 7294 - 1 - 3
100 107 - BA - 7294 - 1 - 3
100 108 - BA - 7294 - 1 - 3
100 109 - BA - 7294 - 1 - 3
100 110 - BA - 7294 - 1 - 3
etc...
I have done these kinds of reports a hundred times, this has never happened before, and I can't seem to find a solution.
What could be the cause of this missing data?
Here is the query I copied from the logs (it's a simple query, nothing fancy):
select
`quallev_qualificationlevel`.`quallev_fieldofstudy` as `quallev_qualificationlevel_quallev_fieldofstudy`,
`qde_qualificationdescription`.`qde_shortdescription` as `qde_qualificationdescription_qde_shortdescription`,
`qde_qualificationdescription`.`qde_namepurpose_cd_id` as `qde_qualificationdescription_qde_namepurpose_cd_id`,
`cn_campusname`.`cn_campusid` as `cn_campusname_cn_campusid`,
`qde_qualificationdescription`.`qde_langauge_id` as `qde_qualificationdescription_qde_langauge_id`,
`fv_factorvalue_ft`.`fv_1` as `fv_factorvalue_ft_fv_1`,
`fv_factorvalue_ft`.`fv_2` as `fv_factorvalue_ft_fv_2`,
`fv_factorvalue_ft`.`fv_3` as `fv_factorvalue_ft_fv_3`,
`fosfte_fieldofstudyftefactor`.`fosfte_factor_cd_id` as `fosfte_fieldofstudyftefactor_fosfte_factor_cd_id`,
`fv_factorvalue_ft`.`fv_4` as `fv_factorvalue_ft_fv_4`,
`fv_factorvalue_ft`.`fv_5` as `fv_factorvalue_ft_fv_5`,
`fv_factorvalue_ft`.`fv_6` as `fv_factorvalue_ft_fv_6`,
`fv_factorvalue_ft`.`fv_7` as `fv_factorvalue_ft_fv_7`,
`fv_factorvalue_ft`.`fv_8` as `fv_factorvalue_ft_fv_8`,
`fv_factorvalue_ft`.`fv_9` as `fv_factorvalue_ft_fv_9`,
`fos_fieldofstudy`.`fos_code` as `fos_fieldofstudy_fos_code`,
`fosfte_fieldofstudyftefactor`.`fosfte_ftefactorvalue` as `fosfte_fieldofstudyftefactor_fosfte_ftefactorvalue`,
`qual_qualification`.`qual_code` as `qual_qualification_qual_code`,
`oun_organisationunitname`.`oun_ou_id` as `oun_organisationunitname_oun_ou_id`,
`fos_fieldofstudy`.`fos_startdate` as `fos_fieldofstudy_fos_startdate`,
`qh_qualificationhemis`.`qh_weight` as `qh_qualificationhemis_qh_weight`
from `qo_qualificationorganisation`
inner join `quallev_qualificationlevel` on (`qo_qualificationorganisation`.`qo_quallev_id` = `quallev_qualificationlevel`.`quallev_id`)
inner join `org_organisation` on (`org_organisation`.`org_be_id` = `qo_qualificationorganisation`.`qo_org_be_id`)
inner join `oun_organisationunitname` on (`oun_organisationunitname`.`oun_ou_id` = `org_organisation`.`org_ou_id`)
inner join `cn_campusname` on (`org_organisation`.`org_campusid` = `cn_campusname`.`cn_campusid`)
inner join `fos_fieldofstudy` on (`quallev_qualificationlevel`.`quallev_fos_id` = `fos_fieldofstudy`.`fos_id`)
inner join `qual_qualification` on (`quallev_qualificationlevel`.`quallev_qual_id` = `qual_qualification`.`qual_id`)
inner join `qh_qualificationhemis` on (`qh_qualificationhemis`.`qh_qual_id` = `qual_qualification`.`qual_id`)
inner join `fosfte_fieldofstudyftefactor` on (`fosfte_fieldofstudyftefactor`.`fosfte_fos_id` = `fos_fieldofstudy`.`fos_id`)
inner join `qde_qualificationdescription` on (`qde_qualificationdescription`.`qde_qual_id` = `qual_qualification`.`qual_id`)
inner join `fv_factorvalue_ft` on (`fos_fieldofstudy`.`fos_id` = `fv_factorvalue_ft`.`fv_fos_id`)
where `qde_qualificationdescription`.`qde_langauge_id` = 3
and `fos_fieldofstudy`.`fos_startdate` <= '2013-11-08'
and `qde_qualificationdescription`.`qde_namepurpose_cd_id` = 7294
and `cn_campusname`.`cn_campusid` = '1'
and `oun_organisationunitname`.`oun_ou_id` = '11'
and `fosfte_fieldofstudyftefactor`.`fosfte_factor_cd_id` = 7699
group by `quallev_qualificationlevel`.`quallev_fieldofstudy`
Note: I am using iReport 5.0.0 on Win7 Pro 64bit for JasperReports Server 5.0.1 on Tomcat 7
OK, so I finally found the solution.
The problem was that JasperReports Server had a default row limit of 200,000.
I raised that limit to 2,000,000 and boom, it works now :)

iPhone: float vs integer rounding?

Okay, from what I understand, an integer result that is a fraction will be rounded one way or the other, so that if a formula comes up with, say, 5/6, it will automatically round it to 1. I have a calculation:
xyz = ((1300 - [abc intValue])/6) + 100;
xyz is defined as an NSInteger, abc is an NSString that is chosen via a UIPicker. I want the calculation (1300 - [abc intValue]) to add 1 to 100 for each 6 units below 1300. For example, 1255 should result in xyz having a value of 100 and 1254 should result in a value of 101.
Now, I understand that my formula above is wrong because of the rounding principles, but I am getting some CRAZY results from the program itself. When I punched in 1259 - I got 106. When I punched in 1255 - I got 107. Why would it behave that way?
Your understanding is wrong. Integer division truncates:
5 / 6 == 0
(1300 - 1259) / 6 == 41 / 6 == 6
(1300 - 1255) / 6 == 45 / 6 == 7
You can use:
xyz = ((1300.0 - [abc intValue])/6) + 100;
and make xyz a double instead of an NSInteger (there is no NSDouble type). That will ensure the division is done in floating point.
You may also be confusing numbers and time. 1255 is 45 below 1300, not 5 below :-)