Some databases limit how many '?' parameters a statement may contain, e.g. in MS SQL the limit is 1000.
However, when I create an IN query with a Pageable, this limit is not taken into account. If I search for 1200 names, for example, 1200 '?' placeholders end up in the IN clause. This crashes under MS SQL with the following error:
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too many parameters. The server supports a maximum of 1200 parameters. Reduce the number of parameters and resend the request.
Repo:
@Repository
public interface CourseRepository extends JpaRepository<Course, UUID> {
    Page<Course> findAllByNameIn(Collection<String> names, Pageable pageable);
}
Test class:
@SpringBootTest
public class SqlQueryTest {

    @Autowired
    private CourseRepository courseRepository;

    @Test
    public void testInQuery() {
        final List<String> names = new ArrayList<>();
        IntStream.range(0, 1200).forEach(count -> {
            final Course course = new Course();
            final String name = RandomStringUtils.randomAlphabetic(10);
            names.add(name);
            course.setName(name);
            courseRepository.save(course);
        });
        courseRepository.findAllByNameIn(names, Pageable.ofSize(500));
    }
}
The generated statement looks like this:
select
course0_.id as id1_0_,
course0_.name as name2_0_
from
course course0_
where
course0_.name in (
? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ....
I've also tried the EclipseLink query hints batch_size and batch_type with value IN, but without success. See https://www.eclipse.org/eclipselink/documentation/2.5/jpa/extensions/q_batch_size.htm
So my question: is it a bug that an invalid statement is executed? Or do I have to take care of this myself and split up the selects?
When I load a relationship via an IN statement, the batch size works and several selects are executed with IN.
Is it possible to execute individual selects (e.g. findByName) and run them as a batch?
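For reference, here is a minimal sketch of the client-side splitting mentioned above (my own code, not a Spring Data feature). It assumes a hypothetical non-paged findAllByNameIn(Collection<String>) derived query on the repository and simply concatenates the per-chunk results, so paging across chunks would still need to be handled by the caller:

import java.util.ArrayList;
import java.util.List;

public class ChunkedQueries {

    // Stay under the server's parameter cap (1000 chosen per the limit above).
    private static final int CHUNK_SIZE = 1000;

    public static List<Course> findAllByNameInChunked(CourseRepository repo, List<String> names) {
        final List<Course> result = new ArrayList<>();
        for (int i = 0; i < names.size(); i += CHUNK_SIZE) {
            // One query per chunk, each with at most CHUNK_SIZE '?' parameters
            final List<String> chunk = names.subList(i, Math.min(i + CHUNK_SIZE, names.size()));
            result.addAll(repo.findAllByNameIn(chunk)); // hypothetical non-paged variant
        }
        return result;
    }
}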
Related
I have a query param of type array to collect IDs to query from a Postgres table. I think I have built everything out appropriately, but the query fails with ERROR: syntax error at or near "$1".
The logs are:
SELECT
professional_leads.first_name
, professional_leads.last_name
, professional_leads.email
, professional_leads.phone_number
, professional_leads.professional_id as proId
, professional_leads.id as proLeadId
, professional_leads.user_id
, professional_leads.interview_offered_at
, professional_leads.sms_enabled
, professional_leads.email_enabled
, professional_leads.resume_pdf_object_key
, professional_leads.created_at
, professional_leads.updated_at
, professional_leads.reschedule_count
, professional_leads.experience_level
, professional_leads.waitlisted_reason
, professional_leads.resume_state
, professional_leads.interview_state
, professional_leads.state
, professional_leads.profession_id
, professional_leads.indicated_specialty_codes
, professional_leads.other_specialties
, professional_leads.professional_id
, professional_leads.license_received_on
, professional_leads.license_expires_on
, professional_leads.region_id
, professional_leads.marketing_channel
, professional_leads.newsletter
, professional_leads.referral_code
, professional_leads.asset_proof_type
, professional_leads.verification_state
, professional_leads.duplicate
FROM
professional_leads
WHERE
id IN :clause
DEBUG 2022-10-01 21:25:45,346 [[MuleRuntime].uber.15: [api-database-sapi].Copy_of_get-Flow.BLOCKING #7f0aedd] [processor: Copy_of_get-Flow/processors/0/processors/0; event: 232aba80-41f1-11ed-b583-f02f4b10a50d] org.mule.db.commons.shaded.internal.domain.executor.AbstractExecutor: Executing query:
SELECT
    ... (same column list as above) ...
FROM
    professional_leads
WHERE
    id IN ?
Parameters:
clause = ('6a379873-93f9-4b16-8752-168aa92c8846','a234570e-a739-4bcc-847a-a875f5202398')
I flatten the array into a var ids:
"(" ++ (attributes.queryParams.*id map "'$'" joinBy ",") ++ ")"
I have my query as follows:
%dw 2.0
output text
---
"SELECT
professional_leads.first_name
, professional_leads.last_name
, professional_leads.email
, professional_leads.phone_number
, professional_leads.professional_id as proId
, professional_leads.id as proLeadId
, professional_leads.user_id
, professional_leads.interview_offered_at
, professional_leads.sms_enabled
, professional_leads.email_enabled
, professional_leads.resume_pdf_object_key
, professional_leads.created_at
, professional_leads.updated_at
, professional_leads.reschedule_count
, professional_leads.experience_level
, professional_leads.waitlisted_reason
, professional_leads.resume_state
, professional_leads.interview_state
, professional_leads.state
, professional_leads.profession_id
, professional_leads.indicated_specialty_codes
, professional_leads.other_specialties
, professional_leads.professional_id
, professional_leads.license_received_on
, professional_leads.license_expires_on
, professional_leads.region_id
, professional_leads.marketing_channel
, professional_leads.newsletter
, professional_leads.referral_code
, professional_leads.asset_proof_type
, professional_leads.verification_state
, professional_leads.duplicate
FROM
professional_leads
WHERE
id IN :clause"
My input parameters in the call are:
{
"clause": vars.ids
}
If I grab the query and substitute the bind variable's value verbatim, the query executes fine.
Is there a limitation with IN and bind variables?
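For context, here is my own illustration (not from the logs above) of why this fails: a bind variable carries exactly one scalar value, so the flattened string cannot expand into a list.

-- 'IN ?' is not valid SQL by itself: IN needs a parenthesized list or
-- subquery, hence the syntax error at the parameter ($1).
WHERE id IN ?

-- Even with parentheses, one placeholder binds one scalar, so the whole
-- flattened string would be compared as a single literal:
WHERE id IN (?)     -- ? = "('6a37...','a234...')"

-- A genuine IN list needs one placeholder per element:
WHERE id IN (?, ?)  -- ?1 = '6a37...', ?2 = 'a234...'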
Using DBD::Pg, I'm attempting to make an insert statement with bound variables, one of which is a tsrange. I keep getting syntax errors; can someone please explain the proper way to do this?
From the Perl script:
$sth->{'insert'}->execute($hashRef->{'<NUMBER>'}
, $hashRef->{'<FIRSTNAME>'}
, $hashRef->{'<LASTNAME>'}
, $hashRef->{'<DATEIN>'} . ' ' . $hashRef->{'<TIMEIN>'}
, $hashRef->{'<DATEOUT>'} . ' ' . $hashRef->{'<TIMEOUT>'}
, $hashRef->{'<JOBCODE>'}
, $hashRef->{'<JOBCODEDESC>'}
, $hashRef->{'<COSTCODELEVEL1>'}
, $hashRef->{'<COSTCODELEVEL2>'}
, $hashRef->{'<COSTCODELEVEL3>'}
, $hashRef->{'<DEPARTMENT>'}
)or die $DBI::errstr;
From the config file:
sql:
insert: |-
insert into etl.timeclock_plus values (
?
, ?
, ?
, [ ? , ? ]
, ?
, ?
, ?
, ?
, ?
, ?
)
The error:
syntax error at or near "$4"
Instead of
[ $1, $2 ]
which is invalid SQL, use a range constructor function:
tstzrange($1, $2, '[)')
There are also tsrange and daterange if you need those data types.
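Applied to the config above, a minimal sketch (my adaptation, untested against the poster's table definition) keeps the same eleven bound values but wraps the two date/time placeholders in the constructor; since the column here is a tsrange, the matching constructor is tsrange:

sql:
  insert: |-
    insert into etl.timeclock_plus values (
    ?
    , ?
    , ?
    , tsrange( ? , ? , '[)' )
    , ?
    , ?
    , ?
    , ?
    , ?
    , ?
    )

The Perl execute call can stay exactly as it is: the concatenated DATEIN/TIMEIN and DATEOUT/TIMEOUT strings become the two arguments of tsrange.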
I need help with an interpolation algorithm for my app.
I have four arrays:
(1) my measurements array (days versus weights)
(2) a reference array with minimum weights
(3) a reference array with maximum weights
(4) a reference array with average weights
First I have to fill the gaps in my measurement array (weights for the missing days).
After that, I want to predict the coming days (day 45 up to 120) making use of the reference data (arrays 2 through 4). The assumption is that the measurement weights will eventually be up to par… but it can take a couple of days longer.
I included a line graph of what the final results should look like.
Can this be done with Swift or should I use a framework like Accelerate or Upsurge?
My measurements:
[ (0.0 , 25.4) , (5.0 , 30.3) , (6.0 , 33.5) , (9.0 , 51.2) , (12.0 , 83.1) , (16.0 , 143.0) , (21.0 , 238.6) , (24.0 , 311.7) , (25.0 , 322.8) , (29.0 , 460.9) , (31.0 , 520.4) , (35.0 , 642.2) , (36.0 , 694.0) , (43.0 , 988.3) , (44.0 , 1018.4) ]
Reference average:
[ (0.0 , 20.0) , (1.0 , 22.5), (2.0 , 27.0), (3.0 , 32.0), (4.0 , 37.2), (5.0 , 44.1), (6.0 , 68.4), (7.0 , 76.7), (8.0 , 101.4), (9.0 , 117.7), (10.0 , 148.8), (11.0 , 172.6), (12.0 , 212.6), (13.0 , 238.4), (14.0 , 272.3), (15.0 , 304.8), (16.0 , 335.6), (17.0 , 369.8), (18.0 , 405.3), (19.0 , 444.3), (20.0 , 476.3), (21.0 , 509.1), (22.0 , 546.5), (23.0 , 583.7), (24.0 , 620.8), (25.0 , 657.0), (26.0 , 698.2), (27.0 , 735.3), (28.0 , 769.7), (29.0 , 810.3), (30.0 , 848.2), (31.0 , 885.0), (32.0 , 921.2), (33.0 , 956.4), (34.0 , 984.2), (35.0 , 1012.1), (36.0 , 1038.8), (37.0 , 1069.8), (38.0 , 1096.4), (39.0 , 1119.1), (40.0 , 1145.5), (41.0 , 1162.1), (42.0 , 1179.6), (43.0 , 1204.0), (44.0 , 1222.8), (45.0 , 1240.6), (46.0 , 1255.7), (47.0 , 1269.6), (48.0 , 1277.5), (49.0 , 1290.5), (50.0 , 1300.6), (51.0 , 1312.4), (52.0 , 1317.3), (53.0 , 1324.6), (54.0 , 1332.1), (55.0 , 1339.6), (56.0 , 1340.2), (57.0 , 1346.8), (58.0 , 1347.4), (59.0 , 1349.6), (60.0 , 1348.0), (61.0 , 1348.4), (62.0 , 1345.4), (63.0 , 1340.2), (64.0 , 1333.3), (65.0 , 1329.0), (66.0 , 1325.3), (67.0 , 1324.8), (68.0 , 1313.7), (69.0 , 1301.1), (70.0 , 1297.5), (71.0 , 1292.2), (72.0 , 1287.1), (73.0 , 1277.5), (74.0 , 1271.9), (75.0 , 1262.2), (76.0 , 1250.3), (77.0 , 1242.9), (78.0 , 1225.5), (79.0 , 1220.5), (80.0 , 1200.8), (81.0 , 1184.4), (82.0 , 1178.4), (83.0 , 1163.1), (84.0 , 1149.5), (85.0 , 1135.4), (86.0 , 1117.2), (87.0 , 1109.1), (88.0 , 1092.1), (89.0 , 1088.8), (90.0 , 1079.4), (91.0 , 1067.8), (92.0 , 1065.0), (93.0 , 1060.7), (94.0 , 1058.9), (95.0 , 1055.5), (96.0 , 1055.1), (97.0 , 1050.1), (98.0 , 1051.4), (99.0 , 1041.4), (100.0 , 1050.9), (101.0 , 1051.6) , (102.0 , 1048.1), (103.0 , 1057.2), (104.0 , 1060.5), (105.0 , 1062.4), (106.0 , 1069.4), (107.0 , 1072.0), (108.0 , 1077.0), (109.0 , 1068.1), (110.0 , 1077.7), (111.0 , 1071.0), (112.0 , 1060.0), (113.0 , 1058.9), (114.0 , 1050.6), (115.0 , 1047.2), (116.0 , 1052.2), (117.0 , 1051.8), (118.0 , 1024.1), (119.0 , 1041.6), (120.0 , 1048.4) ]
Reference minimum and maximum arrays are also available.
I tried to fill the gaps with the following code:
typealias Weights = (Double, Double)

var myArray1: [Weights] = [ (0.0 , 25.4) , (5.0 , 30.3) , (6.0 , 33.5) , (9.0 , 51.2) , (12.0 , 83.1) , (16.0 , 143.0) , (21.0 , 238.6) , (24.0 , 311.7) , (25.0 , 322.8) , (29.0 , 460.9) , (31.0 , 520.4) , (35.0 , 642.2) , (36.0 , 694.0) , (43.0 , 988.3) , (44.0 , 1018.4) ]
var myArray2: [Weights] = []
for i in 0..<45 { myArray2.append( (Double(i), 0.00) ) }

let mergedArrays = myArray2.map { calculated -> Weights in
    if let measured = myArray1.first(where: { $0.0 == calculated.0 }) {
        return measured
    } else {
        // interpolate weight??
        return calculated
    }
}
For the calculations, it would be something like:
(1) 30.3 - 25.4 = 4.9
(2) 4.9 / 5 days = 0.98 per day
so:
[ (0.0 , 25.4) , (1.0 , 26.4) , (2.0 , 27.4) , (3.0 , 28.3) , (4.0 , 29.3) , (5.0 , 30.3) ]
(3) then move on to the next weight after a 'weight with value 0.00'
But how do I implement those calculations?
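To make the gap-filling concrete, here is a minimal linear-interpolation sketch (my own code, untested against the reference arrays; it reuses the Weights alias defined above) that implements exactly steps (1)-(3):

// Fill missing days by linear interpolation between the two nearest measurements.
func fillGaps(measurements: [Weights], dayCount: Int) -> [Weights] {
    let sorted = measurements.sorted { $0.0 < $1.0 }
    var result: [Weights] = []
    for day in 0..<dayCount {
        let x = Double(day)
        if let measured = sorted.first(where: { $0.0 == x }) {
            result.append(measured)  // keep the real measurement
        } else if let lower = sorted.last(where: { $0.0 < x }),
                  let upper = sorted.first(where: { $0.0 > x }) {
            // y = y0 + (x - x0) * (y1 - y0) / (x1 - x0)
            let t = (x - lower.0) / (upper.0 - lower.0)
            result.append((x, lower.1 + t * (upper.1 - lower.1)))
        }
        // days after the last measurement are skipped here: that is the
        // prediction part, which the reference arrays would have to drive
    }
    return result
}

// fillGaps(measurements: myArray1, dayCount: 45) yields, between days 0 and 5:
// (1.0, 26.4), (2.0, 27.4), (3.0, 28.3), (4.0, 29.3) (to one decimal place)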
And then after that... the predictions...
In one of my applications I have a requirement to download a PDF file with report details in the form of a table.
To create the PDF file and write a table into it, I'm using the CPAN modules PDF::Report and PDF::Report::Table.
Here is the code sample:
#!/usr/bin/perl
use strict;
use warnings;
use PDF::Report;
use PDF::Report::Table;
my $pdf = PDF::Report->new( PageSize => 'A4', PageOrientation => 'Portrait' );
my $table = PDF::Report::Table->new( $pdf );
my $data = [
['A1' , 'B1' , 'C1'],
['A2' , 'B2' , 'C2'],
['A3' , 'B3' , 'C3'],
['A4' , 'B4' , 'C4'],
['A5' , 'B5' , 'C5'],
['A6' , 'B6' , 'C6'],
['A7' , 'B7' , 'C7'],
['A8' , 'B8' , 'C8'],
['A9' , 'B9' , 'C9'],
['A10' , 'B10' , 'C10'],
['A11' , 'B11' , 'C11'],
['A12' , 'B12' , 'C12'],
['A13' , 'B13' , 'C13'],
['A14' , 'B14' , 'C14'],
['A15' , 'B15' , 'C15'],
['A16' , 'B16' , 'C16'],
['A17' , 'B17' , 'C17'],
['A18' , 'B18' , 'C18'],
['A19' , 'B19' , 'C19'],
['A20' , 'B20' , 'C20'],
['A21' , 'B21' , 'C21'],
['A22' , 'B22' , 'C22'],
['A23' , 'B23' , 'C23'],
['A24' , 'B24' , 'C24'],
['A25' , 'B25' , 'C25'],
['A26' , 'B26' , 'C26'],
['A27' , 'B27' , 'C27'],
['A28' , 'B28' , 'C28'],
['A29' , 'B29' , 'C29'],
['A30' , 'B30' , 'C30'],
['A31' , 'B31' , 'C31'],
['A32' , 'B32' , 'C32'],
['A33' , 'B33' , 'C33'],
['A34' , 'B34' , 'C34'],
['A35' , 'B35' , 'C35'],
['A36' , 'B36' , 'C36'],
['A37' , 'B37' , 'C37'],
['A38' , 'B38' , 'C38'],
['A39' , 'B39' , 'C39'],
['A40' , 'B40' , 'C40'],
['A41' , 'B41' , 'C41'],
];
$pdf->openpage;
$pdf->setAddTextPos( 50, 50 );
$table->addTable( $data, 400 ); # 400 is table width
$pdf->saveAs( 'table.pdf' );
Result: a PDF is generated with 2 pages,
but at the page break a row of data is missing.
Note: I'm having trouble attaching a screenshot of the result.
The issue: the row with data [A37, B37, C37] is missing.
Please help me fix this issue.
Thanks in advance for all your help.
Well, when I run your code I get:
commandPrompt > ./makepdf.pl
Useless use of greediness modifier '?' in regex; marked by <-- HERE in m/(\S{20}? <-- HERE )(?=\S)/ at /usr/local/share/perl/5.20.2/PDF/Table.pm line 386.
!!! Warning: !!! Incorrect Table Geometry! Setting bottom margin to end of sheet!
at /usr/local/share/perl/5.20.2/PDF/Report/Table.pm line 94.
!!! Warning: !!! Incorrect Table Geometry! Setting bottom margin to end of sheet!
at /usr/local/share/perl/5.20.2/PDF/Report/Table.pm line 94.
I would think that
Setting bottom margin to end of sheet!
and
!!! Warning: !!! Incorrect Table Geometry!
would have something to do with it.
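For what it's worth, here is a sketch of one thing to try (my own guess, untested): PDF coordinates usually run bottom-up, so setAddTextPos( 50, 50 ) starts the table almost at the bottom of the A4 page, which would fit the geometry warning. Starting near the top instead, assuming PDF::Report's getPageDimensions helper:

my ( $pagewidth, $pageheight ) = $pdf->getPageDimensions();
$pdf->setAddTextPos( 50, $pageheight - 50 );  # begin near the top of the page
$table->addTable( $data, 400 );               # 400 is table width, as before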
I am trying to import a CSV file into a PostgreSQL database.
I already tried set datestyle = mdy
\copy "Recon".snapdeal_sales (REFERENCES , ORDER_CODE ,SUB_ORDER_CODE ,
PRODUCT_NAME , ORDER_VERIFIED_DATE , ORDER_CREATED_DATE, AWBNO ,
SHIPPING_PROVIDER , SHIPPING_CITY , SHIPPING_METHOD , INVOICE_NUMBER ,
INVOICE_DATE , IMEI_SERIAL , STATUS , MANIFEST_BY_DATE , SHIPPED_ON ,
DELIVERED_ON , RETURN_INITIATED_ON , RETURN_DELIVERED_ON , SKU_CODE ,
PACKAGE_ID ,PRODUCT_CATEGORY, ATTRIBUTES , IMAGE_URL , PDP_URL , FREEBIES
,TRACKING_URL , ITEM_ID , MANIFEST_CODE , PROMISED_SHIP_DATE ,
NON_SERVICABLE_FROM , HOLD_DATE , HOLD_REASON , MRP
,EXPECTED_DELIVERY_DATE ,TAX_PERCENTAGE , CREATED ,RPI_DATE
,RPI_ISSUE_CATEGORY , RPR_DATE) FROM 'C:\Users\YAM\Documents\SALES.csv' DELIMITER ',' CSV HEADER;
First, run this query:
SET datestyle = dmy;
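Putting it together, a sketch of the session (my reading of this answer; it assumes the dates in SALES.csv are day-month-year, hence dmy rather than the mdy the question tried):

SET datestyle = dmy;
\copy "Recon".snapdeal_sales ( ...same column list as above... ) FROM 'C:\Users\YAM\Documents\SALES.csv' DELIMITER ',' CSV HEADER;

Note that SET only affects the current session, so both commands must run in the same psql session.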
In my case I was getting this error:
psycopg2.errors.DatetimeFieldOverflow: date/time field value out of range 23-09-2021
Solution: check what format the dates have in your DB and change the date format in your query accordingly.
In my case the DB format was yyyy-mm-dd, and I was querying with dd-mm-yyyy, which caused the error; simply change this.
Example: http://localhost:port/tweets?date=2021-09-23