This query selects a random client_id value from the table, and I need that value in another query.
String s= "SELECT client_id FROM clients OFFSET random()*(select count(*) from clients) LIMIT 1";
Statement d = conn.createStatement();
ResultSet rs1 = d.executeQuery(s);
UPDATE: I tried to cast it like this:
int val = ((Number) rs1).intValue();
and the SQL error shows:
org.postgresql.jdbc3g.Jdbc3gResultSet cannot be cast to java.lang.Number
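The cast fails because rs1 is the ResultSet object itself, not a column value. A minimal sketch of reading the column instead (assuming the SELECT above returned one row):
int val = -1;
if (rs1.next()) {                       // advance to the first (and only) row
    val = rs1.getInt("client_id");      // read the column value as an int
}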
I want to use the value from the result set in my other query:
insert into invheader(invdate,client_id,amount,tax,total,closed,ship_via,note)
values(
to_date('"+indate+"', 'DD-mon-YYYY'),
'"+rs1+"', //<<---- i need to pass the value of result set in integer format
'"+amont+"',
'"+tx+"',
'"+tot+"',
'"+closd+"',
'"+shipvia+"',
'"+not+"')";
UPDATE: I am passing the value from a servlet to PostgreSQL.
Avoid the round trip and do it all in SQL
with client_id as (
select client_id
from clients
offset random() * (select count(*) from clients)
limit 1
)
insert into invheader (
invdate, client_id, amount, tax, total, closed, ship_via, note
) values (
to_date('"+indate+"', 'DD-mon-YYYY'),
(select client_id from client_id), //<<---- the random client_id is selected right here, so nothing has to be passed in
'"+amont+"',
'"+tx+"',
'"+tot+"',
'"+closd+"',
'"+shipvia+"',
'"+not+"')";
And be very aware of SQL injection:
https://www.google.com/search?q=sql+injection&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a&channel=sb
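On the Java side, a minimal sketch of running the statement above with a PreparedStatement instead of string concatenation (the CTE is renamed to picked for clarity; variable names come from the question, the servlet parameters are assumed to arrive as Strings, and the invheader column types are assumptions, so adjust the setters accordingly):
// needs java.sql.Connection, java.sql.PreparedStatement, java.math.BigDecimal
String sql =
    "WITH picked AS ( " +
    "  SELECT client_id FROM clients " +
    "  OFFSET random() * (SELECT count(*) FROM clients) LIMIT 1 " +
    ") " +
    "INSERT INTO invheader (invdate, client_id, amount, tax, total, closed, ship_via, note) " +
    "VALUES (to_date(?, 'DD-mon-YYYY'), (SELECT client_id FROM picked), ?, ?, ?, ?, ?, ?)";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, indate);                               // invdate as 'DD-mon-YYYY' text
    ps.setBigDecimal(2, new java.math.BigDecimal(amont));  // amount
    ps.setBigDecimal(3, new java.math.BigDecimal(tx));     // tax
    ps.setBigDecimal(4, new java.math.BigDecimal(tot));    // total
    ps.setBoolean(5, Boolean.parseBoolean(closd));         // assumes closed is boolean
    ps.setString(6, shipvia);
    ps.setString(7, not);
    ps.executeUpdate();
}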
I'm trying to get the latest "date" (so the max value of "date"), and from the same table I want the max value of "stand" as well, for the same ID.
I have tons of dates and stands for one ID, but I only want to extract the latest.
I'm trying to save it into a function; I don't know yet if that's the best idea. The rest of my query is made of inner joins.
Datum is of type date.
Stand is of type decimal(18,6).
DECLARE @MAXDATE DATE
DECLARE @MAXSTAND decimal(18,6)
SELECT @MAXDATE = MAX(Datum) FROM [dbo].[1] WHERE ID = ID
SELECT @MAXSTAND = Stand FROM [dbo].[2] WHERE ID = ID
Result I get: @MAXDATE: 2106-10-13
Result I get: @MAXSTAND: 0.000000
Result I want: @MAXDATE: 2018-01-16
Result I want: @MAXSTAND: 1098.000000
Assuming SQL Server 2012 or higher, you can use the first_value window function:
SELECT FIRST_VALUE(Stand) OVER (ORDER BY Datum DESC)
FROM TableName
WHERE Id = @Id
This will return the value of Stand where the Datum column has the latest value for the specific Id.
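For completeness, a minimal sketch of capturing both values into the variables from the question (TableName and the column names are taken from above and are assumptions), using TOP (1) with ORDER BY rather than the window function:
DECLARE @MAXDATE DATE, @MAXSTAND decimal(18,6);

SELECT TOP (1)
       @MAXDATE  = Datum,
       @MAXSTAND = Stand
FROM   TableName
WHERE  ID = @Id
ORDER  BY Datum DESC;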
With the data provided, I don't think you need anything else:
SELECT MAX(Datum), MAX(Stand)
FROM TableName
WHERE ID = @MyId
Edit: If you want it per ID, you can do this:
SELECT MAX(Datum), MAX(Stand), ID
FROM TableName
GROUP BY ID
First off, I'm using SQL Server 2008 R2
I am moving data from one source to another. In this particular case there is a field called SiteID. In the source it's not a required field, but in the destination it is. So my thought was, when the SiteID from the source is NULL, to create a SiteID "on the fly" during the query of the source data: something like a combination of the state, plus the first 8 characters of a description field, plus a ten-digit incremented number.
At first I thought it might be easy to use a combination of date/time plus nanoseconds, but it turns out that several records can be retrieved within the same nanosecond, leading to duplicate SiteIDs.
My second idea was to create a table containing an identity field, plus a function that would add a record to increment the identity field and then return it (the function would also delete all records where the identity field is less than the latest, to save space). Unfortunately, after I got it written, I got a notice when trying to CREATE the function that INSERTs are not allowed in functions.
I could (and did) convert it to a stored procedure, but stored procedures are not allowed in select queries.
So now I'm stuck.
Is there any way to accomplish what I'm trying to do?
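For reference, a minimal sketch of building the ID inline during the SELECT, with ROW_NUMBER() supplying the incrementing part; State and Description are hypothetical column names standing in for the state and description fields mentioned above, and SiteID is assumed to be a character column:
SELECT ISNULL(
           s.SiteID,
           s.State
             + LEFT(s.Description, 8)
             + RIGHT('0000000000'
                     + CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS VARCHAR(10)), 10)
       ) AS SiteID
       -- plus the other columns being moved
FROM   SourceTable AS s;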
This script may take time to execute depending on the data present in the table, so first execute it on a small sample dataset.
DECLARE @TotalMissingSiteID INT = 0,
        @Counter INT = 0,
        @NewID BIGINT;

DECLARE @NewSiteIDs TABLE
(
    SiteID BIGINT -- Check the datatype
);

SELECT @TotalMissingSiteID = COUNT(*)
FROM SourceTable
WHERE SiteID IS NULL;

WHILE (@Counter < @TotalMissingSiteID)
BEGIN
    WHILE (1 = 1)
    BEGIN
        SELECT @NewID = RAND() * 1000000000000000; -- Add your formula to generate new SiteIDs here

        -- To check if the generated SiteID is already present in the table
        IF (ISNULL((SELECT 1
                    FROM SourceTable
                    WHERE SiteID = @NewID), 0) = 0)
            BREAK;
    END

    INSERT INTO @NewSiteIDs (SiteID)
    VALUES (@NewID);

    SET @Counter = @Counter + 1;
END

INSERT INTO DestinationTable (SiteID) -- Add the extra columns here
SELECT ISNULL(MainTable.SiteID, NewIDs.SiteID) AS SiteID
FROM (
    SELECT SiteID, -- Add the extra columns here
           ROW_NUMBER() OVER (PARTITION BY SiteID
                              ORDER BY SiteID) AS SerialNumber
    FROM SourceTable
) MainTable
LEFT JOIN (
    SELECT SiteID,
           ROW_NUMBER() OVER (ORDER BY SiteID) AS SerialNumber
    FROM @NewSiteIDs
) NewIDs
    ON MainTable.SiteID IS NULL
   AND MainTable.SerialNumber = NewIDs.SerialNumber
This is my result set.
I am returning this result set on the basis of refid, using WHERE refid IN.
Here I need to apply some logic without any kind of programming (meaning an SQL query only).
If, in the result set, I am getting a period for a particular refid, then the other rows with the same refid must not be returned.
For example, refid 2667105 has a period, so myid = 612084598 must not be returned in the result set.
I think this can be achieved using CASE, but I have no idea how to use it; I don't know whether I should put the CASE in the SELECT statement or in the WHERE clause...
EDIT:
This is how it is supposed to work:
myid = 612084598 is the default row for refid = 2667105, but if someone specifically wants the refid for period = 6, then it must return all rows except myid = 612084598.
But if I am looking for period = 12, no specific refid is present in the database for this period, so it must return all rows except the first one, i.e. all rows with the refid that is the default one.
The problem definition is not very clear, but try this:
with cte as (
select
*,
first_value(period) over(partition by refid order by myid) as fv
from test
)
select
myid, refid, period
from cte
where period is not null or fv is null
sql fiddle demo
I need 3 rows for each one of the two valid rows displayed below.
The primary key is C_PROCEDURE + C_PROV_TYPE + SPEC_SEQ_NO!
The output shall be like below:
You could try something like this:
INSERT INTO YourTable (
C_PROCEDURE,
C_PROV_TYPE,
I_PT_SPEC_SEQ_NO,
C_SPECIALTY
)
SELECT
s.C_PROCEDURE,
s.C_PROV_TYPE,
s.MaxSeq + ROW_NUMBER() OVER (
PARTITION BY s.C_PROCEDURE, s.C_PROV_TYPE
ORDER BY v.rn, s.I_PT_SPEC_SEQ_NO),
s.C_SPECIALTY + v.rn
FROM (
SELECT
*,
MAX(I_PT_SPEC_SEQ_NO) OVER (
PARTITION BY C_PROCEDURE, C_PROV_TYPE
) AS MaxSeq
FROM YourTable
) s
CROSS JOIN (
VALUES (1), (2), (3)
) v (rn)
WHERE s.C_PROV_TYPE = '014'
AND s.C_SPECIALTY = '300'
;
Basically, the subquery returns all the YourTable rows supplied with the maximum values of I_PT_SPEC_SEQ_NO for every partition of (C_PROCEDURE, C_PROV_TYPE) using the windowing MAX() function (MAX(...) OVER (...)).
The resulting set of that subquery is then cross-joined to an inline 3-row table (which produces three copies of every row returned) and filtered by the specified values of C_PROV_TYPE and C_SPECIALTY.
New data rows pull C_PROCEDURE and C_PROV_TYPE directly from the subquery. The new C_SPECIALTY values are produced using those from the subquery and the rn values of the inline table. The new sequence numbers are generated with the help of the ROW_NUMBER() function and the maximum sequence numbers returned by the subquery.
As I didn't have access to a working installation of DB2, I was testing my script in SQL Server 2008, trying to stick to features that I understood DB2 supported as well as SQL Server. This SQL Fiddle demo also uses a SQL Server 2008 instance to demonstrate how the query works.
My query looks like this:
SELECT mthreport.*
FROM crosstab
('SELECT
to_char(ipstimestamp, ''mon DD HH24h'') As row_name,
varid::text || log.varid || ''_'' || ips.objectname::text As bucket,
COUNT(*)::integer As bucketvalue
FROM loggingdb_ips_boolean As log
INNER JOIN IpsObjects As ips
ON log.Varid=ips.ObjectId
WHERE ((log.varid = 37551)
OR (log.varid = 27087)
OR (log.varid = 50876)
OR (log.varid = 45096)
OR (log.varid = 54708)
OR (log.varid = 47475)
OR (log.varid = 54606)
OR (log.varid = 25528)
OR (log.varid = 54729))
GROUP BY to_char(ipstimestamp, ''yyyy MM DD HH24h''), row_name, objectid, bucket
ORDER BY to_char(ipstimestamp, ''yyyy MM DD HH24h''), row_name, objectid, bucket' )
As mthreport(item_name text, varid_37551 integer,
varid_27087 integer ,
varid_50876 integer ,
varid_45096 integer ,
varid_54708 integer ,
varid_47475 integer ,
varid_54606 integer ,
varid_25528 integer ,
varid_54729 integer ,
varid_29469 integer)
The query can be tested against a test table with this connection string:
"host=bellariastrasse.com port=5432 dbname=IpsLogging user=guest password=guest"
The query is syntactically correct and runs fine. My problem is that the COUNT(*) values always fill the leftmost column. However, in many instances the left columns should have a zero, or a NULL, and only the 2nd (or n-th) column should be filled. My brain is melting and I cannot figure out what is wrong!
The solution for your problem is to use the crosstab() variant with two parameters.
The second parameter (another query string) produces the list of output columns, so that NULL values in the data query (the first parameter) are assigned correctly.
Check the manual for the tablefunc extension, and in particular crosstab(text, text):
The main limitation of the single-parameter form of crosstab is that
it treats all values in a group alike, inserting each value into the
first available column. If you want the value columns to correspond to
specific categories of data, and some groups might not have data for
some of the categories, that doesn't work well. The two-parameter form
of crosstab handles this case by providing an explicit list of the
categories corresponding to the output columns.
Emphasis mine. I posted a couple of related answers recently here or here or here.
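A minimal sketch of the two-parameter form adapted to the question (the simplified bucket expression and the GROUP BY are assumptions; the point is that the second query enumerates the categories in the same order as the output column list, so row/category combinations with no data land in the right column as NULL):
SELECT mthreport.*
FROM crosstab(
    $$SELECT to_char(ipstimestamp, 'mon DD HH24h') AS row_name,
             'varid_' || log.varid::text           AS bucket,
             count(*)::integer                      AS bucketvalue
      FROM   loggingdb_ips_boolean AS log
      JOIN   IpsObjects            AS ips ON log.Varid = ips.ObjectId
      WHERE  log.varid IN (37551, 27087, 50876, 45096, 54708,
                           47475, 54606, 25528, 54729)
      GROUP  BY 1, 2
      ORDER  BY 1, 2$$,
    $$VALUES ('varid_37551'), ('varid_27087'), ('varid_50876'),
             ('varid_45096'), ('varid_54708'), ('varid_47475'),
             ('varid_54606'), ('varid_25528'), ('varid_54729')$$
) AS mthreport(item_name   text,
               varid_37551 integer, varid_27087 integer, varid_50876 integer,
               varid_45096 integer, varid_54708 integer, varid_47475 integer,
               varid_54606 integer, varid_25528 integer, varid_54729 integer);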