Sorry in advance if my question already exists.
I'm a SQL beginner and I have an issue: how can I update a field using a SELECT, where one field already exists in the table (Table1) and the other field is filled in by this syntax:
Select
Business,
Month,
Opens
From Table2
I would like, in the same statement, to do the Send / Opens calculation and update the field named 'Open Rate'.
Example: today Table1 contains
Month -- Send -- Opens -- Open Rate
May -- 100 -- --
Table2 contains
Opens - Month
5 -- May
My goal: update Table1 from Table2 and fill in the open rate (Send / Opens):
Month -- Send -- Opens -- Open Rate
May -- 100 -- 5 -- 20
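A minimal sketch of the kind of update being asked about, using SQLite via Python purely for illustration (the exact syntax depends on the asker's RDBMS; the column is written `OpenRate` here rather than `Open Rate`, and the rate is computed as Send / Opens to match the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Table1 (Month TEXT, Send INTEGER, Opens INTEGER, OpenRate REAL)")
cur.execute("CREATE TABLE Table2 (Opens INTEGER, Month TEXT)")
cur.execute("INSERT INTO Table1 VALUES ('May', 100, NULL, NULL)")
cur.execute("INSERT INTO Table2 VALUES (5, 'May')")

# Copy Opens from Table2 (matched on Month) and compute the rate in one UPDATE
cur.execute("""
    UPDATE Table1
    SET Opens = (SELECT t2.Opens FROM Table2 t2 WHERE t2.Month = Table1.Month),
        OpenRate = Send * 1.0 /
                   (SELECT t2.Opens FROM Table2 t2 WHERE t2.Month = Table1.Month)
""")
print(cur.execute("SELECT * FROM Table1").fetchone())  # ('May', 100, 5, 20.0)
```

Many dialects (SQL Server, PostgreSQL) would let you write this more directly with `UPDATE ... FROM`, but the correlated-subquery form above is the most portable.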
I want to query a large number of rows and display them to the user, but the user will only see, for example, 10 rows at a time, so I will use LIMIT and OFFSET. When they press a 'next' button in the user interface, the next 10 rows will be fetched.
The database is updated all the time. Is there any way to guarantee that the user will see the next 10 rows as they were at the time of the first select, so that any changes to the data are not reflected when they choose to see the next 10 rows of the result?
This is like using a SELECT statement as a snapshot of the past: any updates made after the first SELECT would not be visible to the subsequent SELECT ... LIMIT ... OFFSET queries.
You can use cursors. For example:
drop table if exists test;
create table test(id int primary key);
insert into test
select i from generate_series(1, 50) i;
declare cur cursor with hold for
select * from test order by id;
move absolute 0 cur; -- get 1st page
fetch 5 cur;
id
----
1
2
3
4
5
(5 rows)
truncate test;
move absolute 5 cur; -- get 2nd page
fetch 5 cur; -- works even though the table is already empty
id
----
6
7
8
9
10
(5 rows)
close cur;
Note that this is a rather expensive solution. A cursor declared WITH HOLD materializes the complete result set (a snapshot), which can cause significant server load. Personally, I would rather look for alternative approaches (with dynamic results).
Read the documentation: DECLARE, FETCH.
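One such alternative with dynamic results is keyset (seek) pagination: instead of OFFSET, each page query seeks past the last key seen. A minimal sketch in Python with SQLite, reusing the `test` table from the cursor example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test (id INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO test VALUES (?)", [(i,) for i in range(1, 51)])

def next_page(last_id, page_size=5):
    # Seek past the last id already shown instead of counting rows with OFFSET
    return cur.execute(
        "SELECT id FROM test WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

page1 = next_page(0)             # first page: ids 1..5
page2 = next_page(page1[-1][0])  # second page: ids 6..10
print([r[0] for r in page2])     # [6, 7, 8, 9, 10]
```

Unlike the held cursor, this is cheap and index-friendly, but it deliberately reflects concurrent inserts and deletes rather than freezing a snapshot, so it answers a slightly different requirement than the question asks for.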
I am working on a SQL Server report which shows about 300K rows, and it is very slow; most of the time is spent on report processing. So I am thinking of writing something to fetch the data from the database one page at a time, which would reduce both the database call and the report processing time. In other words, if I am showing 50 records per page and I am on page one, clicking the next button should fetch records 51 to 100 for page 2; clicking it again should fetch records 101 to 150 for page 3.
So is there any way I can achieve this?
Usually, when the report is rendered, all the data is pulled and SSRS renders the report, which causes the delay. If performance is the key here, you could use a stored procedure to retrieve 50 rows at a time instead of passing all the values directly to SSRS. The caveat is that you won't be able to use the native next/previous page buttons.
A workaround is to create custom links that loop back to the report itself with an incremented parameter specifying the row index to start from.
Create a stored procedure that takes in a parameter which specifies the starting row from your table:
CREATE PROCEDURE dbo.usp_GetData
    @RowNumber BIGINT
AS
BEGIN
    DECLARE @FirstRow BIGINT
    DECLARE @LastRow BIGINT
    SET @FirstRow = @RowNumber
    SET @LastRow = @RowNumber + 50
    ;WITH CTE AS
    (
        SELECT *,
               ROW_NUMBER() OVER (ORDER BY num) AS RowNumber
        FROM dbo.TestTable
    )
    SELECT *
    FROM CTE
    WHERE RowNumber >= @FirstRow
      AND RowNumber < @LastRow
END
GO
Create a stored procedure that retrieves the total rows from your table:
CREATE PROCEDURE dbo.usp_GetTotalRows
AS
BEGIN
SELECT COUNT(*) "TotalRows"
FROM dbo.TestTable
END
GO
Create the report and add the two datasets using the two stored procedures.
The @RowNumber parameter should be generated automatically. You can set its default value to 1 to start the report from row 1.
Create two "custom" buttons on the report (effectively just links back to the same report). You could use text boxes for Previous/Next Page buttons.
For the "Previous Button" Text Box - Properties > Action > Go to report > Specify a report (select the name of your report name). Add Parameter and set the expression to:
=Parameters!RowNumber.Value-50
For the "Next Button" Text Box - Properties > Action > Go to report > Specify a report (select the name of your report name). Add Parameter and set the expression to:
=Parameters!RowNumber.Value+50
You can also change the visibility options for the buttons (for example, hiding the "Previous Page" button when Parameters!RowNumber.Value = 1, or hiding the "Next Page" button when Parameters!RowNumber.Value + 50 >= DataSetName!TotalRows.Value).
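The paging logic inside usp_GetData can be sanity-checked outside SSRS. Here is the same ROW_NUMBER window in a SQLite/Python sketch (the `TestTable`/`num` names follow the answer above; everything else is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE TestTable (num INTEGER)")
cur.executemany("INSERT INTO TestTable VALUES (?)", [(i,) for i in range(1, 201)])

def get_data(row_number, page_size=50):
    # Same half-open window as the stored procedure:
    # rows [row_number, row_number + page_size)
    return cur.execute("""
        WITH CTE AS (
            SELECT num, ROW_NUMBER() OVER (ORDER BY num) AS RowNumber
            FROM TestTable
        )
        SELECT num FROM CTE
        WHERE RowNumber >= ? AND RowNumber < ?
    """, (row_number, row_number + page_size)).fetchall()

page2 = get_data(51)              # the "next" click from page 1
print(page2[0][0], page2[-1][0])  # 51 100
```

The half-open `>= first AND < last` range is what keeps adjacent pages from overlapping: row 100 belongs to page 2, and page 3 starts cleanly at 101.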
I need to automatically insert a row in a stats table that is identified by the month number, if the new month does not exist as a row.
'cards' is a running count of individual IDs; it stores a current value (which gets reset at rollover time), a rollover count, and a running total of all events on that ID.
'stats' keeps a running count of all ID events and how many rollovers occurred in a given month.
CREATE TABLE IDS (ID_Num VARCHAR(30), Curr_Count INT, Rollover_Count INT, Total_Count INT);
CREATE TABLE stats(Month char(10), HitCount int, RolloverCount int);
CREATE TRIGGER update_Tstats BEFORE UPDATE OF Total_Count ON IDS
WHEN 0=(SELECT HitCount from stats WHERE Month = strftime('%m','now'))
(Also tried a "IS NULL" at the other end of the WHEN clause...still no joy)
BEGIN
INSERT INTO stats (Month, HitCount, RolloverCount) VALUES (strftime('%m', 'now'),0,0);
END;
I did have it working to a point, but since the rollover value was updated twice per cycle (the value is changed up and down via an SQL query in a Python script), it gave me double-ups in the stats rollover count. So now I'm running a double query in my script. However, this all falls over if the current month number does not exist in the stats table.
All I need to do is check whether a blank record exists for the current month for the Python script's UPDATE queries to run against, and if not, INSERT one. The script itself can't do a 'run once' type of query at startup, because it may run for days, including spanning a month changeover.
Any assistance would be hugely appreciated.
To check whether a record exists, use EXISTS:
CREATE TRIGGER ...
WHEN NOT EXISTS (SELECT 1 FROM stats WHERE Month = ...)
BEGIN
INSERT INTO stats ...
END;
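Putting that together with the tables from the question, a runnable sketch (SQLite via Python; the trigger body is assembled from the snippets above, so treat it as an assumption about the intended schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE IDS (ID_Num VARCHAR(30), Curr_Count INT, Rollover_Count INT, Total_Count INT);
CREATE TABLE stats (Month CHAR(10), HitCount INT, RolloverCount INT);

-- Insert a blank stats row for the current month, but only if it is missing
CREATE TRIGGER update_Tstats BEFORE UPDATE OF Total_Count ON IDS
WHEN NOT EXISTS (SELECT 1 FROM stats WHERE Month = strftime('%m', 'now'))
BEGIN
    INSERT INTO stats (Month, HitCount, RolloverCount)
    VALUES (strftime('%m', 'now'), 0, 0);
END;

INSERT INTO IDS VALUES ('card1', 0, 0, 0);
UPDATE IDS SET Total_Count = 1 WHERE ID_Num = 'card1';
""")
print(conn.execute("SELECT COUNT(*) FROM stats").fetchone())  # (1,)
```

Because the WHEN clause uses NOT EXISTS rather than comparing against a value, it also behaves correctly on the very first update of a new month, when there is no stats row at all for the subquery to return.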
As my question title says: how can I get the maximum value from a table?
In my app I have a table named dataset_master, and that table has a field named dataset_id, which I increment manually (a manual auto-increment).
The first time, when no record has been inserted in the table yet, I insert the first record with dataset_id = 1 (this happens only once).
Then, before inserting each subsequent record, I run a query to get the max value of dataset_id and insert dataset_id + 1, and so on.
In my case I use the following query to get the maximum dataset_id:
SELECT MAX(dataset_id) FROM dataset_master where project_id = 1
Here in my application I want to get the maximum value of the dataset_id field from the dataset_master table.
This query works properly: each time I insert a record into dataset_master I get the correct maximum dataset_id. But when I delete records in sequence (say 1 to 5 out of 10) and then insert a new record, I still get the last maximum number each time. For example:
If my table has 10 records, dataset_id runs from 1 to 10.
When I delete records 1 to 5, records 6 to 10 (and their dataset_ids) remain in the table.
When I then insert a new record, I get 10 (the maximum) each time, so each new record gets dataset_id + 1 = 11.
I don't know what the problem is (maybe a mistake in the query?); please give your suggestions.
You would need to reset the sequence in the sqlite_sequence table. I'd advise you not to worry about this, though; by the time it becomes a problem, it will be the least of your headaches.
I think the problem is not in your query, but in your insert. Do you force dataset_id when inserting new rows?
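For what it's worth, letting SQLite assign the id avoids the manual MAX(...)+1 dance entirely; a minimal sketch with an assumed two-column schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# AUTOINCREMENT guarantees ids are never reused, even after deletes
cur.execute("""
    CREATE TABLE dataset_master (
        dataset_id INTEGER PRIMARY KEY AUTOINCREMENT,
        project_id INT
    )
""")
for _ in range(10):
    cur.execute("INSERT INTO dataset_master (project_id) VALUES (1)")
cur.execute("DELETE FROM dataset_master WHERE dataset_id <= 5")  # drop ids 1..5
cur.execute("INSERT INTO dataset_master (project_id) VALUES (1)")
print(cur.execute(
    "SELECT MAX(dataset_id) FROM dataset_master WHERE project_id = 1"
).fetchone())  # (11,)
```

Note that the new row gets id 11, exactly the behaviour the question describes: after a delete, numbering continues from the highest id ever used, not from the current row count. That is by design, and it is usually what you want for identifiers.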
I know this might be redundant, but I have had the same query running for almost 3 days, and before I kill it I would like a community sanity check.
DELETE
FROM mytble
WHERE ogc_fid NOT IN
(SELECT MAX(dup.ogc_fid)
FROM mytble As dup
GROUP BY dup.id)
mytble is the name of the table, ogc_fid is the name of the unique id field, and id is the name of the field that I want to be the unique id. There are 41 million records in the table, and the indexes are built and everything, so I am still a bit concerned about why it's taking so long to complete. Any thoughts on this?
If I understood correctly, you want to delete all the records for which a record with the same dup_id (but a higher ogc_fid) exists, and keep only those with the highest ogc_fid.
-- DELETE -- uncomment this line and comment the next line if proven innocent.
SELECT COUNT(*)
FROM mytble mt
WHERE EXISTS (
SELECT *
FROM mytble nx
WHERE nx.dup_id = mt.dup_id -- there exists a row with the same dup_id
AND nx.ogc_fid > mt.ogc_fid -- , ... but with a higher ogc_fid
);
With an index on dup_id (and maybe on ogc_fid) this should run in perhaps a few minutes for 41M records.
UPDATE: if no indexes exist, you could speed up the above queries by first creating one:
CREATE UNIQUE INDEX sinterklaas ON mytble (dup_id, ogc_fid);
It would be nice if you provided EXPLAIN output, but what you're doing might be faster when done like this (again, I'd look at EXPLAIN):
DELETE FROM mytble d
USING mytble m
LEFT JOIN (SELECT MAX(ogc_fid) AS f FROM mytble GROUP BY id) AS q ON m.ogc_fid = q.f
WHERE d.ogc_fid = m.ogc_fid AND q.f IS NULL;
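As a sanity check of what the question's NOT IN form actually keeps, here is the same delete on toy data (SQLite via Python; column names taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mytble (ogc_fid INTEGER PRIMARY KEY, id INTEGER)")
# Three rows share id=1 and two share id=2
cur.executemany("INSERT INTO mytble VALUES (?, ?)",
                [(1, 1), (2, 1), (3, 1), (4, 2), (5, 2)])

# Keep only the row with the highest ogc_fid within each id group
cur.execute("""
    DELETE FROM mytble
    WHERE ogc_fid NOT IN
        (SELECT MAX(dup.ogc_fid) FROM mytble AS dup GROUP BY dup.id)
""")
print(cur.execute("SELECT ogc_fid, id FROM mytble ORDER BY id").fetchall())
# [(3, 1), (5, 2)]
```

The query is logically fine; at 41M rows the cost question is whether the planner can satisfy the GROUP BY MAX subquery and the NOT IN probe from indexes, which is exactly what the EXPLAIN requested above would show.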