We have two tables, renewal_bkp and adhoc_bkp, and one materialized view, test_mv1.
I basically want to create a script that updates one row each in renewal_bkp and adhoc_bkp and then selects the data from the above MV.
This needs to be done in a loop. Below is an example:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
update renewal_bkp set network_status='provisioned' where msisdn='3234561010240';
update adhoc_bkp set status='provisioned' where msisdn='3234561010240';
select * from test_mv1 where msisdn='3234561010240';
...
...
and so on
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This same statement needs to be generated 1000 times with different msisdn numbers.
Can you help me create a script to do so, rather than writing down each statement manually?
Thanks,
Sandeep
Although it is totally unclear how you are accessing the msisdns, here is a compact version that does all three things in one atomic batch, with the help of data-modifying CTEs:
WITH
ids (id) AS (
    VALUES ('3234561010240'), ('...'), ...
),
renewals AS (
    UPDATE renewal_bkp SET network_status = 'provisioned'
    WHERE msisdn IN (SELECT id FROM ids)
),
adhoc AS (
    UPDATE adhoc_bkp SET status = 'provisioned'
    WHERE msisdn IN (SELECT id FROM ids)
)
SELECT *
FROM test_mv1
WHERE msisdn IN (SELECT id FROM ids);
Instead of the VALUES clause, you can also put a regular SELECT from a dedicated table, which would make sense if you're going to execute this more often than once or twice a year.
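For instance, a sketch of that variant, assuming a staging table msisdn_batch(msisdn) (the table name is illustrative) that you fill with the 1000 numbers beforehand:
WITH
ids (id) AS (
    SELECT msisdn FROM msisdn_batch      -- hypothetical staging table holding the msisdns
),
renewals AS (
    UPDATE renewal_bkp SET network_status = 'provisioned'
    WHERE msisdn IN (SELECT id FROM ids)
),
adhoc AS (
    UPDATE adhoc_bkp SET status = 'provisioned'
    WHERE msisdn IN (SELECT id FROM ids)
)
SELECT *
FROM test_mv1
WHERE msisdn IN (SELECT id FROM ids);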
I have an app that vends a 'code' to users through an API. A code belongs to a pool of codes, and when a user hits an endpoint, he/she gets a code from this 'pool'. At the moment there is only one 'pool' from which a code can be vended. That idea is best expressed in the following SQL.
<<-SQL
UPDATE codes SET vended_at = NOW()
WHERE id = (
SELECT "codes"."id"
FROM "codes"
INNER JOIN "code_batches" ON "code_batches"."id" = "codes"."code_batch_id"
WHERE "codes"."vended_at" IS NULL
AND "code_batches"."active" = true
ORDER BY "code_batches"."end_at" ASC
FOR UPDATE OF "codes" SKIP LOCKED
LIMIT 1
)
RETURNING *;
SQL
So basically, when the endpoint is pinged, I am returning a code that is active and whose vended_at field is NULL.
Now I need to build off of this SQL so that a user can get a code from this pool or from a second pool. For example, if the user couldn't get a code from this pool (call it A, represented by the SQL above), I need to vend a code from another pool (call it B).
I looked up the PostgreSQL documentation and I think I want to either 1) use a UNION somehow to combine pools A and B into one megapool to vend a code from, or 2) if I can't vend a code through pool A, use an OR clause to select from pool B.
The problem is that I can't get either of these syntaxes to work. I've tried something along these lines, tweaking it with different variations.
<<-SQL
UPDATE codes SET vended_at = NOW()
WHERE id = (
SELECT "codes"."id"
FROM "codes"
INNER JOIN "code_batches" ON "code_batches"."id" = "codes"."code_batch_id"
WHERE "codes"."vended_at" IS NULL
AND "code_batches"."active" = true
ORDER BY "code_batches"."end_at" ASC
FOR UPDATE OF "codes" SKIP LOCKED
LIMIT 1
) UNION (
######## SELECT SOME OTHER STUFF #########
)
RETURNING *;
SQL
or
<<-SQL
UPDATE codes SET vended_at = NOW()
WHERE id = (
SELECT "codes"."id"
FROM "codes"
INNER JOIN "code_batches" ON "code_batches"."id" = "codes"."code_batch_id"
WHERE "codes"."vended_at" IS NULL
AND "code_batches"."active" = true
ORDER BY "code_batches"."end_at" ASC
FOR UPDATE OF "codes" SKIP LOCKED
LIMIT 1
) OR (
######## SELECT SOME OTHER STUFF USING OR #########
)
RETURNING *;
SQL
So far the syntax is off, and I'm starting to wonder if I can even use this approach for what I'm trying to do. I can't determine whether my approach is wrong or whether I am using UNION, OR, and sub-selects incorrectly. Does anyone have any advice on how I can accomplish my goal? Thank you.
####### EDIT ########
To illustrate and make the concept even easier, I essentially want to do this.
<<-SQL
UPDATE codes SET vended_at = NOW()
WHERE id = (
CRITERIA 1
)
OR/UNION
(
CRITERIA 2
)
RETURNING *;
SQL
Use one table to store both pools.
Add a pool_number column to the codes table to indicate which pool the code is in, then just add
ORDER BY pool_number
to your existing query.
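For illustration, a sketch of what your existing query could look like with that change; the pool_number column (lower numbers vended first) is the assumption here:
UPDATE codes SET vended_at = NOW()
WHERE id = (
    SELECT "codes"."id"
    FROM "codes"
    INNER JOIN "code_batches" ON "code_batches"."id" = "codes"."code_batch_id"
    WHERE "codes"."vended_at" IS NULL
      AND "code_batches"."active" = true
    ORDER BY "codes"."pool_number" ASC, "code_batches"."end_at" ASC
    LIMIT 1
    FOR UPDATE OF "codes" SKIP LOCKED
)
RETURNING *;
Because the sub-select takes the first unlocked candidate in pool_number order, pool B is only drawn from once pool A has no vendable codes left.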
I'm researching a dataset, and I just wonder if there is a way to get the ordering of the two queries below in one query:
Select * From MyTable where name ='international%' order by id
Select * From MyTable where name != 'international%' order by id
So first show all international items, then the names that don't start with international.
My question is not about adding columns to make this work, using multiple DBs, or a larger T-SQL script to clone a DB into a new order.
I just wonder if anything in the WHERE or ORDER BY can be tricked into doing this.
You can use expressions in the ORDER BY:
Select * From MyTable
order by
CASE
WHEN name like 'international%' THEN 0
ELSE 1
END,
id
(From your narrative, it also sounded like you wanted LIKE, not =, so I changed that too.)
Another way (slightly cleaner and a tiny bit faster)
-- Sample Data
DECLARE @mytable TABLE (id INT IDENTITY, [name] VARCHAR(100));
INSERT @mytable([name])
VALUES('international something'),('ACME'),('international waffles'),('ABC Co.');
-- solution
SELECT t.*
FROM @mytable AS t
ORDER BY -PATINDEX('international%', t.[name]);
Note too that you can add a persisted computed column for -PATINDEX('international%', t.[name]) to speed things up.
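A rough sketch of that idea against a hypothetical permanent MyTable (the column and index names are mine, not from the question):
ALTER TABLE MyTable
    ADD intl_sort AS (-PATINDEX('international%', [name])) PERSISTED;

CREATE INDEX IX_MyTable_intl_sort ON MyTable (intl_sort, id);

SELECT t.*
FROM MyTable AS t
ORDER BY t.intl_sort, t.id;
Whether the index actually gets used for the sort depends on the rest of the query, but persisting the PATINDEX result at least avoids recomputing it per row at query time.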
I want to use Postgres's SELECT ... FOR UPDATE SKIP LOCKED functionality so that two different users reading from a table and claiming tasks neither block each other nor pick up tasks already claimed by another user.
The query that retrieves tasks uses a join, and we do not want row-level locks on any table except the one that contains the main info. In the sample query below, only the rows of the 'task' table should be locked:
SELECT v.someid, v.info, v.parentinfo_id, v.stage
FROM task v, parentinfo pi
WHERE v.stage = 'READY_TASK'
  AND v.parentinfo_id = pi.id
  AND pi.important_info_number = (SELECT MAX(important_info_number) FROM parentinfo)
ORDER BY v.id
LIMIT 200
FOR UPDATE SKIP LOCKED;
Now if user A is retrieving some 200 rows of this table, user B should be able to retrieve another set of 200 rows.
EDIT: As per the comment below, the query will be changed to:
SELECT v.someid, v.info, v.parentinfo_id, v.stage
FROM task v, parentinfo pi
WHERE v.stage = 'READY_TASK'
  AND v.parentinfo_id = pi.id
  AND pi.important_info_number = (SELECT MAX(important_info_number) FROM parentinfo)
ORDER BY v.id
LIMIT 200
FOR UPDATE OF v SKIP LOCKED;
How best to place the ORDER BY so that rows are returned in order? While the ordering would be affected when multiple users invoke this command, some order should still be maintained among the rows that are returned.
Also, does this ensure that multiple threads invoking the same SELECT query retrieve different sets of rows, or is the locking only done for UPDATE commands?
Just experimented with this a little bit - concurrent SELECT queries do end up retrieving different sets of rows, and the ORDER BY determines the order of the final result.
Yes,
FOR UPDATE OF table_name SKIP LOCKED
will lock only rows of table_name. Note that if the table is aliased in the FROM clause, the locking clause must name the alias, which is why the edited query uses FOR UPDATE OF v.
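As a rough illustration of the row-level scope (two psql sessions run side by side; the comments describe expected behavior under these assumptions, not captured output):
-- Session A: claim up to 200 task rows and hold the locks inside a transaction
BEGIN;
SELECT v.someid, v.info, v.parentinfo_id, v.stage
FROM task v
JOIN parentinfo pi ON v.parentinfo_id = pi.id
WHERE v.stage = 'READY_TASK'
  AND pi.important_info_number = (SELECT MAX(important_info_number) FROM parentinfo)
ORDER BY v.id
LIMIT 200
FOR UPDATE OF v SKIP LOCKED;
-- ... process the claimed tasks, then COMMIT to release the locks

-- Session B: the same statement, run while Session A's transaction is still open,
-- skips the 200 locked task rows and claims the next batch in v.id order.
-- Rows of parentinfo are not locked by either session, because only "OF v" is named.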
I have the following table:
create table test(
id serial primary key,
firstname varchar(32),
lastname varchar(64),
id_desc char(8)
);
I need to insert 100 rows of data. Getting the names is no problem - I have two tables, one containing ten first names and the other containing ten last names. With an INSERT ... SELECT over a cross join I can get 100 rows of data (a 10 x 10 cross join).
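For reference, a minimal sketch of that insert, assuming the helper tables are called firstnames(firstname) and lastnames(lastname) (those names are mine):
INSERT INTO test (firstname, lastname)
SELECT f.firstname, l.lastname
FROM firstnames f
CROSS JOIN lastnames l;   -- 10 x 10 = 100 rows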
id_desc consists of eight characters (the fixed size is mandatory). It always starts with the same pattern (e.g. abcde) followed by 001, 002, etc. up to 999. I have tried to achieve this with the following statement:
update test set id_desc = 'abcde' || num.id
from (select * from generate_series(1, 100) as id) as num
where num.id = (select id from test where id = num.id);
The statement executes but affects zero rows. I know that the WHERE clause probably does not make much sense; I have been trying to finally get this to work and just started trying things out. I didn't want to omit it when posting here, though, because I know a WHERE clause is definitely required.
Laurenz's suggestion fits this specific case very well. I recommend using it.
The rest of this is for the more general case where that simplification is not appropriate.
In my tests, your UPDATE doesn't work as written.
I think you are better off using a WITH clause and a window function.
WITH ranked_ids (id, rank) AS (
    SELECT id, row_number() OVER (ORDER BY id)
    FROM test
)
UPDATE test
SET id_desc = 'abcde' || to_char(ranked_ids.rank, 'FM099')  -- zero-pad so id_desc stays exactly 8 characters
FROM ranked_ids
WHERE test.id = ranked_ids.id;
It should be as simple as
UPDATE test SET id_desc = 'abcde' || to_char(id, 'FM099');
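For example, assuming the serial ids run from 1 through 100, a quick sanity check:
SELECT id, id_desc FROM test ORDER BY id LIMIT 3;
-- expected id_desc values: 'abcde001', 'abcde002', 'abcde003'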
I have a DataTable with Id (Guid) and Name (string) columns. I traverse the DataTable, run a validation criterion on the Name (say, it should contain only letters and numbers), and then add the corresponding Id to a List if the Name passes the validation.
Something like below:
List<Guid> validIds = new List<Guid>();
foreach (DataRow row in DataTable1.Rows)
{
    if (IsValid(row["Name"]))
    {
        validIds.Add((Guid)row["Id"]);
    }
}
In addition to this validation I should also check that the name is not repeated anywhere in the DataTable (comparing names case-insensitively); if it is repeated, I should not add the corresponding Id to the List.
Things I am thinking / have thought about:
1) I can keep another List of names and check each "Name" against it; if it already exists there, the name is a duplicate and I skip the corresponding Guid.
2) I cannot use a HashSet as that would treat "Test" and "test" as different strings and not duplicates.
3) Copy the DataTable to another one that holds the distinct names (I haven't tried this and the code might be incorrect, please correct me wherever possible):
DataTable1.CaseSensitive = false;   // intent: treat "Test" and "test" as the same name
DataTable CopiedDataTable = DataTable1.DefaultView.ToTable(true, "Name");
I would then loop through the original DataTable and check for the existence of the "Name" in CopiedDataTable; if it exists, I won't add the Id to the List.
Is there a better, more optimal way to achieve this? I always need to keep performance in mind. Although there are many related questions on SO, I didn't find a problem similar to this one; if you could point me to a similar question, that would be helpful.
EDIT: The number of records might vary from 2000 to 3000.
Thanks
If you are looking to prevent duplicates, it may be grueling work, and I don't know how many records you're dealing with at a time... If it's a small set, I'd consider running a query against your LIVE source before each attempted insert, along the lines of
select COUNT(*) as CountOnFile from ProductionTable where UPPER(name) = UPPER(name from live data).
If the result set CountOnFile > 0, don't add.
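Concretely, that check could look something like this (the @liveName parameter is just a stand-in for the value coming from your live data):
SELECT COUNT(*) AS CountOnFile
FROM ProductionTable
WHERE UPPER(name) = UPPER(@liveName);   -- @liveName: hypothetical parameter holding the incoming name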
If you are dealing with a large dataset, like a bulk import, I would pull all the data into a temp table, then do a query where NOT IN... something like
create table OkToBeAdded as
select distinct upper( TempTable.Name ) as Name, GUID
from TempTable
where upper( TempTable.Name )
NOT IN ( select upper( LiveTable.Name )
from LiveTable
where upper( TempTable.Name ) = upper( LiveTable.Name )
);
insert into LiveTable ( Name, GUID )
select Name, GUID from OkToBeAdded;
Obviously, this SQL is only a sample and would need to be adjusted for your specific back-end source.
/* I did this entirely in SQL and avoided ADO.NET */
/* I pass the CSV of valid object Ids and split it into a table */
DECLARE @TableTemp TABLE
(
    TempId uniqueidentifier
);

INSERT INTO @TableTemp
SELECT CAST(Data AS uniqueidentifier) AS ID FROM dbo.Split1(@ValidObjectIdsAsCSV, ',');
/* Join Table1 to the id list and flag rows whose Name is not duplicated elsewhere in Table1 */
UPDATE A
SET IsValidated = 1
FROM Table1 AS A
INNER JOIN @TableTemp AS Temp ON A.ID = Temp.TempId
WHERE NOT EXISTS (SELECT B.Name, COUNT(B.Name)
                  FROM Table1 AS B
                  WHERE A.Name = B.Name
                  GROUP BY B.Name
                  HAVING COUNT(B.Name) > 1);