How to run multiple insert queries with mysqli object oriented - mysqli

$db->query("UPDATE users SET solde = solde - '$aDeduire' WHERE telephone = '$msisdn'"); // works
$db->query("UPDATE users SET solde = solde + '$montant' WHERE telephone = '$numaPayer'"); //works
$db->query("UPDATE systems SET solde = solde + '$fraisTransaction' WHERE nom = 'SYSTEM'"); //works
$db->query("INSERT INTO transactions (day, description, expediteur, destinataire, debit, credit, numserie )
VALUES (NOW(), '$textExp', '$msisdn',null ,'$montant',null , '$numSerie')");// not working
$db->query("INSERT INTO transactions (day, description, expediteur, destinataire, debit, credit, numserie )
VALUES (NOW(), '$textDes't, null, '$numaPayer', null,'$montant', '$numSerie')"); // not working
$db->query("INSERT INTO transactions (day, description, expediteur, destinataire, debit, credit, numserie )
VALUES (NOW(), '$textSyst', null,'SYSTEM', null, '$montant', '$numSerie')");// not working
Please, can anyone help with this code?
I'm trying to insert multiple rows at once.
What's wrong with the code?

$db->query("INSERT INTO transactions (day, expediteur, destinataire, debit, credit, description, numserie)
VALUES (NOW(), '$msisdn','' ,'$montant', 0, '$textExp', '$numSerie'),
(NOW(), '', '$numaPayer', 0,'$montant', '$textDest', '$numSerie'),
(NOW(), '$msisdn','', '$fraisTransaction', 0, '$textFrais', '$numSerie'),
(NOW(), '','SYSTEM', 0, '$fraisTransaction', '$textSyst', '$numSerie')");
Here is how I solved it.
Regards

Related

TSQL - Select values with same ID

I have a view like this:
Table
The record "NDocumento" is populated only in the first row of a transaction by design. These rows are grouped by the column "NMov" which is the ID.
Since this is a view, I would like to populate each empty "NDocumento" record with the corresponding value contained in the first transaction through a SELECT statement.
As you can see from the picture, this is MS SQL Server 2008, so the lack of LAG() makes the game harder.
I would immensely appreciate any help,
thanks
Try this:
SELECT
T1.NDocumento
, T2.NMov
, T2.NRiga
-- , T2. Rest of the fields
FROM NDocumentoTable T1
JOIN NDocumentoTable T2 ON T2.NMov = T1.NMov
WHERE T1.NRiga = 1
I used LAG() over the partition of NMov, Causale based on your data. You can change the partition to match your requirement. The logic is that you take the previous row's value when NDocument is empty for the given partition.
CREATE TABLE myTable_1
(
NMov int
,NRiga int
,CodiceAngrafica varchar(100)
,Causale varchar(100)
,DateRegistration date
,DateDocumented date
,NDocument varchar(100)
)
INSERT INTO myTable_1 VALUES (5133, 1, '', 'V05', '01/14/2021', '01/14/2021', 'VI-2100001')
,(5133, 2, '', 'V05', null, null, '')
,(5134, 1, '', 'V05', '01/14/2021', '01/14/2021', 'VI-2100002')
,(5134, 2, '', 'V05', null, null, '')
SELECT
NMov
,NRiga
,CASE WHEN ISNULL(NDocument,'') = ''
THEN LAG(NDocument) OVER (PARTITION BY NMov,Causale ORDER BY NRiga)
ELSE NDocument END AS [NDocument]
FROM myTable_1
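Since the picture shows SQL Server 2008, where LAG() is not available, here is a hedged alternative sketch against the same test table that uses a correlated subquery instead. It assumes, as in the sample data, that the first row of each NMov has NRiga = 1 and carries the NDocument value:
-- 2008-friendly: copy NDocument from the NRiga = 1 row of the same NMov when it is empty.
SELECT
t.NMov
,t.NRiga
,CASE WHEN ISNULL(t.NDocument, '') = ''
      THEN (SELECT t1.NDocument
            FROM myTable_1 t1
            WHERE t1.NMov = t.NMov AND t1.NRiga = 1)
      ELSE t.NDocument END AS [NDocument]
FROM myTable_1 t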

Interconnecting tables on PostgreSQL

I am a newbie here.
I am using PostgreSQL to manipulate lots of data in my specific field of research. Unfortunately, I am encountering a problem that is not allowing me to continue my analysis. I tried to simplify my problem to clearly illustrate it.
Let's suppose I have a table called "Buyers" with those data:
table_buyers
The buyers can make ONLY ONE purchase in each store, or none. There are three stores and there is a table for each one, just like below:
table_store1
table_store2
table_store3
To create the tables, I am using the following code:
CREATE TABLE public.buyer
(
ID integer NOT NULL PRIMARY KEY,
name text NOT NULL,
phone text NOT NULL
)
WITH (
OIDS = FALSE
)
;
CREATE TABLE public.Store1
(
ID_buyer integer NOT NULL PRIMARY KEY,
total_order numeric NOT NULL,
total_itens integer NOT NULL
)
WITH (
OIDS = FALSE
)
;
CREATE TABLE public.Store2
(
ID_buyer integer NOT NULL PRIMARY KEY,
total_order numeric NOT NULL,
total_itens integer NOT NULL
)
WITH (
OIDS = FALSE
)
;
CREATE TABLE public.Store3
(
ID_buyer integer NOT NULL PRIMARY KEY,
total_order numeric NOT NULL,
total_itens integer NOT NULL
)
WITH (
OIDS = FALSE
)
;
To add the information on the tables, I am using the following code:
INSERT INTO buyer (ID, name, phone) VALUES
(1, 'Alex', 88888888),
(2, 'Igor', 77777777),
(3, 'Mike', 66666666);
INSERT INTO Store1 (ID_buyer, total_order, total_itens) VALUES
(1, 87.45, 8),
(2, 14.00, 3),
(3, 12.40, 4);
INSERT INTO Store2 (ID_buyer, total_order, total_itens) VALUES
(1, 785.12, 7),
(2, 9874.21, 25);
INSERT INTO Store3 (ID_buyer, total_order, total_itens) VALUES
(2, 45.87, 1);
As all the tables are interconnected by buyer's ID, I wish I could have a query that generates an output just like this:
desired output table.
Please, note that if the buyer did not buy anything in a store, I must print '0'.
I know this is an easy task, but unfortunately I have been failing to accomplish it.
Using the 'AND' logical operator, I tried the following code to accomplish this task:
SELECT
buyer.id,
buyer.name,
store1.total_order,
store2.total_order,
store3.total_order
FROM
public.buyer,
public.store1,
public.store2,
public.store3
WHERE
buyer.id = store1.id_buyer AND
buyer.id = store2.id_buyer AND
buyer.id = store3.id_buyer;
But, obviously, it just returned 'Igor', as this was the only buyer who had bought items in all three stores (print screen).
Then, I tried the 'OR' logical operator, just like the following code:
SELECT
buyer.id,
buyer.name,
store1.total_order,
store2.total_order,
store3.total_order
FROM
public.buyer,
public.store1,
public.store2,
public.store3
WHERE
buyer.id = store1.id_buyer OR
buyer.id = store2.id_buyer OR
buyer.id = store3.id_buyer;
But then, it returns 12 lines with wrong values (print screen).
Clearly, my mistake is not accounting in my code for the fact that buyers don't have to buy in all three stores. I just can't correct it on my own; can you please help me?
I would really appreciate an answer that can light up my way. Thanks a lot!
Tips about how I can search for this issue are very welcome as well!
Ok, I doubt that this is the final answer for you, but it's a start.
SELECT
buyer.id,
buyer.name,
COALESCE( gb_store1.total_orders, 0 ) as store1_total,
COALESCE( gb_store2.total_orders, 0 ) as store2_total,
COALESCE( gb_store3.total_orders, 0 ) as store3_total
FROM
public.buyer
LEFT OUTER JOIN ( SELECT ID_buyer,
SUM( total_order ) as total_orders,
SUM( total_itens ) as total_itens
FROM public.store1
GROUP BY ID_buyer ) gb_store1 ON gb_store1.id_buyer = buyer.id
LEFT OUTER JOIN ( SELECT ID_buyer,
SUM( total_order ) as total_orders,
SUM( total_itens ) as total_itens
FROM public.store2
GROUP BY ID_buyer ) gb_store2 ON gb_store2.id_buyer = buyer.id
LEFT OUTER JOIN ( SELECT ID_buyer,
SUM( total_order ) as total_orders,
SUM( total_itens ) as total_itens
FROM public.store3
GROUP BY ID_buyer ) gb_store3 ON gb_store3.id_buyer = buyer.id;
So, this query has a couple of elements you should focus on. The subselects with GROUP BY let you total within each store table by ID_buyer. The LEFT OUTER JOINs make it so your query can still return a result even if a subselect finds no matching record. Finally, COALESCE lets you return 0 when one of your totals is NULL (because the subselect found no match).
Hope this helps.
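As a side note (my own variation, not part of the answer above): since ID_buyer is the primary key of each store table, a buyer can have at most one row per store, so the GROUP BY subqueries can be dropped and plain LEFT JOINs are enough. A minimal sketch under that assumption:
-- One row per buyer; stores with no purchase show as 0.
SELECT
buyer.id,
buyer.name,
COALESCE( store1.total_order, 0 ) as store1_total,
COALESCE( store2.total_order, 0 ) as store2_total,
COALESCE( store3.total_order, 0 ) as store3_total
FROM public.buyer
LEFT JOIN public.store1 ON store1.id_buyer = buyer.id
LEFT JOIN public.store2 ON store2.id_buyer = buyer.id
LEFT JOIN public.store3 ON store3.id_buyer = buyer.id
ORDER BY buyer.id;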

T-Sql update and avoid conflict

I'm trying to migrate a Tomcat app from using Postgres 9.5 to SQL Server 2016 and I've got a problem statement I can't seem to duplicate.
It's basically an upsert, but one of the complications is that the request supplies arguments to do the update, while on a conflict I need to use some of the existing values from the conflicting rows for the insert/update.
The primary keys in the table can sometimes cause a conflict, which requires updating rows and deleting the old ones.
The table schema in MS SQL looks like:
CREATE TABLE [dbo].[signup](
[site_key] [varchar](32) NOT NULL,
[list_id] [bigint] NOT NULL,
[email_address] [varchar](256) NOT NULL,
[customer_id] [bigint] NULL,
[attribute1] [varchar](64) NULL,
[date1] [datetime] NOT NULL,
[date2] [datetime] NULL,
CONSTRAINT [pk_signup] PRIMARY KEY CLUSTERED
(
[site_key] ASC,
[list_id] ASC,
[email_address] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
The old Postgres SQL looked like this:
WITH updated_rows AS (
INSERT INTO signup
(site_key, list_id, email_address, customer_id, attribute1, date1, date2)
SELECT site_key, list_id, :emailAddress, customer_id, attribute1, date1, date2
FROM signup WHERE customer_id = :customerId and email_address <> :emailAddress
ON CONFLICT (site_key, list_id, email_address) DO UPDATE SET customer_id = excluded.customer_id
RETURNING site_key, customer_id, email_address, list_id
)
DELETE FROM signup AS signup_delete USING updated_rows
WHERE
signup_delete.site_key = updated_rows.site_key
AND signup_delete.customer_id = updated_rows.customer_id
AND signup_delete.list_id = updated_rows.list_id
AND signup_delete.email_address <> :emailAddress;
Two arguments are supplied, customer id and email address, shown here as Spring NamedParameterJdbcTemplate values :customerId and :emailAddress
It's trying to change the email address of the customer id to be the supplied one, but sometimes the supplied email address already exists in the primary key constraint.
In that case it needs to change the existing customer id to the supplied one, and remove the rows that don't match the new email address.
I also need to try and maintain isolation so that nothing can change the data whilst I'm updating.
I'm trying to do it with a MERGE statement, but I can't seem to get it to work; it's complaining that I can't use values that aren't in the clause scope, but I think I've probably got other issues here too.
This is what I had so far. It doesn't even address the deleting part - only the upserting, but I can't even get this part to work. I was planning to use the OUTPUT from this as input to something to delete the rows similar to the postgres version.
WITH source AS (
SELECT cs.[site_key] as existing_site_key,
cs.list_id as existing_list_id,
cs.email_address as existing_email,
cs.customer_id as existing_customer_id,
cs.attribute1 as existing_attribute1,
cs.date1 as existing_date1,
cs.date2 as existing_date2,
cs2.email_address as conflicting_email,
cs2.customer_id AS conflicting_customer_id
FROM [dbo].[signup] cs
LEFT JOIN [dbo].[signup] cs2 ON cs2.email_address = :emailAddress
AND cs.site_key = cs2.site_key
AND cs.list_id = cs2.list_id
WHERE cs.customer_id = :customerId
)
MERGE signup WITH (HOLDLOCK) AS target
USING source
ON ( source.conflicting_customer_id is not null )
WHEN MATCHED AND source.existing_site_key = target.site_key AND source.existing_list_id = target.list_id AND source.conflicting_email = target.email_address THEN UPDATE
SET customer_id = :customerId
WHEN NOT MATCHED BY target AND source.existing_site_key = target.site_key AND source.existing_list_id = target.list_id AND source.conflicting_customer_id = :customerId THEN INSERT
(site_key, list_id, email_address, customer_id, attribute1, date1, date2) VALUES
(source.existing_site_key, source.existing_list_id, :emailAddress, source.customer_id, source.existing_attribute1, source.existing_date1, source.existing_date2)
Thanks,
mikee
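For comparison, here is a hedged sketch of the same logic written as a plain transactional UPDATE / INSERT / DELETE sequence instead of MERGE. It is untested; @customerId and @emailAddress stand in for the :customerId / :emailAddress named parameters, and the sample values are placeholders:
DECLARE @customerId BIGINT = 12345;                      -- placeholder value
DECLARE @emailAddress VARCHAR(256) = 'new@example.com';  -- placeholder value

BEGIN TRANSACTION;

-- 1. If the new email already exists on a (site_key, list_id) the customer is signed up to,
--    take over that row by repointing its customer_id (the ON CONFLICT ... DO UPDATE part).
UPDATE tgt
SET customer_id = @customerId
FROM dbo.signup AS tgt WITH (UPDLOCK, HOLDLOCK)
JOIN dbo.signup AS src
  ON src.site_key = tgt.site_key
 AND src.list_id = tgt.list_id
WHERE src.customer_id = @customerId
  AND src.email_address <> @emailAddress
  AND tgt.email_address = @emailAddress;

-- 2. For lists where the new email is not present yet, copy the customer's row
--    with the new email address (the INSERT ... SELECT part).
INSERT INTO dbo.signup (site_key, list_id, email_address, customer_id, attribute1, date1, date2)
SELECT src.site_key, src.list_id, @emailAddress, src.customer_id, src.attribute1, src.date1, src.date2
FROM dbo.signup AS src WITH (UPDLOCK, HOLDLOCK)
WHERE src.customer_id = @customerId
  AND src.email_address <> @emailAddress
  AND NOT EXISTS (SELECT 1 FROM dbo.signup AS x
                  WHERE x.site_key = src.site_key
                    AND x.list_id = src.list_id
                    AND x.email_address = @emailAddress);

-- 3. Remove the customer's rows that still carry the old email (the DELETE ... USING part).
DELETE FROM dbo.signup
WHERE customer_id = @customerId
  AND email_address <> @emailAddress;

COMMIT TRANSACTION;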

Slow running query comparing dates

Could someone please offer some advice? I have the following query that is working against roughly 200,000 records. I need to evaluate a DATETIME field to decide whether the revenue occurs in the correct time slot. I am currently using CASE statements to evaluate the DATETIME field and it is an absolute pig; it runs over 5 minutes. Is there a faster, more efficient way to do this? Note that the variables @cur_date, @end_date, @prev_yr_qtr_start, @cur_date_yr_prev etc. are all strings and r.pw_ship_date is of type DATETIME, so in essence I'm comparing r.pw_ship_date to strings such as '2017-01-01 00:00'.
Note: it took 4:00 minutes to run this query with SELECT TOP(500) added; for all 200,000 records it would take forever.
Thanks in advance
DECLARE @total TABLE
(
acct_number VARCHAR(50),
pro_nbr VARCHAR(50),
sales_rep VARCHAR(50),
bill_to_name VARCHAR(50),
billing_addr1 VARCHAR(50),
billing_addr2 VARCHAR(50),
billing_city CHAR(50),
billing_state CHAR(2),
billing_zip CHAR(10),
cur_month_bills INT,
cur_month_rev DECIMAL(30, 6),
cur_qtr_bills INT,
cur_qtr_rev DECIMAL(30, 6),
prev_yr_qtr_bills INT,
prev_yr_qtr_rev DECIMAL(30, 6),
cur_ytd_bills INT,
cur_ytd_rev DECIMAL(30, 6),
prev_ytd_bills INT
)
INSERT INTO @total
SELECT TOP(50000) f.acct_number ,
r.pro_nbr ,
r.sales_rep ,
r.bill_to_name ,
r.billing_addr1 ,
r.billing_addr2 ,
r.billing_city ,
r.billing_state ,
r.billing_zip ,
'cur_month_bills' = MAX(( CASE WHEN r.pw_ship_date BETWEEN @cur_date AND @end_date THEN 1 ELSE 0 END )) ,
'cur_month_rev' = MAX(ROUND(( CASE WHEN r.pw_ship_date BETWEEN @cur_date AND @end_date THEN f.tot_revenue ELSE 0 END ), 2)) ,
'cur_qtr_bills' = MAX((CASE WHEN r.pw_ship_date BETWEEN @cur_date AND @end_date THEN 1 ELSE 0 END )) ,
'cur_qtr_rev' = MAX(ROUND(CASE WHEN r.pw_ship_date BETWEEN @cur_date AND @end_date THEN f.tot_revenue ELSE 0 END, 2)) ,
'prev_yr_qtr_bills' = MAX(CASE WHEN r.pw_ship_date BETWEEN @prev_yr_qtr_start AND @cur_date_yr_prev THEN 1 ELSE 0 END ) ,
'prev_yr_qtr_rev' = MAX(ROUND(CASE WHEN r.pw_ship_date BETWEEN @prev_yr_qtr_start AND @cur_date_yr_prev THEN f.tot_revenue ELSE 0 END , 2)) ,
'cur_ytd_bills' = MAX(CASE WHEN r.pw_ship_date BETWEEN @first_day_cur_yr AND @end_date THEN 1 ELSE 0 END ),
'cur_ytd_rev' = MAX(ROUND(CASE WHEN r.pw_ship_date BETWEEN @first_day_cur_yr AND @end_date THEN f.tot_revenue ELSE 0 END , 2)) ,
'prev_ytd_bills' = MAX(CASE WHEN r.pw_ship_date BETWEEN @first_day_prev_yr AND @end_date THEN 1 ELSE 0 END )
FROM @summed f
INNER JOIN @raw r ON f.acct_number = r.acct_number AND f.pro_nbr = r.pro_nbr
GROUP BY f.acct_number ,
r.pro_nbr ,
r.sales_rep ,
r.bill_to_name ,
r.billing_addr1 ,
r.billing_addr2 ,
r.billing_city ,
r.billing_state ,
r.billing_zip;
Change your table variables @raw and @summed to temporary tables. Table variables have no statistics and are extremely limited with regard to indexing (you can only have one). Because of this, SQL Server assumes that your table variables have only one row (2012 and older) or 100 rows (2014+). This means that you almost certainly are getting a bad execution plan for your query, and that's going to ruin you.
Once you've changed @raw and @summed into #raw and #summed, put an index on them - at a minimum, index the fields you're joining on: acct_number and pro_nbr. It may be worth creating a clustered index and/or a primary key as well, but that's something you'll need to experiment with to find the performance you require.
The other thing that is killing your performance is comparing datetimes to strings. This is causing a type conversion and that can drag you down significantly. If you're working with a date/time, use the appropriate data type - not a string that looks like a date.
If this is still not running quickly enough, move your CASE statements out of your aggregate functions.
MAX(( CASE WHEN r.pw_ship_date BETWEEN @cur_date AND @end_date THEN 1 ELSE 0 END ))
Move the CASE statement into the query that populates #raw.pw_ship_date so that when you're performing the aggregate, you're just looking at integers all the way down.
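As a hedged illustration of the first two suggestions (temp tables with indexes, and DATETIME variables instead of strings), the setup could look roughly like this; the column lists are abbreviated to the fields that appear in the question, and the dates are placeholder values:
-- Temp tables have statistics and support real indexes, unlike table variables.
CREATE TABLE #raw
(
acct_number VARCHAR(50) NOT NULL,
pro_nbr VARCHAR(50) NOT NULL,
pw_ship_date DATETIME NOT NULL
-- ... remaining columns from the original definition
);
CREATE CLUSTERED INDEX IX_raw_acct_pro ON #raw (acct_number, pro_nbr);

CREATE TABLE #summed
(
acct_number VARCHAR(50) NOT NULL,
pro_nbr VARCHAR(50) NOT NULL,
tot_revenue DECIMAL(30, 6) NOT NULL
);
CREATE CLUSTERED INDEX IX_summed_acct_pro ON #summed (acct_number, pro_nbr);

-- Compare DATETIME to DATETIME, not to strings, to avoid implicit conversions.
DECLARE @cur_date DATETIME = '2017-01-01T00:00:00';
DECLARE @end_date DATETIME = '2017-03-31T23:59:59';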

Struggling with IF statement SQL Beginner

UPDATE dbo.Contact
SET emailMessenger = '63' WHERE personID = @personID
WHILE (SELECT Email FROM Contact WHERE personID = @personID) != @email
BEGIN
IF (SELECT Email FROM Contact WHERE personID = @personID) IS NULL
UPDATE Contact SET Email = @personID WHERE personID = @personID
ELSE IF (SELECT Email FROM Contact WHERE personID = @personID) IS NOT NULL
UPDATE Contact SET SecondaryEmail = email WHERE personID = @personID
UPDATE Contact SET email = @email WHERE personID = @personID
END
What I'm trying to do is add an employee's work email as the primary email. But if they already have a personal email, then I want to first move that to SecondaryEmail, but only for the specified employee, hence the personID.
I've looked at a lot of different examples using CASE and IF-THEN. I think I'm adding too much to the statement. I really thought this Stack Overflow question would help. I know it's going to be the syntax or how it's structured.
You could do it with a single UPDATE, avoiding a trigger:
UPDATE dbo.Contact
SET emailMessenger = '63',
SecondaryEmail = CASE WHEN Email IS NOT NULL THEN Email ELSE SecondaryEmail END,
Email = CASE WHEN email IS NULL THEN 'studentemail.com'
WHEN email <> 'studentemail.com' THEN 'studentemail.com'
ELSE email END
WHERE personID = '18403';
If I'm following your logic correctly, the following should do the trick.
I first set up a testing table with all possible problem permutations (email present in one, both, or neither column). If your data doesn't look something like this, my solution may not apply.
drop table Contact
CREATE TABLE Contact
(
PersonId int not null
,Email varchar(100) null
,SecondaryEmail varchar(100) null
)
INSERT Contact values
(1, null, null)
,(2, 'OnlyFirst', null)
,(3, null, 'OnlySecond')
,(4, 'FirstEmail', 'SecondEmail')
And the following to set the change to make, show contents before, make the change, then show contents after.
DECLARE
@PersonId int
,@Email varchar(100) = 'NewEmail@foo.com'
SET @PersonId = 4
-- Before
SELECT * from Contact
-- Modify
UPDATE Contact
set
Email = @Email
,SecondaryEmail = case when Email is not null then Email else SecondaryEmail end
where PersonID = @PersonID
-- After
SELECT * from Contact
Net result: @Email is always set in column Email, and SecondaryEmail is set to the previous contents of Email only if Email was not null.