SQL Server 2014 problems storing a decimal - T-SQL

I have a stored procedure in which I first create a temp table. I populate the first three columns and then want to compute the percentage of jobs that are voided, but I can't seem to get a good value. If Void = 10 and Total_Jobs = 59 then 10/59 ≈ 0.17, but the table is populated with zero. What am I doing wrong?
Create table #tbl_WeeklyJobsRpt
(
Region_Code varchar(25),
Void int DEFAULT 0,
Total_Jobs int DEFAULT 0,
Void_Pctg decimal(10,2) DEFAULT 0
)
-- Void_Pctg
Update #tbl_WeeklyJobsRpt
set Void_Pctg = ((Void)/Total_Jobs)
Where Void <> 0

Both values are int, so the result of the division is int too and the fractional part is truncated (10 / 59 yields 0).
Try 1.0 * Void / Total_Jobs; the 1.0 causes an implicit cast of the expression to a numeric type, so the division keeps its fractional part.
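A minimal sketch of the fix against the temp table above; either multiplying by 1.0 or an explicit CAST forces decimal division:

```sql
-- Cast one operand so the division is done in decimal, not int
UPDATE #tbl_WeeklyJobsRpt
SET Void_Pctg = CAST(Void AS decimal(10,2)) / Total_Jobs
WHERE Void <> 0
  AND Total_Jobs <> 0;  -- guard against division by zero
```

With Void = 10 and Total_Jobs = 59 this stores 0.17 after rounding to the column's two decimal places.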

@Alex, that was the correct answer, thank you.
I had to change my fields to float, and when I divided later on it all worked. I thought I had already tried that, but obviously I missed a detail somewhere.


How to store point (x, y) in a database?

I need to store a location (an x,y point) in my database, where the point can be null and X and Y are always less than 999. At the moment I'm using EF Core Code First and a PostgreSQL database, but I'd like to stay flexible so that I can switch to MSSQL without too much work. I'm not planning to move away from EF Core.
Right now I have two columns, LocationX and LocationY, both of type int?. I'm not sure this is a good solution because technically the DB allows (X=2, Y=null), and it shouldn't: either both are null, or both are not.
My second option is to store it in a single string column, e.g. "123x321", with a max length of 7.
Is there a better way?
Thanks,
A check constraint could be used to enforce that both columns are NULL or NOT NULL at the same time:
CREATE TABLE t(id INT,
x INT,
y INT,
CHECK((x IS NULL AND y IS NULL) OR (x IS NOT NULL AND y IS NOT NULL))
);
In addition to the check constraint suggested by @LukaszSzozda, you can restrict the x and y values with an additional check constraint on each. So, assuming they must also be in the range 0-999:
CREATE TABLE t(id INT,
x INT constraint x_range check ( x>=0 and x<=999),
y INT constraint y_range check ( y>=0 and y<=999),
CHECK((x IS NULL AND y IS NULL) OR (x IS NOT NULL AND y IS NOT NULL))
);
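A few sample inserts against this table (a sketch) show how the constraints behave:

```sql
INSERT INTO t VALUES (1, 10, 20);     -- OK: both set and in range
INSERT INTO t VALUES (2, NULL, NULL); -- OK: both NULL
INSERT INTO t VALUES (3, 5, NULL);    -- rejected: only one value is NULL
INSERT INTO t VALUES (4, 1000, 1);    -- rejected: x violates x_range
```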
As for your idea of storing a single string: very bad. Not only will you have to split the value apart every time you need it, it also allows distinctly invalid data. The values '1234567' and even 'abcdefg' are completely valid as far as the database is concerned, so your table definition must account for and eliminate them. With that, your table definition becomes:
create table txy
( xy_string varchar(7)
, constraint xy_format check( xy_string ~* '^\d{1,3}x\d{1,3}$')
)
insert into txy(xy_string)
( values ('1x2'), ('354X512'), ('38x92') );
This is actually a reduction, as it is back to a single constraint, but your queries now require something like:
select xy_string
, regexp_replace(xy_string, '^(\d+)(X|x)(\d+)','\1') x
, regexp_replace(xy_string, '^(\d+)(X|x)(\d+)','\3') y
from txy;
In short: never store groups of numeric values as a single delimited string. The additional work is just not worth it.

Postgres: query for a specific date on a timestamptz returns no data while I can see the data in my postgres client

I'm very new to Postgres and had been googling this for about an hour before posting here.
Hopefully you can help me with this probably trivial issue.
I have a table LTC that was created with this schema:
CREATE TABLE LTC (
id SERIAL PRIMARY KEY,
time timestamptz,
side CHAR(4),
price REAL,
v REAL,
n INT
);
I want to query all the data for time = '2017-08-12T03:58:26.563Z' (an ISO string from JavaScript). I know there are over a hundred rows in the database for that time; I see them in my postgres client.
Here is the query I'm doing:
select * from LTC where time = '2017-08-12T03:58:26.563Z'::timestamptz
Why am I getting no results?
Edit:
Still unsure why it wasn't working, but I wrote a workaround that does:
In JavaScript:
var date = new Date('2017-08-12T03:58:26.563Z').toISOString(); // actual time passed as parameter in my function, hard-coded for the example
var reg = new RegExp("([0-9]{4})-([0-9]{2})-([0-9]{2})T([0-9]{2}):([0-9]{2}):([0-9]{2})\.[0-9]*Z","gmi");
var start = date.replace(reg, function(match,year,month,day,hour,min,sec,ms) {
return year+'-'+month+'-'+day+'T'+hour+':'+min+':00.000Z';
});
var end = date.replace(reg, function(match,year,month,day,hour,min,sec,ms) {
var min = parseInt(min)+1;
if (min<=9) {
min = '0'+min;
}
return year+'-'+month+'-'+day+'T'+hour+':'+min+':00.000Z';
});
var query = "select * from LTC where time >= '"+start+"'::timestamptz and time < '"+end+"'::timestamptz"; // This works
Try using date_trunc('second', time) on both sides to ensure that fractional-second precision is not what is throwing you off. There may be additional decimal places that your client is not showing, which would break the equality.
The other possible solution is using a range (BETWEEN x AND y) to avoid the exact-equality issue.
It would be easier for us to help if you could provide an exact copy of the data (using pg_dump, perhaps, if you are familiar with it) so that we can test with the data you are using.
The final thing you may want to check is explicitly stating the time zones you are referencing. I generally use timestamp without time zone to avoid this issue, but automatic time-zone conversion may be throwing you off as well. A good way to test is by selecting the two values side by side, something like:
select *, l.time = p.ts as test
from LTC l, (select '2017-08-12T03:58:26.563Z'::timestamptz as ts) p
;
EDIT:
I have built a test to try to reproduce your behavior:
CREATE TABLE LTC (
id serial
, time timestamptz
);
INSERT INTO LTC (time)
values ('2017-08-12T03:58:26.56312345'::timestamptz)
returning *;
select *
from LTC
where time = '2017-08-12T03:58:26.563Z'::timestamptz
;
select *, l.time = p.ts as test
from LTC l, (select '2017-08-12T03:58:26.563Z'::timestamptz as ts) p
;
What I get here is actually:
1;"2017-08-12 03:58:26.563123-04";"2017-08-11 23:58:26.563-04";f
Hopefully you can see what is happening: '2017-08-12T03:58:26.563Z'::timestamptz is interpreted as a UTC time and then converted to my session time zone (UTC-04), so what is being compared is actually a different date altogether! In the future, showing this kind of equality side by side is a great way to verify that you are executing what you think you are (especially with dates and times, where automatic conversion happens often).
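One way to make the comparison robust (a sketch, reusing the literal from the question) is to compare a half-open range instead of exact equality, which absorbs any fractional-second digits beyond the milliseconds JavaScript sends:

```sql
SELECT *
FROM LTC
WHERE time >= '2017-08-12T03:58:26.563Z'::timestamptz
  AND time <  '2017-08-12T03:58:26.564Z'::timestamptz;
```

Alternatively, applying AT TIME ZONE 'UTC' to both sides makes the intended zone explicit rather than relying on the session setting.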

T-SQL Join on foreign key that has leading zero

I need to link various tables that each have a common key (a serial number in this case). In some tables the key has a leading zero, e.g. '037443', and in others it doesn't, e.g. '37443'. In both cases the serial refers to the same product. To confound things, serial 'numbers' are not always numeric, e.g. 'BDO1234'; in those cases there is never a leading zero.
I'd prefer to use a WHERE clause (WHERE a.key = b.key) but could use joins if required. Is there any way to do this?
I'm still learning, so please keep it simple if possible. Many thanks.
Based on the accepted answer in this link, I've written a small T-SQL sample to show you what I meant by 'the right direction':
Create the test table:
CREATE TABLE tblTempTest
(
keyCol varchar(20)
)
GO
Populate it:
INSERT INTO tblTempTest VALUES
('1234'), ('01234'), ('10234'), ('0k234'), ('k2304'), ('00034')
Select values:
SELECT keyCol,
SUBSTRING(keyCol, PATINDEX('%[^0]%', keyCol + '.'), LEN(keyCol)) As trimmed
FROM tblTempTest
Results:
keyCol trimmed
-------------------- --------------------
1234 1234
01234 1234
10234 10234
0k234 k234
k2304 k2304
00034 34
Cleanup:
DROP TABLE tblTempTest
Note that the values are alphanumeric and only leading zeroes are trimmed.
One possible drawback is that a 0 after leading white space will not be trimmed, but that's an easy fix: just add LTRIM:
SUBSTRING(LTRIM(keyCol), PATINDEX('%[^0]%', LTRIM(keyCol + '.')), LEN(keyCol)) As trimmed
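Applied to the original join problem, the same expression could be used on both sides of the join (a sketch; tblA, tblB and the serial columns are placeholders for your actual tables):

```sql
SELECT a.*, b.*
FROM tblA a
JOIN tblB b
  ON SUBSTRING(a.serial, PATINDEX('%[^0]%', a.serial + '.'), LEN(a.serial))
   = SUBSTRING(b.serial, PATINDEX('%[^0]%', b.serial + '.'), LEN(b.serial));
```

Since these expressions aren't sargable, expect a scan; on large tables a persisted computed column over the trimmed value can be indexed instead.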
You need to create a function:
CREATE FUNCTION CompareSerialNumbers(@SerialA varchar(max), @SerialB varchar(max))
RETURNS bit
AS
BEGIN
    DECLARE @ReturnValue AS bit
    IF (ISNUMERIC(@SerialA) = 1 AND ISNUMERIC(@SerialB) = 1)
        SELECT @ReturnValue =
            CASE
                WHEN CAST(@SerialA AS int) = CAST(@SerialB AS int) THEN 1
                ELSE 0
            END
    ELSE
        SELECT @ReturnValue =
            CASE
                WHEN @SerialA = @SerialB THEN 1
                ELSE 0
            END
    RETURN @ReturnValue
END;
GO
If both values are numeric, it compares them as integers; otherwise it compares them as strings.
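Usage would then look something like this (a sketch; table and column names are hypothetical):

```sql
SELECT a.SerialNumber, b.SerialNumber
FROM TableA a
JOIN TableB b
  ON dbo.CompareSerialNumbers(a.SerialNumber, b.SerialNumber) = 1;
```

Be aware that a scalar function in a join predicate runs per row pair and prevents index use, so this works best on modest row counts.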

Can Entity Framework assign a wrong Identity column value in case of high-concurrency additions

We have an auto-increment Identity column Id as part of our user object. For a campaign we just ran for a client we had up to 600 signups per minute. This is the code block doing the addition:
using (var ctx = new {{ProjectName}}_Entities())
{
int userId = ctx.Users.Where(u => u.Email.Equals(request.Email)).Select(u => u.Id).SingleOrDefault();
if (userId == 0)
{
var user = new User() { /* Initializing user properties here */ };
ctx.Users.Add(user);
ctx.SaveChanges();
userId = user.Id;
}
...
}
Then we use the userId to insert data into another table. What happened under high load is that there were multiple rows with the same userId even though there shouldn't be. It seems the above code returned the same Identity (int) value for multiple inserts.
I read through a few blog/forum posts saying that there might be an issue with SCOPE_IDENTITY(), which Entity Framework uses to return the auto-increment value after an insert.
They say a possible workaround would be writing an insert procedure for User with INSERT ... OUTPUT INSERTED.Id, which I'm familiar with.
Anybody else experienced this issue? Any suggestion on how this should be handled with Entity Framework?
UPDATE 1:
After further analyzing the data I'm almost 100% positive this is the problem. The Identity column skipped auto-increment values 48 times in total (e.g. 2727, 2728 missing, 2729, ...), and we have exactly 48 duplicates in the other table.
It seems like EF returned a random Identity value for each row it wasn't able to insert for some reason.
Does anybody have any idea what could possibly be going on here?
UPDATE 2:
Possibly important info I didn't mention is that this happened on Azure Website with Azure SQL. We had 4 instances running at the time it happened.
UPDATE 3:
Stored Proc:
CREATE PROCEDURE [dbo].[p_ClaimCoupon]
    @CampaignId int,
    @UserId int,
    @Flow tinyint
AS
DECLARE @myCoupons TABLE
(
    [Id] BIGINT NOT NULL,
    [Code] CHAR(11) NOT NULL,
    [ExpiresAt] DATETIME NOT NULL,
    [ClaimedBefore] BIT NOT NULL
)
INSERT INTO @myCoupons
SELECT TOP(1) c.Id, c.Code, c.ExpiresAt, 1
FROM Coupons c
WHERE c.CampaignId = @CampaignId AND c.UserId = @UserId
DECLARE @couponCount int = (SELECT COUNT(*) FROM @myCoupons)
IF @couponCount > 0
BEGIN
    SELECT *
    FROM @myCoupons
END
ELSE
BEGIN
    UPDATE TOP(1) Coupons
    SET UserId = @UserId, IsClaimed = 1, ClaimedAt = GETUTCDATE(), Flow = @Flow
    OUTPUT DELETED.Id, DELETED.Code, DELETED.ExpiresAt, CAST(0 AS BIT) as [ClaimedBefore]
    WHERE CampaignId = @CampaignId AND IsClaimed = 0
END
RETURN 0
Called like this from the same EF context:
var coupon = ctx.Database.SqlQuery<CouponViewModel>(
"EXEC p_ClaimCoupon #CampaignId, #UserId, #Flow",
new SqlParameter("CampaignId", {{CampaignId}}),
new SqlParameter("UserId", {{userId}}),
new SqlParameter("Flow", {{Flow}})).FirstOrDefault();
No, that's not possible. For one, that would be an egregious bug in EF. You are not the first to put 600 inserts/minute through it. Also, SCOPE_IDENTITY is explicitly scope-safe and is the recommended practice.
These statements apply to the case where you are using a SQL Server IDENTITY column as the ID.
I admit I don't know how Azure SQL Database synchronizes the generation of unique, sequential IDs, but intuitively it must be costly, especially at your rates.
If non-sequential IDs are an option, you might want to consider generating UUIDs at the application level. I know this doesn't answer your direct question, but it would improve performance (unverified) and sidestep your problem.
Update: Scratch that; Azure SQL Database isn't distributed, it's simply replicated from a single primary node. So there is no real performance gain to expect from alternatives to IDENTITY keys, and presumably the number of instances is not significant to your problem.
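For reference, the INSERT ... OUTPUT workaround mentioned in the question could be sketched like this (table and column names are hypothetical):

```sql
CREATE PROCEDURE p_InsertUser
    @Email varchar(256)
AS
INSERT INTO Users (Email)
OUTPUT INSERTED.Id  -- returns the identity generated for this row
VALUES (@Email);
```

OUTPUT reads the value directly from the inserted row, so it cannot pick up an identity from another scope or session.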
I think your problem may be here:
UPDATE TOP(1) Coupons
SET UserId = @UserId, IsClaimed = 1, ClaimedAt = GETUTCDATE(), Flow = @Flow
OUTPUT DELETED.Id, DELETED.Code, DELETED.ExpiresAt, CAST(0 AS BIT) as [ClaimedBefore]
WHERE CampaignId = @CampaignId AND IsClaimed = 0
This will update the UserId of the first record it finds in the campaign that hasn't been claimed. It doesn't look robust to me in the event that inserting a user failed. Are you sure it is correct?

SQL UPDATE: SET one column to another column's value and change that value in the same statement

Take the following update statement.
UPDATE TABLE_1
SET Units2 = ABS(Units1)
,Dollars2=ABS(Dollars1)
,Units1 =0
,Dollars1 =0
WHERE Units1 < 0
AND Dollars2 = 0
Here are my questions:
1) Is this legal? It parses and it "seems" to work (on the test table), but will it always work, or am I just picking the right records to review?
2) Is there a better way to do this?
Thanks,
It is legal, and as long as you essentially want to keep the old values of Units1 and Dollars1 in Units2 and Dollars2, it should work.
Here's a test:
CREATE TABLE #Table_1
(
Units1 INT,
Dollars1 MONEY,
Units2 INT,
Dollars2 MONEY
)
GO
INSERT INTO #Table_1 (Units1, Dollars1, Units2, Dollars2)
VALUES (-1,12.00,3,0.00)
GO
UPDATE #TABLE_1
SET Units2 = ABS(Units1)
,Dollars2=ABS(Dollars1)
,Units1 =0
,Dollars1 =0
WHERE Units1 < 0
AND Dollars2 = 0
GO
SELECT *
FROM #Table_1
Outputs:
Units1 | Dollars1 | Units2 | Dollars2
0      | 0.00     | 1      | 12.00
Assuming you're talking about the Dollars1 column, it should be fine. All column references on the right-hand side of a SET read the values as they were before the update, so the assignments don't interfere with one another.
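This statement-level behavior is what makes the classic column swap work; a sketch:

```sql
-- Both right-hand sides read the pre-update values,
-- so this swaps the two columns instead of setting both to b
UPDATE t
SET a = b,
    b = a;
```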
If you're questioning it now, though, I would suggest breaking it into two statements. You're the author and it's not clear to you; take a little pity on the person who has to maintain it, and make it clear now.
Your query is correct and is pretty much the way to do it.