Consecutive numbers - SQL Server 2008 R2 - sql-server-2008-r2

I need to put numbers in sequential order (1, 2, 3, 4...) in a table column.
These are invoice numbers: they need to be consecutive, with no gaps and no repeats. This is a mandatory requirement (in fact a fiscal one), so I can't skip it.
My current approach is to use a second "Numbers" table with two columns (Id, LastNumber) that holds a single record with Id = 1.
This is what I'm doing now:
1. SELECT (LastNumber + 1) AS Number FROM Numbers WITH (XLOCK, ROWLOCK) WHERE Id = 1
2. Assign the number, then do the other inserts and updates to other tables.
3. UPDATE Numbers SET LastNumber = #Number WHERE Id = 1 -- #Number is the number retrieved in step 1
4. End of transaction
I'm trying to use locking, but I don't know whether I'm doing it correctly. What is the most efficient way to do this?
As I stated above, this is a mandatory requirement: the numbers must be consecutive, with no gaps and no repeats.
The table is used by an app from several clients on a network. I use SQL Server 2008 R2.
I have found this similar question:
Sequential Invoice Numbers SQL server
The solution given there uses T-SQL code that gets the max number already in the table, but what happens if two clients call that code at the same time? Will two repeated numbers be generated?
Thanks

You'll want a transaction. By doing the update first, you take an exclusive lock on the row that blocks other transactions until your transaction completes, and then you return the new number.
BEGIN TRAN
UPDATE Numbers SET Number = Number + 1 WHERE Id = 12
SELECT Number FROM Numbers WHERE Id = 12
COMMIT
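On SQL Server 2008 R2 you can also fold the read into the UPDATE itself, so the new number comes back in the same statement that increments it. A minimal sketch using the question's Numbers(Id, LastNumber) table:
BEGIN TRAN;

DECLARE @Number int;

-- Increment and read back in one statement; the exclusive lock taken by the
-- UPDATE is held until COMMIT, so no other session can obtain the same number.
UPDATE Numbers
SET    @Number = LastNumber = LastNumber + 1
WHERE  Id = 1;

-- ... insert the invoice row using @Number, plus the other inserts/updates ...

COMMIT;
If anything later in the transaction fails and it rolls back, the increment rolls back too, so no gap is created.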

Related

SQL - using the Min field to achieve desired result

I'm wondering about the best SQL to handle the situation below: the client only wants to see invoices that have been declined. I started with "only show me rows where STATUS_ID = 2", but then realized the invoice had actually been paid after being resubmitted and accepted, so that didn't work. What is the best way to handle two records like the ones below, where I don't want the SQL to return any records if the manifest + order code combination has a 1? Would you do a MIN on STATUS_ID or something of that nature?
VENDOR NAME   manifest      ORDER_CODE  STATUS_ID
VENDOR 12345  BHGSDKJF1234  RU07        2   (invoice declined)
VENDOR 12345  BHGSDKJF1234  RU07        1   (paid)
This trick can work for you in this case, but it doesn't solve the general case (what happens if the STATUS_ID for "paid" is 3 and the possible values are 0-5?).
In general you can use a CASE expression that yields 1 (true) if the invoice has a row with STATUS_ID = 1 and 0 otherwise, then take the MAX() per invoice, as in the sketch below.
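A rough sketch of that idea; the table name Invoices is an assumption, and the grouping is by manifest + order code as in your example:
SELECT Manifest, Order_Code
FROM Invoices
GROUP BY Manifest, Order_Code
HAVING MAX(CASE WHEN Status_Id = 1 THEN 1 ELSE 0 END) = 0  -- never paid
   AND MAX(CASE WHEN Status_Id = 2 THEN 1 ELSE 0 END) = 1  -- was declined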
You can also consider another design that might work for you:
Add a time/timestamp column (for your purpose, the insertion time of the record into the db is probably enough).
Once you have a time column, you can pick the STATUS_ID with the latest time for each invoice (i.e. read the STATUS_ID from the row with the max time), as sketched below.
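A sketch of that latest-status approach; the InsertedAt column and the table name Invoices are assumptions:
-- Keep each invoice's most recent status row, then filter on "declined".
SELECT i.Manifest, i.Order_Code, i.Status_Id
FROM Invoices AS i
JOIN (SELECT Manifest, Order_Code, MAX(InsertedAt) AS LastAt
      FROM Invoices
      GROUP BY Manifest, Order_Code) AS latest
  ON  latest.Manifest   = i.Manifest
  AND latest.Order_Code = i.Order_Code
  AND latest.LastAt     = i.InsertedAt
WHERE i.Status_Id = 2   -- latest status is "declined"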

Insert multiple records into fact table based on fields in single record

I'm working in Pentaho 4.4.1-GA (Kettle / PDI). The database is Postgres.
I need to be able to insert multiple records into a fact table based on the fields that come from a single record. The single record contains fields:
productcode1, price1
productcode2, price2
productcode3, price3
...
productcode10,price10
So if there was a value for each of the 10 productcode / prices then I'd need to insert a total of 10 records into the fact table. If there were values for 4 of the combinations, then I'd need to insert 4 records into the fact table, etcetera. All field values for the fact records would be identical except for the PK (generated by sequence), product codes, and prices.
I figure that I need some type of looping construct that would let me check whether a value is present for each productcodeX field and, if so, do an insert/update step on the fact table with the desired field values. I'm just not sure how to do this in Pentaho.
Any ideas? All suggestions are welcome :)
Thank You,
Rakesh
Could you give a sample input and output for your scenario?
From your example data I can infer that if there are 10 different product codes and only 4 product prices you want to have 4 records inserted into your table. Is that so?
Well, for a start you can add a constant value of 1 to those records by filtering for NOT NULL, and then use a Group By step to count the number of 1's. This would give you the count. By the way, it would be helpful if you could provide more details on which columns you would be loading, as there are ways to make a PDI transformation execute multiple times.
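If you would rather do the unpivot on the Postgres side instead of inside PDI, something along these lines works; the table and column names (staging_invoice, fact_sales) are assumptions, and CROSS JOIN LATERAL needs Postgres 9.3+ (on older versions a UNION ALL of ten SELECTs achieves the same thing):
-- Turn one wide staging row into up to 10 narrow fact rows, skipping empty slots.
INSERT INTO fact_sales (product_code, price)  -- plus whatever other keys you carry over
SELECT p.product_code, p.price
FROM staging_invoice s
CROSS JOIN LATERAL (VALUES
    (s.productcode1, s.price1),
    (s.productcode2, s.price2),
    (s.productcode3, s.price3)
    -- ... continue up to (s.productcode10, s.price10)
) AS p(product_code, price)
WHERE p.product_code IS NOT NULL;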

The number of times the count is displayed is equal to the value of the count - MS Access VBA 2007

I am working on an MS Access 2007 application. I made a query where I count the number of rows that have a particular "ID". This count is calculated and stored in a column in the same query. The count is stored against another column which is unique and related to the "ID" column, so the count is not repeated within the query. However, when I display this count in a text box along with other related values, the count is repeated a number of times equal to its value.
I tried using Dlookup() and DCount() with no different results.
I hope someone can help me resolve this problem.
Can't you just use DCount() to count the number of rows with a particular ID?
In your Form code you'd have the following VBA to assign the number of records in MyTable with Id = 5.
CountTextBox.Value = DCount("Id", "MyTable", "Id = 5")
This is the same as saying
SELECT COUNT(Id)
FROM MyTable
WHERE Id = 5
If I understand what you're saying, it sounds like you might be storing the number of records with Id = 5 against each record with Id = 5. If that's the case, you can use DFirst("IdCount", "MyTable", "Id = 5") to get the first record with Id = 5 and read the count only from that record, since it'll be the same for every Id = 5 record. It seems a bit weird, though.
Sorry if I misunderstood your question. I find it hard to follow.

Batch update table with 2 million rows

Hi all, I've got an interesting task: update a single column in a table that has roughly 2 million rows. I've tried doing this using MVC Entity Framework, but I'm encountering "Out of memory" exceptions, and I'm wondering if there's another way.
The interesting part is that it's not just a simple update. The procedure needs to read the TelephoneNumber column already in the table, which could be 014812001 for example. Then it needs to calculate a score for this number based on digits that occur more than once. For the number above this would score 6, as we have 3 x 1's and 3 x 0's, giving a total of 6.
Once this score has been calculated, it needs to be written into a column in the row currently being processed, so in our case the row with TelephoneNumber = 014812001.
Is this possible using TSQL or is it better to carry on with my Entity Framework approach?
For such a bulk update, I would always recommend doing this on the server itself - there's really no point in dragging down 2 million rows, updating a single column, and then pushing them back to the server again.
I think based on your description, it should be fairly simple to create a little T-SQL user defined function that would calculate this score. Once you have that, you can issue a single T-SQL statement:
UPDATE dbo.YourTable
SET Score = dbo.fnCalculateScore(TelephoneNumber)
WHERE .... (whatever condition you might have) .....
That should be faster by several orders of magnitude than your Entity Framework approach.
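A sketch of such a scoring function, matching the name used in the UPDATE above; it sums the occurrence counts of every digit that appears more than once, and the varchar(20) length is an assumption:
-- Score = sum of the counts of all digits that occur more than once.
CREATE FUNCTION dbo.fnCalculateScore (@TelephoneNumber varchar(20))
RETURNS int
AS
BEGIN
    DECLARE @Score int

    SELECT @Score = ISNULL(SUM(Cnt), 0)
    FROM (
        SELECT COUNT(*) AS Cnt
        FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                     (11),(12),(13),(14),(15),(16),(17),(18),(19),(20)) AS n(Pos)
        WHERE n.Pos <= LEN(@TelephoneNumber)
        GROUP BY SUBSTRING(@TelephoneNumber, n.Pos, 1)
        HAVING COUNT(*) > 1   -- only digits appearing more than once
    ) AS d

    RETURN @Score
END
For 014812001 this returns 6 (three 0's plus three 1's), and 0 when no digit repeats.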

SQL Server 2008: Pivot column with no aggregate function workaround

Yes, I know this question has been asked MANY times, but after reading all the posts I found that there wasn't an answer that fits my need. So here's my question: I would like to take a column of values and pivot them into rows of 6 columns.
I want to take this single column:
G
081278
12
00123535
John Doe
123456
And turn it into this:
Letter  Date    Code  Ammount   Name      Account
G       081278  12    00123535  John Doe  123456
I have 110,000 values in this one column in a table called TempTable. I need all the values displayed because each row is an entity in itself. For instance, there is one unique entry for each of the Letter, Date, Code, Ammount, Name, and Account columns. I understand that an aggregate function is required, but is there a workaround that will let me get the desired result?
Just use a MAX aggregate
If one row = one column (per group of 6 rows) then MAX of a single value = that row value.
However, the data you've posted is insufficient. I don't see anything to:
associate the 6 rows per group
distinguish whether a row is "Letter" or "Name"
There is no implicit row order or number to rely upon to generate the groups
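If such an ordering column does exist (say an identity column Id, which is an assumption here, along with the value column name Val), a sketch along these lines would generate the groups and do the pivot with MAX:
SELECT
    MAX(CASE WHEN Pos = 1 THEN Val END) AS Letter,
    MAX(CASE WHEN Pos = 2 THEN Val END) AS [Date],
    MAX(CASE WHEN Pos = 3 THEN Val END) AS Code,
    MAX(CASE WHEN Pos = 4 THEN Val END) AS Ammount,
    MAX(CASE WHEN Pos = 5 THEN Val END) AS Name,
    MAX(CASE WHEN Pos = 6 THEN Val END) AS Account
FROM (
    SELECT Val,
           (ROW_NUMBER() OVER (ORDER BY Id) - 1) / 6     AS Grp,
           (ROW_NUMBER() OVER (ORDER BY Id) - 1) % 6 + 1 AS Pos
    FROM TempTable
) AS t
GROUP BY Grp
ORDER BY Grp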
Unfortunately, the maximum number of columns in a SQL Server 2008 SELECT statement is 4,096, as per the MSDN maximum capacity specifications.
Instead of using a pivot, you might consider dynamic SQL to do what you want.
DECLARE @SQLColumns nvarchar(max), @SQL nvarchar(max)
SELECT @SQLColumns = (SELECT '''' + ColName + ''',' FROM TableName FOR XML PATH(''))
SET @SQLColumns = LEFT(@SQLColumns, LEN(@SQLColumns) - 1)
SET @SQL = 'SELECT ' + @SQLColumns
EXEC sp_executesql @SQL