Hopefully an easy question for someone with more experience than me. I have a stored procedure that inserts records into a table. In every database I have worked with, a newly inserted record goes to the bottom of the table. I would like to insert it at the top of the table and have all the existing records move down by one (I assume this would happen automatically with the insert).
I want to do this because I'm using the 'TOP #' keyword. I am pretty sure that I could just leave it the way it is and, instead of using the 'TOP' keyword, use a 'BOTTOM' keyword. But I want to make it easier for people reading it who aren't familiar with it, so they can instantly see the newest entries. I'm going to keep researching this on my own, but if someone knew off the top of their head and could save me the time, that would be appreciated.
Is there an incremental ID on that table? If yes, then create a clustered index on that ID in descending order.
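For example, a sketch with hypothetical table and column names; note that even with a descending clustered index, an explicit ORDER BY is the only guaranteed way to get the newest rows first:

-- Hypothetical table with an incrementing identity column
CREATE TABLE dbo.Entries (
    EntryId INT IDENTITY(1,1) NOT NULL,
    Payload NVARCHAR(200) NOT NULL
);

-- Cluster the table on the ID in descending order
CREATE CLUSTERED INDEX CIX_Entries_EntryId ON dbo.Entries (EntryId DESC);

-- Newest entries first; TOP plus ORDER BY makes the intent explicit
SELECT TOP (10) * FROM dbo.Entries ORDER BY EntryId DESC;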
I use SQL Server with ASP.NET Core and EF Core. After each record is added, the identity column's value jumps by about 1000, creating a gap between the current row and the previously added row.
Questions
Is there any way to prevent this?
How can I remove the gaps that have already been created?
If I use a GUID for the key columns to avoid the issue, does that cause any problems (performance or otherwise)?
Is there any way to handle this on the server side, perhaps through EF Core?
Thank you in advance for your help.
For the cause of the 1000-value gaps, see Aaron Bertrand's answer.
It doesn't really make sense to "want" to delete the gaps. The content of an identity column contains no semantic information. It correlates to nothing "in the world" outside the database. The gaps are as meaningless as the values themselves.
I don't see how a uniqueidentifier would "prevent" that issue. A uniqueidentifier may be "meaningfully" sortable (if you use newsequentialid()), but there's no sense in which any particular value is "one more" than a previous value.
You can certainly try to build your own key generating algorithm that does not produce gaps, but you will run into concurrency issues (also mentioned by Mr Bertrand).
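Separately, if the gaps you see are the 1000-value jumps after a server restart, SQL Server 2017 and later let you turn off identity caching at the database level (at some cost to insert performance, and it still will not make values gapless, e.g. after rollbacks):

-- SQL Server 2017+: disable identity caching for the current database
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;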
A workaround trick:
CREATE OR ALTER TRIGGER TGR_Transaction_Identity_Fix
ON [dbo].[TBL_Transaction]
FOR INSERT
AS
BEGIN
    -- Reseed the identity to the current maximum so the next value continues from it
    DECLARE @RESEEDVAL INT
    SELECT @RESEEDVAL = MAX(TransactionId) FROM [dbo].[TBL_Transaction]
    DBCC CHECKIDENT([TBL_Transaction], RESEED, @RESEEDVAL)
END
This trigger will reseed the identity on each insert. Note that DBCC CHECKIDENT requires elevated permissions, and reseeding like this is not safe under concurrent inserts.
I would love to have a stored procedure that returns a result set which is editable in phpMyAdmin. For example, something as simple as
SELECT * FROM students
written from the console will allow you to edit the records as long as each row is unique (a table with a primary key). However, that same statement within a procedure won't allow it!
The point is that I have a selection that I will be using quite often, and it's a bit more complex than the one above. If I copy and paste it into the console, it's fine, but not when it's run from a procedure!
I would appreciate any ideas to work around this limitation.
Thank you in advance!
Thanks to those who read my question. I just figured out that a very simple way to solve this issue is to create a table with my output selection. Then, from the console, I can select and edit whatever I need from that temporary table.
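For example, a minimal sketch of the idea (the table and column names are just placeholders):

-- Materialize the selection into a table;
-- it can then be selected and edited from the phpMyAdmin console
CREATE TABLE students_report AS
SELECT * FROM students WHERE graduation_year = 2015;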
Thanks for your time!
This is my first time using SQL at all, so this might sound basic. I'm making an iPhone app that creates and uses a sqlite3 database (I'm linking libsqlite3.dylib and importing "sqlite3.h"). I've been able to correctly create the database and a table in it, but now I need to know the best way to get stuff back from it.
How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table. What I want to do (if this helps) is get all the info from the various fields in a single row, put all that into one object, store the object in an array, and then do the same for the next row, and the next, etc. At the end, I should have an array with the same number of elements as I have rows in my SQL table. Thank you.
My SQL is rusty, but I think you can use SELECT * FROM myTable and then iterate through the results. You can also use a LIMIT/OFFSET(1) structure if you do not want to retrieve all elements at once from your table (for example, due to memory concerns).
(1) Note that this can perform unexpectedly badly, depending on your use case. Look here for more info...
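A minimal sketch of that pagination pattern (myTable and the page size are just placeholders):

-- Fetch 50 rows starting at offset 100, in insertion (rowid) order;
-- note that the cost of OFFSET grows with its value
SELECT * FROM myTable ORDER BY rowid LIMIT 50 OFFSET 100;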
How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table.
That is not how SQL works. Rows are not kept in the table in a specific order as far as SQL is concerned. The order of rows returned by a query is determined by the ORDER BY clause in the query, e.g. ORDER BY DateCreated, or ORDER BY Price.
But SQLite has a rowid virtual column that can be used for this purpose. It reflects the sequence in which the rows were inserted, except that it might change after a VACUUM. If you declare it as an INTEGER PRIMARY KEY, it stays constant.
ORDER BY rowid
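For example, a sketch with a hypothetical table; declaring an explicit INTEGER PRIMARY KEY makes that column an alias for rowid, so insertion order survives a VACUUM:

CREATE TABLE myTable (
    id    INTEGER PRIMARY KEY,  -- alias for rowid; stable across VACUUM
    name  TEXT,
    score REAL
);

SELECT * FROM myTable ORDER BY id;  -- rows come back in insertion order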
I have a FileMaker script which calculates a value. I have 1 record from table A from which a relation points to n records of table B. What is the best way to set B::Field to this value for each of these n related records?
Doing Set Field [B::Field; $Value] will only set the value of the first of the n related records. What works however is the following:
Go to Related Record [Show only related records; From table: "B"; Using layout: "B_layout" (B)]
Loop
Set Field [B::Field; $Value]
Go To Record/Request/Page [Next; Exit after last]
End Loop
Go to Layout [original layout]
Is there a better way to accomplish this? I dislike the fact that in order to set some value (model) programmatically (controller), I have to create a layout (view) and switch to it, even though the user is not supposed to notice anything like a changing view.
FileMaker has always been primarily an end-user tool, so all its scripts are more like macros that repeat user actions. It is nowhere near as flexible as programmer-oriented environments. Going to another layout is, actually, a standard method of manipulating related values. You would have to do this anyway if you, say, wanted to duplicate a related record or print a report.
So:
Your script is quite good, except that you can use the Replace Field Contents script step instead of the loop (see the sketch after this list). Also add a Freeze Window script step at the beginning; it will prevent the screen from updating.
If you have a portal to the related table, you may loop over portal rows.
The FileMaker plug-in API can execute SQL, and there are some plug-ins that expose this functionality. So if you really want, this is also an option.
I myself would prefer the first variant.
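A rough sketch of the first variant (exact step options vary slightly by FileMaker version; field and layout names are taken from the question):

Freeze Window
Go to Related Record [Show only related records; From table: "B"; Using layout: "B_layout" (B)]
Replace Field Contents [No dialog; B::Field; Replace with calculation: $Value]
Go to Layout [original layout]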
Loop through a Portal of Related Records
Looping through a portal that shows the related records and setting the field there has a couple of advantages over either Replace Field Contents or the Go to Record / Set Field loop.
You don't have to leave the layout. The portal can be hidden or placed off screen if it isn't already on the layout.
You can do it transactionally, i.e. you can make sure that either all the records get edited or none of them are. This is important since in a multi-user networked solution, records may not always be editable. Neither Replace Field Contents nor looping through the records without a portal is transaction-safe.
Here is some info on FileMaker transactions.
You can loop through a portal using Go To Portal Row. Like so:
Go To Portal Row [First]
Loop
Set Field [B::Field; $Value]
Go To Portal Row [Next; Exit after last]
End Loop
It depends on what you're using the value for. If you need to hard-wire a certain field, then it doesn't sound like you've got a very normalised data structure. The simplest way would be a calculation field in TableB instead of a stored field. Or, if this is something that must be stored, could it be a lookup field instead that is set on record creation?
What is the field in TableB being used for and how?
This may be a very simplistic question, so apologies in advance, but I am very new to database usage.
I'd like to have Postgres run its full text search across multiple joined tables. Imagine something like a model User, with related models UserProfile and UserInfo. The search would only be for Users, but would include information from UserProfile and UserInfo.
I'm planning on using a GIN index for the search. I'm unclear, however, on whether I'm going to need a separate tsvector column in the User table to hold the aggregated tsvectors from across the tables, and to set up triggers to keep it up to date; or whether it's possible to create an index without a tsvector column that keeps itself up to date whenever any of the relevant fields in any of the relevant tables change. Also, any tips on the syntax of the commands to create all this would be much appreciated.
Your best answer is probably to have a separate tsvector column in each table (with an index on it, of course). If you aggregate the data up into a shared tsvector, every update to an individual table forces an update of the shared one.
You will need one index per table. Then, when you query, you need multiple match conditions in the WHERE clause, one per indexed column. PostgreSQL will automatically figure out which combination of indexes gives the quickest results, likely using bitmap scans. This makes your queries a little more complex to write, but it keeps the flexibility to query only some of the fields when you want to.
You cannot create one index that tracks multiple tables. To do that you need the separate tsvector column and triggers on each table to update it.
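A rough sketch of that setup, with hypothetical table and column names (a users table with a name column and a user_profiles table with a bio column); tsvector_update_trigger is a built-in trigger function that recomputes the tsvector from the listed text columns on every write:

ALTER TABLE users ADD COLUMN search_tsv tsvector;
ALTER TABLE user_profiles ADD COLUMN search_tsv tsvector;

-- One GIN index per table
CREATE INDEX users_search_idx ON users USING gin (search_tsv);
CREATE INDEX user_profiles_search_idx ON user_profiles USING gin (search_tsv);

-- Keep each tsvector column up to date on insert/update
CREATE TRIGGER users_tsv_update BEFORE INSERT OR UPDATE ON users
FOR EACH ROW EXECUTE PROCEDURE
    tsvector_update_trigger(search_tsv, 'pg_catalog.english', name);
CREATE TRIGGER user_profiles_tsv_update BEFORE INSERT OR UPDATE ON user_profiles
FOR EACH ROW EXECUTE PROCEDURE
    tsvector_update_trigger(search_tsv, 'pg_catalog.english', bio);

-- One match condition per table; the planner combines the indexes as needed
SELECT u.*
FROM users u
JOIN user_profiles p ON p.user_id = u.id
WHERE u.search_tsv @@ to_tsquery('english', 'foo')
   OR p.search_tsv @@ to_tsquery('english', 'foo');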