I know you can't control transactions in functions or procedures, but I'm wondering if there's something like that or some alternative.
The problem: I have a very expensive function that turns things like a customer id into a nice HTML report. Trouble is, it takes seconds. So I've put logic into the function that checks a cache to see if a pre-rendered report exists and returns it if it does; if it doesn't, the function renders the report and adds it to the cache afterwards, so it will only ever render each report once.
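A minimal sketch of that pattern (hypothetical names: report_cache and render_report stand in for the real cache table and renderer):

CREATE TABLE IF NOT EXISTS report_cache (
    customer_id int PRIMARY KEY,
    report      text NOT NULL
);

CREATE OR REPLACE FUNCTION get_report(p_customer_id int)
RETURNS text
LANGUAGE plpgsql AS
$$
DECLARE
    v_report text;
BEGIN
    -- Return the pre-rendered report if we already have one.
    SELECT report INTO v_report
    FROM report_cache
    WHERE customer_id = p_customer_id;

    IF FOUND THEN
        RETURN v_report;
    END IF;

    -- Otherwise render it (the expensive part) and cache the result.
    v_report := render_report(p_customer_id);

    INSERT INTO report_cache (customer_id, report)
    VALUES (p_customer_id, v_report)
    ON CONFLICT (customer_id) DO NOTHING;  -- needs PostgreSQL 9.5+

    RETURN v_report;
END;
$$;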
Now, given that most things will never change, I'd like to pre-render everything. Given the time involved, that will probably take about a year to run, which is actually OK; this system has to run for ten. Trouble is, I don't want it to lock anything in the database, so I want it to trickle along, doing one report at a time and committing immediately.
I investigated pg_cron, because that seemed an option, but the version of Aurora I'm using doesn't support it. Any ideas how I'd do this inside the database?
By all means, don't code that as a function running inside the database. It is fine to do the calculations in the database, but generating a report and iterating over customers belong in client code. That way, committing after each report is not a problem.
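For instance, each step of the client-side loop can be a single autocommitted statement, so nothing stays locked between customers (a sketch; customer_report and my_report_proc are illustrative names):

INSERT INTO customer_report (customer_id, report)
VALUES (42, my_report_proc(42))          -- one customer per statement
ON CONFLICT (customer_id) DO NOTHING;    -- needs PostgreSQL 9.5+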
Add a text column to the customer table to hold the html report.
Put trigger(s) on the table(s) whose content influences the HTML report to refresh the report column (a sketch follows below).
This gives you instant retrieval and only (re)calculates the report when needed.
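A rough sketch of that wiring, assuming a hypothetical orders table feeds the report and my_report_proc does the rendering:

ALTER TABLE customer ADD COLUMN report_html text;

CREATE OR REPLACE FUNCTION refresh_customer_report()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    -- Re-render the report for the customer affected by this change.
    UPDATE customer
    SET report_html = my_report_proc(NEW.customer_id)
    WHERE id = NEW.customer_id;
    RETURN NEW;
END;
$$;

CREATE TRIGGER trg_refresh_customer_report
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW
EXECUTE PROCEDURE refresh_customer_report();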
Or, if the data is stable:
create table customer_report (
    customer_id int not null primary key,
    report      text not null
);

insert into customer_report
select id, my_report_proc(id)
from customer;
I use SQL Server with ASP.NET Core and EF Core. After each record is added, the identity column's value jumps by about 1000, leaving a gap between the current row and the previously added row.
Questions
Is there any way to prevent this?
How can I remove the gaps that have already been created?
If I use a GUID for key columns to prevent this issue, is there a problem (performance or any other problem)?
Is there a way to handle this on the server side with EF Core (or some other way)?
Thank you in advance for your help.
For the reason for 1000-value gaps, see Aaron Bertrand's answer
It doesn't really make sense to "want" to delete the gaps. The content of an identity column contains no semantic information. It correlates to nothing "in the world" outside the database. The gaps are as meaningless as the values themselves.
I don't see how a uniqueidentifier would "prevent" that issue. A uniqueidentifier may be "meaningfully" sortable (if you use newsequentialid()), but there's no sense in which any particular value is "one more" than a previous value.
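For reference, a sequential GUID key looks like this (a sketch, illustrative names); values sort roughly in insertion order, but no value is ever "one more" than the previous one:

CREATE TABLE dbo.TBL_Example
(
    Id uniqueidentifier NOT NULL
        CONSTRAINT DF_Example_Id DEFAULT NEWSEQUENTIALID(),
    CONSTRAINT PK_Example PRIMARY KEY CLUSTERED (Id)
);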
You can certainly try to build your own key generating algorithm that does not produce gaps, but you will run into concurrency issues (also mentioned by Mr Bertrand).
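To make the concurrency point concrete, the classic hand-rolled gap-free key is a single-row counter table; every insert has to serialize on that row, which is exactly the bottleneck (a sketch, illustrative names):

CREATE TABLE dbo.TransactionCounter (NextId int NOT NULL);
INSERT INTO dbo.TransactionCounter VALUES (1);

-- Inside each inserting transaction:
DECLARE @NewId int;
UPDATE dbo.TransactionCounter
SET @NewId = NextId = NextId + 1;  -- holds an exclusive row lock until commit
-- Use @NewId as the key; concurrent inserts queue up on the UPDATE above.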
Workaround trick:
CREATE OR ALTER TRIGGER TGR_Transaction_Identity_Fix
ON [dbo].[TBL_Transaction]
FOR INSERT
AS
BEGIN
    -- Reseed so the next identity value is MAX(TransactionId) + 1.
    -- Note: this serializes inserts and can misbehave under concurrent load.
    DECLARE @RESEEDVAL INT
    SELECT @RESEEDVAL = MAX(TransactionId) FROM [dbo].[TBL_Transaction]
    DBCC CHECKIDENT([TBL_Transaction], RESEED, @RESEEDVAL)
END
This trigger resets the identity seed after each insert.
Simply put, the problem occurs when I want to edit selected rows and then apply the changes. I'm sure it worked some time ago. I tried re-downloading the Postgres driver in preferences (yeah, I use Postgres). Anyone faced the same issue? Anyone succeed?
PS. Running on 142.4861.1.
I found the read-only checkbox in the connection preferences; it was not set. Toggling it didn't help; upgrading and resetting didn't help either.
In my case, in version 2020.1 of DataGrip (the SQL was run from an open file against a unique table, so the SELECT worked as expected, but when I tried to edit, an "unresolved table reference" error appeared), specifying the schema in the query helped. So SELECT * FROM users; was changed to SELECT * FROM schemadb.users; and that fixed it. Probably there is a bug. I had tried all the methods mentioned above.
Try synchronizing the database connection. It helped me with a MySQL connection.
What actually helped was toggling the Auto-commit checkbox in the console; after that, everything ran flawlessly.
The only thing that actually worked for me, after trying everything above multiple times, was deleting each DB connection and recreating it from scratch.
This can be due to default settings; make sure your Transaction Mode settings are as below.
I had the same issue with DataGrip 2020.2. I followed all the methods, but what worked for me was simply deleting the connection and creating a new one manually (never duplicating it). It works!
Setting and then clearing Read-only in Data Source Properties helped me.
What worked for me was removing a field alias - going from this:
SELECT
    l.MSKU Item_SKU,
    l.Supplier,
    l.ASIN,
    l.title,
    l.Buy_Price
FROM listings l
WHERE l.Buy_Price IS NULL
ORDER BY l.Supplier, l.listingID desc;
to this:
SELECT
    l.MSKU,
    l.Supplier,
    l.ASIN,
    l.title,
    l.Buy_Price
FROM listings l
WHERE l.Buy_Price IS NULL
ORDER BY l.Supplier, l.listingID desc;
That's all it took for me to be able to edit the results of the query.
If your query uses a field alias (renaming a column instead of using the actual column name), then DataGrip sets the data results as read-only.
Solution:
Rewrite the query using the field names exactly as they appear in the table and rerun it. Then you will be able to edit the rows.
Example
Rewrite this:
select id,interest_recalcualated_on, interest_recalculation_enabled alias from m_loan;
..to this:
select id,interest_recalcualated_on, interest_recalculation_enabled from m_loan;
Nothing worked. I had to update DataGrip from 2017.2 to 2018.3.
I had to open the project by navigating to: /home/user/.DataGrip2017.2/config/projects/my_project/
All my scripts for this project were gone, as I did not want to import the config from the old version of DataGrip. So I'll probably need to downgrade, get the scripts, and upgrade again.
I hit the same issue, not Postgres but MySQL, in PhpStorm 2019.1, when I had two schemas available on the same DB connection. My query select * from users where full_name like '%handy%'; produced a result table that couldn't be edited, even though the console reported it was querying the stage schema. A more specific query, select * from stage.users where full_name like '%handy%';, using the fully qualified table name, produced a results table that could be edited inline.
The only way I was able to get around this issue was to remove the database connection details for the given connection and recreate them from scratch. While this was happening on this specific connection, editing worked fine on other connections, which suggests the problem might be related to specific parameters of the connection in question.
For me, when I changed my SELECT query back to * (the wildcard to select all columns) instead of specific columns, edit and save started working again. This is strange! It may be worth reporting a bug to IntelliJ. I'm using DataGrip 2020.1 on macOS Catalina 10.15.3.
I thought I had the same problem, but then realized I just needed to submit the changes (with the button or Cmd+Enter) for them to be applied.
This answer is 4 years late though. Maybe the bug was already fixed haha, but this may help someone else.
The problem for me was the obvious (well, obvious after identifying it): using a MySQL keyword as a field name (yes, I know it's not a good idea, but that's how it is sometimes). So this caused the error:
select id,name,value from mytable;
The correct way to write this, of course, is to use backticks:
select id,name,`value` from mytable;
and this solved it for me.
It seems like DataGrip doesn't let you change data in the results table if you select specific columns with joins.
So if you do select table1.column1, table2.column2 from <table1> inner join <table2> and try to change a value in column1, you will get a "This table is read only" warning.
Only if you do select * from <table> can you edit the cell values.
Try this:
In the IDE settings (Ctrl+Alt+S), go to Database | General.
Select the Open results in new tab checkbox and click OK.
Re-run your query, and you'll be able to edit and commit the changes in the new tabs.
I'm interested in using the following audit mechanism in an existing PostgreSQL database.
http://wiki.postgresql.org/wiki/Audit_trigger
but, would like (if possible) to make one modification. I would also like to log the primary key's value where it could be queried later. So, I would like to add a field named something like "record_id" to the "logged_actions" table. The problem is that every table in the existing database has a different primary key field name. The good news is that the database has a very consistent naming convention: it's always <table name>_id. So, if a table is named "employee", the primary key is "employee_id".
Is there any way to do this? Basically, I need something like OLD.FieldByName(x) or OLD[x] to get the value of the id field to put into the record_id field in the new audit record.
I do understand that I could just create a separate, custom trigger for each table that I want to keep track of, but it would be nice to have it be generic.
edit: I also understand that the key value does get logged in the old/new data fields. But what I would like is to make querying the history easier and more efficient. In other words:
select * from audit.logged_actions where table_name = 'xxxx' and record_id = 12345;
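Presumably, once record_id exists, an index like the following would keep that lookup fast (a sketch):

CREATE INDEX logged_actions_table_record_idx
    ON audit.logged_actions (table_name, record_id);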
another edit: I'm using PostgreSQL 9.1
Thanks!
You didn't mention your version of PostgreSQL, which is very important when writing answers to questions like this.
If you're running PostgreSQL 9.0 or newer (or able to upgrade) you can use this approach as documented by Pavel:
http://okbob.blogspot.com/2009/10/dynamic-access-to-record-fields-in.html
In general, what you want is to reference a dynamically named field in a record-typed PL/PgSQL variable like 'NEW' or 'OLD'. This has historically been annoyingly hard, and is still awkward but is at least possible in 9.0.
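Roughly, the 9.0+ trick inside a trigger function looks like this (a sketch that leans on the "<table name>_id" convention from the question; the real logged_actions insert has more columns):

CREATE OR REPLACE FUNCTION audit_log_row()
RETURNS trigger
LANGUAGE plpgsql AS
$$
DECLARE
    pk_col      text := TG_TABLE_NAME || '_id';
    v_record_id bigint;
BEGIN
    -- Dynamically read OLD.<table>_id; EXECUTE ... USING needs 9.0+.
    EXECUTE 'SELECT ($1).' || quote_ident(pk_col)
    INTO v_record_id
    USING OLD;

    INSERT INTO audit.logged_actions (table_name, record_id)  -- plus the other columns
    VALUES (TG_TABLE_NAME, v_record_id);

    RETURN OLD;
END;
$$;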
Your other alternative - which may be simpler - is to write your audit triggers in plperlu, where dynamic field references are trivial.
I have a (very simple and standard) UPDATE statement which works fine either directly in Query Analyser, or executed as a stored procedure in Query Analyser.
UPDATE A
SET
    A.field1 = B.col1
    , A.field2 = B.col2
FROM
    tblA AS A INNER JOIN tblB AS B
    ON A.pk1 = B.pk1 AND A.pk2 = B.pk2
The problem is when I execute the same stored proc via a Microsoft Access Data Project (ADP), by double-clicking on the sproc name or using the Run option: it says "query ran successfully but did not return records" AND does NOT update the records when I inspect the tables directly.
Before anyone even says "the syntax of MS Access is different from SQL Server T-SQL", remember that with an ADP everything happens on the server, and one is actually passing through to T-SQL.
Any bright ideas from any ADP gurus out there?
Gotcha. Responding to my own question for the benefit of anyone else.
Tools / Options / Advanced / Client-Server Settings / Default max records is set to 10,000 (presumably this is the default). Change this to 0 for unlimited.
My table had 100,000+ rows, and whichever set of 10,000 it was updating was difficult to find (among a sea of 90,000+ un-updated rows). Hence the update did not appear to work fully.
Try and see whether the query gets executed on the SQL Server using SQL profiler.
Also, I think you might need to close the linked table & re-open it to see the updated records.
Does that work?
Run the query with SQL Profiler running. Before you start the trace, add in all the error events. This will give you any errors that SQL Server is generating that the Access ADP might not be showing correctly (or at all).
Feel free to post them here.
Just as a reference, here's a paper I wrote on update queries that discusses some of the issues associated with when they fail.
http://www.fmsinc.com/microsoftaccess/query/snytax/update-query.html
I seem to remember that I always got the "didn't return any rows" message and had to simply turn off the messaging. It's because it isn't returning any rows!
As for the other issue: sometimes there's a primary key problem. Does the table being updated have a primary key in SQL Server? If so, check the view of the table in Access; sometimes that link doesn't come through. It's been a while, so I could be wrong, but I think you may need to look at the design view of the table in Access and add the primary key there.
EDIT: Additional thought:
In your debugging, try throwing in PRINT statements to see what the values of your inputs are. Is it actually picking up the data from the table as you expect when you execute from Access?
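For example (a sketch; @pk1 stands in for whatever parameters the proc actually takes):

-- At the top of the proc: echo the inputs.
PRINT 'pk1 = ' + CAST(@pk1 AS varchar(20));

-- Immediately after the UPDATE: how many rows it touched.
PRINT 'Rows updated: ' + CAST(@@ROWCOUNT AS varchar(12));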