I have a (very simple and standard) UPDATE statement which works fine in Query Analyzer, whether run directly or executed as a stored procedure.
UPDATE A
SET
A.field1 = B.col1
, A.field2 = B.col2
FROM
tblA AS A INNER JOIN tblB AS B
ON A.pk1 = B.pk1 AND A.pk2 = B.pk2
The problem is that when I execute the same stored proc via a Microsoft Access ADP (by double-clicking on the sproc name or using the Run option), it says "query ran successfully but did not return records" AND does NOT update the records when I inspect the tables directly.
Before anyone even says "the syntax of MS Access is different from SQL Server T-SQL", remember that with an ADP everything happens on the server and you are actually passing straight through to T-SQL.
Any bright ideas from any ADP gurus out there?
Gotcha. Responding to my own question for the benefit of anyone else.
Tools / Options / Advanced / Client-Server Settings / Default max records is set at 10,000 (presumably this is the default). Change this to 0 for unlimited.
My table had 100,000+ rows and whatever set of 10,000 it was updating was difficult to find (among a sea of 90,000+ un-updated rows). Hence the update did not work fully as expected.
Try to see whether the query actually gets executed on the SQL Server by using SQL Profiler.
Also, I think you might need to close the linked table & re-open it to see the updated records.
Does that work?
Run the query with SQL Profiler running. Before you start the trace, add in all the error events. This will give you any errors that the SQL Server is generating that the Access ADP might not be showing correctly (or at all).
Feel free to post them here.
Just as a reference, here's a paper I wrote on Update Queries that discusses some of the issues associated with why they fail.
http://www.fmsinc.com/microsoftaccess/query/snytax/update-query.html
I seem to remember that I always got the "didn't return any rows" message and had to simply turn off the messaging. It's because it isn't returning any rows!
As for the other issue - sometimes there's a primary key problem. Does the table being updated have a primary key in SQL Server? If so, check the view of the table in Access - sometimes that link doesn't come through. It's been a while, so I could be wrong, but I think you may need to look at the design view of the table while in Access and add the primary key there.
EDIT: Additional thought:
In your debugging, try throwing in PRINT statements to see what the values of your inputs are. Is it actually picking up the data from the table as you expect when you execute from Access?
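For example, printing @@ROWCOUNT right after the UPDATE when running the proc from Query Analyzer (just a sketch of the idea) shows how many rows the server actually touched:

-- ... the existing UPDATE from the question ...
PRINT 'Rows updated: ' + CAST(@@ROWCOUNT AS varchar(20));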
I know you can't control transactions in functions or procedures, but I'm wondering if there's something like that or some alternative.
The problem: I have a function that's very expensive that turns things like a customer id into a nice html report. Trouble is - it takes seconds... so I've put into the function something that basically looks at a cache to see if a pre-rendered one exists, returning it if it does - and if it doesn't - it adds to the cache afterwards - so it will only ever render things once.
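The cache check inside the function currently looks roughly like this (a sketch with hypothetical table, column, and function names; render_customer_report stands in for whatever takes the seconds):

create table if not exists report_cache (
    customer_id integer primary key,
    report      text not null
);

create or replace function customer_report(cid integer) returns text as $$
declare
    html text;
begin
    -- return the pre-rendered report if we already have one
    select report into html from report_cache where customer_id = cid;
    if found then
        return html;
    end if;

    -- otherwise render it (the expensive part) and remember the result
    html := render_customer_report(cid);  -- hypothetical expensive renderer
    insert into report_cache (customer_id, report)
    values (cid, html)
    on conflict (customer_id) do nothing;

    return html;
end;
$$ language plpgsql;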
Now, given most things will never change, I sort of want to do this across everything up front. Given the time involved it would probably take about a year to run, which is actually OK: this system has to run for ten. Trouble is, I don't want it to lock anything in the database, so I want it to trickle along, doing one at a time and committing immediately.
I investigated pg_cron, because that seemed like an option, but the version of Aurora I am using doesn't support it. Any ideas how I'd do this inside the database?
Whatever you do, don't code that as a function running inside the database. It is fine to do the calculations in the database, but generating a report and iterating over customers belong in client code. That way, committing after each report is not a problem.
Add a text column to the customer table to hold the html report.
Put trigger(s) on the table(s) whose content influences the html report, so that the report column is refreshed whenever that content changes.
This gives you instant retrieval and only (re)calculates it when needed.
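A minimal sketch of that trigger approach, assuming a report column on customer, a hypothetical orders table as one of the influencing tables, and the my_report_proc(id) function used below:

create or replace function refresh_customer_report() returns trigger as $$
begin
    update customer
    set report = my_report_proc(new.customer_id)
    where id = new.customer_id;
    return new;
end;
$$ language plpgsql;

-- "execute function" is the PostgreSQL 11+ spelling; older versions use "execute procedure"
create trigger trg_refresh_customer_report
after insert or update on orders
for each row execute function refresh_customer_report();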
Or if data is stable:
create table customer_report (
customer_id int not null primary key,
report text not null
);
insert into customer_report
select id, my_report_proc(id)
from customer;
Getting this error, with no table listed, when the user logs in to the order system.
The query is, roughly
SELECT FormParts.*, qrySomeOtherColumns.*, CBool(0) AS ObjMissing INTO Temp_SessionFormParts
FROM FormParts INNER JOIN qrySomeOtherColumns ON FormParts.ID = qrySomeOtherColumns.FormPartID
WHERE EmpId = EmpID();
The data is coming from linked tables with a Postgres back end.
The data is going into the local table "Temp_SessionFormParts"
This is a local table in the front end, so no one else should have access to it.
Everyone else has a copy of the same database, but no one else gets the error.
Where it gets weird is that she only gets the error when she starts the app with our standard shortcut that copies the latest version.
copy "\\NASdiskstation\Install\Deploy\pgFrontEnd.accde" "C:\Merge Documents\"
start "" MSAccess.exe "C:\Merge Documents\pgFrontEnd.accde"
But if she starts Access from the Start Menu, and then opens the same database from the selections, it works fine.
Seems to me that those two routes should give the same result since she's already copied the latest to C:.
I went ahead and posted this question, since I had it written up, because I've seen this error a few times. So this may be helpful.
Normally, you can't SELECT INTO an existing table. But Access detects that situation and helpfully deletes the table before executing the insert.
Somehow this was failing.
DROP TABLE Temp_SessionFormParts
before the SELECT ... INTO fixed the issue.
No idea why it sometimes didn't work for some people or why running Access directly changed that.
Good day,
I run many databases under InterBase XE7 and, now, 2017.
Lately I got some strange behavior in one of the databases:
A table with a primary key was found hosting many rows with the same value in the key column, as in the image below.
We can see that SCRIPTTYPE is a primary key column and that it contains the value MATRIX many times, with no spaces or strange characters (I checked).
I was able to back up / restore without issues.
I am puzzled by this and am wondering if anybody has encountered something similar?
And if so, how did it happen?
Thanks.
[screenshot: rows from the table, showing the SCRIPTTYPE primary key column containing MATRIX repeated across many rows]
Run this query to be sure:
select SCRIPTTYPE, count(*)
from yourtable
group by SCRIPTTYPE
order by 2 desc
If you get a count > 1, then I'd argue it's simply a bug and you should contact support. For them to assist you they'll almost certainly need your database, so you should be prepared to provide that. Based on your description, you should be able to drop all tables except that one, and drop all fields except your key, then back up and restore to get the simplest test case.
I use SQL Server with ASP.NET Core and EF Core. After each record is added, the identity column's value jumps by about 1000, creating a gap between the current row and the previously added row.
Questions
Is there any way to prevent this?
How can I delete the gaps that have already been created?
If I use a GUID for the key columns to prevent that issue, are there any problems (performance or otherwise)?
Is there some way to handle this on the server side, or through EF Core?
Thank you in advance for your help.
For the reason behind the 1000-value gaps, see Aaron Bertrand's answer.
It doesn't really make sense to "want" to delete the gaps. The content of an identity column contains no semantic information. It correlates to nothing "in the world" outside the database. The gaps are as meaningless as the values themselves.
I don't see how a uniqueidentifier would "prevent" that issue. A uniqueidentifier may be "meaningfully" sortable (if you use newsequentialid()), but there's no sense in which any particular value is "one more" than a previous value.
You can certainly try to build your own key generating algorithm that does not produce gaps, but you will run into concurrency issues (also mentioned by Mr Bertrand).
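For illustration, a home-grown gap-free generator usually looks something like the sketch below (hypothetical names), shown mainly to make the concurrency cost visible: every insert serializes on the counter row until its transaction commits.

CREATE TABLE dbo.KeyCounter (
    Name      sysname NOT NULL PRIMARY KEY,
    LastValue int     NOT NULL
);
INSERT INTO dbo.KeyCounter (Name, LastValue) VALUES (N'Transaction', 0);

-- Allocate the next value inside the same transaction as the insert that uses it.
-- The exclusive lock on this row is held until commit, so concurrent inserts queue
-- behind it; a rollback also rolls back the counter, which is why there are no gaps.
DECLARE @next int;
UPDATE dbo.KeyCounter
SET @next = LastValue = LastValue + 1
WHERE Name = N'Transaction';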
Workaround trick:
CREATE OR ALTER TRIGGER TGR_Transaction_Identity_Fix
ON [dbo].[TBL_Transaction]
FOR INSERT
AS
BEGIN
    -- Find the current maximum key and reseed the identity to it,
    -- so the next insert continues from MAX + 1 with no gap.
    DECLARE @RESEEDVAL INT;
    SELECT @RESEEDVAL = MAX(TransactionId) FROM [dbo].[TBL_Transaction];
    DBCC CHECKIDENT([TBL_Transaction], RESEED, @RESEEDVAL);
END
This trigger will reset the identity after each insert.
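For completeness: when the 1000-value jumps come from the identity cache being discarded on a restart or failover (the scenario in Aaron Bertrand's answer), SQL Server 2017+ can simply turn that cache off at the database level, which avoids the trigger entirely (at some insert-performance cost):

ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;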
I've been tasked with finding a way to migrate data into a DB2 AS400 database. When the data is entered (currently manually) on the front end, the system is doing some calculations and inserting the results in a table.
My understanding is that it's using a trigger to do so. I don't know very much about this stuff, but I have written code to directly insert values into that same table. Is there a way for me to figure out what trigger is being fired when users enter data manually?
I've looked in QSYS2/SYSTRIGGERS and besides not making much sense to me, I see no triggers that belong to the SCHEMA with my table in it.
Any help here would be awesome, as I am stuck.
SELECT *
FROM QSYS2.SYSTRIGGERS
WHERE TABSCHEMA = 'MYSCHEMA'
AND TABNAME = 'MYTABLE'
Should work fine.
If you'd prefer to use a 5250 command line, the Display File Description (DSPFD) command will show you the triggers on a file (table)
DSPFD FILE(MYSCHEMA/MYTABLE) TYPE(*TRG)
Lastly, trigger information is available via the IBM i Navigator GUI. Either the older fat client version or the newer web based one.