ROWCOUNT field in SYSTABLES not updated after rows have been deleted - SQLBase

I use the ROWCOUNT field in SYSTABLES to quickly get the row count of all my tables from a .NET app using OLE DB (I don't want to use a COUNT(*) query to get it because it takes too much time on big tables).
The problem is that after deleting rows from a table, the ROWCOUNT number in SYSTABLES is not updated. I tested some commands and even found that running ROWCOUNT TABLENAME in SQLTalk works and is very fast, but if I try to issue that as a query from .NET through OLE DB it returns nothing. Sample code:
using (var connection = ConnectionFactory.CreateConnection(NombreBaseModelo))
{
    using (OleDbCommand cmd = connection.CreateCommand())
    {
        foreach (Tabla ot in LstTables)
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandTimeout = 5000;
            cmd.CommandText = "ROWCOUNT " + ot.NAME;
            var sRet = cmd.ExecuteScalar(); // null when the command produces no result set
            ot.ROWCOUNT = int.Parse(sRet.ToString()); // throws when sRet is null or empty
        }
    }
}
Is there any way to tell SQLBase to update ROWCOUNT for each table in SYSTABLES?
Or, as an alternative, is there any way to make this code work?
Thanks!

SYSTABLES.ROWCOUNT only gets updated when an UPDATE STATISTICS is done, so it's not guaranteed to be accurate until you execute 'Update Statistics on table ' + ot.NAME; after that it is.
That's probably not what you want when you just need a quick row count.
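For completeness, that refresh-then-read sequence would look something like this (a sketch based on the UPDATE STATISTICS syntax above; it assumes SYSADM.SYSTABLES exposes NAME and CREATOR columns alongside ROWCOUNT, and MYTABLE is a placeholder):

UPDATE STATISTICS ON TABLE SYSADM.MYTABLE;

SELECT ROWCOUNT FROM SYSADM.SYSTABLES
WHERE NAME = 'MYTABLE' AND CREATOR = 'SYSADM';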
Does your 'ot.NAME' variable include the table owner? Usually it's 'SYSADM.'.
Have you checked the value returned in 'sRet'? Maybe it's the int.Parse call that is failing.
Otherwise, create an index on the primary key and execute a COUNT(*); it should be about as fast as ROWCOUNT anyway if the index is being used.
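For instance (MYTABLE is again a placeholder):

SELECT COUNT(*) FROM SYSADM.MYTABLE;

With an index on the primary key, the optimizer can usually answer this from the index alone rather than scanning the whole table.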
Alternatively, write a SQLBase function or stored procedure that executes the ROWCOUNT command natively and returns the correct count (as SQLTalk does), and call that from your .NET app.

Related

How to write an RQLQuery?

I am new to ATG, and I have this question: how can I write an RQLQuery that provides me the same data as this SQL query?
select avg(rating) from rating WHERE album_id = ?;
I'm trying it this way:
RqlStatement statement;
Object rqlparam[] = new Object[1];
rqlparam[0] = album_Id;
statement = RqlStatement.parseRqlStatement("album_id= ? 0");
MutableRepository repository = (MutableRepository) getrMember();
RepositoryView albumView = repository.getView(ALBUM);
This query returns an item for a specific album_id; how can I improve my RQL query so that it returns the average rating value, like the SQL query above?
There is no RQL syntax that allows calculating an average value over the items in the query, so you have two options. You can either execute your current statement:
album_id= ? 0
And then loop through the resulting RepositoryItem[] and calculate the average yourself (this could be time-consuming on large datasets and means you'll have to load all the results into memory, so it's perhaps not the best solution), or you can implement a SqlPassthroughQuery that you execute:
Object params[] = new Object[1];
params[0] = albumId;
Builder builder = (Builder) view.getQueryBuilder();
String str = "select avg(rating) from rating WHERE album_id = ? group by album_id";
RepositoryItem[] items = view.executeQuery(builder.createSqlPassthroughQuery(str, params));
This will execute the average calculation on the database (something it is quite good at doing) and save you CPU cycles and memory in the application.
That said, don't make a habit of using SqlPassthroughQuery, as it means you don't get to use the repository cache as much, which could be detrimental to your application.

Cannot access temp table in dynamic TSQL

I am creating a temporary table in some dynamic SQL, but when I reference it afterwards it throws an "Invalid object name '#Settlement_Data_Grouped'" error.
I assume this is because the dynamic SQL runs in its own separate scope, so the table is dropped and not available to the outer SQL? It works when I use ##Settlement_Data_Grouped or create a permanent table, but that doesn't help when multiple people are calling the sproc.
I'm thinking I could check whether the table exists, but then I would have to delete its contents, and different users may require different outputs, so that would not work either.
So does anyone have a solution/suggestion for how multiple people can call the same sproc?
IMHO you do not need the dynamic SQL. You can change your code to something like the version below, and it will give you the same result you are trying to achieve. Please check it for syntax errors. I would have provided the full query, but your question contains a screenshot instead of the code. So here it goes.
If you want to use Temp Tables:
SELECT
    ......
INTO #Settlement_Data_Grouped
FROM #Settlement_Data
WHERE (Payment_Date < Settlement_Date AND @outputType = 1) -- this will be true when you have @outputType = 1
    OR @outputType = 0 -- this will be true when you have @outputType = 0
GROUP BY Part_No
    ,NAME
    ,Order_No
    ,Invoice_No
-------------
SELECT
    ......
FROM #Settlement_Data_Grouped
If you want to use a CTE:
WITH Settlement_Data_Grouped
AS (
    SELECT
        ......
    FROM #Settlement_Data
    WHERE (Payment_Date < Settlement_Date AND @outputType = 1) -- this will be true when you have @outputType = 1
        OR @outputType = 0 -- this will be true when you have @outputType = 0
    GROUP BY Part_No
        ,NAME
        ,Order_No
        ,Invoice_No
)
SELECT
    ......
FROM Settlement_Data_Grouped
The problem is that the temp table only exists within the scope of the dynamic SQL execution context. The way around it would be to create the temp table outside of the dynamic SQL and then insert into it:
CREATE TABLE #Settlement_Data_Grouped (PartNo varchar(50) ......)

INSERT INTO #Settlement_Data_Grouped
EXEC (@selectSQL)
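To illustrate the whole pattern end to end, here is a minimal, self-contained sketch; the column names and the body of @selectSQL are invented for the example:

-- hypothetical source data, standing in for the real #Settlement_Data
CREATE TABLE #Settlement_Data (Part_No varchar(50), Amount money);
INSERT INTO #Settlement_Data VALUES ('A1', 10), ('A1', 20), ('B2', 5);

-- created in the OUTER scope, so it is still visible after EXEC returns
CREATE TABLE #Settlement_Data_Grouped (Part_No varchar(50), Total money);

DECLARE @selectSQL nvarchar(max) =
    N'SELECT Part_No, SUM(Amount) FROM #Settlement_Data GROUP BY Part_No';

-- the dynamic batch can see the outer session's temp tables, so this works;
-- a temp table created INSIDE the EXEC would be dropped when the batch ends
INSERT INTO #Settlement_Data_Grouped
EXEC (@selectSQL);

SELECT Part_No, Total FROM #Settlement_Data_Grouped;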

SQL UPDATE statement updates wrong rows

I have the following code in Postgres
select op.url
from identity.legal_entity le
join identity.profile op on le.legal_entity_id = op.legal_entity_id
where op.global_id = '8wyvr9wkd7kpg1n0q4klhkc4g'
which returns 1 row.
Then I try to update the url field with the following:
update identity.profile
set url = 'htpp:sam'
where identity.profile.url in (
    select op.url
    from identity.legal_entity le
    join identity.profile op on le.legal_entity_id = op.legal_entity_id
    where global_id = '8wyvr9wkd7kpg1n0q4klhkc4g'
);
But the above ends up updating more than one row; in fact it updates all of the rows of the identity.profile table.
I would assume that since the first Postgres statement returns one row, at most one row could be updated, but I am getting the wrong effect where all of the rows are updated. Why? Please help a newbie fix the above update statement.
Instead of using profile.url to identify the rows you want to update, use the primary key; that is what it is there for. Many rows can share the same url value, so filtering on url can match far more rows than the single row your SELECT returned.
So if the primary key column is called id, the statement could be modified to:
UPDATE identity.profile
SET ...
WHERE identity.profile.id IN (SELECT op.id FROM ...);
But you can do this much more simply in PostgreSQL with
UPDATE identity.profile op
SET url = 'htpp:sam'
FROM identity.legal_entity le
WHERE le.legal_entity_id = op.legal_entity_id
AND le.global_id = '8wyvr9wkd7kpg1n0q4klhkc4g';

How to avoid multiple inserts in PostgreSQL

In my query I'm using a for loop. On every iteration of the for loop, some values have to be inserted into a table. This is time-consuming because the loop covers many records, so an insert happens on every single iteration. Is there any way to perform the insertion once, at the end, after the for loop has finished?
For i in 1..10000 loop
    -- coding
    insert into datas.tb values (j, predictednode); -- j and predictednode are variables that change on every loop iteration
End loop;
Instead of inserting on every iteration, I want the insertion to happen once at the end.
If you show how the variables are calculated, it may be possible to build something like this:
insert into datas.tb
select
calculate_j_here,
calculate_predicted_node_here
from generate_series(1, 10000)
One possible solution is to build one large VALUES string. In Java, something like this:
StringBuffer buf = new StringBuffer(100000); // big enough?
for (int i = 1; i <= 10000; ++i) {
    buf.append("(")
       .append(j)
       .append(",")
       .append(predicted_node)
       .append("),"); // whatever j and predicted_node are
}
buf.setCharAt(buf.length() - 1, ' '); // kill the last comma
String query = "INSERT INTO datas.tb VALUES " + buf.toString() + ";";
// send query to DB, just once
The fact that j and predicted_node appear to be constant has me a little worried, though. Why are you inserting the same constant values 10000 times?
Another approach is to do the predicting in a Postgres procedural language, and have the DB itself calculate the value on insert.
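For example, the prediction could live in a function so that a single set-based INSERT does all the work. A minimal sketch (predict() is a hypothetical stand-in for the real calculation):

-- placeholder for the real prediction logic
CREATE OR REPLACE FUNCTION predict(i integer) RETURNS integer AS $$
BEGIN
    RETURN i * 2;
END;
$$ LANGUAGE plpgsql;

-- one set-based insert instead of 10000 single-row inserts
INSERT INTO datas.tb
SELECT g, predict(g)
FROM generate_series(1, 10000) AS g;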

Manipulate & use the results of UPDATE ... RETURNING

Here is a simple PostgreSQL update returning some data:
UPDATE table set num = num + 1
WHERE condition = true
RETURNING table.id, table.num
Is there a way to further use the returned results, as if they came from a select statement? Something like this:
INSERT into stats
(id, completed)
SELECT c.id, TRUE
FROM
(
UPDATE table set num = num + 1
WHERE condition = true
RETURNING table.id, table.num
) c
where c.num > 5
Or do I have to save the returned results into my application and then build a new query from them?
As of version 9.1, you can use an UPDATE ... RETURNING in a "Common Table Expression" ("CTE"), which for most purposes can be thought of as a named sub-query.
So for your purposes, you could use something like this:
WITH update_result AS
(
UPDATE table set num = num + 1
WHERE condition = true
RETURNING table.id, table.num
)
INSERT into stats
(id, completed)
SELECT c.id, TRUE
FROM update_result as c
WHERE c.num > 5
If you're using a version of Postgres below 9.1, then I think you will have to grab the result into a variable in some procedural code - either your application, or a database function (probably written in PL/pgSQL).
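For instance, a minimal PL/pgSQL sketch of that approach (my_table, id, num, and condition are placeholders, since the question's literal table name, table, is a reserved word):

CREATE OR REPLACE FUNCTION bump_and_record() RETURNS void AS $$
DECLARE
    r RECORD;
BEGIN
    -- loop over exactly the rows the UPDATE touched, via RETURNING
    FOR r IN
        UPDATE my_table SET num = num + 1
        WHERE condition = true
        RETURNING id, num
    LOOP
        IF r.num > 5 THEN
            INSERT INTO stats (id, completed) VALUES (r.id, TRUE);
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;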
That syntax won't work (unfortunately, as it would be convenient).
Either you update and then issue another query, or you do everything in a stored procedure where you can safely store and handle the query results, so that you have just one single database call from your application.