I have a table with a JSON column, and some of the values in it are numbers, but I want all the values to be text. For example, I have {"budget": 500}, but I want it to be {"budget": "500"}. I have tried using the JSONB_SET function, but even after Postgres reports N rows updated, the values are still numbers when I go to retrieve the records. I was hoping that somebody may have encountered this issue. Here's what I've tried that isn't working:
UPDATE my_table
SET data = JSONB_SET(data, '{budget}', data->'budget'::text)
WHERE data ? 'budget' = true;
Since this is a very large table, hardcoding values is not feasible. If anybody knows why this isn't working or if there is something that does work, please let me know, thank you!
You can force the conversion of a JSONB number to text with the function quote_ident():
UPDATE my_table
SET data = jsonb_set(data, '{budget}', quote_ident(data->>'budget')::jsonb)
WHERE data ? 'budget'
-- you can add this condition to avoid updating non-numbers
-- AND jsonb_typeof(data->'budget') = 'number'
Note that data->'budget'::text does nothing: the cast binds to the literal 'budget', not to the extracted JSON value, so the expression is equivalent to data->'budget'.
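An equivalent trick, if you'd rather avoid the identifier-quoting detour: to_jsonb() turns the text extracted by ->> directly into a JSON string. A minimal sketch against the same table and key as above:
UPDATE my_table
SET data = jsonb_set(data, '{budget}', to_jsonb(data->>'budget'))
WHERE data ? 'budget'
  AND jsonb_typeof(data->'budget') = 'number';  -- skip values that are already strings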
Using FireDAC's Array DML feature, it doesn't seem possible to use a RETURNING clause (in my case PostgreSQL).
If I run a simple insert query such as:
with FDQuery do
begin
  SQL.Text := 'INSERT INTO temptab(email, name) '
            + 'VALUES (''email1'', ''name1''), '
            + '(''email2'', ''name2'') '
            + 'RETURNING id';
  Open;
end;
The query returns two records containing the id for the newly inserted records.
For larger inserts I would prefer to use Array DML, but in some cases I also need to be able to get returned data.
The Open function does not have an ATimes parameter. Whilst you can call Open with Array DML, it results in the insertion and return of just the first record.
I cannot find any other properties or methods which would seem to facilitate this. I have posted on Praxis to see if anyone there has any ideas, but I have had no response. I have also posted this as a new feature request on Quality Central.
If anyone knows of a way of achieving this using Array DML, I would be grateful to hear, but my principal question is what is the most efficient route for retrieving the inserted data (principally IDs) from the DB if I persist with Array DML?
A couple of ideas occur to me, neither of which seem tremendously attractive:
Within StartTransaction and Commit, following the insertion, retrieve the id of the last inserted record and then count backwards to grab the requisite number. This seems to me to be a bit risky, although as it is within a transaction it should probably be okay.
Add an integer field to the relevant table and populate each inserted record with a unique identifier; following the insert, retrieve the records with that identifier. Whilst this would ensure the return of the inserted records, it would be relatively inefficient unless I index the field being used to store the identifier.
Both the above would be dependent on records being inserted into the DB in the order they are supplied to the Array DML, but I assume/hope that is a given.
I would appreciate views on the best (i.e. most efficient and reliable) of the above options, and any suggestions for alternative, even better, options, even if those entail abandoning Array DML where a RETURNING clause is needed.
You actually can get all returned IDs. You can tell FireDAC to store the result values in parameters with {INTO }. See, for example, the following code:
FDQuery.SQL.Text := 'INSERT INTO tablename (fieldname) VALUES (:p1) RETURNING id {into :p2}';
FDQuery.Params.ArraySize := 2;
// input values, one per Array DML element
FDQuery.Params[0].AsStrings[0] := 'one';
FDQuery.Params[0].AsStrings[1] := 'two';
// output parameter that will receive the returned ids
FDQuery.Params[1].ParamType := ptInputOutput;
FDQuery.Params[1].DataType := ftLargeInt;
FDQuery.Execute(2, 0);
ID1 := FDQuery.Params[1].AsLargeInts[0];
ID2 := FDQuery.Params[1].AsLargeInts[1];
This works when one row is returned per Array DML element. I think it will not work for more than one row, but I've not tested it; if it does work, you would have to know which result corresponds to which Array DML element.
Note that FireDAC throws an access violation (AV) when zero rows are returned for one or more elements in the Array DML, for example when you UPDATE a row that was deleted in the meantime. The AV has nothing to do with Array DML itself: you'll get an AV from a plain FDQuery.Execute; as well.
I've suggested another option earlier on the delphipraxis forum, but that is a suboptimal solution, as it uses a temp table to store the IDs:
https://en.delphipraxis.net/topic/4693-firedac-array-dml-returning-values-from-inserted-records/
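For reference, that temp-table workaround boils down to something like this on the PostgreSQL side (a rough sketch; inserted_ids is a name I made up):
CREATE TEMP TABLE inserted_ids (id bigint);

-- executed once per Array DML element, with :email/:name bound by FireDAC
WITH ins AS (
  INSERT INTO temptab (email, name)
  VALUES (:email, :name)
  RETURNING id
)
INSERT INTO inserted_ids SELECT id FROM ins;

-- afterwards, read all generated ids back in one round trip
SELECT id FROM inserted_ids;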
I want to fetch a set of records from the DB where a field matches multiple values (the count of which can't be predetermined). To exemplify:
Tables.A.ID.in(Set of IDs)
Tables.A.ID.notIn(Set of IDs)
I went through the ResultQuery documentation for fetchMany and fetchAny and tried implementing them, but with no success.
I want to fetch all rows in the DB which match the "Set of IDs", where the IDs are NOT UNIQUE.
I am not able to understand how to use in and notIn in my context. Could someone show me, with an example, how to fetch the set of resulting records from the database?
I suspect you're simply looking for this:
Set<Integer> setOfIDs = ...
Result<Record> result =
DSL.using(configuration)
   .select()
   .from(A)
   .where(A.ID.in(setOfIDs))
   .fetch();
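The exclusion case is symmetrical: A.ID.notIn(setOfIDs) renders a NOT IN predicate, roughly this SQL (the set's values become bind variables):
SELECT *
FROM a
WHERE a.id NOT IN (?, ?, ?)
One standard-SQL caveat worth knowing: if the set ever contains a NULL, NOT IN matches no rows at all, so filter NULLs out of the set first.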
I have 2 SQL Server databases, hosted on two different servers. I need to extract data from the first database, which is going to be a list of integers. Then I need to compare this list against data in multiple tables in the second database. Depending on some conditions, I need to update or insert some records in the second database.
My solution:
(WCF Service/Entity Framework using LINQ to Entities)
Get the list of integers from the 1st DB; this takes less than a second and returns 20,942 records.
I use the list of integers to compare against a table in the second DB using the following query:
List<int> pastDueAccts; //Assuming this is the list from Step#1
var matchedAccts = from acct in context.AmAccounts
                   where pastDueAccts.Contains(acct.ARNumber)
                   select acct;
The above query is taking so long that it gives a timeout error, even though the AmAccount table only has ~400 records.
After I get these matchedAccts, I need to update or insert records in a separate table in the second db.
Can someone help me do step #2 more efficiently? I think the Contains function makes it slow. I tried brute force too, by putting in a foreach loop in which I extract one record at a time and do the comparison; that still takes too long and gives a timeout error. The database server shows only 30% of its memory in use.
Profile the SQL query being sent to the database using SQL Profiler: capture the SQL statement sent to the database and run it in SSMS. You should be able to see the overhead imposed by Entity Framework at this point. Can you paste the SQL statement emitted in step #2 into your question?
The query itself is going to have all 20,942 integers in it.
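For illustration, the statement EF builds for that Contains has roughly this shape (table and column names taken from the LINQ query; the huge literal list is the expensive part to parse and plan):
SELECT *
FROM AmAccounts
WHERE ARNumber IN (1001, 1002, 1003 /* ...20,000+ more literals... */)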
If your AmAccount table will always have a low number of records like that, you could just return the entire list of ARNumbers, compare them to the list, then be specific about which records to return:
List<int> pastDueAccts; //Assuming this is the list from Step#1
List<int> amAcctNumbers = (from acct in context.AmAccounts
                           select acct.ARNumber).ToList();

//Get a list of integers that are in both lists
var pastDueAmAcctNumbers = pastDueAccts.Intersect(amAcctNumbers).ToList();
var pastDueAmAccts = from acct in context.AmAccounts
                     where pastDueAmAcctNumbers.Contains(acct.ARNumber)
                     select acct;
You'll still have to worry about how many ids you are supplying to that query, and you might end up needing to retrieve them in batches.
UPDATE
Hopefully somebody has a better answer than this, but with so many records and doing this purely in EF, you could try batching it like I stated earlier:
//Suggest disabling auto detect changes,
//otherwise you will probably have some serious memory issues
//with 2MM+ records
context.Configuration.AutoDetectChangesEnabled = false;

List<int> pastDueAccts; //Assuming this is the list from Step#1
const int batchSize = 100;
for (int i = 0; i < pastDueAccts.Count; i += batchSize)
{
    //Math.Min keeps the final batch from reading past the end of the list
    var batch = pastDueAccts.GetRange(i, Math.Min(batchSize, pastDueAccts.Count - i));
    var pastDueAmAccts = (from acct in context.AmAccounts
                          where batch.Contains(acct.ARNumber)
                          select acct).ToList();
    //process this batch of matched accounts here
}
I'm running DBI in Perl and can't figure out how, after executing a prepared statement, to tell whether the returned row count is 0.
I realize I can set a counter inside my while loop where I fetch my rows, but I was hoping there was a less ugly way to do it.
Based on a quick look here, it seems that after you run
$statement->execute($arg)
you can access the row count via
$statement->rows
The "caveat" in the documentation (linked to in a comment on another answer) is important, and provides the real, correct answer:
Generally, you can only rely on a row count after a non-SELECT execute (for some specific operations like UPDATE and DELETE), or after fetching all the rows of a SELECT statement.
For SELECT statements, it is generally not possible to know how many rows will be returned except by fetching them all. Some drivers will return the number of rows the application has fetched so far, but others may return -1 until all rows have been fetched. So use of the rows method or $DBI::rows with SELECT statements is not recommended.
In order to find out how many rows are in a result set you have exactly two options:
select count(*)
Iterate over the result set and count the rows.
You can introduce some magic via a stored procedure that returns an array or something more fancy, but ultimately one of those two things will need to happen.
So, there is no fancypants way to get that result. You just have to count them :-)
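If the count is genuinely all you need, option 1 is a single cheap round trip. A trivial sketch with a made-up table:
SELECT COUNT(*) AS row_count
FROM my_table
WHERE status = 'active';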
It's a bit late, but if anyone uses Oracle, here comes THE sweet solution:
SELECT
  q.*,
  ROWNUM DB_ROWNUM,
  (SELECT max(ROWNUM) FROM ($sql)) DB_COUNT
FROM
  ($sql) q
$sql is your query, of course. Oracle's optimizer is intelligent enough not to execute everything twice.
Now every fetched row holds its current row number (useful for numbering rows in a paging grid) in DB_ROWNUM and the complete number of rows in DB_COUNT. You still have to fetch at least one row (so it isn't exactly the answer to the question above ;)), but the sweet part comes next:
It's also a very easy way to do start and limit in Oracle and still get the complete number of rows:
SELECT * FROM (
  SELECT /*+ FIRST_ROWS($limit) */
    q.*,
    ROWNUM DB_ROWNUM,
    (SELECT max(ROWNUM) FROM ($sql)) DB_COUNT
  FROM
    ($sql) q
  WHERE
    ROWNUM <= $limit
)
WHERE
  DB_ROWNUM > $start
With this, you can actually fetch only row 51 to 100 for the second page in your grid, but still have the real row number (starting from 1) and the complete count (without start and limit) in every fetched row.
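As an aside, on Oracle 12c and later the same effect can be had with the row-limiting clause plus an analytic count, which avoids nesting $sql a second time. A sketch, untested against the setup above:
SELECT q.*,
       COUNT(*) OVER () AS db_count  -- total rows before the limit is applied
FROM ($sql) q
ORDER BY q.id                        -- some stable sort key; q.id is made up
OFFSET $start ROWS FETCH NEXT $pagesize ROWS ONLY  -- $pagesize = $limit - $start in the terms above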
Try this SQL solution: combine your SQL for the data with a count statement.
select null, null, null, count(*) from tablex
union
select foo, bar, foobar, null from tablex
The first statement will have the count and should be the first row (if not, you can add an ORDER BY to get around it); then you can cycle through the rest of the recordset at your leisure.
CPAN says:
[...] or after fetching all the rows of a SELECT statement.
So doing something like this will probably work:
use feature 'say';

$sth->execute or die $sth->errstr;
say scalar(keys %{ $sth->fetchall_hashref('id') }) . ' row(s).';
I have dynamically generated SQL and am executing it. For me, select count(*) does not seem to be an option, because I would have to generate the query again. The following approach looked clean to me, but you have to issue $h->execute() once again to retrieve row data afterwards.
$h->execute() or die "ERROR: Couldn't execute SQL statement";
$rowcount_ref = $h->fetchall_arrayref();
$rowcount = scalar(@{$rowcount_ref});
The behaviour of rows definitely varies depending on the database and driver version, it seems. I would definitely add a piece of logic there that deals with unexpected results.
If you want to know how many rows are there before walking through all of them, a MySQL-specific solution could be FOUND_ROWS().
In your first query, add SQL_CALC_FOUND_ROWS right after SELECT. Then do SELECT FOUND_ROWS();, and you have access to the row count right away. Now you can decide whether you want to walk through all the rows, and the best way to do it.
Note that having a LIMIT in the query does not change this: FOUND_ROWS() gives you the total number of rows the query would have returned without the LIMIT.
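In SQL terms, a minimal sketch (table and columns made up; note that MySQL 8.0.17+ deprecates SQL_CALC_FOUND_ROWS in favour of a separate COUNT(*) query):
SELECT SQL_CALC_FOUND_ROWS id, title
FROM articles
WHERE published = 1
LIMIT 10;

SELECT FOUND_ROWS();  -- total matching rows, ignoring the LIMIT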
As others already said, you really need to get the rows to find out if there are any. If you need to know before starting a loop on every row, and if the expected results are not huge, you can get all results into an array, and then check that.
I recently used something like that:
foreach my $table ( qw(ACTOR DIRECTOR PRODUCER WRITER) ) {
    my $sth = $dbi->prepare(qq{SELECT * FROM $table WHERE DESCRIPTION != TRIM(DESCRIPTION)})
        or die $dbi->errstr;
    $sth->execute or die $sth->errstr;
    my @rows = @{ $sth->fetchall_arrayref() };
    next unless @rows;
    foreach my $row (@rows) {
        print join(", ", map {qq("$_")} @{$row}), "\n";
    }
}
If all you want is the number of rows, I'd just let the database do the counting instead. For example, find all the rows for January 2018:
my $year = '2018';
my $qry = "SELECT COUNT(`my_key`) FROM `mtable` WHERE `my_date` LIKE '$year-01-%';";
my ($count) = $dbc->selectrow_array($qry);
I think this can be achieved pretty easily using references. The code below shows that there is no need to re-execute the query to fetch the rows after a count.
my $sth = $dbh->prepare("SELECT NAME, LOCATION FROM Employees");
$sth->execute() or die $DBI::errstr;
my $nrows = $sth->fetchall_arrayref();
# string concatenation puts @$nrows in scalar context, giving the row count
print "Number of rows found: " . @$nrows . "\n";
# alternative, row by row: while (my @row = $sth->fetchrow_array()) { ... }
foreach my $row (@{$nrows}) {
    my ($name, $location) = @$row;
    print " Name = $name, Location = $location\n";
}
I hope this answers the question asked here.