Get first row/ID and last row/ID of the whole inserted result set, instead of only the last ID/row (SQL, PHP, CodeIgniter 3)

function storeReachout($reachoutInfo)
{
    $result = $this->db->insert('tbl_youtube', $reachoutInfo);
    // $Frow = $result->first_row('array');
    // $Lrow = $result->last_row('array');
    // $data = array($result, $Frow, $Lrow);
    // $result_id = $this->db->insert_id();
    return $result;
}
Above is the code that inserts data into the DB. If I insert, say, 30 records, I want to know the first and last inserted row/ID. That way I have a lower and an upper bound on the IDs, so that afterwards I can send emails to the IDs in between.
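A sketch of one way to get both bounds without a loop, under two assumptions worth verifying: the table has a MySQL auto-increment PK whose values are consecutive within a single multi-row INSERT (true with innodb_autoinc_lock_mode 0 or 1), and the batch fits in one query (CodeIgniter's insert_batch() splits inserts into chunks of 100 rows):

function storeReachouts($reachoutInfoRows)
{
    // $reachoutInfoRows: array of row arrays; keep it at 100 rows or fewer
    // so insert_batch() issues a single INSERT statement
    $inserted = $this->db->insert_batch('tbl_youtube', $reachoutInfoRows);
    $firstId  = $this->db->insert_id();   // MySQL reports the FIRST auto id of a multi-row insert
    $lastId   = $firstId + $inserted - 1; // ids are consecutive within the batch
    return array('first_id' => $firstId, 'last_id' => $lastId);
}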

After posting I found a way to get the first and last inserted IDs. I made a foreach loop in my controller that inserts each record one at a time; my model function returns the insert ID for each insert, and the controller appends that ID to an array on every iteration. When the loop completes, the array holds all of the IDs saved to the DB. A sketch of that pattern follows.
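A minimal sketch of the loop described above, assuming a CodeIgniter 3 controller; the model name Reachout_model and the variable $reachouts are illustrative:

// Model: return the ID of the row just inserted.
function storeReachout($reachoutInfo)
{
    $this->db->insert('tbl_youtube', $reachoutInfo);
    return $this->db->insert_id();
}

// Controller: collect every inserted ID.
$insertedIds = array();
foreach ($reachouts as $reachoutInfo) {
    $insertedIds[] = $this->Reachout_model->storeReachout($reachoutInfo);
}
$firstId = reset($insertedIds); // lower bound
$lastId  = end($insertedIds);   // upper bound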

Related

How to query the first row efficiently?

I have a table with a large number of records:
date       instrument price
---------------------------
2019.03.07 X          1.1
2019.03.07 X          1.0
2019.03.07 X          1.2
...
When I query for the day opening price, I use:
1 sublist select from prices where date = 2019.03.07, instrument = `X
It takes a long time to execute because it selects all the prices on that day and then takes the first one.
I also tried:
select from prices where date = 2019.03.07, instrument = `X, i = 0 //It does not return any record (why?)
select from prices where date = 2019.03.07, instrument = `X, i = first i //Seem to work. Does it?
In Oracle an equivalent will be:
select * from prices where date = to_date(...) and instrument = 'X' and rownum = 1
and Oracle will stop immediately when it finds the first record.
How to do this in KDB (e.g. stop immediately after it finds the first record)?
In kdb, the where subclauses in a select statement are executed sequentially, i.e. only those records which pass the first "test" get passed to the second. With that in mind, looking at your two attempts:
select from prices where date = 2019.03.07, instrument = `X, i = 0 //It does not return any record (why?)
This doesn't (necessarily) return anything, because by the time it gets to the i=0 check you've already filtered out some records, possibly including the first record in the original table, which would have had i=0.
select from prices where date = 2019.03.07, instrument = `X, i = first i //Seem to work. Does it?
This one should work. First you filter by date. Then, within the records for that date, you select the records for instrument `X. Then, within those records, you take the record where i equals first i (i has already been filtered down, so first i is simply the index of the first remaining record; note it is still the index from the original table, not from the filtered-down version).
The q-SQL equivalent for that is select[n], which also performs better than the other approaches in most cases. A positive n gives the first n records and a negative n gives the last n records.
q) select[1] from prices where date = 2019.03.07, instrument = `X
There is no built-in functionality to stop after the first match. You could write a custom function for that, but it would probably execute more slowly than the supported version above.

updatexml for particular rows only

Context: I want to increase the allowance value of some employees from £1875 to £7500, and update their balance to be £7500 minus whatever they have currently used.
My Update statement works for one employee at a time, but I need to update around 200 records, out of a table containing about 6000.
I am struggling to work out how to modify the statement below to update more than one record, but only the 200 records I need to update.
UPDATE employeeaccounts
SET xml = To_clob(Updatexml(Xmltype(xml),
              '/EmployeeAccount/CurrentAllowance/text()', 187500,
              '/EmployeeAccount/AllowanceBalance/text()',
              750000 - (SELECT Extractvalue(Xmltype(xml),
                                   '/EmployeeAccount/AllowanceBalance',
                                   'xmlns:ts="http://schemas.com/", xmlns:xt="http://schemas.com"')
                        FROM employeeaccounts
                        WHERE id = '123456')))
WHERE id = '123456'
Example of the XML column (stored as a CLOB) that I want to update. The table has a column ID that holds the employee's PK, e.g. 123456:
<EmployeeAccount>
    <LastUpdated>2016-06-03T09:26:38+01:00</LastUpdated>
    <MajorVersion>1</MajorVersion>
    <MinorVersion>2</MinorVersion>
    <EmployeeID>123456</EmployeeID>
    <CurrencyID>GBP</CurrencyID>
    <CurrentAllowance>187500</CurrentAllowance>
    <AllowanceBalance>100000</AllowanceBalance>
    <EarnedDiscount>0.0</EarnedDiscount>
    <NormalDiscount>0.0</NormalDiscount>
    <AccountCreditLimit>0</AccountCreditLimit>
    <AccountBalance>0</AccountBalance>
</EmployeeAccount>
You don't need a subquery to get the old balance; you can use the value from the current row. That means the subquery doesn't need to be correlated at all, and you can just use an IN () in the main statement:
UPDATE employeeaccounts
SET xml = To_clob(Updatexml(Xmltype(xml),
              '/EmployeeAccount/CurrentAllowance/text()', 187500,
              '/EmployeeAccount/AllowanceBalance/text()',
              750000 - Extractvalue(Xmltype(xml),
                           '/EmployeeAccount/AllowanceBalance',
                           'xmlns:ts="http://schemas.com/", xmlns:xt="http://schemas.com"')))
WHERE id IN (123456, 654321, ...);
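If the ~200 affected employees can be identified by a query rather than a hand-typed list, the same statement can be driven by a subquery. A sketch, where employees_to_update is a hypothetical staging table holding the affected IDs (the namespace argument is omitted here since the sample XML is unqualified):

UPDATE employeeaccounts
SET xml = To_clob(Updatexml(Xmltype(xml),
              '/EmployeeAccount/CurrentAllowance/text()', 187500,
              '/EmployeeAccount/AllowanceBalance/text()',
              750000 - Extractvalue(Xmltype(xml),
                           '/EmployeeAccount/AllowanceBalance')))
WHERE id IN (SELECT id FROM employees_to_update); -- hypothetical staging table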

How to avoid multiple inserts in PostgreSQL

In my query I'm using a for loop. On every iteration of the loop, some values have to be inserted into a table. This is time-consuming because the loop runs over many records, so an insert happens on every single iteration. Is there another way, so that the insertion happens once, after the for loop has finished?
FOR i IN 1..10000 LOOP
    -- coding
    INSERT INTO datas.tb VALUES (j, predictednode); -- j and predictednode change on every iteration
END LOOP;
Instead of inserting on every iteration, I want the insertion to happen once at the end.
If you show how the variables are calculated, it might be possible to build something like this:
insert into datas.tb
select
    calculate_j_here,
    calculate_predicted_node_here
from generate_series(1, 10000);
One possible solution is to build one large VALUES string. In Java, something like:
StringBuffer buf = new StringBuffer(100000); // big enough?
for (int i = 1; i <= 10000; ++i) {
    buf.append("(")
       .append(j)
       .append(",")
       .append(predictedNode)
       .append("),"); // whatever j and predictedNode are
}
buf.setCharAt(buf.length() - 1, ' '); // kill the trailing comma
String query = "INSERT INTO datas.tb VALUES " + buf.toString() + ";";
// send the query to the DB, just once
The fact that j and predictedNode appear to be constant has me a little worried, though. Why would you insert the same constant 10,000 times?
Another approach is to do the predicting in a Postgres procedural language, and have the DB itself calculate the value on insert.
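For completeness, a minimal PL/pgSQL sketch of the collect-first, insert-once idea; the per-iteration computations are placeholders, and datas.tb is assumed to have two integer columns:

DO $$
DECLARE
    js    integer[] := '{}';
    preds integer[] := '{}';
BEGIN
    FOR i IN 1..10000 LOOP
        -- placeholder computations standing in for the real logic
        js    := js    || i;
        preds := preds || (i * 2);
    END LOOP;
    -- one single insert at the end; unnest() over equal-length arrays
    -- yields one row per array position
    INSERT INTO datas.tb
    SELECT unnest(js), unnest(preds);
END $$;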

Jumbled up IDs in MongoDB

We have around 20 million records in our MongoDB. In my collection called 'posts' there is a field called 'id' which was supposed to be unique, but it has gotten all messed up and now contains many, many duplicates. We just want it to be unique again.
We want to do something like iterating over every record and assigning it a unique id in a loop, from 1 to 20 million.
What would be the easiest way to do this?
There are not many options here, really.

1. Pick your language and driver of choice.
2. Fetch N documents.
3. Assign unique ids to them (several options here: 1) copy _id; 2) assign a new ObjectID; 3) assign a plain integer).
4. Save those documents.
5. Fetch the next N documents. Go to step 3.
To fetch the next N documents, note the last processed document's _id and do this:
db.collection.find({_id: {$gt: last_processed_id}}).sort({_id: 1}).limit(N);
Do not use skip here. It will be too slow.
And, of course, you can always truncate the collection, create a unique index on id, and populate it again.
You can use a simple script like this:
db.posts.dropIndex("*id index name here*"); // drop the unique index first

counter = 0;
slice = 1000;
total = db.posts.count();
conditions = {};

while (counter < total) {
    cursor = db.posts.find(conditions, {_id: true}).sort({_id: 1}).limit(slice);
    while (cursor.hasNext()) {
        row = cursor.next();
        db.posts.update({_id: row._id}, {$set: {id: ++counter}});
    }
    conditions['_id'] = {$gt: row._id}; // resume after the last processed document
    print("Processed " + counter + " rows");
}

print('Adding id index');
db.posts.ensureIndex({id: 1}, {unique: true, background: false});
print("done");
Save it as assignids.js and run it as
$ mongo dbname assignids.js
The outer while selects 1000 rows at a time and prevents cursor timeouts; the inner while assigns each row a new incremental id.

How do you insert multiple rows in a loop using Doctrine 2?

I want to insert multiple rows in a loop using Doctrine 2.
I usually insert one record using this:
$Entity->setData($posted);
$this->_doctrine->persist($Entity);
$this->_doctrine->flush();
Simply persist all your objects and then call flush() after the loop.
$entityDataArray = array(); // let's assume this is an array containing data for each entity
foreach ($entityDataArray as $entityData) {
    $entity = new \Entity();
    $entity->setData($entityData);
    $this->_doctrine->persist($entity);
}
$this->_doctrine->flush();
If you're inserting a large number of objects, you will want to insert in batches (see http://www.doctrine-project.org/docs/orm/2.0/en/reference/batch-processing.html); a sketch follows.
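A minimal sketch of that batch pattern, flushing and clearing every $batchSize iterations; the batch size and variable names are illustrative:

$batchSize = 100;
foreach ($entityDataArray as $i => $entityData) {
    $entity = new \Entity();
    $entity->setData($entityData);
    $this->_doctrine->persist($entity);
    if ((($i + 1) % $batchSize) === 0) {
        $this->_doctrine->flush(); // execute the queued inserts
        $this->_doctrine->clear(); // detach managed entities to keep memory bounded
    }
}
$this->_doctrine->flush(); // flush any remaining entities
$this->_doctrine->clear();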
Inside your loop, you should be able to simply:
$entity1->setData($data1);
$this->_doctrine->persist($entity1);
$entity2->setData($data2);
$this->_doctrine->persist($entity2);
$this->_doctrine->flush();