InterSystems Caché: Embedded SQL always returns only the first row - intersystems-cache

I copied this code from the official documentation:
&sql(SELECT *,%ID INTO :tflds()
     FROM Sample.Person)
IF SQLCODE=0 {
    SET firstflds=14
    FOR i=0:1:firstflds {
        IF $DATA(tflds(i)) {
            WRITE "field ",i," = ",tflds(i),!
        }
    }
}
ELSE { WRITE "SQLCODE error=",SQLCODE,! }
But for some reason it only returns the fields of the first row and nothing else. Is this a bug, or am I doing something wrong?

You need to use a cursor to loop through the rows of an SQL query result.
&sql(declare c1 cursor for SELECT *,%ID INTO :tflds()
     FROM Sample.Person)
&sql(open c1)
for {
    &sql(fetch c1)
    quit:SQLCODE'=0
    set firstflds=14
    for i=0:1:firstflds {
        if $Data(tflds(i)) {
            write "field ",i," = ",tflds(i),!
        }
    }
    write "===NEXT ROW===",!
}
&sql(close c1)
See http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GSQL_esql#GSQL_esql_cursor for more info

Embedded SQL is a good tool for performance-sensitive operations, but it is indeed awkward when you need to retrieve more than one row. All this cursor business is a pain.
Consider using Dynamic SQL instead. It has a nice result-set-like interface.
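For example, a rough Dynamic SQL sketch using %SQL.Statement for the same Sample.Person query (error handling abbreviated, column names only illustrative):
set stmt = ##class(%SQL.Statement).%New()
set sc = stmt.%Prepare("SELECT Name,Age,%ID FROM Sample.Person")
if 'sc {
    do $system.Status.DisplayError(sc)
    quit
}
set rset = stmt.%Execute()
while rset.%Next() {
    // columns are available by name on the result set
    write rset.%Get("Name"),"  ",rset.%Get("Age"),!
}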

Related

Salesforce trigger - not able to understand

Below is code written by a colleague who no longer works at the firm. I am inserting records into an object with Data Loader and I can see a success message, but I do not see any records in my object. I am not able to understand what the trigger below is doing. Please help me understand, as I am new to Salesforce.
trigger DataLoggingTrigger on QMBDataLogging__c (after insert) {
    Map<string,Schema.RecordTypeInfo> recordTypeInfo = Schema.SObjectType.QMB_Initial_Letter__c.getRecordTypeInfosByName();
    List<QMBDataLogging__c> logList = (List<QMBDataLogging__c>)Trigger.new;
    List<Sobject> sobjList = (List<Sobject>)Type.forName('List<'+'QMB_Initial_Letter__c'+'>').newInstance();
    Map<string, QMBLetteTypeToVfPage__c> QMBLetteTypeToVfPage = QMBLetteTypeToVfPage__c.getAll();
    Map<String,QMBLetteTypeToVfPage__c> mapofLetterTypeRec = new Map<String,QMBLetteTypeToVfPage__c>();
    set<Id> processdIds = new set<Id>();
    for(string key : QMBLetteTypeToVfPage.keyset())
    {
        if(!mapofLetterTypeRec.containsKey(key)) mapofLetterTypeRec.put(QMBLetteTypeToVfPage.get(Key).Letter_Type__c, QMBLetteTypeToVfPage.get(Key));
    }
    for(QMBDataLogging__c log : logList)
    {
        Sobject logRecord = (sobject)log;
        Sobject QMBLetterRecord = new QMB_Initial_Letter__c();
        if(mapofLetterTypeRec.containskey(log.Field1__c))
        {
            string recordTypeId = recordTypeInfo.get(mapofLetterTypeRec.get(log.Field1__c).RecordType__c).isAvailable() ? recordTypeInfo.get(mapofLetterTypeRec.get(log.Field1__c).RecordType__c).getRecordTypeId() : recordTypeInfo.get('Master').getRecordTypeId();
            string fieldApiNames = mapofLetterTypeRec.containskey(log.Field1__c) ? mapofLetterTypeRec.get(log.Field1__c).FieldAPINames__c : '';
            //QMBLetterRecord.put('Letter_Type__c',log.Name);
            QMBLetterRecord.put('RecordTypeId',recordTypeId);
            processdIds.add(log.Id);
            if(string.isNotBlank(fieldApiNames) && fieldApiNames.contains(','))
            {
                Integer i = 1;
                for(string fieldApiName : fieldApiNames.split(','))
                {
                    string logFieldApiName = 'Field'+i+'__c';
                    fieldApiName = fieldApiName.trim();
                    system.debug('fieldApiName=='+fieldApiName);
                    Schema.DisplayType fielddataType = getFieldType('QMB_Initial_Letter__c',fieldApiName);
                    if(fielddataType == Schema.DisplayType.Date)
                    {
                        Date dateValue = Date.parse(string.valueof(logRecord.get(logFieldApiName)));
                        QMBLetterRecord.put(fieldApiName,dateValue);
                    }
                    else if(fielddataType == Schema.DisplayType.DOUBLE)
                    {
                        string value = (string)logRecord.get(logFieldApiName);
                        Double dec = Double.valueOf(value.replace(',',''));
                        QMBLetterRecord.put(fieldApiName,dec);
                    }
                    else if(fielddataType == Schema.DisplayType.CURRENCY)
                    {
                        Decimal decimalValue = Decimal.valueOf((string)logRecord.get(logFieldApiName));
                        QMBLetterRecord.put(fieldApiName,decimalValue);
                    }
                    else if(fielddataType == Schema.DisplayType.INTEGER)
                    {
                        string value = (string)logRecord.get(logFieldApiName);
                        Integer integerValue = Integer.valueOf(value.replace(',',''));
                        QMBLetterRecord.put(fieldApiName,integerValue);
                    }
                    else if(fielddataType == Schema.DisplayType.DATETIME)
                    {
                        DateTime dateTimeValue = DateTime.valueOf(logRecord.get(logFieldApiName));
                        QMBLetterRecord.put(fieldApiName,dateTimeValue);
                    }
                    else
                    {
                        QMBLetterRecord.put(fieldApiName,logRecord.get(logFieldApiName));
                    }
                    i++;
                }
            }
        }
        sobjList.add(QMBLetterRecord);
    }
    if(!sobjList.isEmpty())
    {
        insert sobjList;
        if(!processdIds.isEmpty()) DeleteDoAsLoggingRecords.deleteTheProcessRecords(processdIds);
    }
    Public static Schema.DisplayType getFieldType(string objectName,string fieldName)
    {
        SObjectType r = ((SObject)(Type.forName('Schema.'+objectName).newInstance())).getSObjectType();
        DescribeSObjectResult d = r.getDescribe();
        return(d.fields.getMap().get(fieldName).getDescribe().getType());
    }
}
You might be looking in the wrong place. Check whether there's a unit test written for this thing (there should be one, especially if it's deployed to production); it should help you understand how it's supposed to be used.
You're inserting QMBDataLogging__c records, but it seems they're immediately deleted in DeleteDoAsLoggingRecords.deleteTheProcessRecords(processdIds), whether or not whatever this thing was supposed to do succeeded.
This seems to be some poor man's CSV parser or generic "upload anything" tool that takes data stored in QMBDataLogging__c and creates QMB_Initial_Letter__c records out of it.
QMBLetteTypeToVfPage__c.getAll() suggests you could go to Setup -> Custom Settings, find this setting, and examine it. Maybe it has some values in production but is empty in your sandbox, and that's why essentially nothing works? Or maybe some of the values that are there are outdated?
There's a check whether what you upload into Field1__c can be matched to what's in that custom setting. I guess you load some kind of subtype of QMB_Initial_Letter__c in there. The Record Type name and the list of fields to read from your log record are also fetched from the custom setting based on that match.
Then this thing takes what you pasted, looks at the list of fields from the custom setting, and parses it.
Let's say the custom setting contains something like
Name = XYZ, FieldAPINames__c = 'Name,SomePicklist__c,SomeDate__c,IsActive__c'
This thing will look at the first record you inserted. Let's say you have a CSV like this:
Field1__c,Field2__c,Field3__c,Field4__c
XYZ,Closed,2022-09-15,true
This thing will try to parse and map it, so eventually you create a record that "normal" Apex code would express as:
new QMB_Initial_Letter__c(
    Name = 'XYZ',
    SomePicklist__c = 'Closed',
    SomeDate__c = Date.parse('2022-09-15'),
    IsActive__c = true
);
It's pretty fragile, as you probably already know. And because parsing CSV is an art, I expect it to absolutely crash and burn when text with commas in it shows up (some text,"text, with commas in it, should be quoted",more text).
In theory an admin can change the mapping in Setup, but then they'd need to add a new field to the loaded file anyway. Overcomplicated. I guess somebody did it to solve an issue with Record Type Ids, but there are better ways to achieve that and still have a normal CSV file with normal columns and strong type matching, not just chucking everything in as strings.
In theory this lets you have "jagged" CSV files (row 1 having 5 fields, row 2 having a different record type and 17 fields? no problem).
Your call whether it's salvageable or you'd rather ditch it and try normal loading of QMB_Initial_Letter__c records (get back to your business people and ask for requirements?). If you do have a variable number of columns at the source, you'd need to standardise it or group the data so only one "type" of record (well, whatever's in that Field1__c) goes into each file.

Exclude null columns in an update statement - JOOQ

I have a POJO with the fields that can be updated, but sometimes only a few fields need to be updated and the rest are null. How do I write an update statement that ignores the fields that are null? Would it be better to loop through the non-null fields and dynamically add them to the SET clause, or to use COALESCE?
I have the following query:
jooqService.using(txn)
    .update(USER_DETAILS)
    .set(USER_DETAILS.NAME, input.name)
    .set(USER_DETAILS.LAST_NAME, input.lastName)
    .set(USER_DETAILS.COURSES, input.courses)
    .set(USER_DETAILS.SCHOOL, input.school)
    .where(USER_DETAILS.ID.eq(input.id))
    .execute()
Is there a better practice?
I don't know jOOQ, but it looks like you could simply do this:
val jooq = jooqService.using(txn).update(USER_DETAILS)
input.name?.let { jooq.set(USER_DETAILS.NAME, it) }
input.lastName?.let { jooq.set(USER_DETAILS.LAST_NAME, it) }
// etc...
EDIT: Mapping these fields explicitly as above is clearest in my opinion, but you could do something like this:
val fields = arrayOf(USER_DETAILS.NAME, USER_DETAILS.LAST_NAME)
val values = arrayOf(input.name, input.lastName)
val jooq = jooqService.using(txn).update(USER_DETAILS)
values.forEachIndexed { i, value ->
    value?.let { jooq.set(fields[i], it) }
}
You'd still need to enumerate all the fields and values explicitly and consistently in the arrays for this to work. It seems less readable and more error prone to me.
In Java, it would be something like this:
var jooqQuery = jooqService.using(txn)
    .update(USER_DETAILS);
if (input.name != null) {
    jooqQuery.set(USER_DETAILS.NAME, input.name);
}
if (input.lastName != null) {
    jooqQuery.set(USER_DETAILS.LAST_NAME, input.lastName);
}
// ...
jooqQuery.where(USER_DETAILS.ID.eq(input.id))
    .execute();
Another option rather than writing this UPDATE statement is to use UpdatableRecord:
// Load the POJO into a record using a RecordUnmapper
val r: UserDetailsRecord =
    jooqService.using(txn)
        .newRecord(USER_DETAILS, input)

// Mark the null fields as unchanged so they are left out of the UPDATE
(0 .. r.size() - 1).forEach { if (r[it] == null) r.changed(it, false) }
r.update()
You can probably write an extension function to make this available for all jOOQ records, globally, e.g. as r.updateNonNulls().
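A sketch of what such an extension might look like (updateNonNulls is a hypothetical name, not part of the jOOQ API):
import org.jooq.UpdatableRecord

// Hypothetical helper: skip fields that are currently null, then run the UPDATE.
fun <R : UpdatableRecord<R>> R.updateNonNulls(): Int {
    (0 until size()).forEach { if (this[it] == null) changed(it, false) }
    return update()
}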

I want to get my new GridControl records highlighted (DevExpress WinForms)

Here is what I am doing right now:
private void gvOrderList_RowStyle(object sender, RowStyleEventArgs e)
{
    GridView View = sender as GridView;
    if (e.RowHandle >= 0)
    {
        string sGridRecordOrderNumber = View.GetRowCellDisplayText(e.RowHandle, View.Columns["orderNo"]);
        foreach (string sNewRecordOrderNo in oNewRecordOrderNoList)
        {
            if (sGridRecordOrderNumber == sNewRecordOrderNo)
            {
                e.Appearance.BackColor = Color.Salmon;
                e.Appearance.BackColor2 = Color.SeaShell;
                break;
            }
        }
    }
}
I fire SQL queries every 30 seconds on a separate thread and assign the result list as the data source.
oNewRecordOrderNoList contains my new records. I match its OrderNo column against the same column of the row handle to decide which rows to highlight.
My rows are highlighted as expected, but I also get a big cross over my GridControl for about a second. And if I open other forms after the current one, the cross shows up in those forms too. It looks quite ugly.
I want a way to remove this cross, or another way to change the appearance of my new rows by matching column values without the cross being displayed.
Any help would be appreciated.
The red cross means that an exception occurred while painting the grid. Since you're changing the data source, it would be a good idea to defer the highlighting until the data is loaded.
Something like this:
private void LoadData() {
    myGridView.BeginDataUpdate();
    myGridControl.DataSource = GetNewDataSource();
    myGridView.EndDataUpdate();
}

What's wrong with my Meteor publication?

I have a publication, essentially what's below:
Meteor.publish('entity-filings', function publishFunction(cik, queryArray, limit) {
    if (!cik || !filingsArray)
        console.error('PUBLICATION PROBLEM');
    var limit = 40;
    var entityFilingsSelector = {};
    if (filingsArray.indexOf('all-entity-filings') > -1)
        entityFilingsSelector = {ct: 'filing', cik: cik};
    else
        entityFilingsSelector = {ct: 'filing', cik: cik, formNumber: {$in: filingsArray}};
    return SB.Content.find(entityFilingsSelector, {
        limit: limit
    });
});
I'm having trouble with the filingsArray part. filingsArray is an array of regexes for the Mongo $in query. I can hardcode filingsArray in the publication as [/8-K/], and that returns the correct results. But I can't get the query to work properly when I pass the array in from the router. See the debugged contents of the array in the image below. The second and third images are the client/server debug contents, showing the same content on both client and server, and also identical to when I hardcode the array in the query.
My question is: what am I missing? Why won't my query work, or what are some likely reasons it isn't working?
In that first screenshot, that's a string that looks like a regex literal, not an actual RegExp object. So {$in: ["/8-K/"]} will only match literally "/8-K/", which is not the same as {$in: [/8-K/]}.
Regexes are not EJSON-able objects, so you won't be able to send them over the wire as publish function arguments or method arguments or method return values. I'd recommend sending a string, then inside the publish function, use new RegExp(...) to construct a regex object.
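For example, a rough sketch of that approach (the subscribe call and the '8-K' pattern here are only illustrative):
// client/router side: pass the pattern as a plain string
Meteor.subscribe('entity-filings', cik, ['8-K'], 40);

// server side: rebuild RegExp objects inside the publish function
Meteor.publish('entity-filings', function (cik, filingsArray, limit) {
    var regexes = filingsArray.map(function (source) {
        return new RegExp(source);
    });
    return SB.Content.find({ct: 'filing', cik: cik, formNumber: {$in: regexes}}, {limit: limit});
});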
If you're comfortable adding new methods on the RegExp prototype, you could try making RegExp an EJSON-able type, by putting this in your server and client code:
RegExp.prototype.toJSONValue = function () {
    return this.source;
};
RegExp.prototype.typeName = function () {
    return "regex";
};
EJSON.addType("regex", function (str) {
    return new RegExp(str);
});
After doing this, you should be able to use regexes as publish function arguments, method arguments and method return values. See this meteorpad.
/8-K/... that's a weird regex. Try /8\-K/.
A minus (-) sign is a range indicator and is usually used inside square brackets. The reason it's weird is: how could you even calculate a range between 8 and K? If you do not escape it, it probably won't be used to match anything (and thus your query would not work). Sometimes it does work, though. Better safe than sorry.
/8\-K/ matches the string "8-K" anywhere, once, which I assume is what you are trying to do.
Also, it would help to ensure your publication always returns something; here's a spot where it could fail:
if (!cik || !filingsArray)
    console.error('PUBLICATION PROBLEM');
If those parameters aren't filled, console.error is probably not the best way to handle it. A better way:
if (!cik || !filingsArray) {
    throw "entity-filings: Publication problem.";
    return false;
} else {
    // .. the rest of your publication
}
This makes sure the client does not wait unnecessarily long for publication statuses, since you have ensured that for any input you return either false or a Cursor and nothing in between (like surprise undefineds, unfilled cursors, or other garbage data).

Meteor Reactive Transform to Show Computed Values

I'm trying to insert a computed value into my template.
So the code goes as follows
Template.missions.inProgress = -> Missoins.find { # search query
}, {
  transform: (mission) ->
    mission.progress = calculateTimeLeft(mission.startTime, mission.timeRequired)
    return mission
}
The code works, but how do I make it reactive so that it updates every so often?
It depends on your calculateTimeLeft. If the results of Missoins.find change, the code should adjust to the updates, since mission.startTime and mission.timeRequired will be different.
If you have a reference to some other reactive value in your calculateTimeLeft, you may want to convert it to a helper:
Template.missions.progress_value = function() {
    return calculateTimeLeft(this.startTime, this.timeRequired)
}
Then use {{progress_value}} in your each loop.
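For instance, something like this in the template (the surrounding markup is only illustrative):
<template name="missions">
  {{#each inProgress}}
    <!-- progress_value is recomputed from this mission's startTime/timeRequired -->
    {{progress_value}}
  {{/each}}
</template>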