IronPython script runs in Spotfire Client, not Spotfire Web - ironpython

I have the following IronPython script (reduced; the rest of it performs similar actions):
from Spotfire.Dxp.Application.Visuals import *
from Spotfire.Dxp.Data import *

#assign default values for prompts if needed
if Document.Properties['cannedKPISelected'].isspace():
    Document.Properties['cannedKPISelected'] = 'GS'
if Document.Properties['cannedTimeSelected'].isspace():
    Document.Properties['cannedTimeSelected'] = 'Month'

#determine which type of viz needs displayed based on a flag in the data
tableName = 'PrimaryDataTable'
columnToFetch = 'displayPercentageFlag'
activeTable = Document.Data.Tables[tableName]
rowCount = activeTable.RowCount
rowsToInclude = IndexSet(rowCount, True)
cursor1 = DataValueCursor.CreateFormatted(activeTable.Columns[columnToFetch])
for row in activeTable.GetRows(rowsToInclude, cursor1):
    rowIndex = row.Index
    percentageNeeded = cursor1.CurrentValue
    break

#create consumer report
for page in Document.Pages:
    for viz in page.Visuals:
        if str(viz.Id) == 'a7f5b4ec-f545-4d5f-a967-adec4c9fec79':
            if Document.Properties['coffeeReportSelected'] == 'Brand Category by Market':
                if Document.Properties['cannedKPISelected'] == 'GS' and Document.Properties['cannedTimeSelected'] == 'Month' and percentageNeeded == 'Y':
                    visualContentObject = viz.As[VisualContent]()
                    visualContentObject.MeasureAxis.Expression = 'Sum([GS Month]) as [GS Mnth]'
                    visualContentObject.RowAxis.Expression = '<[BRAND] as [Brand Category] NEST [MARKET] as [Market]>'
                    visualContentObject.ColumnAxis.Expression = '<[Axis.Default.Names] as [Measure Names]>'
                    visualContentObject.ShowColumnGrandTotal = True
                    visualContentObject.ShowColumnSubtotals = True
                    visualContentObject.ShowRowGrandTotal = False
                    visualContentObject.Title = 'Monthly GS by Brand, Market'
                    visualContentObject.Data.WhereClauseExpression = '[NAME] = "CANADA"'
                    visualContentObject.CellWidth = 125
                    Document.Properties['cannedReportHideRows'] = 'Sum(Abs(SN([GS Month],0)))'
                elif Document.Properties['cannedKPISelected'] == 'GS' and Document.Properties['cannedTimeSelected'] == 'Quarter' and percentageNeeded == 'Y':
                    visualContentObject = viz.As[VisualContent]()
                    visualContentObject.MeasureAxis.Expression = 'Sum([GS Quarter]) as [GS Qtr]'
                    visualContentObject.RowAxis.Expression = '<[BRAND] as [Brand] NEST [MARKET] as [Market]>'
                    visualContentObject.ColumnAxis.Expression = '<[Axis.Default.Names] as [Measure Names]>'
                    visualContentObject.ShowColumnGrandTotal = True
                    visualContentObject.ShowColumnSubtotals = True
                    visualContentObject.ShowRowGrandTotal = False
                    visualContentObject.Title = 'Quarterly GS by Brand, Market'
                    visualContentObject.Data.WhereClauseExpression = '[NAME] = "CANADA"'
                    visualContentObject.CellWidth = 125
                    Document.Properties['cannedReportHideRows'] = 'Sum(Abs(SN([GS Quarter],0)))'
So on and so forth.
This script (and others like it) runs perfectly fine in the client, but it does not run on the web. The web player says "processing", then "ready" (in the bottom-left corner), all while doing nothing (no error, nothing). A few other scripts in the same analysis run perfectly fine.
I know there are some limitations on IronPython scripts on the web for security reasons, but I am only building a table. That can't be restricted, can it? The web server logs are not capturing anything out of the ordinary.
We are on Spotfire 7.6.
UPDATE: It seems to be due to this line: if str(viz.Id) == 'a7f5b4ec-f545-4d5f-a967-adec4c9fec79':. The IDs are unfortunately different between the web player and the client. Given that my titles change as well, any ideas on what I could reference a visualization by that stays the same between client and web?

Because Spotfire assigns different IDs to a visualization in the web player versus the client, the script was not working as intended. I simply added the visualization as a script parameter instead of relying on the script to go and locate the correct visual by ID. When the name of the visual changes, the parameter still resolves correctly, so it is still dynamic.
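For reference, a minimal sketch of that approach, assuming a script parameter of type Visualization named vizParam has been added in the script editor (the parameter name is illustrative):

from Spotfire.Dxp.Application.Visuals import VisualContent

# vizParam is a script parameter of type Visualization, pointed at the cross table
# in the script editor, so no Id lookup (and no hard-coded GUID) is needed.
visualContentObject = vizParam.As[VisualContent]()
visualContentObject.Title = 'Monthly GS by Brand, Market'
# ...set axes, totals, where clause, etc. as in the original script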

Could you just figure out what the index of the current visual is and refer to it that way?
Try something like this: replace this line:
if str(viz.Id) == 'a7f5b4ec-f545-4d5f-a967-adec4c9fec79':
with a check on the visual's position in the page's Visuals collection (index 0, or whatever the index is).
Not sure if this is what you had in mind, but I believe this will give you a way to refer to the visualization without having to use an ID.
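For illustration, a rough sketch of that idea (assuming the cross table is, say, the first visual on its page; the page title below is made up):

from Spotfire.Dxp.Application.Visuals import VisualContent

for page in Document.Pages:
    if page.Title == 'Consumer Report':  # hypothetical page title
        visuals = list(page.Visuals)     # the page's visuals in order
        viz = visuals[0]                 # pick the visual by position instead of Id
        visualContentObject = viz.As[VisualContent]()
        # ...configure axes, titles, etc. as in the original script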

Related

Salesforce trigger - not able to understand

Below is code written by my colleague, who doesn't work at the firm anymore. I am inserting records into the object with Data Loader and I can see a success message, but I do not see any records in my object. I am not able to understand what the trigger below is doing. Please help me understand, as I am new to Salesforce.
trigger DataLoggingTrigger on QMBDataLogging__c (after insert) {
    Map<string,Schema.RecordTypeInfo> recordTypeInfo = Schema.SObjectType.QMB_Initial_Letter__c.getRecordTypeInfosByName();
    List<QMBDataLogging__c> logList = (List<QMBDataLogging__c>)Trigger.new;
    List<Sobject> sobjList = (List<Sobject>)Type.forName('List<'+'QMB_Initial_Letter__c'+'>').newInstance();
    Map<string, QMBLetteTypeToVfPage__c> QMBLetteTypeToVfPage = QMBLetteTypeToVfPage__c.getAll();
    Map<String,QMBLetteTypeToVfPage__c> mapofLetterTypeRec = new Map<String,QMBLetteTypeToVfPage__c>();
    set<Id> processdIds = new set<Id>();
    for(string key : QMBLetteTypeToVfPage.keyset())
    {
        if(!mapofLetterTypeRec.containsKey(key)) mapofLetterTypeRec.put(QMBLetteTypeToVfPage.get(Key).Letter_Type__c, QMBLetteTypeToVfPage.get(Key));
    }
    for(QMBDataLogging__c log : logList)
    {
        Sobject logRecord = (sobject)log;
        Sobject QMBLetterRecord = new QMB_Initial_Letter__c();
        if(mapofLetterTypeRec.containskey(log.Field1__c))
        {
            string recordTypeId = recordTypeInfo.get(mapofLetterTypeRec.get(log.Field1__c).RecordType__c).isAvailable() ? recordTypeInfo.get(mapofLetterTypeRec.get(log.Field1__c).RecordType__c).getRecordTypeId() : recordTypeInfo.get('Master').getRecordTypeId();
            string fieldApiNames = mapofLetterTypeRec.containskey(log.Field1__c) ? mapofLetterTypeRec.get(log.Field1__c).FieldAPINames__c : '';
            //QMBLetterRecord.put('Letter_Type__c',log.Name);
            QMBLetterRecord.put('RecordTypeId', recordTypeId);
            processdIds.add(log.Id);
            if(string.isNotBlank(fieldApiNames) && fieldApiNames.contains(','))
            {
                Integer i = 1;
                for(string fieldApiName : fieldApiNames.split(','))
                {
                    string logFieldApiName = 'Field'+i+'__c';
                    fieldApiName = fieldApiName.trim();
                    system.debug('fieldApiName=='+fieldApiName);
                    Schema.DisplayType fielddataType = getFieldType('QMB_Initial_Letter__c',fieldApiName);
                    if(fielddataType == Schema.DisplayType.Date)
                    {
                        Date dateValue = Date.parse(string.valueof(logRecord.get(logFieldApiName)));
                        QMBLetterRecord.put(fieldApiName,dateValue);
                    }
                    else if(fielddataType == Schema.DisplayType.DOUBLE)
                    {
                        string value = (string)logRecord.get(logFieldApiName);
                        Double dec = Double.valueOf(value.replace(',',''));
                        QMBLetterRecord.put(fieldApiName,dec);
                    }
                    else if(fielddataType == Schema.DisplayType.CURRENCY)
                    {
                        Decimal decimalValue = Decimal.valueOf((string)logRecord.get(logFieldApiName));
                        QMBLetterRecord.put(fieldApiName,decimalValue);
                    }
                    else if(fielddataType == Schema.DisplayType.INTEGER)
                    {
                        string value = (string)logRecord.get(logFieldApiName);
                        Integer integerValue = Integer.valueOf(value.replace(',',''));
                        QMBLetterRecord.put(fieldApiName,integerValue);
                    }
                    else if(fielddataType == Schema.DisplayType.DATETIME)
                    {
                        DateTime dateTimeValue = DateTime.valueOf(logRecord.get(logFieldApiName));
                        QMBLetterRecord.put(fieldApiName,dateTimeValue);
                    }
                    else
                    {
                        QMBLetterRecord.put(fieldApiName,logRecord.get(logFieldApiName));
                    }
                    i++;
                }
            }
        }
        sobjList.add(QMBLetterRecord);
    }
    if(!sobjList.isEmpty())
    {
        insert sobjList;
        if(!processdIds.isEmpty()) DeleteDoAsLoggingRecords.deleteTheProcessRecords(processdIds);
    }
    Public static Schema.DisplayType getFieldType(string objectName,string fieldName)
    {
        SObjectType r = ((SObject)(Type.forName('Schema.'+objectName).newInstance())).getSObjectType();
        DescribeSObjectResult d = r.getDescribe();
        return(d.fields.getMap().get(fieldName).getDescribe().getType());
    }
}
You might be looking in the wrong place. Check if there's a unit test written for this thing (there should be one, especially if it's deployed to production); it should help you understand how it's supposed to be used.
You're inserting records of QMBDataLogging__c, but then it seems they're immediately deleted in DeleteDoAsLoggingRecords.deleteTheProcessRecords(processdIds), whether or not whatever this thing was supposed to do succeeds.
This seems to be some poor man's CSV parser or generic "upload anything" tool that takes data stored in QMBDataLogging__c and creates QMB_Initial_Letter__c records out of it.
QMBLetteTypeToVfPage__c.getAll() suggests you could go to Setup -> Custom Settings, try to find this thing and examine it. Maybe it has some values in production but is empty in your sandbox, and that's why essentially nothing works? Or maybe some of the values that are there are outdated?
There's a comparison of whether what you upload into Field1__c can be matched to what's in that custom setting. I guess you load some kind of subtype of your QMB_Initial_Letter__c in there. The Record Type name and the list of fields to read from your log record are also fetched from the custom setting based on that match.
Then this thing takes what you pasted, looks at the list of fields from the custom setting, and parses it.
Let's say the custom setting contains something like
Name = XYZ, FieldAPINames__c = 'Name,SomePicklist__c,SomeDate__c,IsActive__c'
This thing will look at the first record you inserted. Let's say your CSV looks like this:
Field1__c,Field2__c,Field3__c,Field4__c
XYZ,Closed,2022-09-15,true
This thing will try to parse and map it so that eventually you create a record that "normal" Apex code would express as:
new QMB_Initial_Letter__c(
    Name = 'XYZ',
    SomePicklist__c = 'Closed',
    SomeDate__c = Date.parse('2022-09-15'),
    IsActive__c = true
);
It's pretty fragile, as you probably already know. And because parsing CSV is an art, I expect it to absolutely crash and burn when text with commas in it shows up (some text,"text, with commas in it, should be quoted",more text).
In theory an admin can change the mapping in Setup - but then they'd need to add the new field to the loaded file anyway. Overcomplicated. I guess somebody did it to solve an issue with Record Type Ids - but there are better ways to achieve that and still have a normal CSV file with normal columns and strong type matching, not just chucking everything in as strings.
In theory this lets you have "jagged" CSV files (row 1 having 5 fields, row 2 having a different record type and 17 fields? no problem).
Your call whether it's salvageable or you'd rather ditch it and try normal loading of QMB_Initial_Letter__c records (get back to your business people and ask for requirements?). If you do have a variable number of columns at the source, you'd need to standardise it or group the data so that only one "type" of record (well, whatever's in that Field1__c) goes into each file.
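To make the described mapping concrete, here is a rough, language-agnostic sketch of what the trigger does with each uploaded row (written in Python purely for illustration; the sample values mirror the worked example above and are invented):

# from the custom setting (invented sample)
field_api_names = 'Name,SomePicklist__c,SomeDate__c,IsActive__c'
# one uploaded log row (invented sample)
log_record = {'Field1__c': 'XYZ', 'Field2__c': 'Closed',
              'Field3__c': '2022-09-15', 'Field4__c': 'true'}

letter_record = {}
for i, api_name in enumerate(field_api_names.split(','), start=1):
    # FieldN__c on the log record feeds the N-th API name from the setting;
    # the real trigger also converts the value to the target field's type.
    letter_record[api_name.strip()] = log_record['Field%s__c' % i]

print(letter_record)
# {'Name': 'XYZ', 'SomePicklist__c': 'Closed', 'SomeDate__c': '2022-09-15', 'IsActive__c': 'true'}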

How to automatically update charts linked to Google Sheets?

I have a Google Slides presentation with charts that are linked to a specific Google Sheets Spreadsheet.
As there are many charts in the presentation, I'm looking for a way to update all these linked charts automatically, or at least all of them at once.
What is the best way to do this?
You can add a custom function to a drop-down menu in the Slides UI with the following script. It gets the slides from the current presentation, loops through them, gets any charts on each slide and refreshes (updates) them.
function onOpen() {
  var ui = SlidesApp.getUi();
  ui.createMenu('Custom Menu')
      .addItem('Batch Update Charts', 'batchUpdate')
      .addToUi();
}

function batchUpdate() {
  var gotSlides = SlidesApp.getActivePresentation().getSlides();
  for (var i = 0; i < gotSlides.length; i++) {
    var slide = gotSlides[i];
    var sheetsCharts = slide.getSheetsCharts();
    for (var k = 0; k < sheetsCharts.length; k++) {
      var shChart = sheetsCharts[k];
      shChart.refresh();
    }
  }
}
Note: The functionality to update/refresh linked charts from the Slides UI itself doesn't appear to exist at the time of this response.
You can find it in the official API documentation (for a different language):
https://developers.google.com/slides/how-tos/add-chart#refreshing_a_chart
You need to write a script for this and run it on a schedule or manually.
I came up with my own code that works great:
from __future__ import print_function
import httplib2
import os

from apiclient import discovery
from oauth2client import client
from oauth2client import tools
from oauth2client.file import Storage

try:
    import argparse
    flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
    flags = None

# If modifying these scopes, delete your previously saved credentials
# at ~/.credentials/slides.googleapis.com-python-quickstart.json
SCOPES = 'https://www.googleapis.com/auth/drive'
CLIENT_SECRET_FILE = 'client_secret.json'
APPLICATION_NAME = 'Google Slides API Python Quickstart'


def get_credentials():
    """Gets valid user credentials from storage.

    If nothing has been stored, or if the stored credentials are invalid,
    the OAuth2 flow is completed to obtain the new credentials.

    Returns:
        Credentials, the obtained credential.
    """
    home_dir = os.path.expanduser('~')
    credential_dir = os.path.join(home_dir, '.credentials')
    if not os.path.exists(credential_dir):
        os.makedirs(credential_dir)
    credential_path = os.path.join(credential_dir,
                                   'slides.googleapis.com-python-quickstart.json')

    store = Storage(credential_path)
    credentials = store.get()
    if not credentials or credentials.invalid:
        flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
        flow.user_agent = APPLICATION_NAME
        if flags:
            credentials = tools.run_flow(flow, store, flags)
        else:  # Needed only for compatibility with Python 2.6
            credentials = tools.run(flow, store)
        print('Storing credentials to ' + credential_path)
    return credentials


def main():
    """Shows basic usage of the Slides API.

    Creates a Slides API service object and prints the number of slides and
    elements in a sample presentation.
    """
    credentials = get_credentials()
    http = credentials.authorize(httplib2.Http())
    service = discovery.build('slides', 'v1', http=http)

    # Paste your presentation id here
    presentationId = '1Owma9l9Z0Xjm1OPp-fcchdcxc1ImBPY2j9QH1LBDxtk'
    presentation = service.presentations().get(
        presentationId=presentationId).execute()
    slides = presentation.get('slides')

    print('The presentation contains {} slides:'.format(len(slides)))
    for slide in slides:
        for element in slide['pageElements']:
            presentation_chart_id = element['objectId']
            # Execute the request.
            try:
                requests = [{'refreshSheetsChart': {'objectId': presentation_chart_id}}]
                body = {'requests': requests}
                # print(element)
                requests = service.presentations().batchUpdate(
                    presentationId=presentationId, body=body).execute()
                print('Refreshed a linked Sheets chart with ID: {0}'.format(presentation_chart_id))
            except Exception:
                pass


if __name__ == '__main__':
    main()
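A possible refinement (an untested sketch, not part of the original answer): instead of issuing one batchUpdate per page element and swallowing exceptions, you could collect refresh requests only for elements that actually are linked Sheets charts and send them in a single call. Reusing the service, slides and presentationId variables from the script above, the inner loop could look roughly like this:

requests = []
for slide in slides:
    for element in slide.get('pageElements', []):
        # only PageElements whose union field is a linked Sheets chart
        if 'sheetsChart' in element:
            requests.append({'refreshSheetsChart': {'objectId': element['objectId']}})
if requests:
    service.presentations().batchUpdate(
        presentationId=presentationId, body={'requests': requests}).execute()
    print('Refreshed {0} linked Sheets charts in one call.'.format(len(requests)))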
Latest update: There is now an option in the Tools drop-down menu in Slides to see all Linked objects; the panel that appears has an option at the bottom to "Update all".

Entity Framework confusing behavior when pulling data from db into list

If I try to get the parts of a machine from the db into the machine's list of parts, I execute this:
Machine ma = new Machine();
ma = dbcontext.Machine.Where(s => s.Guid == guid).ToList()[0];

IQueryable<Part> PartsQuery = from m in db.Machines
                              where m.Guid == guid
                              from p in m.Parts
                              select p;

ma.parts.AddRange(PartsQuery.ToList());
I get double the number of Parts in the machine's parts list compared to what is actually in the database!
If I do this instead of the last line:
List<Part> partsFromDb = PartsQuery.ToList();
ma.parts.AddRange(partsFromDb);
the number of parts in the ma.parts list is correct. Can someone explain that to me, please?
You can achieve what you are trying to do in one round trip to your database:
Machine ma = context.Machine.Include(m => m.Parts).FirstOrDefault(m => m.Guid == guid);
About your issue, that's probably due to the caching policy of EF, and maybe lazy loading is involved too. I don't know how you are testing your code, but if you do the following:
Machine ma = context.Machine.FirstOrDefault(m => m.Guid == guid);

IQueryable<Part> PartsQuery = from m in db.Machines
                              where m.Guid == guid
                              from p in m.Parts
                              select p;

PartsQuery.ToList(); // materialize your query but don't save the result
var parts = ma.parts; // now take a look here and you will see the related parts were loaded
That should be the reason why the data is duplicated: when you materialize your query and later consult the navigation property (ma.parts), the related entities are already there. But anyway, the best way to get what you need is the query I showed at the beginning of my answer.
Machine ma = new Machine();
ma = dbcontext.Machine.Where(s => s.Guid == guid).ToList()[0];

IQueryable<Part> PartsQuery = from m in db.Machines
                              where m.Guid == guid
                              from p in m.Parts
                              select p;

ma.parts.AddRange(PartsQuery.ToList());
Is 100% equivalent to:
// 1. Find and retrieve the first machine with the given GUID
Machine machine = dbcontext.Machine.First(s => s.Guid == guid);

// 2. Again, find and retrieve the machines with the given GUID, select the parts of each machine that matches and flatten it down to a single list.
IList<Part> machineParts = db.Machines
    .Where(m => m.Guid == guid)
    .SelectMany(m => m.Parts)
    .ToList();

// 3. Add.. all of the parts to that machine again?
machine.parts.AddRange(machineParts);
So it makes sense that you end up with double the parts inside the retrieved machine.
To be honest, I don't believe that the last change you mention, i.e. capturing PartsQuery into a temporary variable, makes any difference to the end result for your machine.
Something else must be going on there.

How to read UnitPrice from invoice line in QBO API v3 .NET

The bizarre properties in the .NET SDK continue to baffle me. How do I read the UnitPrice from an invoice line?
If I do this:
sild = (SalesItemLineDetail)line.AnyIntuitObject;
ln = new QBInvoiceLine(); // My internal line item class
ln.Description = line.Description;
ln.ItemRef = new QBRef() { Id = sild.ItemRef.Value, Name = sild.ItemRef.name };
if (sild.QtySpecified)
    ln.Quantity = sild.Qty;
else
    ln.Quantity = 0;
if (sild.ItemElementName == ItemChoiceType.UnitPrice)
    ln.Rate = (decimal)sild.AnyIntuitObject; // Exception thrown here
The last line throws an invalid cast exception, even though the debugger shows that the value is 20. I've tried other types but get the same exception no matter what I do. So I finally punted and am calculating the rate like so:
ln.Rate = line.Amount / ln.Quantity;
(With proper rounding and checking for divide by zero, of course)
While we're on the subject... I noticed that in many cases ItemElementName == ItemChoiceType.PriceLevelRef. What's up with that? As far as I know, QBO doesn't support price levels, and I certainly wasn't using a price level with this invoice or customer. In this case I was also able to get what I needed from the Amount property.
Try this-
SalesItemLineDetail a1 = (SalesItemLineDetail)invoice11.Line[0].AnyIntuitObject;
object unitprice = a1.AnyIntuitObject;
decimal quantity = a1.Qty;
PriceLevelRef as an 'entity' is not supported. This means CRUD operations are not supported on this entity.
The service might, however, sometimes return read-only values in transactions, but since this is not mentioned in the docs, please consider it unsupported.
Check that both request and response are in either JSON or XML format.
You can use the following code to set that:
ServiceContext context = new ServiceContext(appToken, realmId, intuitServiceType, reqvalidator);
context.IppConfiguration.Message.Request.SerializationFormat = Intuit.Ipp.Core.Configuration.SerializationFormat.Json;
context.IppConfiguration.Message.Response.SerializationFormat = Intuit.Ipp.Core.Configuration.SerializationFormat.Json;
Also, in the QBO UI, check whether Company -> Sales settings has Track Quantity and Price/Rate turned on.

Moodle Database API error : Get quiz marks for all sections of one course for one user

I am trying to get the total marks obtained by a particular user, for a particular course, for all the sections of that course.
The following query works and gives correct results in MySQL, but not with Database API calls:
$sql = "SELECT d.section as section_id,d.name as section_name, sum(a.sumgrades) AS marks FROM mdl_quiz_attempts a, mdl_quiz b, mdl_course_modules c, mdl_course_sections d WHERE a.userid=6 AND b.course=4 AND a.quiz=b.id AND c.instance=a.quiz AND c.module=14 AND a.sumgrades>0 AND d.id=c.section GROUP BY d.section"
I tried different API calls; mainly I would want:
$DB->get_records_sql($sql);
The results from the API calls are meaningless. Any suggestions?
PS: This is Moodle 2.2.
I just tried to do something similar, only without getting the sections. You only need the course and user id. I hope this helps you.
global $DB;

// get all attempts & grades from a user from every quiz of one course
$sql = "SELECT qa.id, qa.attempt, qa.quiz, qa.sumgrades AS grade, qa.timefinish, qa.timemodified, q.sumgrades, q.grade AS maxgrade
          FROM {quiz} q, {quiz_attempts} qa
         WHERE q.course = ".$courseid."
           AND qa.quiz = q.id
           AND qa.userid = ".$userid."
           AND state = 'finished'
      ORDER BY qa.timefinish ASC";
$exams = $DB->get_records_sql($sql);

// calculate final grades from sum grades
$grades = array();
foreach ($exams as $exam) {
    $grade = new stdClass;
    $grade->quiz = $exam->quiz;
    $grade->attempt = $exam->attempt;
    // sum to final
    $grade->finalgrade = $exam->grade * ($exam->maxgrade / $exam->sumgrades);
    $grade->grademax = $exam->maxgrade;
    $grade->timemodified = $exam->timemodified;
    array_push($grades, $grade);
}
This works in the latest Moodle version, Moodle 2.9. Although I am still open to a better solution, as this is a really hacky way of getting deeper analytics about a user's performance.